I am experiencing an odd problem that I have not been able to find a solution to for the last week.
On Wednesday 16/4, at around 17:00, the APIs deployed in my Azure Function App suddenly became much slower.
The architecture is the following: Airflow runs a .py file that writes to an Azure SQL Server database. An API deployed in an Azure Function App then selects data from that Azure SQL database.
- We did not change any of the API code
- We did not change the .py files
- We did not change the table formats in Azure SQL Server
Also, since Wednesday evening, a daily scheduled job in Airflow has been failing with these errors:
'life_cycle_state': 'TERMINATED', 'result_state': 'FAILED', 'state_message': 'Workload failed, see run output for details'} and with the errors [{'task_key': 'upload_to_tmp_tables', 'run_id': 672367321749800, 'error': 'org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 77.0 failed 4 times, most recent failure: Lost task 3.3 in stage 77.0 (TID 132) (10.148.6.142 executor 1): com.microsoft.sqlserver.jdbc.SQLServerException: The elastic pool has reached its storage limit. The storage usage for the elastic pool cannot exceed (256000) MBs.'}]
[2025-04-21, 23:24:40 EEST] {taskinstance.py:3093} ERROR - Received SIGTERM. Terminating subprocesses
{'life_cycle_state': 'TERMINATED', 'result_state': 'FAILED', 'state_message': 'Workload failed, see run output for details'} and with the errors [{'task_key': 'upload_to_tmp_tables', 'run_id': 401438450064268, 'error': "OperationalError: (20047, b'DB-Lib error message 20047, severity 9:\nDBPROCESS is dead or not enabled\n')"}]
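The SQLServerException above points at pool-level storage rather than anything in the API code. One thing worth checking is allocated vs. actually used file space per database, since space that is allocated but unused still counts toward the elastic pool's storage cap. A minimal sketch of such a check (the connection string is a placeholder; `pyodbc` and the ODBC driver are assumptions, not something from my setup described above):

```python
# Sketch: compare allocated vs. used data-file space for one Azure SQL database.
# The connection string below is hypothetical; substitute your own server,
# database, and credentials.
try:
    import pyodbc  # third-party; only needed when actually connecting
except ImportError:
    pyodbc = None

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=user;PWD=password"
)

# sys.database_files reports sizes in 8 KB pages; FILEPROPERTY(..., 'SpaceUsed')
# gives the pages actually in use. The difference is allocated-but-unused space
# that still counts toward the elastic pool limit.
SPACE_QUERY = """
SELECT
    name,
    size * 8.0 / 1024 AS allocated_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
"""

def check_file_space(conn_str: str = CONN_STR) -> list[tuple]:
    """Return (file_name, allocated_mb, used_mb) rows for one database."""
    with pyodbc.connect(conn_str) as conn:
        return list(conn.cursor().execute(SPACE_QUERY).fetchall())
```

Running this (or the query alone in SSMS) against each database in the pool would show whether allocated space, rather than data growth, is what hit the limit.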
Right now, my Azure database has 187 GB of space used out of 600 GB. The elastic pool has 188 GB used out of its 250 GB maximum.
Data space used in the Azure database has been almost constant over the last 3 months (±100 MB). Data space in the other databases is always the same.
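For reference, the figures quoted above do not add up at face value, which is the odd part: the 256000 MB in the error is exactly the pool's 250 GB cap, yet only 188 GB shows as used. A quick arithmetic check (plain Python, using only the numbers already stated):

```python
# Figures quoted from the error message and the Azure portal.
POOL_LIMIT_MB = 256_000   # limit reported by the SQLServerException
POOL_USED_GB = 188        # "space used" shown for the elastic pool

pool_limit_gb = POOL_LIMIT_MB / 1024
headroom_gb = pool_limit_gb - POOL_USED_GB

print(pool_limit_gb)  # 250.0 -> matches the pool's 250 GB maximum
print(headroom_gb)    # 62.0  -> nominal headroom, yet the pool reports its limit reached
```

So on paper there are 62 GB free, but the pool still throws the storage-limit error.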
The biggest table I write to in the Azure SQL database is 370 MB, with a clustered columnstore index and non-unique non-clustered indexes.