Description
We are running an Azure Durable Functions app on AKS with KEDA, using a custom Docker image based on the official Azure Functions runtime image. We consistently observe high CPU utilization caused by a very large number of file watcher threads (600–800) being spawned inside the function pod.
This occurs intermittently:
- Sometimes in one pod
- Sometimes across two pods
- Even when traffic is low or idle
Despite explicitly disabling file watching via environment variables and host.json, the issue persists.
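As a rough illustration of how the thread count can be checked from inside a pod (placeholder pod name, assuming kubectl access, that the Functions host runs as PID 1 in the container, and that procps is available in the image):
# Count the threads of the Functions host process (PID 1 in the container)
kubectl exec <function-pod> -- sh -c 'ls /proc/1/task | wc -l'
# Show per-thread CPU in batch mode to confirm which threads are hot
kubectl exec <function-pod> -- top -H -b -n 1 -p 1 | head -n 40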
Environment
- Platform: Azure Kubernetes Service (AKS)
- Scaling: KEDA
- Function type: Durable Functions
- Runtime: Azure Functions v4
- .NET version: .NET 8
- OS: Linux container
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS installer-env
FROM mcr.microsoft.com/azure-functions/dotnet:4-dotnet8.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE=false \
DOTNET_USE_POLLING_FILE_WATCHER=1 \
AzureFunctionsJobHost__fileWatchingEnabled=false \
WEBSITE_RUN_FROM_PACKAGE=1 \
ASPNETCORE_URLS=http://+:8080 \
WEBSITE_HOSTNAME=localhost:8080
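One way to double-check that these ENV values actually reach the running container is to read them back from a pod; a sketch with a placeholder pod name, assuming kubectl access:
# Dump the file-watching-related environment variables as seen by the container
kubectl exec <function-pod> -- sh -c 'env | grep -Ei "watch|reloadconfig|run_from_package"'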
host.json
{
"version": "2.0",
"logging": {
"fileLoggingMode": "never",
"console": {
"isEnabled": "false"
},
"logLevel": {
"default": "Information",
"Host": "Error"
},
"applicationInsights": {
"samplingSettings": {
"isEnabled": false,
"maxTelemetryItemsPerSecond": 500
}
}
},
"extensions": {
"durableTask": {
"storageProvider": {
"controlQueueBatchSize": 32,
"controlQueueBufferThreshold": 256,
"controlQueueVisibilityTimeout": "00:05:00",
"maxQueuePollingInterval": "00:00:05",
"partitionCount": 4,
"workItemQueueVisibilityTimeout": "00:05:00",
"useAppLease": true
},
"maxConcurrentActivityFunctions": 40,
"maxConcurrentOrchestratorFunctions": 40
}
},
"watchFiles": false,
"watchDirectories": false,
"fileWatchingEnabled": false
}
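To rule out a stale copy, the host.json the container actually serves can be read back from the script root set in the Dockerfile (AzureWebJobsScriptRoot=/home/site/wwwroot); placeholder pod name, assuming kubectl access:
# Print the host.json the Functions host loads at startup
kubectl exec <function-pod> -- cat /home/site/wwwroot/host.json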
Observed Behavior
- CPU spikes to high levels even during low or no load
- Thread dumps show 600–800 file watcher–related threads (see the sketch after this list)
- High CPU correlates directly with these file watcher threads
- Scaling events (scale down / scale up via KEDA) appear to increase the likelihood
- The issue reproduces even with:
  - watchFiles=false
  - fileWatchingEnabled=false
  - DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE=false
  - WEBSITE_RUN_FROM_PACKAGE=1
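A rough sketch of how the file watcher thread breakdown can be confirmed without a full managed dump, by grouping Linux thread names from /proc (the kernel truncates thread names to 15 characters; placeholder pod name):
# Group the threads of PID 1 by name to see how many are file-watcher related vs. thread pool workers
kubectl exec <function-pod> -- sh -c 'cat /proc/1/task/*/comm | sort | uniq -c | sort -rn | head -n 20'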
Expected Behavior
- File watching should be fully disabled when explicitly configured
- Durable Function pods should not spawn hundreds of file watcher threads
- CPU utilization should remain low during idle or low-traffic periods
Additional Notes
- This setup is fully containerized (not App Service)
- Running Azure Functions on AKS with KEDA
- It looks like file watching is still enabled internally, or is re-enabled during scaling events (see the sketch after this list)
- This causes unnecessary CPU consumption and impacts cluster cost and stability
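To back the scaling-event observation, scale activity and per-pod CPU can be correlated side by side; a sketch assuming metrics-server is installed, with placeholder namespace and label selector:
# Terminal 1: watch the KEDA-managed HPA for scale in/out events
kubectl get hpa -n <namespace> -w
# Terminal 2: sample per-pod, per-container CPU for the function deployment
kubectl top pod -n <namespace> -l <app-label> --containers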
Request
- Clarification on why file watcher threads are still being created
- Guidance on fully disabling file watching in Azure Functions running on AKS
- Confirmation of whether this is a known issue with:
  - Azure Functions v4
  - Durable Functions
  - .NET 8
  - KEDA-based scaling