Using excessive file descriptors
See original GitHub issue.
Promitor uses too many Linux file descriptors, mainly sockets. We're running Promitor on Kubernetes, and this has caused the node to run out of file descriptors a few times, at which point other Pods crash with a "too many open files" error.
Expected Behavior
Promitor should re-use sockets or release them after use.
Actual Behavior
Promitor opens sockets and doesn’t seem to release them in a timely fashion.
Steps to Reproduce the Problem
- Run a Promitor container with a scrape schedule of once per minute.
- Attach to the container and count the open file descriptors regularly; this is an example from a container that has been running for 6 days:
  /app # lsof | wc -l
  376504
  /app # lsof | wc -l
  376845
- The number of open file descriptors should grow over time, nearing the limit of 810243 (a sketch of an in-process counter follows this list).
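As an alternative to attaching and running lsof by hand, the descriptor count can be sampled from inside the process. The following is a minimal, hypothetical C# sketch (not part of Promitor) that counts the entries in /proc/self/fd on a fixed interval; the class name, 60-second interval, and plain Console logging are illustrative assumptions.

// Hypothetical helper, not part of Promitor: periodically logs how many file
// descriptors the current process holds by counting entries in /proc/self/fd.
// Linux-only; the 60-second interval and Console logging are arbitrary choices.
using System;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class FdCountLogger
{
    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Every entry in /proc/self/fd is a symlink for one open descriptor
            // (files, sockets, pipes, ...), so this tracks the same growth that
            // the lsof counts above show for this process.
            int fdCount = Directory.EnumerateFileSystemEntries("/proc/self/fd").Count();
            Console.WriteLine($"{DateTime.UtcNow:O} open file descriptors: {fdCount}");

            try
            {
                await Task.Delay(TimeSpan.FromSeconds(60), token);
            }
            catch (TaskCanceledException)
            {
                // Shutdown requested; stop sampling.
            }
        }
    }
}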
Configuration
Provide insight into the configuration that you are using:
- Configured scraping schedule:
*/1 * * * *
Used scraping configuration
This is our staging configuration, which scrapes fewer resources but still hits the same issue.
version: v1
azureMetadata:
  tenantId:
  subscriptionId:
  resourceGroupName:
metricDefaults:
  aggregation:
    interval: "00:05:00"
  scraping:
    schedule: "*/1 * * * *"
metrics:
  - name: "azure_sql_cpu_percent_average"
    description: "'cpu_percent' with aggregation 'Average'"
    resourceType: "Generic"
    labels:
      component: "sql-database"
    azureMetricConfiguration:
      metricName: "cpu_percent"
      aggregation:
        type: "Average"
    resources:
      - resourceGroupName: "groupName"
        resourceUri: "Microsoft.Sql/servers/serverName/databases/dbName"
      # 7 more databases
  # 11 more sql metrics
  # 6 loadbalancer metrics
  # 12 redis metrics
  # 10 service bus metrics
Specifications
- Version: 1.0.0 (image tag)
- Platform: Docker (Linux)
- Subsystem:

Might have found something in #844, @ResDiaryLewis.
Looks like the Azure SDK creates a ton of HttpClients.
Thanks for the additional information, and sorry for that!
Will have a look, but since we don't use HttpClient directly it must be one of our dependencies, so all information/telemetry is welcome.
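For reference, the usual way "a ton of HttpClients" turns into descriptor growth in .NET is that every HttpClient gets its own handler and connection pool, so each instance holds its own open sockets; if instances accumulate without being disposed, the process's file-descriptor count keeps climbing, and even disposed ones can churn through ephemeral ports via TIME_WAIT. The sketch below is purely illustrative and assumes nothing about Promitor's or the Azure SDK's actual code; it only contrasts the per-call pattern with a shared instance.

// Illustrative only; not Promitor's or the Azure SDK's code. Contrasts the
// per-call HttpClient pattern that tends to exhaust sockets/descriptors with
// a single shared instance that reuses pooled connections.
using System.Net.Http;
using System.Threading.Tasks;

public static class HttpClientPatterns
{
    // Leaky pattern: every call builds a new HttpClient with its own handler
    // and connection pool. If instances pile up without being disposed, each
    // pool keeps its own sockets open and the descriptor count keeps growing.
    public static async Task<string> LeakyGetAsync(string url)
    {
        var client = new HttpClient();
        return await client.GetStringAsync(url);
    }

    // Reuse pattern: one long-lived client shared by all callers, so requests
    // share the same pooled connections instead of opening new sockets.
    private static readonly HttpClient SharedClient = new HttpClient();

    public static Task<string> SharedGetAsync(string url)
    {
        return SharedClient.GetStringAsync(url);
    }
}

In ASP.NET Core services this is typically addressed with IHttpClientFactory, which pools and recycles handlers; whether something equivalent is possible here depends on how the dependency constructs its clients, as noted above.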