Poor performance in MultiProcessCollector with frequently changing PIDs
I’m using django-prometheus in multiprocess mode, and I’ve noticed that the time to fetch metrics increases the longer the server has been running. Currently I’ve got 5414 *.db files in prometheus_multiproc_dir.

MultiProcessCollector.collect reads every db file on each scrape, which I suspect is the bottleneck; the timing sketch below is one way to confirm that.
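A minimal sketch using only public prometheus_client API (the env var name matches the directory above; nothing here is specific to django-prometheus):

    import os
    import time

    from prometheus_client import CollectorRegistry, generate_latest, multiprocess

    # Point a fresh registry at the multiprocess directory.
    path = os.environ["prometheus_multiproc_dir"]
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry, path=path)

    db_files = [f for f in os.listdir(path) if f.endswith(".db")]
    start = time.perf_counter()
    generate_latest(registry)  # triggers collect(), which reads every db file
    elapsed = time.perf_counter() - start
    print("collect over %d db files took %.3fs" % (len(db_files), elapsed))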
mark_process_dead only removes gauge files, so the per-PID files for counters, histograms, and summaries are never cleaned up (see the hook sketch below).
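For reference, that cleanup is meant to be called by the server when a worker dies; with gunicorn (my setup, not something the client library requires) it goes in the child_exit server hook:

    # gunicorn.conf.py
    from prometheus_client import multiprocess

    def child_exit(server, worker):
        # Removes only the live-gauge db files for the dead worker's PID;
        # counter/histogram/summary files for that PID stay on disk.
        multiprocess.mark_process_dead(worker.pid)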
Do you think it’d be feasible to remove all files by copying the contents of type_{pid}.db files into a single type_old.db file? Or is this something that should be solved in django-prometheus?
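For illustration (not part of the original question): later prometheus_client releases added a static MultiProcessCollector.merge(files, accumulate=...) helper that aggregates a set of db files in one pass, which is essentially the read side of the compaction proposed above; writing the result back out as a single type_old.db would still need the library’s internal file format. A sketch, assuming that helper is available:

    import glob
    import os

    from prometheus_client import multiprocess

    path = os.environ["prometheus_multiproc_dir"]
    # Read side of the proposed compaction: fold all per-PID counter files
    # into in-memory metrics in a single pass. accumulate=False is the mode
    # intended for writing merged data back to files rather than exposition.
    files = glob.glob(os.path.join(path, "counter_*.db"))
    for metric in multiprocess.MultiProcessCollector.merge(files, accumulate=False):
        for sample in metric.samples:
            print(sample)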
This is a relatively new feature, so there’s no specific guidance yet. I’d say balance the disruption of a restart against the cost of handling more data.
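If you do restart, note that the client’s docs require the multiprocess directory to be wiped between runs; with gunicorn that can live in the on_starting hook (a sketch, reusing the env var name from this issue):

    # gunicorn.conf.py
    import glob
    import os

    def on_starting(server):
        # Wipe stale db files left by previous runs so the collector
        # never has to scan files from long-dead PIDs.
        path = os.environ["prometheus_multiproc_dir"]
        for f in glob.glob(os.path.join(path, "*.db")):
            os.remove(f)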
We are running into the same issue - is there a fix already?