Multiprocessing: missing counter and summary
See the original GitHub issue.
I am experimenting with client_python and trying to set metrics in a multiprocessing application. Imitating the README and the tests (test_multiprocess), I ended up with this code, which doesn't work:
from prometheus_client import multiprocess
from prometheus_client import generate_latest, Gauge, Counter, Summary, REGISTRY
import os
import time
import tempfile
import shutil

class Metrics(object):
    def __init__(self, idx):
        # make temp dir
        self._tempdir = tempfile.mkdtemp()
        # set os environment variable
        os.environ['prometheus_multiproc_dir'] = self._tempdir
        self._registry = REGISTRY
        self._multiprocess_registry = multiprocess.MultiProcessCollector(self._registry, self._tempdir)
        idx = str(idx)
        self.GAUGE_THREAD_POOL = Gauge('context_thread_pool_' + idx, 'Number of threads used', multiprocess_mode='all')
        self.COUNTER_APP_CALLS = Counter('context_app_calls_' + idx, 'Number of calls', registry=None, namespace='app_')
        self.COUNTER_APP_SUCCESS = Counter('context_app_success_' + idx, 'Number of successful calls', registry=None, namespace='app_')
        self.COUNTER_APP_FAILURE = Counter('context_app_failure_' + idx, 'Number of failed calls', registry=None, namespace='app_')
        self.SUMMARY_APP = Summary('context_app_' + idx, 'Time spent on app', registry=None, namespace='app_')

    def shutdown(self):
        # remove the temporary directory
        shutil.rmtree(self._tempdir)

    def collect(self):
        return generate_latest(self._registry).decode()

m = Metrics(idx=1)
m.GAUGE_THREAD_POOL.inc(3)
m.COUNTER_APP_CALLS.inc(1)
m.SUMMARY_APP.observe(3.14)
print(m.collect())
m = Metrics(idx=1)
m.GAUGE_THREAD_POOL.inc(3)
m.COUNTER_APP_CALLS.inc(1)
m.SUMMARY_APP.observe(3.14)
print(m.collect())
The last print is missing the counters and the summary:
# HELP context_thread_pool_1 Number of threads used
# TYPE context_thread_pool_1 gauge
context_thread_pool_1 3.0
Version:
> pip3 show prometheus_client
Name: prometheus-client
Version: 0.0.19
> python3 --version
Python 3.5.3
Can you please suggest a change that will make the above code produce the full contents of the registry?
Issue Analytics
- State: closed
- Created 6 years ago
- Comments:6 (3 by maintainers)
There isn't currently. The right number coming out is the important thing.
Are you using multiprocess mode as described in the README?
Also, please open a new GitHub discussion or ask on the prometheus-users mailing list rather than asking on a closed issue.
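For reference, the README's multiprocess pattern looks roughly like the sketch below. Two things differ from the code in the question: `prometheus_multiproc_dir` is set before `prometheus_client` is imported (the library picks its value implementation when it is imported, so setting the variable afterwards means metrics created with `registry=None` never write to the shared files), and the `MultiProcessCollector` is attached to a fresh `CollectorRegistry` rather than the default `REGISTRY`. This is a hedged sketch under those assumptions, not the exact README code; the metric names are illustrative.

```python
import os
import tempfile

# prometheus_multiproc_dir must be set BEFORE prometheus_client is imported,
# because the library selects its (file-backed) value class at import time.
os.environ['prometheus_multiproc_dir'] = tempfile.mkdtemp()

from prometheus_client import multiprocess
from prometheus_client import CollectorRegistry, Counter, Summary, generate_latest

# A fresh registry used only for scraping; the MultiProcessCollector
# aggregates the per-process files found in prometheus_multiproc_dir.
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)

# Metrics are created with registry=None; in multiprocess mode their
# values are written to the shared files rather than process memory.
CALLS = Counter('app_calls', 'Number of calls', registry=None)
LATENCY = Summary('app_latency', 'Time spent on app', registry=None)

CALLS.inc()
LATENCY.observe(3.14)

print(generate_latest(registry).decode())
```

With that ordering, the counter and summary samples show up in the scraped output alongside any gauges, because the collector reads them back from the shared directory.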