
Metric help text is ignored in multiprocess mode

See original GitHub issue

When using the Prometheus Python client in multiprocess mode, as in the docs example, every metric's description is replaced with the placeholder "Multiprocess metric":

# HELP inprogress_requests Multiprocess metric
# TYPE inprogress_requests gauge
inprogress_requests 0.0
# HELP num_requests Multiprocess metric
# TYPE num_requests counter
num_requests 81634.0
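Why the help string disappears is easier to see in a stripped-down sketch. This is plain Python with no prometheus_client dependency, and all names here (Metric, merge) are hypothetical stand-ins, not the client's real API: the multiprocess collector rebuilds each metric from per-process sample files, and those files store only names, types, and values, so by merge time the original help text is gone and a fixed placeholder is substituted.

```python
# Hypothetical sketch of the help-text loss in multiprocess merging.
# The per-process sample files carry (name, type, value) but no help
# string, so the merge step cannot recover the original documentation.

class Metric:
    def __init__(self, name, documentation, typ):
        self.name = name
        self.documentation = documentation
        self.type = typ

def merge(samples):
    """Rebuild metrics from per-process samples, as the collector does."""
    metrics = {}
    for name, typ, value in samples:
        # Only name/type/value survive in the sample files, so the
        # original help string is unavailable here; a placeholder is used.
        metrics.setdefault(name, Metric(name, "Multiprocess metric", typ))
    return metrics

merged = merge([("num_requests", "counter", 81634.0),
                ("inprogress_requests", "gauge", 0.0)])
print(merged["num_requests"].documentation)  # placeholder, not the real help
```

This mirrors the exposition output above: whatever help text was passed when the metric was created, the merged result only ever shows the placeholder.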

Issue Analytics

  • State: open
  • Created: 6 years ago
  • Reactions: 5
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

4 reactions
HannesMvW commented, Aug 27, 2018

Is this perhaps in the pipeline, to preserve help texts in multiprocess mode?

1 reaction
tsibley commented, Mar 24, 2021

@csmarchbanks Great, thanks! I expect I’ll make a PR sometime in the next few weeks unless someone beats me to it (but given this issue was opened over 3 years ago, it seems unlikely! 🙃)

In the case of conflicting help text, two options I see are arbitrarily keeping one of them or falling back to the existing placeholder text. The first seems more helpful to me since it’ll still be more specific than the placeholder, but I’d be curious to hear your thoughts.
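One of the two options above (keep the text when all processes agree, fall back to the existing placeholder on conflict) might be sketched roughly as follows. This is a hypothetical illustration; `merge_help` and `PLACEHOLDER` are made-up names, not part of the client's API:

```python
# Hypothetical conflict-resolution sketch for per-process help texts.
PLACEHOLDER = "Multiprocess metric"

def merge_help(help_texts):
    """Pick one help string from the per-process candidates.

    Keep the text when every process that supplied one agrees;
    otherwise fall back to the existing placeholder.
    """
    distinct = {h for h in help_texts if h}
    if len(distinct) == 1:
        return distinct.pop()   # all processes agree: keep their text
    return PLACEHOLDER          # conflict (or nothing supplied): placeholder

print(merge_help(["Total requests served", "Total requests served"]))
print(merge_help(["Total requests served", "Requests, total"]))
```

The first call returns the shared help string; the second hits the conflict branch and returns the placeholder. The other option discussed, arbitrarily keeping one of the conflicting texts, would simply return any member of `distinct` instead of the placeholder.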

Read more comments on GitHub

Top Results From Across the Web

  • Developers - Metric help text is ignored in multiprocess mode
    When using prometheus client in multiprocess mode as in the docs example, metric description always says Multiprocess metric: # HELP inprogress_requests ...
  • generate_latest() for MultiProcessing writes both the ...
    However, I would like to only see the MultiProcess Metrics and ignore the per Process Metrics. Is there a way to do that? ...
  • Processes — Nextflow 22.10.2 documentation
    str is ignored. Shell scripts support the use of the Template mechanism. The same rules are applied to the variables defined in the ...
  • LightningModule - PyTorch Lightning - Read the Docs
    put model in train mode and enable gradient calculation model.train() ... If you want to calculate epoch-level metrics and log them, use log() ...
  • YOLOv4 — TAO Toolkit 4.0 documentation
    Note: YOLOv4 does not support loading a pruned QAT model and retraining it with QAT ... Whether to use multiprocessing mode of keras ...
