/metrics endpoint is way too slow under Wildfly 8
This was earlier reported in #175, which was closed by adding a footnote to the documentation.
I’m running WildFly instances in a few Docker containers and the /metrics endpoint takes way too long. From inside the container:
$ time curl http://localhost:28686/metrics
## output clipped (metrics are returned as expected)
real 1m28.087s
user 0m0.020s
sys 0m0.024s
We see better results with Glassfish 3.1.2.2 or WildFly 10. Is there a particular reason this takes so long under WildFly 8? Any tips on how to filter metrics to make the endpoint faster? Otherwise, we’ll have to configure Prometheus with a scrape_interval of at least 2 minutes!
Issue Analytics
- Created 6 years ago
- Comments: 5 (4 by maintainers)
I’m having the same issue: the usual scrape takes about 43 seconds even with the latest code. In my case, I already know the object names in advance, so I set
whitelistObjectNames
accordingly. This improves the scrape time from 43s to only 0.028s.
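For reference, a whitelist of this kind can be expressed in the jmx_exporter YAML configuration roughly as follows. The object names below are illustrative placeholders, not the commenter's actual list; substitute the MBeans your application actually needs:

```yaml
# jmx_exporter config sketch: query only the listed MBeans instead of
# walking every ObjectName the server exposes, which is what makes the
# unfiltered scrape so slow on some servers.
lowercaseOutputName: true
whitelistObjectNames:
  # Hypothetical examples -- replace with your own object names.
  - "java.lang:type=Memory"
  - "java.lang:type=Threading"
  - "jboss.as:subsystem=undertow,server=*,http-listener=*"
rules:
  # Expose everything that matched the whitelist.
  - pattern: ".*"
```

Wildcards (`*`) are accepted in the object-name patterns, so a whole subsystem can be whitelisted with one entry.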
I have submitted a PR to add whitelistObjectNames; see #284.
Thanks @n3v3rf411