Unbounded memory usage
I’m noticing issues where Node processes consuming histograms (and, surprisingly, meters) grow memory in a linear, unbounded fashion. Removing measured fixes the issue. I’m not very surprised that histograms are memory-hungry given they’re backed by a binary heap, but I am much more surprised that the meters seem to be as well.
Is this something you’ve encountered before?

Update: the problem is that meters use a timer to periodically aggregate their data, so if you swap meters out periodically to keep a meter from going out of range, you end up with dangling timers.
In addition, if you use many meters, say 100 in an app, then you have 100 timers running. Does that affect performance? See the sketch below for the rotation pattern in question.
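A minimal sketch of the leak, assuming the measured-core package and its Meter#end() method (the rotateLeaky/rotateSafely helpers are illustrative names, not part of the library): dropping the reference to a meter does not stop its internal interval, so the old meter is never collected.

```js
const { Meter } = require('measured-core');

let meter = new Meter();

// Leaky rotation: the old Meter is dereferenced, but its internal
// setInterval keeps firing, so neither the timer nor the Meter is
// ever garbage collected.
function rotateLeaky() {
  meter = new Meter();
}

// Safe rotation: end() clears the interval before the swap,
// leaving the old Meter eligible for collection.
function rotateSafely() {
  meter.end();
  meter = new Meter();
}
```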
I know I already closed this issue, but just in case anyone finds themselves here: my suspicion, based on some of the comments above, is that users were creating Meters regularly and not managing their lifecycles properly.
When you create a Meter you must call Meter.end() when you are done with it. The end method clears the internal interval, which allows the garbage collector to fully destroy the object.
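A short sketch of that lifecycle, again assuming the measured-core API (mark() records an event and toJSON() snapshots the current rates):

```js
const { Meter } = require('measured-core');

const meter = new Meter();
meter.mark();                // record an event
console.log(meter.toJSON()); // snapshot of the current rates

meter.end(); // clears the aggregation interval so the Meter can be GC'd
```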
When you use a Collection or the Self Reporting Metrics Registry, shutdown methods are exposed that properly end the lifecycle of the registry and its registered metrics: SelfReportingMetricsRegistry#shutdown and Collection#end.
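A sketch of registry-level cleanup, assuming the measured-core Collection API; the SelfReportingMetricsRegistry from measured-reporting exposes an analogous shutdown() for the same purpose.

```js
const { Collection } = require('measured-core');

const collection = new Collection('http');
const requests = collection.meter('requestsPerSecond');
requests.mark();

// Ends the collection and every metric registered on it,
// clearing all of their timers in one call.
collection.end();
```

Using the registry-level teardown is usually simpler than ending each metric by hand, since a single call covers every meter and histogram the registry created.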