
Potential Memory Leak in BlockCacheStrategy


We noticed a steady increase in memory usage on our server after upgrading the MetaMask provider-engine library from 11.0.0 to the latest version. In New Relic we saw the heap size increasing at a pace of about 2 MB per hour.

We went on to diff heap dumps, and it became clear that the memory being held was the BlockCacheStrategy keeping track of eth_getBlockByNumber objects; interestingly, only the fork strategy, not the block strategy. Since we have little to no background on the original intention of these caches, and we don’t really need to upgrade from 11.0.0 to the current version, we can’t spend more time on this at the moment.

It looks like intended behaviour, but what’s strange is that both the fork and block caches get rolled off at the same time, so at first sight I did not expect the fork cache to keep growing in size. Perhaps BlockCacheStrategy should be extended to an LRUBlockCacheStrategy in the fork case, keeping the cache at a fixed maximum size and evicting the least-recently-used block every time it hits its limit?
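For illustration only, here is a minimal sketch of the kind of bounded cache being suggested, assuming a plain Map keyed by cache identifier; the class name, the 500-entry limit, and the method names are made up for this example and are not provider-engine internals.

```typescript
// Minimal LRU cache sketch: a fixed-size cache that evicts the
// least-recently-used entry once the limit is reached. All names and the
// 500-entry default are illustrative, not provider-engine's actual API.
class LRUBlockCache<V> {
  private entries = new Map<string, V>();

  constructor(private maxSize: number = 500) {}

  get(key: string): V | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Refresh recency: re-insert so this key moves to the "newest" end.
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxSize) {
      // Map iteration order is insertion order, so the first key is the
      // least recently used one.
      const oldest = this.entries.keys().next().value as string;
      this.entries.delete(oldest);
    }
  }
}
```

Wrapping the fork-strategy cache in something like this would keep the heap bounded even if rolloff misses entries, since the map can never hold more than maxSize blocks.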

We didn’t have enough time and background here to go ahead and submit a PR, but it looks like something that should definitely be addressed. Please feel free to contact me any time at peter-jan@settlemint.com if you want me to send you the heap dumps or if anything needs clarification.

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 3
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

2 reactions
peterjan commented, Apr 26, 2018

Looks like your fixes did the trick @kumavis, nice! Been running now for 2 hours without an increase in heap size. I’ll monitor a little while longer and close this issue if the situation remains stable.

1 reaction
kumavis commented, Apr 26, 2018

@peterjanbrone looked through the code today, my best guess was a hex encoding mismatch for odd-length block numbers (0x01 vs 0x1). Made 2 changes (a rough sketch follows the list below):

  • now store block caches (including fork strategy) using decimal strings instead of hex strings to avoid potential formatting issues
  • now for rolloff we remove all caches where the block number is lower than the latest block
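
For illustration, here is a rough sketch of the two changes as described; the helper names normalizeBlockNumber and rollOff are hypothetical stand-ins, not the actual provider-engine functions.

```typescript
// Sketch of the two fixes described above. Names are illustrative only.

// Fix 1: key caches by decimal strings so '0x01' and '0x1' can never
// produce two different entries for the same block.
function normalizeBlockNumber(hexBlockNumber: string): string {
  return parseInt(hexBlockNumber, 16).toString(10);
}

// Fix 2: on rolloff, drop every cached block strictly below the latest
// block, instead of relying on an exact key match.
function rollOff(cache: Map<string, unknown>, latestHexBlockNumber: string): void {
  const latest = parseInt(latestHexBlockNumber, 16);
  for (const key of cache.keys()) {
    if (parseInt(key, 10) < latest) {
      cache.delete(key); // deleting while iterating is safe for a Map
    }
  }
}
```

With decimal keys, normalizeBlockNumber('0x01') and normalizeBlockNumber('0x1') both return '1', so the formatting mismatch can no longer leave orphaned entries behind.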

this may not have covered your bug

the fork cacheable methods

it's not about keeping them around forever until a fork (chain reorg), but rather those caches are viable into the future unless there is a block reorg for the block they are stored in. but we never added fork detection, and we (try to) drop the cache on the next block, so it behaves the same as a block strategy cache. ¯\_(ツ)_/¯
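
To make that concrete, here is a hedged sketch of a cache that is simply cleared whenever a new block arrives, which is the behaviour described above; the 'block' event name and the EventEmitter-shaped block tracker are assumptions for the example, not the real provider-engine block tracker API.

```typescript
import { EventEmitter } from 'events';

// Illustrative only: without fork/reorg detection, the safe fallback is to
// discard the whole cache on every new block, exactly like a block
// strategy cache. Event name and tracker shape are assumptions.
function clearCacheOnNewBlock(blockTracker: EventEmitter, cache: Map<string, unknown>): void {
  blockTracker.on('block', () => {
    cache.clear();
  });
}
```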

build a simple LRU around it

that's certainly the better solution

in the current sprint for metamask we’re optimizing our hits against infura, and this includes a lazier block-tracker, which will require a rethink of our block cache

I'm trying to deprecate provider-engine in favor of the more composable json-rpc-engine. we’re using the new system in some places but are still using provider-engine for most of it; it doesn’t yet have a replacement for all primary provider-engine middleware. see: https://github.com/kumavis/json-rpc-engine and https://github.com/MetaMask/eth-json-rpc-middleware
