Native memory excessive consumption and native memory leaks

See original GitHub issue

Description

When running Teku with -Xmx1G, the Java process resident memory tends to be around 2.5G, which looks like excessive overhead.

When running Teku with e.g. -Xmx1G, the total JVM memory consumption is estimated to be around 1.5G (including ByteBuffers, which are mostly Netty-owned). RocksDB is also expected to consume around 200 MB with the current defaults:

  • 128 MB for the write buffer
  • 8 MB for the block cache
  • other native structures

That leaves ~0.8G of unaccounted-for native memory consumption. RocksDB is the major native library used in Teku, so it is the primary suspect.
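
For context, here is a minimal sketch (not Teku's actual code; the sizes and database path are illustrative assumptions matching the defaults listed above) of how those RocksDB settings translate into off-heap allocations via the RocksDB Java API:

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbFootprintSketch {
  public static void main(final String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // Everything configured below is allocated natively by RocksDB,
    // so none of it counts against the Java heap limit set with -Xmx.
    try (final LRUCache blockCache = new LRUCache(8L * 1024 * 1024);       // ~8 MB block cache
         final Options options = new Options()
             .setCreateIfMissing(true)
             .setWriteBufferSize(128L * 1024 * 1024)                       // ~128 MB write buffer (memtable)
             .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(blockCache));
         final RocksDB db = RocksDB.open(options, "/tmp/rocksdb-footprint-demo")) {
      // Index and filter blocks, compaction buffers etc. add further native usage on top.
    }
  }
}

Note that this only covers the ~200 MB already expected above; it does not by itself explain the remaining ~0.8G.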

Here is the process memory consumption over time: [graph: process resident memory over time]

From the graph it doesn’t look like the memory is leaking.
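
A graph like that can be produced with a simple sampler; the following Linux-only sketch (not part of Teku, names are illustrative) logs the Java heap usage next to the whole-process resident set size read from /proc/self/status, which is exactly the gap this issue is about:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class RssSampler {
  public static void main(final String[] args) throws Exception {
    while (true) {
      final long heapUsed =
          Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
      // VmRSS is the resident set size of the whole process: heap, metaspace,
      // thread stacks, direct ByteBuffers, RocksDB structures and so on.
      long rssKb = -1L;
      try (Stream<String> lines = Files.lines(Path.of("/proc/self/status"))) {
        rssKb = lines.filter(line -> line.startsWith("VmRSS:"))
            .mapToLong(line -> Long.parseLong(line.replaceAll("\\D+", "")))
            .findFirst()
            .orElse(-1L);
      }
      System.out.printf("heap used: %d MB, process RSS: %d MB%n",
          heapUsed / (1024 * 1024), rssKb / 1024);
      Thread.sleep(60_000); // sample once a minute
    }
  }
}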

Acceptance Criteria

A Teku process run with -Xmx1G shouldn’t consume more than 2G of resident memory.

Steps to Reproduce (Bug)

Run a Teku node with -Xmx1G on the Onyx (or another) testnet for about 2-3 hours.

Expected behavior: the process consumes less than 2G of resident memory

Actual behavior: the process consumes ~2.5G

Frequency: Always
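
One way to narrow down where the native memory goes while reproducing is to enable the JVM's Native Memory Tracking (a standard HotSpot feature, not Teku-specific). Note that NMT only accounts for memory the JVM itself allocates, so RocksDB's own allocations show up only as the gap between the NMT total and the process RSS. A rough sequence, assuming the extra flag can be passed through to Teku's JVM options:

# start the node with NMT enabled (adds a small overhead)
java -Xmx1G -XX:NativeMemoryTracking=summary <rest of the Teku command line>

# after the node has been running for a while, inspect the JVM-side breakdown
jcmd <teku-pid> VM.native_memory summary

# optionally take a baseline and diff later to check for growth over time
jcmd <teku-pid> VM.native_memory baseline
jcmd <teku-pid> VM.native_memory summary.diff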

Versions (Add all that apply)

commit d98d802cb416819a22081be693ffb130c052e425 (HEAD -> master, origin/master, origin/HEAD)
Author: Adrian Sutton <adrian.sutton@consensys.net>
Date:   Wed Jul 1 15:03:46 2020 +1000

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 31 (25 by maintainers)

Top GitHub Comments

1 reaction
benjaminion commented on Feb 8, 2021

@tzapu This is most likely related to https://github.com/ConsenSys/teku/issues/3495 (excessive memory consumption by RocksDB). We are actively looking at alternative DBs at the moment, and will hopefully have news soon. It would definitely be good to resolve this!

0 reactions
ajsutton commented on Jul 12, 2021

Ah, great to hear you’ve found the issue. J9 isn’t particularly common, and IBM tends to focus on big-iron server deployments of it, so I’m not entirely surprised it winds up using a lot of memory. It could likely be controlled by setting the right options, but it may not be setting defaults as intelligently as HotSpot.

Top Results From Across the Web

  • 2.7 Native Memory Tracking: Use NMT to detect a memory leak. Follow these steps to detect a memory leak: start the JVM with summary or detail...
  • Native Memory May Cause Unknown Memory Leaks - DZone: Recently I came across a strange case: the memory usage of my program exceeded the maximum value intended for the heap.
  • Troubleshooting Native Memory Leaks in Java Applications: The first step in detecting memory leaks is to monitor the memory usage of our application to confirm if there is any growth...
  • Solving a Native Memory Leak. Close those resources - Medium: This presentation describes how to enable native memory tracking. Following that advice, I added the JVM parameter to the app and then ran...
  • The story of a Java 17 native memory leak - Nick Ebbitt: The direct memory looked fine too. This meant we were dealing with other native memory usage being consumed by the process.
