
Continuously increasing off-heap 'Tracing' memory

See original GitHub issue

We use the latest Datadog Java Tracer (fetched from https://dtdg.co/latest-java-tracer) in our Java/Spring Boot application, attached via the following JVM parameters:

-javaagent:/opt/dd-java-agent.jar 
-Ddd.profiling.enabled=true 
-XX:FlightRecorderOptions=stackdepth=256 
-Ddd.logs.injection=true 
-Ddd.trace.sample.rate=1

java -version
openjdk version "18.0.2" 2022-07-19
OpenJDK Runtime Environment (build 18.0.2+9-61)
OpenJDK 64-Bit Server VM (build 18.0.2+9-61, mixed mode, sharing)

The application is running within a Docker container in AWS Elastic Beanstalk.

We observed that docker.mem.rss increases continuously over time, whereas jvm.heap_memory and jvm.non_heap_memory stay constant (after a warm-up period of ~1 day). After ~10-15 days, the container RSS reaches the configured memory limit and the container is killed and restarted.
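
For cross-checking, the RSS that the agent reports as docker.mem.rss can also be read from inside the container; a minimal sketch, assuming a Linux container where the JVM runs as PID 1 and the cgroup v2 layout is in use:

# resident set size of the JVM process
grep VmRSS /proc/1/status
# container-wide memory usage as seen by the cgroup (v2 layout)
cat /sys/fs/cgroup/memory.current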

Further investigation (using Java Native Memory Tracking) revealed that it is the off-heap memory area called ‘Tracing’ that keeps growing over time. We observed up to ~130 MB of allocated memory in that area after ~10 days.
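
For reference, a minimal sketch of how the ‘Tracing’ category can be watched with Native Memory Tracking (the <pid> is a placeholder; NMT itself adds a small overhead):

# enable NMT when starting the JVM (use 'detail' for a per-call-site breakdown)
-XX:NativeMemoryTracking=summary

# record a baseline after warm-up, then diff against it later
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff    # growth shows up under 'Tracing'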

With -Ddd.profiling.enabled=false the problem does not occur (‘Tracing’ memory stays constant at 32 KB).

In the Datadog Agent’s logs (v7.38.2, Docker) we see no obvious problems, apart from many ‘CPU threshold exceeded’ warnings.

What can we do to prevent this ‘Tracing’ memory leak with activated profiling?

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
jbachorik commented, Aug 31, 2022

Hello, thank you for reporting this. The Datadog continuous Java profiler uses JFR behind the scenes, and it seems that is where the leak happens.

I have filed an OpenJDK ticket tracking this problem.
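
One way to confirm that the continuous profiler’s JFR recording is active, and therefore that the ‘Tracing’ NMT category is in play (a sketch; <pid> and the dump path are placeholders):

# list active flight recordings and their settings
jcmd <pid> JFR.check
# optionally dump the recording data for offline inspection
jcmd <pid> JFR.dump filename=/tmp/recording.jfr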

0 reactions
OleBilleAtBS commented, Oct 27, 2022

@richardstartin Nice! 👍

Top Results From Across the Web

Off-Heap memory reconnaissance - Brice Dutheil
I faced a few incidents where the JVM settings and the Kubernetes memory limit were seemingly appropriate, yet the apps were constantly OOM...

Managing Off-Heap Memory | Geode Docs
Geode can be configured to store region values in off-heap memory, which is memory within the JVM that is not subject to Java...

Non heap memory and No of loaded classes keeps on ...
During the test, we find that non heap memory and total no of loaded classes keeps on increasing over the time. Our guess...

Monitor Java memory management with runtime metrics, APM ...
Learn how to detect memory management issues with JVM runtime metrics, garbage collection logs, and alerts.

Solving a Native Memory Leak. Close those resources - Medium
This presentation describes how to enable native memory tracking. ... found out that these persistent state stores provide off-heap storage.
