
Memory leak in 7.0.1 when logging from a thread coming out of a "ForkJoinPool.common()"


Describe the bug
Some of our applications were recently updated from version 6.6 to the latest version, 7.0.1. Later we found out that one of those applications has a memory leak. We took a heap dump, analyzed it with Eclipse Memory Analyzer (https://www.eclipse.org/mat/), and found that the leak was potentially introduced by the latest version of logstash-logback-encoder.

  • logstash-logback-encoder version: 7.0.1
  • logback version: 1.2.3
  • jackson version: 2.12.5
  • java version: 17.0.1

This is our logstash logback configuration:

    <property name="LOG_FILE" value="${LOG_FILE:-${LOG_DIR}/spring.log}"/>

    <appender name="FILE"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <timeZone>UTC</timeZone>
            <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                <maxDepthPerThrowable>30</maxDepthPerThrowable>
                <exclude>sun\.reflect\..*\.invoke.*</exclude>
                <exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
                <rootCauseFirst>true</rootCauseFirst>
                <inlineHash>true</inlineHash>
            </throwableConverter>
        </encoder>
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <cleanHistoryOnStart>${LOG_FILE_CLEAN_HISTORY_ON_START:-false}</cleanHistoryOnStart>
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
            <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
            <maxHistory>${LOG_FILE_MAX_HISTORY:-7}</maxHistory>
            <totalSizeCap>${LOG_FILE_TOTAL_SIZE_CAP:-0}</totalSizeCap>
        </rollingPolicy>
    </appender>
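
For context, the trigger is simply logging from common-pool worker threads. A minimal, hypothetical reproduction sketch (our own, not from the original report; one common way to end up on those threads is a parallel stream, and it assumes SLF4J is backed by the logback setup above):

    import java.util.stream.IntStream;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical reproducer: a parallel stream executes its work on
    // ForkJoinPool.commonPool() worker threads, so every log call below
    // runs on a common-pool thread, the scenario from the issue title.
    public class CommonPoolLoggingRepro {

        private static final Logger log = LoggerFactory.getLogger(CommonPoolLoggingRepro.class);

        public static void main(String[] args) {
            IntStream.range(0, 1_000_000)
                    .parallel()
                    .forEach(i -> log.info("message {}", i));
        }
    }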

And here are some memory leak reports:

[Screenshots: Eclipse Memory Analyzer leak suspect reports, taken 2021-12-20 at 22:10 and 22:11]

Please let me know if I need to provide any additional information.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 3
  • Comments: 14 (2 by maintainers)

Top GitHub Comments

3 reactions
spkrka commented on May 3, 2022

I’ve been thinking a bit about how to make ThreadLocal in Java 17 behave the same way as in older Java versions. I ended up with a subclass of ThreadLocal that uses a fallback map keyed on thread ID and that tracks thread deaths so it can clean up the fallback map (both to prevent assigning old values to a new thread and to avoid leaking data).

If you think this seems useful, feel free to use it as is or modify it: https://github.com/spotify/sparkey-java/pull/55
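
For illustration, here is a minimal sketch of that idea (our own hypothetical FallbackThreadLocal, not the code from the linked PR; it assumes stored values are never null):

    import java.lang.ref.Reference;
    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.WeakReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the approach described above: values are mirrored into a
    // fallback map keyed on thread ID, so they can be restored after the
    // JVM erases the thread-local slot (as common-pool workers do). Each
    // map entry holds only a weak reference to its owning thread; once the
    // thread dies, the entry is purged, both to avoid handing an old value
    // to a new thread that reuses the ID and to avoid leaking the value.
    public class FallbackThreadLocal<T> extends ThreadLocal<T> {

        private static final class Entry<T> extends WeakReference<Thread> {
            final long threadId;
            final T value;

            Entry(Thread thread, T value, ReferenceQueue<Thread> queue) {
                super(thread, queue);
                this.threadId = thread.getId();
                this.value = value;
            }
        }

        private final Map<Long, Entry<T>> fallback = new ConcurrentHashMap<>();
        private final ReferenceQueue<Thread> deadThreads = new ReferenceQueue<>();

        @Override
        public T get() {
            purgeDeadThreads();
            T value = super.get();
            if (value != null) {
                return value;
            }
            // Thread-local slot is empty; restore from the fallback map,
            // but only if the entry really belongs to this live thread.
            Entry<T> entry = fallback.get(Thread.currentThread().getId());
            if (entry != null && entry.get() == Thread.currentThread()) {
                super.set(entry.value);
                return entry.value;
            }
            return null;
        }

        @Override
        public void set(T value) {
            purgeDeadThreads();
            super.set(value);
            Thread current = Thread.currentThread();
            fallback.put(current.getId(), new Entry<>(current, value, deadThreads));
        }

        @Override
        public void remove() {
            purgeDeadThreads();
            super.remove();
            fallback.remove(Thread.currentThread().getId());
        }

        // Drop map entries whose owning thread has been garbage collected.
        private void purgeDeadThreads() {
            Reference<? extends Thread> ref;
            while ((ref = deadThreads.poll()) != null) {
                Entry<?> dead = (Entry<?>) ref;
                fallback.remove(dead.threadId, dead);
            }
        }
    }

Note that the sketch treats a null from super.get() as "slot was erased", and the identity check before restoring is what prevents a recycled thread ID from picking up a dead thread’s value.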

2 reactions
brenuart commented on May 6, 2022

Hi @caesar-ralf, I haven’t had the time yet to look at your approach and think through the issue. However, I checked the issue you created against OpenJDK, and it looks like the observed behaviour of the ForkJoinPool is as expected. This means we definitely have something to do to address this use case: provide a workaround, or add something about it to the documentation. I’ll keep updating this issue with our decision when I have the time to come back to it. Thanks for your investigation and feedback.
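
For anyone wanting to observe the behaviour in question, here is a small hypothetical demo (class name ours). Depending on the JVM version and how the common pool’s worker threads are created, the second task may print null even when it runs on the same worker thread, because workers like InnocuousForkJoinWorkerThread erase their thread locals between top-level tasks:

    import java.util.concurrent.ForkJoinPool;

    // Demonstration sketch: set a ThreadLocal in one common-pool task,
    // then read it back in a second task. If the pool's workers erase
    // thread locals between tasks, the second read returns null.
    public class CommonPoolThreadLocalDemo {

        private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

        public static void main(String[] args) throws Exception {
            ForkJoinPool pool = ForkJoinPool.commonPool();

            pool.submit(() -> {
                CONTEXT.set("hello");
                System.out.println(Thread.currentThread().getName()
                        + " after set: " + CONTEXT.get());
            }).get();

            pool.submit(() -> {
                System.out.println(Thread.currentThread().getName()
                        + " next task: " + CONTEXT.get());
            }).get();
        }
    }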


Top Results From Across the Web

How Fork Join Pool caused Memory Leak!!
Solution. So the solution is pretty easy so you either shutdown the fork join pool using .shutdown() function before exiting that function or ...

Managing an OutOfMemory Exception Caused by Thread ...
A thread leak is causing a memory shortage at the server, which will cause the JVM process to throw out an OOM error...

Log thread memory leak - java - Stack Overflow
Anyway whenever I set logging on and it runs the writetolog() after a while I get heapoutofmemory exception. This is caused by the...

[JDK-8172726] ForkJoin common pool retains a reference to ...
ForkJoin common pool retains a reference to the thread context class loader ... The console reports that a memory leak is detected and...

JDK-8172726 ForkJoin common pool retains a ... - Bug ID
This is highly likely to cause a memory leak. 2. The InnocuousForkJoinWorkerThread does not modify Thread.contextClassLoader so undesirable class loader ...
