Memory leak in 7.0.1 when logging from a thread coming out of "ForkJoinPool.commonPool()"
Describe the bug
Some of our applications have been updated from version 6.6 to the latest version 7.0.1. Later we found out that one of the applications has a memory leak. We analyzed a heap dump with Eclipse Memory Analyzer (https://www.eclipse.org/mat/) and found that the leak was potentially introduced in the latest version of logstash-logback-encoder.
- logstash-logback-encoder version: 7.0.1
- logback version: 1.2.3
- jackson version: 2.12.5
- java version: 17.0.1
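The issue does not include the application code that triggers the leak, but the pattern named in the title (logging from a thread coming out of ForkJoinPool.commonPool()) can be sketched as follows. This is a hypothetical reproducer, not code from the reporter's application; the class name, loop count, and log message are made up for illustration:
```java
import java.util.concurrent.CompletableFuture;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical reproducer: runAsync without an explicit executor runs its task on a
// ForkJoinPool.commonPool() worker thread, so every log statement below is emitted
// from such a thread.
public class CommonPoolLoggingRepro {

    private static final Logger log = LoggerFactory.getLogger(CommonPoolLoggingRepro.class);

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            CompletableFuture
                    .runAsync(() -> log.info("logging from {}", Thread.currentThread().getName()))
                    .join();
        }
        // Watch heap usage (for example with Eclipse MAT on a heap dump) while this loop runs.
    }
}
```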
This is our logstash logback configuration:
```xml
<property name="LOG_FILE" value="${LOG_FILE:-${LOG_DIR}/spring.log}"/>

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <timeZone>UTC</timeZone>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
            <maxDepthPerThrowable>30</maxDepthPerThrowable>
            <exclude>sun\.reflect\..*\.invoke.*</exclude>
            <exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
            <rootCauseFirst>true</rootCauseFirst>
            <inlineHash>true</inlineHash>
        </throwableConverter>
    </encoder>
    <file>${LOG_FILE}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <cleanHistoryOnStart>${LOG_FILE_CLEAN_HISTORY_ON_START:-false}</cleanHistoryOnStart>
        <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
        <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
        <maxHistory>${LOG_FILE_MAX_HISTORY:-7}</maxHistory>
        <totalSizeCap>${LOG_FILE_TOTAL_SIZE_CAP:-0}</totalSizeCap>
    </rollingPolicy>
</appender>
```
And here are some memory leak reports:
Please let me know if I need to provide any additional information.
Issue Analytics
- Created: 2 years ago
- Reactions: 3
- Comments: 14 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I’ve been thinking a bit about how to make ThreadLocal in Java 17 behave the same way as in older Java versions, and I ended up with a subclass of ThreadLocal that uses a fallback map keyed on thread id and keeps track of thread deaths to clean up the fallback map (both to prevent assigning old values to a new thread and to avoid leaking data).
If you think this seems useful, feel free to use it as is or modify it: https://github.com/spotify/sparkey-java/pull/55
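The approach described in that comment could look roughly like the sketch below. It is only an illustration of the idea, not the code from the linked pull request; FallbackThreadLocal and everything in it are hypothetical names, and set()/remove() handling is omitted for brevity:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a ThreadLocal subclass backed by a map keyed on thread id, so the value
// survives the per-task thread-local reset done by common-pool worker threads.
public class FallbackThreadLocal<T> extends ThreadLocal<T> {

    // Each entry remembers its owning Thread so dead threads can be detected.
    private record Entry<T>(Thread owner, T value) {}

    private final Supplier<? extends T> supplier;
    private final Map<Long, Entry<T>> fallback = new ConcurrentHashMap<>();

    public FallbackThreadLocal(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    @Override
    public T get() {
        purgeDeadThreads();
        Thread current = Thread.currentThread();
        Entry<T> entry = fallback.compute(current.getId(), (id, existing) -> {
            // Reuse the stored value only if it belongs to this thread; if the thread id
            // was recycled, create a fresh value so a new thread never sees an old one.
            if (existing != null && existing.owner() == current) {
                return existing;
            }
            return new Entry<>(current, supplier.get());
        });
        return entry.value();
    }

    // Drop entries whose owning thread has died, so the fallback map does not leak data.
    private void purgeDeadThreads() {
        fallback.values().removeIf(e -> !e.owner().isAlive());
    }
}
```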
Hi @caesar-ralf, I haven’t had the time yet to have a look at your approach and think about the issue. However, I checked the issue you created against OpenJDK, and it looks like the observed behaviour of the ForkJoinPool is as expected. This means we definitely have something to do to address this use case: provide a workaround or add something about it in the documentation. I’ll keep updating this issue with our decision when I have the time to come back to it. Thanks for your investigations and feedback.
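No workaround has been published at this point in the thread, so the following is only an assumption about one possible mitigation: run logging-heavy async work on an explicitly created executor instead of ForkJoinPool.commonPool(), so that worker threads keep their thread-local state for their entire lifetime. The class and pool names below are hypothetical:
```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical example: route async work through an explicitly created pool so that
// log calls are not made from ForkJoinPool.commonPool() worker threads.
public class DedicatedExecutorExample {

    private static final Logger log = LoggerFactory.getLogger(DedicatedExecutorExample.class);

    public static void main(String[] args) {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            CompletableFuture
                    .runAsync(() -> log.info("logging from {}", Thread.currentThread().getName()),
                              pool) // explicit executor instead of the common pool
                    .join();
        } finally {
            pool.shutdown();
        }
    }
}
```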