Encountering memory issues after upgrading to Spring Boot 2.7.1
After upgrading our applications to Spring Boot 2.7.1, we have started encountering memory issues.
Our Observations
Changes were made in 2.7.1 to the org.springframework.boot.loader.jar package around closing jar files. A list of nested jars was added, and that list is what is holding on to objects in our heap dump.
We can see 22,000 JarFile objects in the heap dump where we would expect about 118, as that is how many nested jars the application contains. Apart from the expected jar files, the other 22,000-odd all have url = jar:file:/deployments/application-web-1.0.0.jar!/BOOT-INF/lib/swagger-ui-4.11.1.jar!/
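For context on why that particular URL shows up: swagger-ui-4.11.1.jar appears to be a webjar, so its resources are resolved out of the nested jar at runtime. Below is a minimal sketch of how a nested-jar URL like the one above arises (the exact resource path here is an assumption for illustration; any resource inside that nested jar behaves the same way):

```java
import java.net.URL;

// Minimal sketch: resolving a resource that lives inside a nested jar goes through the
// Spring Boot loader, which has to open that nested jar. The resource path is assumed.
public class NestedJarUrlExample {
    public static void main(String[] args) {
        URL url = Thread.currentThread().getContextClassLoader()
                .getResource("META-INF/resources/webjars/swagger-ui/4.11.1/index.html");
        // Run from the packaged application-web-1.0.0.jar, this prints something like:
        // jar:file:/deployments/application-web-1.0.0.jar!/BOOT-INF/lib/swagger-ui-4.11.1.jar!/META-INF/resources/webjars/swagger-ui/4.11.1/index.html
        System.out.println(url);
    }
}
```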
Please find the heap dump and Dynatrace screenshots below for reference.


Note: no changes were made to the org.springframework.boot.loader.jar package in 2.7.2, so I suspect the problem will still be present there.
Can someone please look into this?
Comments
I believe we’re using the JarLauncher - we’re just using the spring-boot-maven-plugin and we’re not overriding the ‘layout’.
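In case it helps, a quick way to double-check which launcher the packaged jar actually uses is to read the Main-Class attribute from the fat jar's manifest. This is just a minimal sketch, assuming the jar path from the heap dump URL; with the default 'jar' layout of spring-boot-maven-plugin it should print org.springframework.boot.loader.JarLauncher:

```java
import java.util.jar.JarFile;

// Minimal sketch: print the launcher recorded in the fat jar's manifest.
// The jar path is taken from the URL seen in the heap dump.
public class LauncherCheck {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile("/deployments/application-web-1.0.0.jar")) {
            // Expected with the default layout: org.springframework.boot.loader.JarLauncher
            System.out.println(jar.getManifest().getMainAttributes().getValue("Main-Class"));
        }
    }
}
```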
Hi @philwebb - I work with @loveshjain and am looking at this. Our problem is that we haven’t been able to replicate this in non-prod. I’m going to see what I can do in that regard in the next couple of days.
I’m speculating that this might have to do with the JarFile objects being cleared by the GC (as they are soft-referenced?) when there is memory pressure, then being reloaded later but never being removed from the nested jars list. I’ve never really looked at custom class loader code, though, so I’m a bit in the dark. Does that sound reasonable - to try getting the JVM close to the heap limit and see if that triggers it? I’m just looking for things to try, really. A rough sketch of the pattern I have in mind is below.
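To be clear, the following is purely a hypothetical illustration of the pattern I’m describing, not the actual Spring Boot loader code - the class and field names are made up. It just shows how a soft-referenced handle combined with an append-only "close later" list grows by one retained entry per clear/re-open cycle:

```java
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the suspected leak pattern (NOT Spring Boot's actual code).
public class NestedJarLeakSketch {

    // List the parent jar keeps so nested jar handles can be closed along with it.
    private final List<Object> nestedJars = new ArrayList<>();

    // Soft-referenced cache of the currently open handle for one nested jar.
    private SoftReference<Object> cachedHandle;

    Object getNestedJar() {
        Object handle = (cachedHandle != null) ? cachedHandle.get() : null;
        if (handle == null) {
            handle = new Object();                      // stands in for a real JarFile
            cachedHandle = new SoftReference<>(handle);
            nestedJars.add(handle);                     // re-registered; stale entries are never removed
        }
        return handle;
    }

    public static void main(String[] args) {
        NestedJarLeakSketch sketch = new NestedJarLeakSketch();
        for (int i = 0; i < 5; i++) {
            sketch.getNestedJar();
            sketch.cachedHandle.clear();                // simulate the GC clearing the soft reference
        }
        System.out.println(sketch.nestedJars.size());   // prints 5: one retained entry per cycle
    }
}
```

If something like that is what is happening, then the way to reproduce it would be to push the heap close to its limit (so the soft references actually get cleared) while something keeps resolving resources out of the swagger-ui nested jar.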
It was fairly odd behaviour in production - the application ran for several days without us noticing an issue, and when we did hit it, it wasn’t during a high-load period and it seemed to self-correct. In the tenured gen picture above it looks like the GC issue started at about 10am, but we didn’t notice until it began affecting response times badly at about 9:45pm.
We have had similar issues on multiple services, though, and rolling back to 2.7.0 resolved them.