Netty causes ClassLoader leak in container environments
I’m running Netty in a container-like environment where modules can be loaded and unloaded. Unloading a module that has used Netty causes its ClassLoader to be leaked.
I’m able to reproduce with this minimal example (running in the container environment, obviously):
// Netty, SLF4J, and JDK imports shown; the container SDK imports
// (AbstractGatewayModuleHook, GatewayContext, LicenseState) are omitted here.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.resolver.DefaultAddressResolverGroup;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.concurrent.GlobalEventExecutor;
import io.netty.util.internal.InternalThreadLocalMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.TimeUnit;

public class GatewayHook extends AbstractGatewayModuleHook {

    private final Logger logger = LoggerFactory.getLogger(getClass());
    private final NioEventLoopGroup eventLoop = new NioEventLoopGroup(0);

    @Override
    public void setup(GatewayContext gatewayContext) {
        logger.info("setup()");
    }

    @Override
    public void startup(LicenseState licenseState) {
        logger.info("startup()");

        try {
            // Attempt a connection (expected to fail) so the event loop and
            // resolver machinery are exercised at least once.
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(eventLoop)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
                .option(ChannelOption.TCP_NODELAY, true)
                .handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel socketChannel) throws Exception {
                        logger.info("initChannel");
                    }
                });

            bootstrap.connect("localhost", 1234).get(2, TimeUnit.SECONDS);
        } catch (Throwable t) {
            logger.error("failed getting un-gettable endpoints: {}", t.getMessage(), t);
        }
    }

    @Override
    public void shutdown() {
        logger.info("shutdown()");

        // Best-effort shutdown of every global-looking Netty resource found so far.
        try {
            eventLoop.shutdownGracefully().get();
        } catch (Throwable e) {
            logger.error("Error waiting for event loop shutdown: {}", e.getMessage(), e);
        }

        try {
            GlobalEventExecutor.INSTANCE.shutdownGracefully().get();
        } catch (Throwable e) {
            logger.error("Error waiting for GlobalEventExecutor shutdown: {}", e.getMessage(), e);
        }

        try {
            DefaultAddressResolverGroup.INSTANCE.close();
        } catch (Throwable e) {
            logger.error("Error closing DefaultAddressResolverGroup: {}", e.getMessage(), e);
        }

        InternalThreadLocalMap.destroy();
        FastThreadLocal.removeAll();
    }
}
I’ve tried shutting down as many global-looking Netty resources as I could find, but it still leaks. I’ve uploaded a JProfiler heap dump here. If you group by ClassLoader and then select the only NonLockingURLClassLoader instance, you’ll see the Netty classes still hanging around.
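For reference, Netty 4.1 also exposes blocking waits for its two global helper threads (the ThreadDeathWatcher thread and the GlobalEventExecutor worker). Below is a minimal sketch of calling them at the end of shutdown(); the helper class name and the 5-second timeouts are illustrative, and it is not established here that these waits resolve this particular leak.

import io.netty.util.ThreadDeathWatcher;
import io.netty.util.concurrent.GlobalEventExecutor;

import java.util.concurrent.TimeUnit;

public final class NettyQuiescence {

    private NettyQuiescence() {
    }

    // Illustrative helper: block until Netty's global helper threads have terminated.
    public static void awaitNettyQuiescence() throws InterruptedException {
        // Waits until the thread-death watcher has nothing left to watch and its thread exits.
        ThreadDeathWatcher.awaitInactivity(5, TimeUnit.SECONDS);
        // Waits until the GlobalEventExecutor's worker thread has drained its queue and exited.
        GlobalEventExecutor.INSTANCE.awaitInactivity(5, TimeUnit.SECONDS);
    }
}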
I have of course tested this without invoking any Netty code and it does not leak.
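The leak check itself can be made explicit with plain JDK APIs: hold only a WeakReference to the module’s ClassLoader, unload the module, and see whether the reference clears. A rough sketch (class and method names are illustrative; System.gc() is only a request, hence the retry loop):

import java.lang.ref.WeakReference;

public final class ClassLoaderLeakCheck {

    private ClassLoaderLeakCheck() {
    }

    // Returns true if the ClassLoader behind the reference was collected within ~5 seconds.
    public static boolean isCollected(WeakReference<ClassLoader> ref) throws InterruptedException {
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();      // request a collection; not guaranteed to run immediately
            Thread.sleep(100);
        }
        return ref.get() == null;  // still reachable means something still pins the loader
    }
}

A pass/fail signal like this narrows things down quickly, but a heap dump (as attached) is still needed to find the actual GC root.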
Netty version
Netty 4.1.9.Final
JVM version (e.g. java -version)
java version "1.8.0_112"
OS version (e.g. uname -a)
macOS 10.12.3
Issue Analytics
- Created 6 years ago
- Comments: 21 (11 by maintainers)
@kevinherron go for 4.1.22 (4.0.x has been declared EOL).
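On the upgrade: a quick way to confirm which Netty artifacts and versions a module’s ClassLoader actually resolves is Netty’s own io.netty.util.Version. A small sketch (the class name PrintNettyVersion is illustrative):

import io.netty.util.Version;

import java.util.Map;

public class PrintNettyVersion {
    public static void main(String[] args) {
        // Version.identify() scans the classpath for Netty's version metadata
        // and returns a map of artifact id -> Version.
        for (Map.Entry<String, Version> entry : Version.identify().entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue().artifactVersion());
        }
    }
}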
@normanmaurer do you need anything else?