Apparent memory leak when writing a TextWebSocketFrame on websocket connect.
Expected behavior
I’m trying to write a Netty-based WebSocket server that is expected to receive high load. The basic reproducible case involves simply opening a WebSocket connection and having the server send back a welcome message to the client.
Actual behavior
While load testing during development, I’m seeing excessive memory utilization by the process that cannot be attributed to any identifiable source (at least, not identifiable by me; hence this issue). Under heavy load, the process eventually consumes all available memory on the host before it is killed by a separate oom-reaper process.
This memory utilization only occurs when the server starts writing new, non-empty TextWebSocketFrames.
Steps to reproduce
Modify the example WebSocket server to send back any non-empty greeting on connect, then apply heavy load. (Note: the definition of heavy load will vary based on the memory available to the service.)
Minimal yet complete reproducer code (or URL to code)
Sample addition to WebSocketFrameHandler
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    super.userEventTriggered(ctx, evt);
    if (evt instanceof HandshakeComplete) {
        websocketConnected(ctx);
    }
}

private void websocketConnected(ChannelHandlerContext ctx) {
    ctx.channel().writeAndFlush(new TextWebSocketFrame("{}"));
}
For a fully functioning app that should exhibit the behavior in question, see this repo: https://github.com/jcohen/netty-ws-leak
Netty version
4.1.43.Final
JVM version (e.g. java -version)
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (IcedTea 3.12.0) (Alpine 8.212.04-r0)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
OS version (e.g. uname -a)
Linux a1d94bdbf936 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 Linux
Issue Analytics
- State:
- Created 4 years ago
- Comments: 17 (9 by maintainers)
Top GitHub Comments
@jcohen Alright, I did spend some time looking into it, and sadly there is not much we can do on our side in Netty…
The problem is that with compression enabled it will use permessage-deflate without server_no_context_takeover. For more info on what this actually means, see https://tools.ietf.org/html/rfc7692#section-7.1.1.1.
With this configuration we need to create a new Deflater (which is part of the JDK: https://docs.oracle.com/javase/8/docs/api/java/util/zip/Deflater.html) and keep it “alive” until the connection is closed. The problem here is that Deflater is implemented using JNI and so reserves native memory that will not be released until the connection is closed and the deflater.end() method can be called by us. So the only way to “guard” against such a problem is to limit the maximum number of concurrent connections that use the extension.
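The native-memory behavior described here can be observed with the JDK’s Deflater directly. The following is a minimal standalone sketch (not Netty code; the class name is hypothetical) illustrating why the deferred end() call matters:

```java
import java.util.zip.Deflater;

public class DeflaterDemo {
    public static void main(String[] args) {
        // Each Deflater instance allocates off-heap zlib state via JNI.
        // That native memory is not reclaimed by ordinary GC pressure;
        // it is only released promptly when end() is called.
        Deflater deflater = new Deflater();
        deflater.setInput("{}".getBytes());
        deflater.finish();

        byte[] out = new byte[64];
        int compressedLen = deflater.deflate(out);
        System.out.println("compressed bytes: " + compressedLen);

        // Under permessage-deflate without server_no_context_takeover,
        // Netty must keep one such object alive per connection, so end()
        // (and the native free) is deferred until the connection closes.
        deflater.end();
    }
}
```

With many concurrent compressed connections, this per-connection native allocation is invisible to heap profilers, which matches the “unidentifiable” memory growth reported above.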
Ok thanks … looking into it now