A lot of ChannelInputShutdownReadComplete events fired for NIO channel with ALLOW_HALF_CLOSURE
### Expected behavior
A single `ChannelInputShutdownReadComplete` event is fired for a closed socket.
### Actual behavior
A `ChannelInputShutdownReadComplete` event is fired on every NIO event loop run, and the `NioEventLoop` begins consuming a lot of CPU.
### Steps to reproduce
Create an upstream connection channel with `ALLOW_HALF_CLOSURE` enabled and `AUTO_READ` disabled, perform a single HTTP keep-alive request, and wait until the connection is closed by the remote server.
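The channel setup from the reproduction steps can be sketched as a minimal Netty client configuration. This is an illustrative configuration fragment, not the reporter's actual code; the class name and the handler body are assumptions, and a real repro would also need an HTTP codec and a peer to connect to.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.ChannelInputShutdownReadComplete;
import io.netty.channel.socket.nio.NioSocketChannel;

// Hypothetical repro sketch: upstream connection with half-closure
// allowed and auto-read disabled, as described in the steps above.
public final class HalfClosureRepro {
    static Bootstrap configure() {
        return new Bootstrap()
            .group(new NioEventLoopGroup())
            .channel(NioSocketChannel.class)
            // The two options from the reproduction steps:
            .option(ChannelOption.ALLOW_HALF_CLOSURE, true)
            .option(ChannelOption.AUTO_READ, false)
            .handler(new ChannelInboundHandlerAdapter() {
                @Override
                public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
                    if (evt == ChannelInputShutdownReadComplete.INSTANCE) {
                        // With the bug present, this fires on every event
                        // loop run instead of exactly once.
                        System.out.println("ChannelInputShutdownReadComplete");
                    }
                    ctx.fireUserEventTriggered(evt);
                }
            });
    }
}
```

Because `AUTO_READ` is off, the caller must request each read explicitly via `channel.read()`; the bug manifests after the remote server half-closes the connection.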
### Netty version
4.1.16.Final
### JVM version (`java -version`)
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
### OS version (`uname -a`)
Darwin 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64
macOS Sierra 10.12.6
Also confirmed on Linux.
### Suspicious code fragment
In `io.netty.channel.nio.AbstractNioByteChannel::read`, `readPending` is not cleared when `allocHandle.lastBytesRead() <= 0`.
A negative read count indicates that the socket is closed, but every subsequent selection of this socket's key with `OP_READ` will succeed immediately (possibly related to http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4531726).
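The selector behavior behind this busy loop can be observed with plain `java.nio`, no Netty involved: once the peer closes the connection, a key registered for `OP_READ` is reported ready on every `select()` call and `read()` keeps returning `-1`, until the interest op is removed (which is what Netty's `removeReadOp()` does). This is a self-contained sketch; the class name and loop counts are illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class EofSelectDemo {
    public static void main(String[] args) throws IOException {
        // Loopback pair: the "remote" side closes immediately, sending FIN.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        server.accept().close();

        client.configureBlocking(false);
        Selector selector = Selector.open();
        SelectionKey key = client.register(selector, SelectionKey.OP_READ);

        ByteBuffer buf = ByteBuffer.allocate(16);
        int readyCount = 0;
        for (int i = 0; i < 5; i++) {
            selector.select(200);                   // key is ready every time
            if (selector.selectedKeys().remove(key)) {
                readyCount++;
                client.read(buf);                   // returns -1: end of stream
            }
        }
        System.out.println("ready selections: " + readyCount);

        // Dropping OP_READ (the effect of Netty's removeReadOp()) stops the spin.
        key.interestOps(0);
        selector.select(200);
        System.out.println("ready after removing OP_READ: "
                + selector.selectedKeys().contains(key));

        selector.close();
        client.close();
        server.close();
    }
}
```

Since selection is level-triggered and end-of-stream readiness never clears, only removing the interest op (or closing the channel) stops the key from being selected again.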
### Fix proposal
Call `removeReadOp()` on closed sockets.
### Issue Analytics
- State:
- Created 6 years ago
- Comments: 11 (6 by maintainers)
### Top GitHub Comments
@normanmaurer, I've checked against @khitrin's source with 4.1.16.Final, where the issue is present on Linux, and with 4.1.17.Final-SNAPSHOT from your PR #7259, which behaves the same as on MacOSX. So, the PR fixes the issue the same way as on Mac.

@khitrin how did you find the suspicious code?