Performance Issue / Ktor & Netty
Ktor Version
1.2.0
Ktor Engine Used (client or server and name)
Netty Server
JVM Version, Operating System and Relevant Context
11.0.2, Debian (docker)
4 Cores -Xmx2G (parallelism = 4)
Default settings
Feedback
For most cases performance is pretty good: we get < 50 ms including the DB save (all done asynchronously). Unfortunately, from time to time we get requests that are stuck for > 1 s, in the worst cases even > 20 s. The server received the request, but the processing route was not invoked yet. Below is the part of the NewRelic instrumentation for such a slow call. The problem is that Ktor is not yet well instrumented, and the only consecutive calls I see taking that long are these:
```
0    | 0.00% | Truncated: NettyUpstreamDispatcher                            |       |  0.000 s
0    | 0.00% | HttpServerExpectContinueHandler.channelRead()                 | Async |  0.000 s
0    | 0.00% | HttpServerExpectContinueHandler.channelRead()                 | Async |  0.000 s
0    | 0.00% | RequestBodyHandler.channelRead()                              | Async |  0.000 s
16.0 | 0.08% | NettyApplicationCallHandler.channelRead()                     | Async | 20.025 s
16.0 | 0.08% | NewRelicFeature.wrapIntoNewRelicTransaction()                 |       | 20.025 s
16.0 | 0.08% | com.revolut.eventstore.api.write.EventsControllerKt/saveEvent |       | 20.025 s
1.0  | 0.00% | Application code (in com.*.api.write.EventsControllerKt/saveEvent)
```
What I am trying to understand is what could happen between:
```
0    | 0.00% | RequestBodyHandler.channelRead()          | Async |  0.000 s
16.0 | 0.08% | NettyApplicationCallHandler.channelRead() | Async | 20.025 s
```
Why did it take so long? It's hard to tell where there might be a blocking or under-resourced part, so I would really appreciate help with this.
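One plausible explanation for a gap between `RequestBodyHandler.channelRead()` and `NettyApplicationCallHandler.channelRead()` is that the call is dispatched onto a worker pool whose threads are all busy with blocking work, so the handler sits in a queue. This is a hedged, plain-JDK sketch of that symptom (all names here are illustrative, not Ktor internals); with 4 cores and `parallelism = 4`, a handful of blocking calls is enough to saturate the pool:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical reproduction of the symptom: when every worker thread in a
// small pool is blocked, a newly queued task (standing in for the route
// handler being dispatched) waits even though the request was already read.
public class BlockedPoolDemo {
    // Returns how long a queued "handler" waited while the pool was blocked.
    static long queueDelayMs(long blockForMs) {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        CountDownLatch release = new CountDownLatch(1);
        try {
            // Two long blocking calls (e.g. a synchronous DB driver) occupy the whole pool.
            for (int i = 0; i < 2; i++) {
                workers.submit(() -> { release.await(); return null; });
            }

            long queuedAt = System.nanoTime();
            Future<Long> handler =
                workers.submit(() -> (System.nanoTime() - queuedAt) / 1_000_000);

            Thread.sleep(blockForMs);  // the "fast" handler is still queued here
            release.countDown();       // blocking calls finish, freeing the pool
            return handler.get();      // measured time spent waiting in the queue
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            workers.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("handler waited " + queueDelayMs(300) + " ms in the queue");
    }
}
```

The queued task waits roughly as long as the pool stays blocked, which would scale the same way from milliseconds here to the 20 s seen in the trace.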
Issue Analytics
- State:
- Created 4 years ago
- Comments: 13 (2 by maintainers)
Top GitHub Comments
See #1124 - should be implemented soon!
Your code may contain blocking code.
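If that diagnosis is right, the usual remedy is to move blocking work off the call-handling threads onto a dedicated pool; in actual Ktor code this is typically `withContext(Dispatchers.IO) { ... }` around the blocking call. Below is a hedged plain-JDK sketch of the pattern (pool names are illustrative): even while two "slow" requests are in flight, a new handler starts promptly because their blocking part was offloaded:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the fix: handlers run on a small call pool; long blocking work
// is handed to a separate pool so new handlers are never queued behind it.
public class OffloadDemo {
    static long fastHandlerDelayMs() {
        ExecutorService callPool = Executors.newFixedThreadPool(2);     // like the call-handling threads
        ExecutorService blockingPool = Executors.newCachedThreadPool(); // like Dispatchers.IO
        CountDownLatch release = new CountDownLatch(1);
        try {
            // Two "slow" requests arrive, but each offloads its blocking save.
            for (int i = 0; i < 2; i++) {
                callPool.submit(() -> {
                    blockingPool.submit(() -> { release.await(); return null; });
                    // the callPool thread is released immediately
                });
            }

            long queuedAt = System.nanoTime();
            // A new handler dispatched to the call pool starts almost at once.
            return callPool.submit(() -> (System.nanoTime() - queuedAt) / 1_000_000).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            release.countDown();
            callPool.shutdown();
            blockingPool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("fast handler waited " + fastHandlerDelayMs() + " ms");
    }
}
```

Contrast this with the blocked-pool case: here the call pool's threads only hand work off, so dispatch latency stays near zero regardless of how long the blocking saves take.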