
netty4 engine causing Response entity too large for some requests

See original GitHub issue

Response entity too large: DefaultHttpResponse(decodeResult: success, version: HTTP/1.1). In the config we have set maxResponseKB: 102400, and the response is < 100 MB.

I tried running linkerd 0.8.6 using the docker image buoyantio/linkerd:0.8.6, downloaded various files served by nginx:latest, and found that:

  • netty3 does not seem to care about maxResponseKB. I was able to download 50 MB files both without any maxResponseKB config and with maxResponseKB: 20000.
  • netty4 does not seem to care about maxResponseKB either; even with maxResponseKB: 20000 it refuses to serve any file >8192 bytes (which happens to be the default value of maxChunkKB).
  • netty4 does respect maxChunkKB. With maxChunkKB: 20000, I was able to download a 10 MB file without problems, while a 50 MB download failed (as it arguably should).

As I understand it, maxChunkKB should only matter when responses are streamed and/or use HTTP chunked encoding. With streamingEnabled: false, the netty4 backend respects maxResponseKB as expected.
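For reference, the non-streaming setup described above might look like this in a router config. This is a sketch only: the key placement follows 0.8.x-era linkerd http router configs, and the port is illustrative — verify field names against the docs for your linkerd version.

```yaml
# Sketch of the "streaming off" workaround; not a recommended production config.
routers:
- protocol: http
  # With streaming disabled, responses are buffered whole and
  # maxResponseKB is the limit that applies.
  streamingEnabled: false
  maxResponseKB: 102400
  servers:
  - port: 4140   # illustrative port
```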

This comment hints that netty4 may result in those “entity too large” errors randomly: https://github.com/linkerd/linkerd/pull/714#issuecomment-251841032

Here is the log from a curl request going through a netty4-backed linkerd:

curl -v http://X/static/graphlib-dot.min.js
*   Trying x.x.x.x...
* Connected to X (x.x.x.x) port 80 (#0)
> GET /static/graphlib-dot.min.js HTTP/1.1
> User-Agent: curl/7.40.0
> Host: X
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Date: Wed, 25 Jan 2017 10:37:47 GMT
< Content-Type: text/plain
< Content-Length: 694
< Connection: keep-alive
< l5d-err: Response+entity+too+large%3A+DefaultHttpResponse%28decodeResult%3A+success%2C+version%3A+HTTP%2F1.1%29%0AHTTP%2F1.1+200+OK%0ADate%3A+Wed%2C+25+Jan+2017+10%3A37%3A47+GMT%0AAccept-Ranges%3A+bytes%0AContent-Type%3A+application%2Fjavascript%0ALast-Modified%3A+Tue%2C+19+Jul+2016+14%3A21%3A14+GMT%0AContent-Length%3A+114757%0AServer%3A+Jetty%289.2.z-SNAPSHOT%29+at+remote+address%3A+%2FX.X.X.X%3AX+from+service%3A+%23%2Fio.l5d.serversets%2Faurora%2F<ROLE>%2F<STAGE>%2F<JOB>.+Remote+Info%3A+Upstream+Address%3A+%2FX.X.X.X%3A28255%2C+Upstream+Client+Id%3A+Not+Available%2C+Downstream+Address%3A+%2FX.X.X.X%3AX%2C+Downstream+Client+Id%3A+%23%2Fio.l5d.serversets%2Faurora%2F<ROLE>%2F<STAGE>%2F<TASK>%2C+Trace+Id%3A+4bbcdd91e901ae91.17ec09cce1b2bae3%3C%3A4bbcdd91e901ae91
<
Response entity too large: DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 200 OK
Date: Wed, 25 Jan 2017 10:37:47 GMT
Accept-Ranges: bytes
Content-Type: application/javascript
Last-Modified: Tue, 19 Jul 2016 14:21:14 GMT
Content-Length: 114757
* Connection #0 to host X left intact
Server: Jetty(9.2.z-SNAPSHOT) at remote address: /X.X.X.X:X from service: #/io.l5d.serversets/aurora/<ROLE>/<STAGE>/<TASK>. Remote Info: Upstream Address: /X.X.X.X:X, Upstream Client Id: Not Available, Downstream Address: /X.X.X.X:X, Downstream Client Id: #/io.l5d.serversets/aurora/<ROLE>/<STAGE>/<TASK>, Trace Id: 4bbcdd91e901ae91.17ec09cce1b2bae3<:4bbcdd91e901ae91

and linkerd error log:

E 0125 19:38:12.565 THREAD75 TraceId:a56865b4720655b7: service failure
com.twitter.finagle.UnknownChannelException: Response entity too large: DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 200 OK
Date: Wed, 25 Jan 2017 19:38:12 GMT
Accept-Ranges: bytes
Content-Type: application/javascript
Last-Modified: Tue, 19 Jul 2016 14:21:14 GMT
Content-Length: 114757
Server: Jetty(9.2.z-SNAPSHOT) at remote address: /x.x.x.x:x from service: #/io.l5d.serversets/aurora/role/prod/task. Remote Info: Upstream Address: /.119.221:33609, Upstream Client Id: Not Available, Downstream Address: /x.x.x.x:x, Downstream Client Id: #/io.l5d.serversets/aurora/role/prod/task, Trace Id: a56865b4720655b7.103a316a7ec98d4e<:a56865b4720655b7
	at com.twitter.finagle.ChannelException$.apply(Exceptions.scala:259)
	at com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1.exceptionCaught(ChannelTransport.scala:185)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:275)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:267)
	at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:861)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:375)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:435)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:280)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:396)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at com.twitter.finagle.netty4.channel.DirectToHeapInboundHandler$.channelRead(DirectToHeapInboundHandler.scala:24)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at com.twitter.finagle.netty4.channel.ChannelRequestStatsHandler.channelRead(ChannelRequestStatsHandler.scala:42)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at com.twitter.finagle.netty4.channel.ChannelStatsHandler.channelRead(ChannelStatsHandler.scala:92)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at com.twitter.finagle.util.ProxyThreadFactory$$anonfun$newProxiedRunnable$1$$anon$1.run(ProxyThreadFactory.scala:19)
	at java.lang.Thread.run(Thread.java:745)
Caused by: io.netty.handler.codec.TooLongFrameException: Response entity too large: DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 200 OK
Date: Wed, 25 Jan 2017 19:38:12 GMT
Accept-Ranges: bytes
Content-Type: application/javascript
Last-Modified: Tue, 19 Jul 2016 14:21:14 GMT
Content-Length: 114757
Server: Jetty(9.2.z-SNAPSHOT)
	at io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:245)
	at io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:85)
	at io.netty.handler.codec.MessageAggregator.invokeHandleOversizedMessage(MessageAggregator.java:383)
	at io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:238)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	... 43 more

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Reactions: 2
  • Comments: 7 (6 by maintainers)

Top GitHub Comments

1 reaction
adleong commented on Feb 11, 2017

I believe I can explain the observed behavior. While it’s confusing and not totally consistent, I believe it is more or less correct.

  • netty3 seemingly ignoring maxResponseKB. I’m guessing that the response was chunk encoded. I believe that maxResponseKB only applies to non-chunked responses.
  • netty4 seemingly ignoring maxResponseKB. What is the chunk size of the response? If the chunk size is greater than the default maxChunkKB, then this error will occur no matter how big maxResponseKB is.
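The interaction described in the bullets above can be modeled with a small sketch. This is hypothetical illustration code, not linkerd or Netty source: it shows why a per-chunk limit can reject a response no matter how large the total-size limit is.

```python
# Hypothetical model of two independent size limits, as described above:
# a per-chunk limit (maxChunkKB) and a total-size limit (maxResponseKB).
# Names and defaults mirror the config keys discussed in this issue.

def aggregate(chunks, max_chunk_kb=8, max_response_kb=102400):
    """Raise if any single chunk exceeds the chunk limit, or if the
    aggregated total exceeds the response limit. Sizes are in bytes."""
    total = 0
    for chunk in chunks:
        if len(chunk) > max_chunk_kb * 1024:
            raise ValueError("Response entity too large (chunk limit)")
        total += len(chunk)
        if total > max_response_kb * 1024:
            raise ValueError("Response entity too large (total limit)")
    return total

# A 10 KB body arriving as one 10 KB chunk trips the default 8 KB chunk
# limit, even though max_response_kb is enormous:
try:
    aggregate([b"x" * 10 * 1024])
except ValueError as exc:
    print(exc)

# The same body split into 1 KB chunks passes both checks:
aggregate([b"x" * 1024] * 10)
```

In other words, raising maxResponseKB alone cannot help if individual chunks are already over maxChunkKB — which matches the reporter's observation that files larger than 8192 bytes failed regardless of maxResponseKB.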

Our best guidance is to never turn streaming off (there isn’t really a good reason to ever do this) and make sure maxChunkKB is set large enough for the chunks you will be receiving. This represents the maximum amount of a message linkerd will buffer and is a much more useful parameter than maxResponseKB. We may even consider deprecating maxResponseKB in the future.
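The guidance above might translate to a config like the following. Again a sketch only: key placement follows 0.8.x-era linkerd http router configs, and the values are illustrative, not recommendations for any particular workload.

```yaml
# Sketch of the recommended direction: leave streaming on and size
# maxChunkKB for the largest chunks you expect to receive.
routers:
- protocol: http
  streamingEnabled: true   # the default; shown here for emphasis
  maxChunkKB: 20000        # illustrative value; must exceed your largest chunk
  servers:
  - port: 4140             # illustrative port
```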

0 reactions
adleong commented on May 5, 2017

I believe this is working “as intended”.
