chunked encoding + GZIP prevents keep-alive
`GZIPInputStream.read()` starts returning -1 as soon as it has read the gzip trailer. Depending on the underlying `InputStream`, this can happen before the underlying stream has been fully consumed.
In particular, if the underlying `InputStream` is a `ChunkedInputStream`, the `ChunkedInputStream` never reads the final chunk header (which indicates a chunk of length 0).
When `ChunkedInputStream.close()` is called, the stream is still in a non-`STATE_DONE` state, so it never calls `HttpClient.finished()`, which is required for the `HttpClient` object to be placed in the keep-alive cache.
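For context on why the final chunk header matters: a chunked body only ends after the terminal zero-length chunk (`0\r\n\r\n`) has been consumed, so a reader that stops at the last data byte never lets the chunked stream reach its done state. A minimal decoder sketch (illustrative only, not the JDK's `ChunkedInputStream`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedDecoder {
  // Decodes an HTTP/1.1 chunked body and returns the payload. The framing is
  // only fully consumed once the terminal 0-length chunk header is read.
  static byte[] decode(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    while (true) {
      int size = Integer.parseInt(readLine(in), 16); // chunk-size line
      if (size == 0) {          // terminal chunk: stream is now "done"
        readLine(in);           // trailing CRLF after the last-chunk
        return out.toByteArray();
      }
      for (int i = 0; i < size; i++) out.write(in.read());
      readLine(in);             // CRLF that follows the chunk data
    }
  }

  private static String readLine(InputStream in) throws IOException {
    StringBuilder sb = new StringBuilder();
    int c;
    while ((c = in.read()) != -1 && c != '\n') {
      if (c != '\r') sb.append((char) c);
    }
    return sb.toString();
  }

  public static void main(String[] args) throws IOException {
    byte[] wire = "5\r\nhello\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII);
    InputStream in = new ByteArrayInputStream(wire);
    System.out.println(new String(decode(in), StandardCharsets.US_ASCII)); // hello
    System.out.println(in.read()); // -1: nothing left after the 0-chunk
  }
}
```

A consumer that stops after the 5 payload bytes would leave `0\r\n\r\n` unread on the wire, which is exactly the position `GZIPInputStream` leaves a `ChunkedInputStream` in.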
One possible fix is here: https://github.com/google/google-http-java-client/blob/acdaeacc3c43402d795266d2ebcb584abe918961/google-http-client/src/main/java/com/google/api/client/http/HttpResponse.java#L363
Rather than returning a `GZIPInputStream`, return a proxy input stream that retains references to both the gzip stream and the underlying stream. The proxy's `close()` method first consumes the underlying stream by calling `read()` and then calls `close()` on the gzip stream.
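A minimal sketch of that proxy idea (the class name and structure are illustrative, not the actual google-http-java-client implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical proxy: delegates reads to the gzip stream, but on close()
// first drains the raw underlying stream so a ChunkedInputStream can see
// the final 0-length chunk header and reach STATE_DONE.
class DrainingGzipStream extends InputStream {
  private final InputStream gzip;       // the GZIPInputStream
  private final InputStream underlying; // the raw (possibly chunked) stream

  DrainingGzipStream(InputStream gzip, InputStream underlying) {
    this.gzip = gzip;
    this.underlying = underlying;
  }

  @Override public int read() throws IOException {
    return gzip.read();
  }

  @Override public void close() throws IOException {
    // Consume leftover framing on the raw stream before closing.
    byte[] buf = new byte[1024];
    while (underlying.read(buf) != -1) { /* discard */ }
    gzip.close();
  }
}

public class Demo {
  public static void main(String[] args) throws IOException {
    // Simulate the raw stream with an in-memory gzip payload.
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write("hello".getBytes("UTF-8"));
    }
    ByteArrayInputStream raw = new ByteArrayInputStream(bos.toByteArray());
    DrainingGzipStream in = new DrainingGzipStream(new GZIPInputStream(raw), raw);
    StringBuilder sb = new StringBuilder();
    int c;
    while ((c = in.read()) != -1) sb.append((char) c);
    in.close();
    System.out.println(sb);         // hello
    System.out.println(raw.read()); // -1: raw stream fully consumed
  }
}
```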
Issue Analytics
- Created 6 years ago
- Comments: 6 (2 by maintainers)

I believe the solution implemented in #840 is inadequate. Consider the following test case, which is very similar to the code used in `NetHttpTransport`:

I expect the above code to open only 1 connection thanks to keep-alive, but it actually opens 3:
The solution in #840 simply wraps the `GZIPInputStream` in a `ConsumingInputStream`:

But this doesn't change the behavior, because `ConsumingInputStream` only does this:

Since `ByteStreams.exhaust()` just reads any remaining bytes from the `GZIPInputStream`, it does nothing to clean up the underlying `ChunkedInputStream`. To get the desired keep-alive behavior, we need to do something like the following in the test case:
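The original test-case code is not reproduced here, but the essential step is to exhaust the *raw* connection stream (not the gzip wrapper) after the payload has been read and before closing. A hedged sketch of such a helper:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class KeepAliveFix {
  /**
   * Reads and discards whatever remains on the raw connection stream so that
   * a ChunkedInputStream consumes the terminal 0-length chunk header, reaches
   * STATE_DONE, and returns the connection to the keep-alive cache on close().
   * Applying this to the GZIPInputStream instead (as ByteStreams.exhaust()
   * does in #840) has no effect on the raw stream.
   */
  static long exhaust(InputStream raw) throws IOException {
    long total = 0;
    byte[] buf = new byte[8192];
    int n;
    while ((n = raw.read(buf)) != -1) {
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    // Stand-in for the leftover chunk framing a GZIPInputStream leaves behind.
    InputStream raw =
        new ByteArrayInputStream("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
    System.out.println(exhaust(raw)); // 5
    System.out.println(raw.read());   // -1
  }
}
```

In a real test this would be invoked on the stream obtained from `HttpURLConnection.getInputStream()` once the decompressed body has been fully read, just before closing.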
Note that the hack we put in place for google-cloud-datastore is going to become increasingly inconvenient in future JDK releases.