res.write not sending chunks until res.end() is called
I'm using Node v0.12.0 with express (v4.4.1) and compression (v1.6.0).

I'm sending back about 80 MB of dynamically generated data (not from the file system or a database) in multiple res.write() calls. When I add the compression middleware (with no options passed), I don't see any traffic from the Node server (using Wireshark) until res.end() is called. When res.end() is called, there is a sudden burst of chunked data.

When the compression module is not used, however, I do see chunked responses going out on the wire. The size of each chunked fragment is between 10 KB and 16 KB (based on Wireshark).

The only header I set happens before the res.write() calls:

    res.setHeader('Content-Type', 'application/json');

Any reason why the data is getting buffered until the res.end() call?
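A minimal sketch of this setup (not the exact code from the issue; generateChunk() is a hypothetical stand-in for the real data generation):

    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression()); // compression middleware with no options

    // Stand-in for the real (dynamic) data generation: ~16 KB of JSON per call.
    function generateChunk(i) {
      return JSON.stringify({ index: i, payload: new Array(16 * 1024).join('x') }) + '\n';
    }

    app.get('/data', function (req, res) {
      res.setHeader('Content-Type', 'application/json');
      for (var i = 0; i < 5000; i++) {
        res.write(generateChunk(i)); // roughly 80 MB in total
      }
      // With compression() in place, nothing shows up in Wireshark until here.
      res.end();
    });

    app.listen(3000);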
Issue Analytics
- State:
- Created: 8 years ago
- Reactions: 1
- Comments: 13 (8 by maintainers)
Top Results From Across the Web

node.js - res.write not sending big data until res.end() is called ...
res.write not sending big data until res.end() is called after res.write but don't want to end response because it is SSE connection.

Stream with Node.js doesn't work | by Jamie Munro - Medium
Seems that I have to have res.end() after the last res.write() to be able to send the data to the browser. Actually, it...

Top 10 Most Common Node.js Developer Mistakes - Toptal
Mistake #2: Invoking a Callback More Than Once ... Notice how there is a return statement every time "done" is called, up until...

HTTP | Node.js v18 API
For efficiency reasons, Node.js normally buffers the request headers until request.end() is called or the first chunk of request data is written.

Anatomy of an HTTP Transaction | Node.js
(Though it's probably best to send some kind of HTTP error response. ... To do this, there's a method called writeHead, which...)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
So I figured out what is happening. The same issue exists in the latest version of Node too. The way I am sending data is something like the sketch below:
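(The original snippet is not reproduced here; this is a rough reconstruction of the pattern described, reusing app and generateChunk() from the sketch in the question, with totalChunks as a stand-in.)

    // A synchronous for-loop that writes and flushes without ever yielding to
    // the event loop. zlib's flush() registers 'drain' listeners that can only
    // fire once this loop returns.
    app.get('/data', function (req, res) {
      res.setHeader('Content-Type', 'application/json');
      for (var i = 0; i < totalChunks; i++) {
        res.write(generateChunk(i)); // generate and queue a chunk
        res.flush();                 // ask compression to flush -- asynchronous under the hood
      }
      res.end();
    });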
A look at the zlib.js source (https://github.com/nodejs/node/blob/master/lib/zlib.js, line 448) shows that the flush() call is not synchronous. Instead, it sets up a 'drain' listener to execute later. But with my for-loop consuming the thread, that listener has no chance to run. In fact, the loop keeps adding 'drain' listeners, which explains why I was getting this warning:

    (node) warning: possible EventEmitter memory leak detected. 11 drain listeners added
I changed my code to let the listener execute by making use of the callback that Zlib's flush() takes, and by trampolining between generating the data and sending it. So it looks something like the sketch below:
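(Again a sketch of the shape rather than the original code, assuming res.flush() has been patched to accept a callback as described in the next comment.)

    // Trampoline: write one chunk, flush, and only produce the next chunk from
    // inside flush()'s callback so the event loop gets a chance to run.
    app.get('/data', function (req, res) {
      res.setHeader('Content-Type', 'application/json');

      var i = 0;
      function sendNext() {
        if (i >= totalChunks) {
          res.end();
          return;
        }
        res.write(generateChunk(i++));
        // res.flush(callback) assumes the patched flush shown below; the
        // callback fires once the compressed bytes have been flushed out.
        res.flush(sendNext);
      }

      sendNext();
    });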
In order to get this to work, I had to change the flush implementation to take a callback:
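(Something along these lines inside the compression() handler, where stream is the underlying zlib stream; this is a sketch of the idea, not the actual diff.)

    // Hypothetical shape of the change: forward an optional callback to zlib's
    // flush(), defaulting to Z_SYNC_FLUSH when no flush mode was configured.
    res.flush = function flush (callback) {
      if (stream) {
        stream.flush(opts.flush || zlib.Z_SYNC_FLUSH, callback);
      } else if (callback) {
        callback();
      }
    };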
Where opts is the variable that captures the options passed into the compression function. Zlib's flush implementation takes a flush mode, but I'm not aware of a practical use case for using multiple flush modes within a single compression run. If there is one, the function could take an additional flush argument, but the user would have to make sure the right mode is passed in.
This is simply how gzip compression works: to get small output sizes, gzip needs to accumulate the payload so it can do substring searches and replacements.
If you want to stream, you can call res.flush() between writes, but the compression will be much less efficient. An example can be found at the bottom of the readme.
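For illustration, a sketch in the spirit of that readme example (not copied from it): stream server-sent events and call res.flush() after each write so the compressed bytes go out immediately.

    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression());

    app.get('/events', function (req, res) {
      res.setHeader('Content-Type', 'text/event-stream');
      res.setHeader('Cache-Control', 'no-cache');

      var timer = setInterval(function () {
        res.write('data: ' + Date.now() + '\n\n');
        res.flush(); // push the compressed bytes to the wire now instead of buffering
      }, 1000);

      req.on('close', function () {
        clearInterval(timer);
      });
    });

    app.listen(3000);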