Proposal: Stream.buffer(highWaterMark)

I propose Stream.buffer as a new method that mimics the functionality of the Node stream writev function:
```js
const deleteKeys = new Writable({
  highWaterMark: 50,
  write: (key, encoding, callback) => {
    // handle 1 item in the stream
  },
  writev: (keys, callback) => {
    // keys contains all keys that have been buffered since the last iteration started
  },
});
```
This fills a similar role to batch. The difference is that batch waits until the provided count of items has been pushed to it before it pushes the array downstream, whereas buffer would collect items only until downstream requests another batch, up to a limit:
```js
_(source)
  .buffer(50)
  .tap(batch => {
    // batch contains all items that have flowed downstream while tap was processing its last iteration (limit 50)
  });
```
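To make the intended semantics concrete, here is a rough plain-JS sketch (makeBuffer, push, and nextBatch are hypothetical names, not Highland API): the upstream side queues items, and each time downstream comes back it drains everything queued so far, capped at the high-water mark.

```js
function makeBuffer(limit) {
  const queue = [];
  return {
    push(item) { queue.push(item); },               // upstream side enqueues
    nextBatch() { return queue.splice(0, limit); }  // downstream drains up to `limit`
  };
}

const buf = makeBuffer(3);
[1, 2, 3, 4, 5].forEach(x => buf.push(x));
console.log(buf.nextBatch()); // [1, 2, 3]  (capped at the limit)
console.log(buf.nextBatch()); // [4, 5]     (whatever was left)
```

Unlike batch, a consumer never waits for the queue to fill: it gets whatever accumulated while it was busy, which is exactly the writev-style behavior proposed above.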
I am willing to write this assuming it is something the project is willing to accept.
Issue Analytics
- Created: 5 years ago
- Comments: 8
Yes. That should be the case.
BTW, I suggest you use flatMap(…) instead of .map(…).mergeWithLimit(1). They are equivalent, but flatMap is clearer, and potentially more efficient.
On Thu, Oct 18, 2018 at 9:32 AM Stephen Dahl notifications@github.com wrote:
I’m not sure why you see an infinite loop, but I left some comments in your commit.
The other problem is that you can’t add arguments to transforms. They get exported as a top-level transform that is curried, and adding additional arguments breaks that API. That is, you can run code like
```js
_.batchWithTimeOrCount(1000)(2)(stream)
```
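To illustrate why an extra argument breaks that shape, here is a toy curried stand-in (it batches an array by count and ignores the time component entirely; not Highland's implementation): each argument position is fixed, with the stream applied last, so inserting a new parameter changes what every existing caller's arguments mean.

```js
// Toy curried transform: time in ms, then count, then the "stream" (an array here).
const batchWithTimeOrCount = ms => n => arr => {
  const out = [];
  for (let i = 0; i < arr.length; i += n) out.push(arr.slice(i, i + n));
  return out;
};

console.log(batchWithTimeOrCount(1000)(2)([1, 2, 3, 4, 5]));
// [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```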
I think merging it into batchWithTimeOrCount makes sense, but we need to come up with another name. Maybe batchWithTimeOrCountRange? Not great, I know, but this is how things are.