
Flatten with user-defined concurrency: flattenParallel


There currently is no middle ground between flattenConcurrently and flattenSequentially. It would be nice to be able to specify the amount of concurrency desired.

Here’s a half-baked snippet that I call flattenParallel, which handles a variable number (n) of streams concurrently:

import xs from 'xstream';

function flattenParallel(n) {
  return function(input$$) {
    const pending = [];   // inner streams waiting for a free slot
    let active = 0;       // inner streams currently being listened to
    let outerDone = false;
    return xs.create({
      start: out => {
        function maybeComplete() {
          if (outerDone && active === 0 && pending.length === 0) out.complete();
        }
        function onNext(input$) {
          if (active > n) pending.push(input$); // over the limit: queue it
          else {
            active++;
            input$.addListener({
              next: item => out.next(item),
              error: err => out.error(err),
              complete: () => {
                active--;
                if (pending.length > 0) onNext(pending.shift());
                else maybeComplete();
              }
            });
          }
        }
        input$$.addListener({
          next: onNext,
          error: err => out.error(err),
          complete: () => {
            outerDone = true;
            maybeComplete();
          }
        });
      },
      stop: () => {} // teardown omitted in this half-baked version
    });
  };
}
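
As a rough usage sketch against the snippet above (the five 100 ms “jobs” below are placeholders invented for illustration, not something from the issue), flattenParallel composes like the existing flatten operators:

import xs from 'xstream';

// Five fake jobs, each modelled as a stream that emits its id after 100 ms.
const job = id => xs.periodic(100).take(1).mapTo(id);
const jobs$$ = xs.of(1, 2, 3, 4, 5).map(job);

jobs$$
  .compose(flattenParallel(1)) // listens to at most 2 inner streams at once (note the n + 1 behavior discussed further down)
  .addListener({
    next: id => console.log('finished job', id),
    error: err => console.error(err),
    complete: () => console.log('all jobs done')
  });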

This provides a flexible generalization of flattenSequentially and flattenConcurrently (a short comparison sketch follows the list):

  • flattenSequentially is flattenParallel(0)
  • flattenConcurrently is flattenParallel(Infinity)
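
The comparison sketch, using only the snippet above plus the stock extras (nothing here is new API; each line builds a fresh stream-of-streams so the sources are independent):

import xs from 'xstream';
import flattenSequentially from 'xstream/extra/flattenSequentially';
import flattenConcurrently from 'xstream/extra/flattenConcurrently';

const make$$ = () => xs.of(xs.of('a', 'b'), xs.of('c', 'd'));

make$$().compose(flattenParallel(0));        // one inner stream at a time,
make$$().compose(flattenSequentially);       // i.e. the same behavior as this

make$$().compose(flattenParallel(Infinity)); // no concurrency limit,
make$$().compose(flattenConcurrently);       // i.e. the same behavior as this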

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 10 (6 by maintainers)

Top GitHub Comments

1 reaction
staltz commented, Jan 12, 2017

Is there any reason xstream cannot or should not be used in a backend?

Because there are better options. xstream, with its very small size, is clearly meant for browsers, since kB size doesn’t matter much in Node.js backends. most.js is almost always a better choice in that case, with very high performance in Node.js.

Here are my use cases, which are all from the browser client.

  • Uploading large files (up to 20 GB). We have logic to split a zip into chunks, verify each chunk, and upload the chunks using range requests. We can speed up each portion of this process by doing them in parallel, up to a limit. Because of the number of chunks, we do not want to upload them all at once, as that would run into browser locks and prevent other, potentially higher-priority requests from occurring.
  • Downloading large PDFs for PDF.js. We have implemented a strategy around prefetching PDFs, since a portion of our app is navigating through a list of PDFs. Again, we need the ability to prioritize requests so that mutation operations on a document are not blocked by prefetching a PDF that is not in focus. At the same time, fetching multiple chunks at once can significantly improve transfer speed.

Really good use cases to report. I wouldn’t have been able to predict these types of use alone. Thanks. I’ll consider adding this feature then. It would probably be named flattenConcurrentlyAtMost(n).
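
To make that concrete for the upload case above, here is a rough sketch of how a flattenConcurrentlyAtMost(n) could be used; the uploadChunk helper and the ./upload module are invented for illustration, and the import path assumes the operator would ship as an extra alongside the other flatten variants:

import xs from 'xstream';
import flattenConcurrentlyAtMost from 'xstream/extra/flattenConcurrentlyAtMost';
// Hypothetical helper: performs one range-request upload and returns a
// stream that emits the chunk index when that request finishes.
import {uploadChunk} from './upload';

function uploadAll(chunks) {
  return xs
    .fromArray(chunks.map((chunk, i) => uploadChunk(chunk, i)))
    .compose(flattenConcurrentlyAtMost(4)); // keep at most 4 uploads in flight
}

// uploadAll(chunks).addListener({next: i => console.log('chunk', i, 'done'), ...});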

0 reactions
xtianjohns commented, Jul 24, 2018

I’ve got this implemented, but the tests are pretty thin: I’ve just copied the tests for flattenConcurrently and flattenSequentially and substituted flattenConcurrentlyAtMost(Infinity) and flattenConcurrentlyAtMost(0) respectively, then added a couple more to verify the “throttling” behavior.

As I was working on this, I realized that it was kinda funky to think about ...atMost(0) as meaning “only listen to 1 stream at a time”. Conceptually, the number of streams the operator will listen to will always be n + 1, not n. I don’t know whether that is ideal, but changing the behavior in my implementation just requires changing a <= to a <, so… 🤷‍♂️
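
A minimal standalone illustration of that off-by-one, keeping the queueing structure of the earlier snippet and varying only the guard:

// Guard from the earlier snippet: a new inner stream is queued only once
// n + 1 streams are already active, so flattenParallel(0) still listens to
// one inner stream at a time.
const shouldQueue = (active, n) => active > n;

// Flipping the comparison makes n mean "at most n", matching the proposed
// flattenConcurrentlyAtMost(n) name: ...AtMost(1) runs one at a time.
const shouldQueueAtMost = (active, n) => active >= n;

console.log(shouldQueue(0, 0), shouldQueue(1, 0));             // false true
console.log(shouldQueueAtMost(0, 1), shouldQueueAtMost(1, 1)); // false true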

PR incoming.

Also, this is a heck of an operator to write docs for, haha. I’ll welcome any feedback about how to make it clearer.


