Load balancing / throttling
The promise approach has great potential for solving the load balancing / throttling problem, which is still missing. This is a proposal to add such a feature to the library.
As an example, take a question + my answer from StackOverflow about handling massive numbers of queries.
In essence, we have a number of promises that need to be resolved sequentially. Typically, this can be achieved by using promise.all(). However, in cases where the number of promise objects is so large that a single call to promise.all() is impossible, we need to implement a complex workaround, like the one I linked to earlier (perhaps not a very good one at that).
What we need is something like promise.page or promise.throttle that would implement the idea of splitting one huge chunk of promises into smaller chunks and paging through them until they are all resolved. In addition, we need flexibility in the resolution strategy (see the sketch after this list):
- specify that each chunk/page of promises is to be resolved one after another. This is very, very important, because in many cases an attempt to execute a chunk of promises before the previous one has finished will lead to running out of memory.
- by default, the whole request should reject if one chunk rejects, but we need an option to either just stop or carry on with the remaining chunks of promises.
- such a method would need to request each subsequent chunk of data through a promise, because typically the data would never reside in memory in its entirety.
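For illustration, here is a rough sketch of what such a helper could look like - the name pageThrough, the getNextChunk callback, and the stopOnError option are purely hypothetical, not an existing API:

```js
// Hypothetical sketch only. getNextChunk(page) is assumed to return a promise
// for an array of promise-returning tasks; an empty array means "no more pages".
async function pageThrough(getNextChunk, { stopOnError = true } = {}) {
    const failures = [];
    let page = 0;
    for (;;) {
        // Each chunk of work is requested through a promise, so the full
        // data set never has to reside in memory.
        const chunk = await getNextChunk(page++);
        if (!chunk || chunk.length === 0) {
            break; // no more pages
        }
        try {
            // Resolve one chunk at a time; the next chunk is not even
            // requested until the current one has settled.
            await Promise.all(chunk.map(task => task()));
        } catch (err) {
            if (stopOnError) {
                throw err; // default: reject the whole request
            }
            failures.push({ page: page - 1, error: err }); // carry on with the rest
        }
    }
    return failures;
}
```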
It should help greatly if you think of all this as an almost-infinite queue that has been terminated and now needs to be processed in its entirety, fast and in huge bulks.
And you don’t need to think of it as something that deals only with copious amounts of promises. Totally not. Here’s an example:
We have 100 requests to send to a particular service in one go. However, the service accepts only up to 10 at a time, so we need to send them in 10 blocks of 10 requests each. Promise support for throttling would shine in this case.
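That case could then be expressed roughly like this, using the pageThrough sketch above (sendRequest is assumed to be whatever call the service exposes):

```js
// 100 request descriptors, sent in sequential blocks of 10.
const ids = Array.from({ length: 100 }, (_, i) => i + 1);

const getNextChunk = page => {
    const block = ids.slice(page * 10, page * 10 + 10);
    // Each element is a promise-returning task; an empty block ends the paging.
    return Promise.resolve(block.map(id => () => sendRequest(id)));
};

pageThrough(getNextChunk)
    .then(() => console.log('all 100 requests done'));
```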
P.S. I’ve been tempted to implement it in my own library - pg-promise, but it is truly a generic promise task, not just a database-related one.
Is there a definitive reason this was closed? It continues to be implemented by hand, even though a more general function would be helpful.
In particular see the approach listed at the end of this answer: http://stackoverflow.com/a/28223454/2348750
It’s a somewhat specialized feature best implemented as a stand-alone module (independent of the promise implementation).
There are many more strategies for how to enqueue things, including limiting to N requests per M seconds (instead of just controlling how often a single request is made), limiting to N concurrent requests, and so on. Each of those requires a different implementation.
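As one illustration, a rough sketch of the "limit to N concurrent requests" strategy could look like this (runWithConcurrency is a hypothetical name; tasks are assumed to be promise-returning functions):

```js
// Run an array of promise-returning tasks with at most `limit` running at once.
function runWithConcurrency(tasks, limit) {
    const results = new Array(tasks.length);
    let next = 0;

    async function worker() {
        while (next < tasks.length) {
            const i = next++;                // claim the next task index
            results[i] = await tasks[i]();   // start it and wait for it to settle
        }
    }

    // Spin up `limit` workers that drain the shared task list.
    const workers = Array.from(
        { length: Math.min(limit, tasks.length) },
        () => worker()
    );
    return Promise.all(workers).then(() => results);
}
```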
Note: the implementation (from stackoverflow) may not do what the linked stackoverflow question asks unless the calls to the throttled functions are done in series. This one will work regardless:
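As a rough sketch of that queue-based idea (not the snippet originally posted): every call is chained onto an internal queue, so the spacing holds no matter how callers invoke the wrapper. The throttled name and interval parameter are illustrative only.

```js
// Wraps fn so that queued calls run one after another, with at least
// `interval` milliseconds between the end of one call and the start of the next.
function throttled(fn, interval) {
    let queue = Promise.resolve();
    return function (...args) {
        const result = queue.then(() => fn(...args));
        // Chain the delay after the call has settled, so the next queued
        // call waits for both the previous call and the interval.
        queue = result
            .catch(() => {}) // keep the queue alive if a call rejects
            .then(() => new Promise(resolve => setTimeout(resolve, interval)));
        return result;
    };
}

// Example: const slowFetch = throttled(fetchSomething, 1000);
```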