
How to process requests evenly with RateLimiterQueue?

See original GitHub issue

I’m sending requests to a heavily rate-limited service but want them all to get through eventually, so I have a RateLimiterQueue with the default maximum size, wrapping a RateLimiterMySQL which is using the Sequelize backend for MySQL.

The issue is that whilst we have configured the limiter for this service with 10 points per hour, only 5 requests are being executed per hour.

I have the limiter set up with the following config (as well as database-specific config):

{
  "duration": 3600,
  "points": 10,
  "execEvenly": true
}

And the queue wrapping the limiter has no additional configuration. Is this a known issue?
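
For reference, a minimal sketch of the setup described above, assuming the standard rate-limiter-flexible API (the connection string and doRequest below are placeholders, not from the original issue):

const Sequelize = require('sequelize');
const { RateLimiterMySQL, RateLimiterQueue } = require('rate-limiter-flexible');

const sequelize = new Sequelize(process.env.DB_URI); // placeholder connection

const limiter = new RateLimiterMySQL({
    storeClient: sequelize,
    points: 10,       // 10 requests...
    duration: 3600,   // ...per hour
    execEvenly: true  // intended to spread the requests across the hour
});

const queue = new RateLimiterQueue(limiter); // default maximum queue size

async function send(request) {
    await queue.removeTokens(1); // resolves once a point can be consumed
    return doRequest(request);   // placeholder for the actual outbound call
}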

EDIT: use the example from the Consume points evenly article. At the time of this edit, the execEvenly option doesn't work with RateLimiterQueue.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 11 (4 by maintainers)

Top GitHub Comments

1 reaction
sam-lord commented, May 7, 2021

This is all working perfectly. The code is really simple now:

const {RateLimiterMySQL, RateLimiterMemory} = require('rate-limiter-flexible');
const RateLimiterAbstract = require('rate-limiter-flexible/lib/RateLimiterAbstract');

module.exports = class RateLimiterCustom extends RateLimiterAbstract {
    constructor(options = {}, callback) {
        super(options);

        // Use the MySQL-backed limiter when a store client is supplied,
        // otherwise fall back to the in-memory limiter.
        const limiterType = (options.storeClient)
            ? RateLimiterMySQL
            : RateLimiterMemory;

        // The main limiter enforces the overall quota (e.g. 10 points per hour).
        const mainLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'main'
        }), callback);

        // The interval limiter allows 1 point per (duration / points) seconds,
        // which spreads consumption evenly across the window.
        const intervalLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'interval',
            points: 1,
            duration: Math.ceil(this.duration / this.points)
        }));

        this._mainLimiter = mainLimiter;
        this._intervalLimiter = intervalLimiter;
    }

    async consume(key, points = 1, options = {}) {
        // The interval limiter rejects (with msBeforeNext) when a request
        // arrives before the next even slot; the main limiter then enforces
        // the overall quota.
        await this._intervalLimiter.consume(key, points, options);
        return await this._mainLimiter.consume(key, points, options);
    }

    penalty(key, points = 1) {
        return this._mainLimiter.penalty(key, points);
    }

    reward(key, points = 1) {
        return this._mainLimiter.reward(key, points);
    }

    get(key) {
        return this._mainLimiter.get(key);
    }

    set(key, points, secDuration) {
        return this._mainLimiter.set(key, points, secDuration);
    }

    block(key, secDuration) {
        return this._mainLimiter.block(key, secDuration);
    }

    delete(key) {
        return this._mainLimiter.delete(key);
    }
};

Thanks once again for the help with this - hopefully this ends up being useful to others with this type of use-case
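
For context, a hypothetical wiring of this class with RateLimiterQueue (the file path, the memory fallback shown here, and doRequest are illustrative, not from the thread):

const { RateLimiterQueue } = require('rate-limiter-flexible');
const RateLimiterCustom = require('./RateLimiterCustom'); // the class above

// No storeClient here, so the class falls back to RateLimiterMemory;
// pass the Sequelize instance as storeClient for the MySQL-backed version.
const limiter = new RateLimiterCustom({
    points: 10,
    duration: 3600
});

const queue = new RateLimiterQueue(limiter); // default maximum queue size

async function sendEvenly(request) {
    // Each queued request must first pass the interval limiter
    // (one point per 3600 / 10 = 360 seconds) and then the main hourly limiter.
    await queue.removeTokens(1);
    return doRequest(request); // placeholder for the actual outbound call
}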

0 reactions
animir commented, May 8, 2021

@sam-lord You’re welcome. It looks nice.

There is a separate issue for execEvenly option bug: https://github.com/animir/node-rate-limiter-flexible/issues/113

And I created a knowledge base article with a slightly changed example: Consume points evenly

