
Rate limiting of NS1 provider


Getting on really well with OctoDNS, checking out a number of providers whilst planning a DNS migration. But I have a problem with the NS1 provider because it gets rate limited. I don’t think I am hitting it that hard (my dataset contains 1022 records across 11 zones).

I have been able to get my dataset into NS1, but it took me a few hours of retrying to do so. This is in contrast to other providers I have also tested, which work pretty much instantaneously.

Having discussed this with NS1, this is what I was told…

Each API call that makes a change to a zone or record causes a large number of messages to be generated internally in order to propagate changes to our edge nodes extremely quickly, and is computationally expensive. In order to manage this computational cost and to protect the platform, we have a token-based rate-limiting system in place.

First let me break down how the rate-limiting system works: basically, for each HTTP method (PUT, GET, POST, DELETE) you have an independent bucket of tokens and an independent replenishment period for that bucket. Whenever you make an API request, a token is consumed from the appropriate bucket. In your case the API rate limits by method are:

GET
  X-RateLimit-Limit: 900
  X-RateLimit-Period: 300
POST
  X-RateLimit-Limit: 300
  X-RateLimit-Period: 300
PUT
  X-RateLimit-Limit: 200
  X-RateLimit-Period: 300
DELETE
  X-RateLimit-Limit: 100
  X-RateLimit-Period: 200

DELETE has a bucket of 100 tokens, meaning you can instantaneously send 100 delete requests without being rate limited. Those 100 tokens continuously replenish, and it takes 200 seconds to fully replenish the bucket (approx. 0.5 tokens/second), whereas GET has 900 tokens in its bucket and a replenishment period of 300 seconds (approx. 3 tokens/second). This roughly correlates with how computationally expensive each call is for us. The current status of your token buckets is returned in the response headers from the API, and our recommendation is to incorporate those headers into your rate limiting.

This means that in order to make changes to 900 records you can instantaneously burst through 300 changes, then apply the next 600 changes over the course of 600 seconds (10 minutes).
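To make the recommendation above concrete, here is a minimal client-side throttle sketch in Python that reads the rate-limit headers from each response and spreads subsequent calls across the remaining window. It assumes the API also returns an X-RateLimit-Remaining header alongside the Limit/Period headers quoted above, and it is an illustration only, not the octoDNS provider's actual implementation.

import time
import requests

NS1_API = "https://api.nsone.net/v1"   # assumed NS1 API base URL for this sketch

def throttled_request(session, method, path, **kwargs):
    # Send one request, then sleep so that the tokens left in this
    # method's bucket are spread over the rest of the replenishment
    # period rather than burned in a burst.
    resp = session.request(method, NS1_API + path, **kwargs)
    remaining = int(resp.headers.get("X-RateLimit-Remaining", 1))  # assumed header
    period = int(resp.headers.get("X-RateLimit-Period", 1))
    time.sleep(period / max(remaining, 1))
    return resp

# Usage sketch, assuming the NS1 API key is sent in the X-NSONE-Key header:
#   session = requests.Session()
#   session.headers["X-NSONE-Key"] = "<api key>"
#   throttled_request(session, "GET", "/zones")

With a nearly full PUT bucket (200 tokens, 300-second period) this sleeps about 1.5 seconds per call; as the bucket drains the sleeps grow, so the client should stay ahead of the bucket instead of emptying it and hitting 429s.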

The problem with OctoDNS is that it sends an individual PUT call to create each and every record on the platform, and it does so without any rate limiting. The ideal thing would be for it to use the zone file upload endpoint (docs: https://ns1.com/api#put-import-a-zone-from-a-zone-file) and supply an appropriately formatted plain text file. This is a single API call that uploads an entire zone, which is much better for both the user and our platform. Another possibility is enforcing a rate limit within Octo when it is interacting with the NS1 API; in the case of zone/record creations this would be 1.5 PUTs per second (5,400 creations per hour). Yet another possibility would be to do the zone upload independently of Octo (using the zone file upload endpoint) and then sync with OctoDNS afterwards (I believe this is possible with Octo).
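For reference, a single-call zone import along those lines might look something like the sketch below; the exact endpoint path (/import/zonefile/<zone>) and the multipart field name are assumptions to be checked against the linked docs rather than verified details.

import requests

NS1_API = "https://api.nsone.net/v1"   # assumed base URL
API_KEY = "<api key>"                  # NS1 API key, sent in the X-NSONE-Key header

def import_zonefile(zone, path):
    # One PUT uploads the whole zone, instead of one PUT per record.
    # Endpoint path and "zonefile" field name are assumptions based on
    # https://ns1.com/api#put-import-a-zone-from-a-zone-file
    with open(path, "rb") as fh:
        resp = requests.put(
            f"{NS1_API}/import/zonefile/{zone}",
            headers={"X-NSONE-Key": API_KEY},
            files={"zonefile": fh},
        )
    resp.raise_for_status()
    return resp.json()

# e.g. import_zonefile("example.com", "zones/example.com.zone")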

I’m just posting for interest / information and possible future enhancement to the NS1 provider.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

2 reactions
rupa commented, Mar 4, 2020

I got the go-ahead to work on this yesterday, so as of now it’s my priority and I hope to have a PR ready within the next couple of days.

2 reactions
rupa commented, Feb 14, 2020

Hey, yeah, I can confirm we have some improvements to handling rate-limiting in the works, and we’re planning to make a PR to octodns about that pretty soon. The changes are mostly around preventing 429s from happening, but things can still take a while with large numbers of zones and records due to the number of API hits.

The zone import endpoint does seem like it could be useful to cut down on some of the traffic around initial setups; I'm going to look into it.
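(For illustration only, and not the changes described above: reactive handling of a 429, should one slip through, can be as simple as sleeping and retrying. The Retry-After fallback to X-RateLimit-Period and the backoff values below are assumptions for the sketch, not documented NS1 behaviour.)

import time
import requests

def request_with_retry(session, method, url, retries=5, **kwargs):
    # Retry when the API answers 429 Too Many Requests, sleeping for
    # Retry-After if present, else the X-RateLimit-Period header, else 2s.
    for attempt in range(1, retries + 1):
        resp = session.request(method, url, **kwargs)
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After",
                     resp.headers.get("X-RateLimit-Period", 2)))
        time.sleep(min(wait * attempt, 60))  # back off, capped at 60s
    return resp  # give up after the last attempt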
