SaaS Backend Rate Limiting Errors

See original GitHub issue

Please mark the type of framework used:

  • ASP.NET MVC
  • ASP.NET Web API (OWIN)
  • ASP.NET Core
  • WPF
  • WinForms
  • Xamarin
  • Other:

Please mark the type of the runtime used:

  • .NET Framework
  • Mono
  • .NET Core
  • Version: 3.1

Please mark the NuGet packages used:

  • Sentry
  • Sentry.Serilog
  • Sentry.NLog
  • Sentry.Log4Net
  • Sentry.Extensions.Logging
  • Sentry.AspNetCore
  • Version: 2.1.6

I’m getting reporting failures when trying to send a list of events to Sentry.

I cannot find any documentation about there being a rate limit outside of the billing quota.

As an attempt to avoid hitting the limit, I have a 100 ms delay between the SentrySdk.Capture...() calls. What is an appropriate event rate?

Example:

warn: Sentry.ISentryClient[0]
      The attempt to queue the event failed. Items in queue: 100

...

fail: Sentry.ISentryClient[0]
      Sentry rejected the event 970d918dad84422193143617622621cc. Status code: TooManyRequests. Sentry response: No message
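
For context, here is a minimal sketch of the kind of throttled submission loop described above. The ErrorReporter helper and the batch of exceptions are illustrative, and the SDK is assumed to have been initialized elsewhere; only SentrySdk.CaptureException and the 100 ms delay come from the report.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Sentry;

    public static class ErrorReporter
    {
        // Hypothetical helper: reports a batch of stored exceptions one at a time,
        // pausing 100 ms between captures (~10 events/s) to stay under any rate limit.
        // Assumes SentrySdk.Init(...) has already been called elsewhere.
        public static async Task ReportAsync(IEnumerable<Exception> errors)
        {
            foreach (var error in errors)
            {
                SentrySdk.CaptureException(error); // enqueued by the SDK, sent in the background
                await Task.Delay(100);
            }
        }
    }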

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
bruno-garcia commented, Dec 18, 2020

> That comes with an increased expectation of reliability. That said, it is certainly reasonable to have limits (both in the SaaS product & the SDK) - but to have those limits being hit (and thus events being dropped) without the user being informed about it is, in my opinion, an undesirable user experience.

I don’t believe there’s any failure in reliability here. There does seem to be an issue of transparency around what rate the server is designed to support and how the user can be aware that things are being dropped. I agree with that.

This isn’t an issue with the SDK, since the SDK is logging out the response from the server, which is 429 TooManyRequests. As a back-pressure mechanism it buffers up to MaxQueueItems, as mentioned before, and drops the rest.
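
As a rough sketch of where that buffer is configured, the snippet below raises MaxQueueItems and turns on Debug so the SDK logs the queue and rejection warnings shown above. The values are illustrative, and the DSN is assumed to be picked up from the SENTRY_DSN environment variable rather than set in code.

    using Sentry;

    internal static class Program
    {
        private static void Main()
        {
            // DSN is assumed to be supplied via the SENTRY_DSN environment variable.
            using (SentrySdk.Init(options =>
            {
                options.MaxQueueItems = 100; // size of the in-memory back-pressure buffer; excess events are dropped
                options.Debug = true;        // logs warnings when queueing fails or the server rejects an event
            }))
            {
                // Application code that calls SentrySdk.Capture...() runs here;
                // disposing the client on shutdown flushes whatever is still queued.
            }
        }
    }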

I suggest you raise an issue with support, because 10 events per second should not trigger a 429 from Sentry. Please send a link to this issue for context.

0 reactions
Cooksauce commented, Dec 18, 2020

> I’m not sure I agree with it. Why is it a bug that he sees he’s being rate limited? Also, he sees it because the SDK is in debug mode. Sentry has a Stats page that shows how many events Sentry dropped due to rate limiting.

I did not know about the rate limits on the Stats page. However, looking at that page, it actually says that no events were rate limited. [screenshot of the Stats page omitted]

> From your description, the SDK did what it was designed to do: instead of keeping all queued events until it crashed your process by running out of memory, it started dropping events when the number of queued items was larger than MaxQueueItems.

Right, crash is not the best descriptor. What I meant was just that the SDK was failing to report the events. The thing is, this is a paid service for the sole purpose of monitoring errors in a production application. That comes with an increased expectation of reliability. That said, it is certainly reasonable to have limits (both in the SaaS product & the SDK) - but to have those limits being hit (and thus events being dropped) without the user being informed about it is, in my opinion, an undesirable user experience.

If the SDK gets a TooManyRequests response, is it supposed to wait & retry or just drop the event?

> There’s a general abuse limit at the proxy level of 1k/s, I believe, and there’s Spike Protection (so you don’t burn all your quota); you could also be really over quota, so Sentry will start dropping at some point.

The event rate was definitely not anywhere near 1k/s. With a 100 ms delay, that puts it somewhere near 10/s. Everything in the SaaS platform also indicates there is plenty of room in our quota.

Edit: Spike Protection is off as well

Read more comments on GitHub.

Top Results From Across the Web

  • How to avoid hitting rate limits in API integration
    We'll go through different API rate limiting techniques as well as strategies and workarounds to recover from them or generally avoid them.
  • Best Practices for API Rate Limits and Quotas with ...
    Adding rate limiting is a defensive measure which can protect your API from being overwhelmed with requests and improve general availability ...
  • Manage Rate Limits and Quotas - TechDocs - Broadcom Inc.
    Rate limiting can protect your API or back-end resources from being overwhelmed with requests and improve general availability by limiting ...
  • How to handle API rate limits: Do your integrations work at ...
    An API rate limit might enforce, say, 100 requests per minute. Once requests exceed that number, it generates an error message to alert...
  • Part 1: Rate Limiting: A Useful Tool with Distributed Systems
    This article outlines some of the implementations, benefits, and challenges with rate limiting in modern distributed applications.
