
Cached errors get dropped when count >20

See original GitHub issue

Environment

How do you use Sentry? Sentry SaaS (sentry.io)

Which SDK and version? Sentry.Extensions.Logging, 3.13.0

Steps to Reproduce

  1. Put the system into an offline state (disable network device, or airplane mode)
  2. Run the console program below, which is configured to cache errors and captures 100 errors:
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;
    using Microsoft.Extensions.Logging;
    using Sentry;
    
    using var host = Host.CreateDefaultBuilder()
        .ConfigureLogging(builder => builder.AddSentry(options =>
            {
                options.Dsn = "https://123123123@123123123.ingest.sentry.io/123123123";
                options.TracesSampleRate = 1.0;
                options.AutoSessionTracking = true;
                options.CacheDirectoryPath = "Sentry";
                options.InitCacheFlushTimeout = TimeSpan.FromSeconds(200);
                options.MaxCacheItems = 500;
                options.MaxBreadcrumbs = 500;
            }))
        .Build();
    
    
    for (int i = 0; i < 100; i++)
    {
        using var scope = host.Services.CreateScope();
        var sentryHub = scope.ServiceProvider.GetRequiredService<IHub>();
        var message = $"Message #{i + 1}";
    sentryHub.WithScope(sentryScope => // renamed from "scope": reusing the name shadows the DI scope above (CS0136)
    {
        sentryScope.SetFingerprint(new[] { message }); // SetFingerprint takes a collection of strings
        if (sentryHub.CaptureMessage(message) == SentryId.Empty)
        {
            throw new Exception(message);
        }
    });
        Console.WriteLine(message);
        await Task.Delay(TimeSpan.FromSeconds(i % 4));
    }
    
    Console.WriteLine("Done");
    await Task.Delay(TimeSpan.FromMinutes(1));
    
  3. Check that running the program produces 100 envelopes in the sentry cache folder.
  4. Bring the system back online.
  5. Run the console program again without capturing new errors.
  6. Check that all envelopes are gone from the cache folder.

Expected Result

I would expect all 100 errors to be available in Sentry.io.

Actual Result

Sentry.io displays only the first 20 errors; the rest are shown as dropped on the Sentry.io Stats page.

Additional notes

  • There is a delay of a few seconds between capturing errors. Running the same console program while being online produces all 100 errors in Sentry.io.
  • Removing the delays and running while online results in the same 20-error limit, with the rest being dropped.

I assume the dropping exists to protect against spikes or to avoid being flooded with information. But given that a system might be offline for a long time and accumulate many errors, the SDK should pace the upload of cached errors to the server so that it does not trigger the dropping behavior.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
bruno-garcia commented, Mar 8, 2022

Thanks for pointing it out. We now suggest that the offline caching feature slow down (sleep) when draining the disk cache. See #1504
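The maintainer's suggestion can be sketched as a throttled drain loop. This is a hypothetical illustration, not the SDK's actual implementation; `sendEnvelopeAsync` stands in for whatever transport call the SDK uses, and the `*.envelope` file pattern is assumed:

    using System;
    using System.IO;
    using System.Threading.Tasks;
    
    class CacheDrainer
    {
        // Hypothetical sketch: send cached envelope files one at a time with a
        // pause between sends, so a backlog collected while offline is not
        // flushed in a single burst that trips server-side spike protection.
        public static async Task DrainCacheAsync(
            string cacheDir,
            Func<string, Task> sendEnvelopeAsync, // stand-in for the SDK's real transport
            TimeSpan pause)
        {
            foreach (var file in Directory.EnumerateFiles(cacheDir, "*.envelope"))
            {
                await sendEnvelopeAsync(file); // upload the cached envelope
                File.Delete(file);             // remove it only after a successful send
                await Task.Delay(pause);       // sleep between sends to smooth the burst
            }
        }
    }

A pause on the order of a few seconds per envelope would keep even a large offline backlog under a per-minute ingestion threshold, at the cost of a slower drain.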

0 reactions
lawrence-laz commented, Mar 7, 2022

Ok, so apparently there are two kinds of rate limiting in Sentry: spike protection and plain rate limiting. The former is what was causing issues in my offline-application scenario. Spike protection for a new app is 20 requests/minute, which is very low: an app that is offline for a few days can easily collect more than 20 events, which are then all sent to sentry.io at once when the app comes back online, at which point spike protection takes effect and drops the excess events.

So for me the solution was to disable spike protection and set up simpler, more permissive rate limiting.
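As a back-of-envelope check on the numbers in the comment above (the 20 requests/minute figure is from that comment, not an official constant): staying under the threshold requires sends to be at least 60 / 20 = 3 seconds apart, so flushing 100 cached envelopes back-to-back blows through the window immediately.

    // Hypothetical pacing arithmetic, not a Sentry API:
    const int MaxRequestsPerMinute = 20;
    TimeSpan minInterval = TimeSpan.FromSeconds(60.0 / MaxRequestsPerMinute); // 3 seconds between sends
    // 100 envelopes sent back-to-back all arrive well inside one minute,
    // so roughly 100 - 20 = 80 of them are rejected by spike protection,
    // matching the "first 20 kept, rest dropped" behavior reported above.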
