SentryEvent memory leak with large exception object graphs
Package: Sentry
.NET Flavor: .NET Core
.NET Version: 7.0
OS: Linux
SDK Version: 3.34.0
Self-Hosted Sentry Version: No response
Steps to Reproduce
I’m having an issue where a process in which I’ve enabled the Sentry Microsoft.Extensions.Logging integration is consistently leaking memory. The issue seems to be most obvious in cases where the SentryEvent exception instance references a large object graph — say, a Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException that references a DbContext with a large number of entities loaded.
Expected Result
I would expect the SentryEvent instances to be processed, sent, and eventually freed.
Actual Result
Based on the process memory consumption (and memory dumps), it seems like the SentryEvent instances are never freed. I’m wondering if the Sentry SDK is trying to serialize and send the DbUpdateConcurrencyException along with everything in the associated DbContext, fails due to limitations on event size, and ends up retrying forever.
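If that hypothesis holds, one possible mitigation (a sketch, not a confirmed fix: it assumes the SDK's documented BeforeSend hook, which runs before the event is enqueued for delivery) would be to drop events carrying such an exception so the queued envelope never retains the DbContext:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var builder = Host.CreateApplicationBuilder(args);

builder.Logging.AddSentry(options =>
{
    options.Dsn = "..."; // placeholder; often supplied via configuration instead

    // BeforeSend runs before the event is queued; returning null discards
    // the event, so the SentryEvent (and the DbContext the exception roots
    // through the EF Core StateManager) becomes eligible for collection.
    options.BeforeSend = e =>
        e.Exception is DbUpdateConcurrencyException ? null : e;
});
```

A less lossy variant could capture a fresh, lightweight event containing only the exception message before returning null, rather than dropping the report entirely.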
A typical gcroot from dotnet-dump analyze looks like this:
-> 7fa86fe3bc78 System.Threading.Thread
-> 7fa8421108a0 System.Threading.ExecutionContext
-> 7fa842110870 System.Threading.AsyncLocalValueMap+TwoElementAsyncLocalValueMap
-> 7fa8420ab168 System.Collections.Generic.KeyValuePair<Sentry.Scope, Sentry.ISentryClient>[]
-> 7fa84209c4e0 Sentry.SentryClient
-> 7fa84209c5f0 Sentry.Internal.BackgroundWorker
-> 7fa8420a72f8 System.Collections.Concurrent.ConcurrentQueue<Sentry.Protocol.Envelopes.Envelope>
-> 7fa8420a7338 System.Collections.Concurrent.ConcurrentQueueSegment<Sentry.Protocol.Envelopes.Envelope>
-> 7fa8420a7420 System.Collections.Concurrent.ConcurrentQueueSegment<Sentry.Protocol.Envelopes.Envelope>+Slot[]
-> 7fa8663d47d0 Sentry.Protocol.Envelopes.Envelope
-> 7fa8663d4668 System.Collections.Generic.List<Sentry.Protocol.Envelopes.EnvelopeItem>
-> 7fa8663d4798 Sentry.Protocol.Envelopes.EnvelopeItem[]
-> 7fa8663d4778 Sentry.Protocol.Envelopes.EnvelopeItem
-> 7fa8663d4760 Sentry.Protocol.Envelopes.JsonSerializable
-> 7fa866bde340 Sentry.SentryEvent
-> 7fa866bdc618 Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException
-> 7fa866bdc8f0 System.Collections.Generic.List<Microsoft.EntityFrameworkCore.ChangeTracking.EntityEntry>
-> 7fa866bdc930 Microsoft.EntityFrameworkCore.ChangeTracking.EntityEntry[]
-> 7fa866bdc910 Microsoft.EntityFrameworkCore.ChangeTracking.EntityEntry
-> 7fa87a64b3a8 Microsoft.EntityFrameworkCore.ChangeTracking.Internal.InternalEntityEntry
-> 7fa87967c120 Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager
-> 7fa8796723d0 <MyApplicationDbContext>
Issue Analytics
- Created: 2 months ago
- Comments: 7 (2 by maintainers)
Top GitHub Comments
It looks like I may be able to work around this problem to some degree on my end. When the TestJob throws a DbUpdateConcurrencyException, EF Core logs it, then Quartz.NET catches it and throws a JobExecutionException, which also gets logged. I believe the fact that both exceptions are logged back-to-back may be what triggers the memory leak in the runtime. When I modify TestJob to catch the DbUpdateConcurrencyException, wrap it in a JobExecutionException, and throw that, the memory leak seems to go away. It’s possible that wrapping the exception changes the code path in a way that avoids triggering the ConcurrentQueue memory leak. I have to admit I’m not completely satisfied with that answer (how exactly is that code path different?), but it is nice to have a workaround for now.

Huge thanks for the repro! We’ll look into this.
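For reference, the workaround described in that comment reads roughly like this (a sketch: the job body and the MyApplicationDbContext usage are assumptions, only the catch/wrap/rethrow pattern comes from the comment; JobExecutionException(Exception) is Quartz.NET's standard way to report a failed job):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Quartz;

public class TestJob : IJob
{
    private readonly MyApplicationDbContext _db;

    public TestJob(MyApplicationDbContext db) => _db = db;

    public async Task Execute(IJobExecutionContext context)
    {
        try
        {
            // ... update tracked entities ...
            await _db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            // Wrap the EF Core exception so only one logged throw reaches
            // Quartz.NET, rather than both exceptions being logged
            // back-to-back; with this in place the leak reportedly
            // no longer reproduced.
            throw new JobExecutionException(ex);
        }
    }
}
```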