Possible memory leak when run under ts-node
I’m investigating a memory leak in a production app that seems to be related to this library, but I’m not sure if it’s the cause yet.
Unfortunately, we have been migrating infrastructure over the last few months and our monitoring wasn’t tuned correctly, so I’m unable to say with certainty whether this is a new issue, but I don’t believe it is.
The app is an Express-based API that runs under ts-node in production, not pre-transpiled.
I spun up an instance in a lower environment, took a heap snapshot, ran its acceptance tests, then took another heap snapshot to compare.
The tests generate a couple hundred requests and increase memory by ~17 MB. The vast majority of the retained space is instances of TraceSegment and Transaction.
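For reference, the comparison snapshots can be captured either from Chrome DevTools (attaching with --inspect) or programmatically. Below is a minimal sketch of the programmatic route; it assumes Node >= 11.13 for v8.writeHeapSnapshot, and the debug route name is just an example, not something from the app in question.

```typescript
// Minimal sketch: expose an endpoint that writes a .heapsnapshot file, which
// can be loaded into Chrome DevTools and diffed against an earlier snapshot
// using the "Comparison" view. Requires Node >= 11.13 for writeHeapSnapshot;
// on older Node versions the same diff can be done by attaching DevTools
// with --inspect instead.
import { writeHeapSnapshot } from "v8";
import express from "express";

const app = express();

app.get("/debug/heap-snapshot", (_req, res) => {
  const file = writeHeapSnapshot(); // returns the generated file name
  res.json({ file });
});

app.listen(3000);
```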
Has anyone else had this issue? And what other info can I provide to help with debugging?
Digging around the code, it’s unclear to me why the wrapped nextTick scopes would be retaining refs to Transactions.
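For anyone else following, my rough mental model of how that kind of wrapping can retain things is below. This is a generic sketch of context-propagating instrumentation, not the agent’s actual code: the wrapper closes over whatever context is active when the tick is scheduled, so that context stays reachable at least until the queued callback runs.

```typescript
// Generic sketch (not the agent's implementation): wrapping process.nextTick
// so queued callbacks run with the context that was active when they were
// scheduled. The closure holds a reference to that context, and anything it
// points at (e.g. a transaction), until the callback executes.
type Context = { transaction?: unknown };

let activeContext: Context = {};

const originalNextTick = process.nextTick.bind(process);

process.nextTick = ((callback: (...cbArgs: any[]) => void, ...args: any[]): void => {
  const captured = activeContext; // retained until the callback runs
  originalNextTick(() => {
    const previous = activeContext;
    activeContext = captured;
    try {
      callback(...args);
    } finally {
      activeContext = previous;
    }
  });
}) as typeof process.nextTick;
```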
Node: 10.19.0
ts-node: 8.8.1
typescript: 3.8.3
newrelic: 6.5.0
Top GitHub Comments
@mastermatt Good to hear from you – sorry it’s always under “what the heck are these computers doing” circumstances.
Also, (I think you know the drill here), if you need “official” New Relic support for this you’ll want to reach out via our official support channels: https://support.newrelic.com – we’re always happy to help here when we can – but this isn’t an official support channel and we might be silently pulled away on things. Submitting a ticket to support is the best way to ensure the right eyes get on an issue.
Thanks for digging into this and for providing a clear description of the behavior you’re seeing so far. There’s nothing obvious that jumps out as being the root cause of this behavior. We’re definitely interested in getting to the bottom of it.
I’m going to give a little context on what the Transaction and TraceSegment objects are for, take some wild guesses as to what’s going on along the way, and then ask you a few more questions about your environment to help further debugging. Apologies if this is redundant information – but it sometimes helps to frame things for other folks who are newer to the agent and following along.
Context
Speaking broadly, a Transaction represents, more or less, a single HTTP(S) request/response cycle and everything that happens within it. Transactions can represent any two arbitrary points in time in the life of your application, but in practice it’s usually a single HTTP(S) request/response cycle.
A TraceSegment represents a smaller unit of time within a transaction. It usually measures some specific thing that happened – the amount of time an individual function took to execute is a common example. At the end of a transaction, each TraceSegment in the transaction is synthesized into other data types (nodes of a Transaction Trace, Span Events, etc.) that are eventually sent to New Relic.
Memory-wise, a TraceSegment should be cleaned up by the time a transaction ends. If it isn’t, then something is holding a reference to it open. One example we’ve seen in the past is segments created in promises where the promise is never resolved or rejected. If a promise does some work that’s instrumented but never resolves, the trace segments can be held open while Node waits for the promise to settle.
So – sometimes these issues are bugs in the agent, and sometimes they’re poorly behaving application code that the agent makes worse by doing more work (a small leak might become a bigger leak, etc.).
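To make that failure mode concrete, here is a hedged, made-up sketch (the route and the reason the promise hangs are illustrative, not taken from any real app): everything instrumented inside the pending work stays reachable for as long as the promise never settles.

```typescript
import express from "express";

const app = express();

app.get("/users/:id", async (_req, res) => {
  // Made-up example: this promise never resolves or rejects (imagine a DB
  // driver call with no timeout after a dropped connection). Any trace
  // segments the agent created inside this async work, and the transaction
  // they belong to, stay referenced until the promise settles, so they are
  // never cleaned up when the transaction would normally end.
  await new Promise<never>(() => {
    /* never settles */
  });

  res.json({ ok: true }); // never reached; req/res are retained as well
});

app.listen(3000);
```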
We’re definitely interested in tracking these down, but will need more context/help from you.
Questions
First, you mentioned you’re running the application in production using ts-node. This raises a few questions for us. What do you use ts-node with when you run the application? The main ts-node maintainer seems hesitant to commit to whether ts-node is production ready or not, so it would be good to pinpoint whether ts-node is the problem, or if it’s something else.
Second – our end goal here is to get a reproduction of the issue that anyone can run. Once we can reproduce the issue, we can usually determine pretty quickly whether it’s a bug in the agent or some application interaction that’s causing the problem, and advise from there.
OK – that’s a mouthful, I’ll stop there 😃 If you have any follow up questions let us know – otherwise we’ll keep an eye out for those answers.
My conclusion was that NR was not the cause of the memory leak itself. There was something else that was holding on to HTTP requests. However, the NR agent tacks objects onto the IncomingMessage instances, making the memory leak much more noticeable – even to the point of OOMing.
@raoulus I’d be willing to bet that you still have a memory leak when NR is not loaded, but it’s small enough that it’s hard to detect unless you really put your app under load for a while.
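To illustrate the kind of thing that turned out to matter here, this is a made-up sketch (not the actual application code): any long-lived reference to a request pins the IncomingMessage, and with it everything the agent has attached to that request.

```typescript
import express, { Request } from "express";

const app = express();

// Hypothetical leak: a module-level cache that is never pruned. Every entry
// pins an IncomingMessage, so whatever the agent has tacked onto that request
// (transaction state, segments) is retained too, which makes a small
// application-level leak look much larger in heap snapshots.
const recentRequests: Request[] = [];

app.use((req, _res, next) => {
  recentRequests.push(req); // pushed but never removed -> unbounded growth
  next();
});

app.listen(3000);
```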