
Possible memory leak when run under ts-node

See original GitHub issue

I’m investigating a memory leak in a production app that seems to be related to this library, but I’m not sure if it’s the cause yet. Unfortunately, we have been migrating infrastructure over the last few months and our monitoring wasn’t tuned correctly, so I’m unable to say with certainty whether this is a new issue, but I don’t believe it is. The app is an Express-based API that runs under ts-node in production, not pre-transpiled. I spun up an instance in a lower environment, took a heap snapshot, ran its acceptance tests, then took another heap snapshot to compare. The tests generate a couple hundred requests and increase memory by ~17 MB. The vast majority of the retained space is instances of TraceSegment and Transaction.

[Screenshot (2020-04-07): heap snapshot comparison showing the retained space dominated by TraceSegment and Transaction instances]
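For anyone who wants to reproduce a similar before/after comparison, here is a rough sketch of capturing heap snapshots around a burst of requests. It assumes Node 11.13+ for v8.writeHeapSnapshot (on Node 10, the heapdump package provides an equivalent writeSnapshot call), and the endpoint URL and request count are placeholders:

```ts
// Sketch: capture a heap snapshot, generate load, then capture another to diff
// in Chrome DevTools. Requires Node 11.13+ for v8.writeHeapSnapshot; on Node 10
// the `heapdump` package offers an equivalent writeSnapshot().
import * as v8 from "v8";
import * as http from "http";

function snapshot(label: string): void {
  // Writes a .heapsnapshot file that DevTools can load and compare.
  const file = v8.writeHeapSnapshot(`${label}-${Date.now()}.heapsnapshot`);
  console.log(`wrote ${file}`);
}

function hitEndpoint(url: string): Promise<void> {
  return new Promise((resolve, reject) => {
    http
      .get(url, (res) => {
        res.resume(); // drain the body so the socket is released
        res.on("end", () => resolve());
      })
      .on("error", reject);
  });
}

async function main(): Promise<void> {
  const url = "http://localhost:3000/health"; // placeholder endpoint
  snapshot("before");
  for (let i = 0; i < 200; i++) {
    await hitEndpoint(url); // roughly "a couple hundred requests"
  }
  const maybeGc = (global as any).gc;
  if (typeof maybeGc === "function") {
    maybeGc(); // only present when node is started with --expose-gc
  }
  snapshot("after");
}

main().catch(console.error);
```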

Has anyone else had this issue? And what other info can I provide to help with debugging? Digging around the code, it’s unclear to me why the wrapped nextTick scopes would be retaining refs to Transactions.
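For readers unfamiliar with that pattern, here is a simplified, generic sketch (not the agent’s actual implementation) of how wrapping process.nextTick for context propagation ends up capturing a reference to the active transaction:

```ts
// Generic sketch of nextTick-based context propagation (NOT the agent's code).
// The wrapper closes over whatever transaction was active when the callback was
// scheduled, so anything that keeps the scheduled callback alive also keeps the
// transaction (and its segments) alive.
type Transaction = { name: string /* ...segments, timers, etc. */ };

let activeTransaction: Transaction | null = null;

const originalNextTick = process.nextTick.bind(process);

(process as any).nextTick = (cb: (...args: any[]) => void, ...args: any[]) => {
  const captured = activeTransaction; // the reference that shows up in heap snapshots
  originalNextTick(() => {
    const previous = activeTransaction;
    activeTransaction = captured; // restore the captured context for the callback
    try {
      cb(...args);
    } finally {
      activeTransaction = previous;
    }
  });
};
```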

  • Node: 10.19.0
  • ts-node: 8.8.1
  • typescript: 3.8.3
  • newrelic: 6.5.0

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

1 reaction
astormnewrelic commented, Apr 8, 2020

@mastermatt Good to hear from you – sorry it’s always under “what the heck are these computers doing” circumstances.

Also (I think you know the drill here): if you need “official” New Relic support for this, you’ll want to reach out via our official support channels: https://support.newrelic.com. We’re always happy to help here when we can, but this isn’t an official support channel and we might be silently pulled away onto other things. Submitting a ticket to support is the best way to ensure the right eyes get on an issue.

Thanks for digging into this and for providing a clear description of the behavior you’re seeing so far. There’s nothing obvious that jumps out as being the root cause of this behavior. We’re definitely interested in getting to the bottom of it.

I’m going to give a little context on what the Transaction and TraceSegment objects are for, take some wild guesses along the way as to what’s going on, and then ask you a few more questions about your environment to help with further debugging. Apologies if this is redundant information, but it sometimes helps to frame things for other folks who are newer to the agent and following along.

Context

Broadly speaking, a Transaction represents a single HTTP(S) request/response cycle and everything that happens within it. Transactions can represent any two arbitrary points in time in the life of your application, but in practice they usually map to a single HTTP(S) request/response cycle.

A TraceSegment represents a smaller unit of time within a transaction. It usually measures some specific thing that happened; the amount of time an individual function took to execute is a common example. At the end of a transaction, each TraceSegment is synthesized into other data types (nodes of a Transaction Trace, Span Events, etc.) that are eventually sent to New Relic.
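To make that relationship concrete, here is a small illustration using the agent’s documented public API (startBackgroundTransaction and startSegment); the job name and the work being timed are made up:

```ts
// Illustration of how a Transaction and its TraceSegments relate, via the
// agent's public API. The job name and the timed work are hypothetical.
import newrelic from "newrelic";

async function runJob(): Promise<void> {
  // startBackgroundTransaction creates a Transaction spanning the handler's lifetime.
  await newrelic.startBackgroundTransaction("nightly-cleanup", async () => {
    // Each startSegment creates a TraceSegment inside the active transaction,
    // timing just the wrapped work.
    await newrelic.startSegment("load-rows", true, async () => {
      /* e.g. a DB query */
    });
    await newrelic.startSegment("delete-rows", true, async () => {
      /* e.g. another DB query */
    });
    // When the transaction ends, its segments are rolled up into trace/span data
    // and sent to New Relic; nothing should keep them alive after that.
  });
}

runJob().catch(console.error);
```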

Memory-wise, a TraceSegment should be cleaned up by the time a transaction ends. If it isn’t, then something is holding a reference to it. One example we’ve seen in the past is segments created inside promises that are never resolved or rejected: if a promise does some instrumented work but never settles, its trace segments can be held open while Node waits for the promise to resolve.
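As a made-up illustration of that failure mode, an instrumented handler that awaits a promise which never settles would look something like this:

```ts
// Hypothetical example of the "promise never settles" pattern described above.
// The agent instruments the handler's async work, but because the promise never
// resolves or rejects, the associated segments can't be finalized.
import express from "express";

const app = express();

app.get("/stuck", async (_req, res) => {
  res.send("ok"); // the HTTP response itself completes fine...

  // ...but this promise never settles, so the instrumented work (and the trace
  // segments attached to it) is never wrapped up.
  await new Promise<never>(() => {
    /* neither resolve nor reject is ever called */
  });
});

app.listen(3000);
```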

So – sometimes these issues are bugs in the agent, sometimes they’re poorly behaving application code that the agent makes worse by doing more work (a small leak might become a bigger leak, etc.).

We’re definitely interested in tracking these down, but will need more context/help from you.

Questions

First, you mentioned you’re running an application in production using ts-node. This raises a few questions for us:

  • What flags are you invoking ts-node with when you run the application?
  • If you manually compile/transpile the code to native javascript and run with node, is the leak present?

The main ts-node maintainer seems hesitant to commit to whether ts-node is production-ready or not, so it would be good to pinpoint whether ts-node is the problem, or if it’s something else.

Second –

  • What version of express are you using?
  • What are your express API endpoints doing?
  • Are you using any third party promise libraries?
  • Is the set of endpoints you have small enough that it would be possible to run some tests and determine whether there’s a specific endpoint with this leak, or if it’s all of them?
  • What other libraries are you using to fetch data?

Our end goal here is to get a reproduction of the issue that anyone can run. Once we can reproduce the issue, we can usually determine pretty quickly if it’s a bug in the agent, or some application interaction that’s causing the problem and advise from there.

OK – that’s a mouthful, I’ll stop there 😃 If you have any follow up questions let us know – otherwise we’ll keep an eye out for those answers.

0 reactions
mastermatt commented, Jul 13, 2020

My conclusion was that NR was not the cause of the memory leak itself. There was something else that was holding on to HTTP requests. However, the NR agent tacks objects onto the IncomingMessage instances, making the memory leak much more noticeable, even to the point of OOMing.

@raoulus I’d be willing to bet that you still have a memory leak when NR is not loaded, but it’s small enough that it’s hard to detect unless you really put your app under load for a while.
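For anyone debugging a similar situation, the kind of application-level leak being described would look roughly like this (entirely hypothetical code; the point is that each retained request also retains whatever per-request state the agent attached to it):

```ts
// Hypothetical example of the leak pattern described above: application code
// holds on to IncomingMessage objects, and every retained request also drags
// along the per-request bookkeeping the agent attached to it.
import express, { Request, Response } from "express";

const app = express();

// A module-level cache that grows forever - this is the actual leak.
const recentRequests: Request[] = [];

app.get("/orders", (req: Request, res: Response) => {
  recentRequests.push(req); // keeps the request (and the agent's state on it) alive
  res.json({ ok: true });
});

app.listen(3000);
```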


