High memory usage on Kubernetes (~400MB with close to zero traffic)

Might be related to #1343.

I’m running two Linkerd instances: one as a sidecar and the other as an edge proxy.

            +-----------------------------------+                  +---------------------------------+
            |                                   |                  |                                 |
            |                                   |                  |                                 |
            |                                   |                  |  +-----------+       +--------+ |
            |          LinkerD Edge             |                  |  |           |       |        | |
+----------->                                   +---------------------> LinkerD   +------->  App   | |
            |                                   |                  |  +-----------+       +--------+ |
            |                                   |                  |                                 |
            |                                   |                  |                                 |
            +-----------------------------------+                  +---------------------------------+

Those two instances seem to take far too much memory, even though there is little to no traffic. I ran a test where I hit the Linkerd edge instance (which itself hits the Linkerd sidecar instance) at a rate of 2 requests per second, and I’m seeing memory usage of 300-400 MB (which obviously isn’t viable):

[Image: Linkerd sidecar instance memory usage]

[Image: Linkerd edge instance memory usage]
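
For reference, the test loop was roughly along the lines of the sketch below (the edge URL is a placeholder for my environment, not an actual address):

    # Rough sketch of the load test: ~2 requests per second against the
    # Linkerd edge instance for a few minutes. The URL is a placeholder;
    # substitute whatever address your edge proxy listens on.
    import time
    import urllib.request

    EDGE_URL = "http://linkerd-edge.example.com:4140/"  # placeholder address

    for _ in range(600):  # roughly 5 minutes at ~2 requests/second
        try:
            with urllib.request.urlopen(EDGE_URL, timeout=5) as resp:
                resp.read()
        except Exception as exc:
            print("request failed:", exc)
        time.sleep(0.5)  # ~2 requests per second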

I’m not sure whether this is normal, but according to other benchmarks I’ve seen, Linkerd’s memory footprint is supposed to be much lower. Any idea what could be the cause?

Additional details:

  • Environment: OpenShift v1.5.1 running Kubernetes v1.5.2
  • Linkerd version: 1.3.0
  • Configuration files: see here
  • JVM stats from /admin/metrics.json: see here (a sketch for pulling these is just below this list)
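
In case it helps, this is roughly how the JVM numbers can be pulled from the admin endpoint (a minimal sketch; it assumes the admin interface is reachable on the default port 9990, and exact metric key names can vary between Linkerd versions):

    # Dump the memory-related gauges from Linkerd's /admin/metrics.json.
    # Assumes the admin interface is at localhost:9990 (the default port);
    # metric key names differ between releases, so filter loosely.
    import json
    import urllib.request

    ADMIN_METRICS = "http://localhost:9990/admin/metrics.json"

    with urllib.request.urlopen(ADMIN_METRICS, timeout=5) as resp:
        metrics = json.load(resp)

    for name, value in sorted(metrics.items()):
        if "mem" in name or "heap" in name:
            print(name, value)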

The Kubernetes cluster I’m running Linkerd on will be upgraded to a more recent Kubernetes version shortly; I’ll let you know if that changes anything.

Thanks!

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
siggy commented, Nov 1, 2017

Hi @christophetd, thanks for the detailed report. We’re currently investigating possibly related memory usage/leak issues; follow along at #1685 and #1690. Stay tuned.
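
In the meantime, one way to tell a slow leak apart from a high but stable baseline is to sample the memory gauges from the admin endpoint over time and watch the trend; a rough sketch (assuming the default admin port 9990, with the key filter as a loose guess rather than exact metric names):

    # Sample Linkerd's heap gauges once a minute and print a timestamped line;
    # steadily growing values suggest a leak, while a plateau suggests a high
    # but stable baseline. The port and key filter are assumptions.
    import json
    import time
    import urllib.request

    ADMIN_METRICS = "http://localhost:9990/admin/metrics.json"

    while True:
        with urllib.request.urlopen(ADMIN_METRICS, timeout=5) as resp:
            metrics = json.load(resp)
        heap = {k: v for k, v in metrics.items() if "heap" in k and "used" in k}
        print(time.strftime("%H:%M:%S"), heap)
        time.sleep(60)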

0 reactions
wmorgan commented, Apr 11, 2018

@christophetd I’m going to close this issue, since I believe we’ve fixed the underlying problem. If you still see this on 1.3.7 or beyond, please feel free to reopen.
