Linkerd, Kubernetes: possible memory leak

See original GitHub issue

From: https://discourse.linkerd.io/t/linkerd-high-memory-usage-on-kubernetes/72/8

Setup

I’m running Linkerd 1.0.2 as the ingress for a Kubernetes cluster. I’m using nghttpx as the edge router to route all traffic to Linkerd, which then distributes the traffic to the appropriate recipients. Linkerd listens for both HTTP and H2 protocols. The actual Linkerd configuration is as follows:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: engine-stage
data:
  config.yaml: |-
    #  Enable admin panel to monitor liveness and readiness
    admin:
      ip: 0.0.0.0
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
    usage:
      enabled: false
    routers:

    - protocol: http
      identifier:
        kind: io.l5d.ingress
        namespace: engine-stage
      servers:
        - port: 8080 
          ip: 0.0.0.0
          clearContext: true
      dtab: /svc => /#/io.l5d.k8s ;
      client:
        kind: io.l5d.global
        loadBalancer:
          kind: ewma
          # number of retries before a node is marked as unavailable
          maxEffort: 10
          decayTimeMs: 10000

    - protocol: h2
      experimental: true
      identifier:
        kind: io.l5d.ingress
        namespace: engine-stage
      servers:
        - port: 8081
          ip: 0.0.0.0
          clearContext: true
      dtab: /svc => /#/io.l5d.k8s ;

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: engine-stage
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:

      - name: l5d
        image: buoyantio/linkerd:1.0.2
        livenessProbe:
          httpGet:
            path: /admin/ping
            port: 9990
            httpHeaders:
          initialDelaySeconds: 5
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /admin/ping
            port: 9990
            httpHeaders:
          initialDelaySeconds: 5
          periodSeconds: 5
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http
          containerPort: 8080
          hostPort: 8080
        - name: h2
          hostPort: 8081
          containerPort: 8081
        - name: admin
          hostPort: 9990
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args: ["proxy", "-p", "8001"]

---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: engine-stage
spec:
  selector:
    app: l5d
  ports:
  - name: http
    port: 8080
  - name: h2
    port: 8081
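
One thing worth noting about the Deployment above: the l5d container has no memory request or limit, and the JVM heap is left at its defaults. The fragment below is a minimal sketch, not taken from the original report, of how those bounds could be added; it assumes the buoyantio/linkerd 1.x image reads the JVM_HEAP_MIN and JVM_HEAP_MAX environment variables to size its heap, and all values shown are illustrative:

      - name: l5d
        image: buoyantio/linkerd:1.0.2
        env:
        # Assumption: the image's startup script maps these to the JVM's -Xms/-Xmx.
        - name: JVM_HEAP_MIN
          value: "32M"
        - name: JVM_HEAP_MAX
          value: "1024M"
        resources:
          requests:
            memory: "512Mi"
          limits:
            # Headroom above the max heap for off-heap usage
            # (metaspace, thread stacks, direct buffers).
            memory: "1536Mi"

With a memory limit in place, unbounded growth shows up as an OOM kill and a restart rather than as a pod that quietly consumes the node.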

Test setup

  • Start with a fresh linkerd pod.
  • Issue 500 gRPC requests per worker, running 10 workers, each worker using a separate connection (a load of this shape is sketched below).
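
For reference, the sketch below shows one way to generate a load of this shape (10 workers, 500 requests each, one connection per worker) as a Kubernetes Job. It uses the ghz gRPC benchmarking tool, which is not the tool from the original report; the image, proto file, and RPC method are placeholders, and the target points at the l5d Service’s h2 port from the manifest above:

apiVersion: batch/v1
kind: Job
metadata:
  name: grpc-load
  namespace: engine-stage
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: ghz
        image: ghz:latest                    # placeholder; any image with the ghz binary
        args:
        - "--insecure"
        - "--proto=/protos/engine.proto"     # placeholder proto file
        - "--call=engine.Engine/Process"     # placeholder RPC method
        - "--total=5000"                     # 10 workers x 500 requests
        - "--concurrency=10"                 # 10 workers
        - "--connections=10"                 # one connection per worker
        - "l5d.engine-stage.svc.cluster.local:8081"

Ten concurrent workers over ten connections, 5000 requests in total, matches the 500-per-worker figure listed above.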

Results

  • Memory usage before serving the requests (screenshot in the original issue)

  • Memory usage after serving the requests (screenshot in the original issue)

  • Linkerd metrics (screenshot in the original issue)

Issue

As expected, Linkerd’s memory usage increases while it serves traffic; however, once the traffic stops, the memory usage does not decrease and stays high until the pod is restarted.

Also, Linkerd and Kubernetes report two different figures for memory usage (screenshot in the original issue).

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 16 (11 by maintainers)

Top GitHub Comments

1 reaction
jamessharp commented on Jun 7, 2017

I’ve sent @siggy a Slack message with the config I’ve been using. Let me know if there’s anything else you need.

0 reactions
siggy commented on Aug 17, 2017

@jamessharp closing this for now as the issue seems to be resolved. Please reopen if you see the issue again.

Read more comments on GitHub.

Top Results From Across the Web

Linkerd pods in strange state - possible memory leak
Once the clients started using our application, the memory of our linkerd pods started increasing one by one. None of the pods were...

Quality-of-Service for Memory Resources | Kubernetes
CPU is considered a "compressible" resource. If your app starts hitting your CPU limits, Kubernetes starts throttling your container, giving ...

Linkerd 1.2.0 is here! Features, bugfixes, and migration | Linkerd
Finally, we'd like to thank community member Marcin Mejran (@mejran), who fixed a memory leak in JSON stream parsing which could impact Kubernetes...

eBPF, sidecars, and the future of the service mesh - Buoyant.io
(Linkerd's proxies have a 2-3MB memory footprint at low traffic levels.) Kubernetes's existing mechanisms for managing resource consumption, ...

Kubernetes hpa can't get memory metrics (when it is clearly ...
It seems that the Linkerd sidecar container in your Pod does not define a memory request (it might have a CPU request).
