
Ambassador not load balancing across all pods in a service

See original GitHub issue

Describe the bug
I have a deployment that originally had only 1 replica. When I first set up Ambassador with it, it routed to that one pod just fine. But when I scale the deployment out to 5 replicas, requests are still routed only to the first pod.

To Reproduce

  1. Have an installation of Ambassador with the following config:
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        service: ambassador
      name: ambassador
      namespace: ambassador
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
        getambassador.io/config: |
          ---
          apiVersion: ambassador/v0
          kind:  Module
          name:  ambassador
          config:
            use_proxy_proto: true
            use_remote_address: true
    spec:
      type: LoadBalancer
      ports:
      - name: https
        port: 443
        targetPort: 80
      - name: http
        port: 80
        targetPort: 80
      selector:
        service: ambassador
    
  2. Create a deployment and a service with the following config:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        app: hello
      name: hello
    spec:
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
          name: hello
        spec:
          containers:
          - env:
            - name: VERSION
              value: "1753554"
            image: robinjoseph08/hello:1753554
            livenessProbe:
              httpGet:
                path: /health
                port: web
                scheme: HTTP
            name: web
            ports:
            - containerPort: 4721
              name: web
            readinessProbe:
              httpGet:
                path: /health
                port: web
                scheme: HTTP
    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        getambassador.io/config: |
          ---
          apiVersion: ambassador/v0
          kind: Mapping
          name: "hello"
          host: "hello.example.com"
          service: http://hello.default:4721
          prefix: /
      labels:
        app: hello
      name: hello
    spec:
      ports:
      - name: web
        port: 4721
        targetPort: 4721
      selector:
        app: hello
    
  3. In a separate terminal session, run:
    while true; do curl -H "Host: hello.example.com" http://<ambassador-elb-dns>/hello; done
    
  4. Confirm that requests are being forwarded to the app.
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    
  5. Scale out to 5 replicas with:
    kubectl scale --replicas 5 deploy/hello
    
  6. Confirm that the new pods are running:
    kubectl get po
    
  7. Confirm that all requests are still being routed to the original pod.
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
    

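To make the pinning easier to see, the responses from the curl loop in step 3 can be tallied by pod name. A minimal sketch (the `count_hosts` helper name and the sample responses are illustrative; in practice, pipe the real curl output through it):

```shell
# Count how many responses came from each pod by pulling the "host"
# field out of the JSON response lines. With working round-robin you
# should see all five pod names; the bug shows one pod taking every request.
count_hosts() {
  grep -o '"host":"[^"]*"' | cut -d'"' -f4 | sort | uniq -c | sort -rn
}

# Demo on captured responses (replace the here-doc with the curl loop):
count_hosts <<'EOF'
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
{"env":"development","hello":"world","host":"hello-7c6cc868d6-dl4xw","version":"1753554"}
EOF
```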
Expected behavior
When the deployment scales up and adds more replicas, the new pods should automatically start receiving requests.

Versions:

  • Ambassador: v0.33.1
  • Kubernetes environment: AWS (provisioned through kops)
  • Kubernetes version: v1.9.3

Additional context
I think this is in the Ambassador layer and not the underlying Kubernetes, because if I SSH into a node in the cluster, get the cluster IP of the service, and curl it directly, it round-robins as expected:

local$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE       SELECTOR
hello        ClusterIP   100.68.57.54   <none>        4721/TCP   4m        app=hello
kubernetes   ClusterIP   100.64.0.1     <none>        443/TCP    16d       <none>
local$ ssh admin@node1
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-dl4xw","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-dlwpp","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-tp8gk","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-2927m","version":"1753554"}

This could also easily be a misconfiguration of Ambassador rather than a bug, but I couldn’t find any documentation on it.
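The node-side check above can be scripted to report a single number — how many distinct pods answered — which makes it easy to compare the ClusterIP path against the Ambassador path. A minimal sketch (the `distinct_pods` helper is illustrative; the ClusterIP and port come from the `kubectl get svc` output above):

```shell
# Count the number of distinct responding pods in a stream of JSON responses.
distinct_pods() {
  grep -o '"host":"[^"]*"' | cut -d'"' -f4 | sort -u | wc -l
}

# From a cluster node, with 5 replicas, the ClusterIP path should report ~5:
#   for i in $(seq 20); do curl -s http://100.68.57.54:4721/hello; echo; done | distinct_pods
# The same pipeline through the Ambassador ELB reports 1, which is the bug.
```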

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 2
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
stale[bot] commented, Jun 5, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

0 reactions
ysaakpr commented, Oct 4, 2019

I am also facing the same issue on Ambassador. Is there a fix for this?
