Ambassador not load balancing across all pods in a service
Describe the bug
I have a deployment that originally had only 1 replica. When I first set up Ambassador with it, requests were routed to that single pod just fine. But when I scale the deployment out to 5 replicas, requests are still only being routed to the first pod.
To Reproduce
- Have an installation of Ambassador with the following config:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
  namespace: ambassador
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Module
      name: ambassador
      config:
        use_proxy_proto: true
        use_remote_address: true
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 80
  - name: http
    port: 80
    targetPort: 80
  selector:
    service: ambassador
- Create a deployment and a service with the following config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
      name: hello
    spec:
      containers:
      - env:
        - name: VERSION
          value: "1753554"
        image: robinjoseph08/hello:1753554
        livenessProbe:
          httpGet:
            path: /health
            port: web
            scheme: HTTP
        name: web
        ports:
        - containerPort: 4721
          name: web
        readinessProbe:
          httpGet:
            path: /health
            port: web
            scheme: HTTP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: "hello"
      host: "hello.example.com"
      service: http://hello.default:4721
      prefix: /
  labels:
    app: hello
  name: hello
spec:
  ports:
  - name: web
    port: 4721
    targetPort: 4721
  selector:
    app: hello
- In a separate terminal session, run:
while true; do curl -H "Host: hello.example.com" http://<ambassador-elb-dns>/hello; done
- Confirm that requests are being forwarded to the app.
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
- Scale out to 5 replicas with:
kubectl scale --replicas 5 deploy/hello
- Confirm that the new pods are running:
kubectl get po
- Confirm that all requests are still being routed only to the original pod.
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"} {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"} {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"} {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"} {"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
Expected behavior
When the deployment scales up and adds more replicas, Ambassador should automatically start routing requests to the new pods.
Versions:
- Ambassador: v0.33.1
- Kubernetes environment: AWS (provisioned through kops)
- Version: v1.9.3
Additional context
I think this is happening at the Ambassador layer and not in the underlying Kubernetes, because if I SSH into a node in the cluster, get the cluster IP of the service, and curl it directly, it round-robins as expected:
local$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
hello        ClusterIP   100.68.57.54   <none>        4721/TCP   4m    app=hello
kubernetes   ClusterIP   100.64.0.1     <none>        443/TCP    16d   <none>
local$ ssh admin@node1
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-dl4xw","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-x5wsc","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-dlwpp","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-tp8gk","version":"1753554"}
node1$ curl http://100.68.57.54:4721/hello
{"env":"development","hello":"world","host":"hello-7c6cc868d6-2927m","version":"1753554"}
This could also easily be a misconfiguration of Ambassador on my part rather than a bug, but I couldn't find any documentation on it.
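In case it helps with triage, one more check I can suggest (not captured above) is confirming that the hello Service's Endpoints object lists all 5 pod IPs on port 4721, since that is what should make the new replicas routable once their readiness probes pass:

# One address per ready pod should be listed for port 4721
kubectl get endpoints hello

If fewer than 5 addresses show up there, the problem would be the readiness probe or the label selector rather than Ambassador.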
Top GitHub Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I am also facing the same issue with Ambassador. Is there a fix for this?