Kubernetes readiness probe prevents the pod from starting
See original GitHub issue
Bug Description
I launched Predator in Kubernetes, and when the pod starts it performs a self-check against its own Kubernetes service. Because the chart's deployment includes a readiness probe, the pod is not attached to the service until it is reported as ready, so the self-check fails and Predator inside the pod kills itself.
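The deadlock described above can be sketched as follows. This is a hypothetical illustration, not Predator's actual source; the function names and the `INTERNAL_ADDRESS` variable are assumptions, and only `SKIP_INTERNAL_ADDRESS_CHECK` comes from the fix discussed in the comments.

```javascript
// Hypothetical sketch of a startup self-check that can be disabled
// via an environment variable. Not Predator's actual code.

// Decide whether the self-check should be skipped based on the env var.
function shouldSkipInternalAddressCheck(env) {
  return String(env.SKIP_INTERNAL_ADDRESS_CHECK).toLowerCase() === 'true';
}

// On startup, the process calls its own Service URL. While the readiness
// probe is still failing, the Service has no endpoints, so this request
// gets ECONNREFUSED and the process exits -- a chicken-and-egg deadlock.
async function verifySelfAddress(env, fetchFn) {
  if (shouldSkipInternalAddressCheck(env)) {
    return 'skipped';
  }
  // Throws (e.g. ECONNREFUSED) if the Service is unreachable.
  await fetchFn(`${env.INTERNAL_ADDRESS}/v1/config`);
  return 'ok';
}

module.exports = { shouldSkipInternalAddressCheck, verifySelfAddress };
```

With the skip flag set, the check short-circuits before any network call, which is why the ConfigMap change below resolves the issue.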
Steps to reproduce the behavior
- Deploy predator in Kubernetes with the helm chart
- Check the logs.
Expected behavior Predator should start as expected.
Actual behavior Predator does not start.
Logs
{"level":30,"time":1602251263266,"pid":1,"hostname":"predator-9f5598885-hhtc9","name":"predator","msg":"kubernetes token from env var was not provided. will use: /var/run/secrets/kubernetes.io/serviceaccount/token"}
{"level":30,"time":1602251263294,"pid":1,"hostname":"predator-9f5598885-hhtc9","name":"predator","msg":"Predator listening on port 80"}
{"level":30,"time":1602251263295,"pid":1,"hostname":"predator-9f5598885-hhtc9","name":"predator","msg":"Checking http://predator.utils:80/v1/config to verify predator-runners will be able connect to Predator"}
{"level":50,"time":1602251264324,"pid":1,"hostname":"predator-9f5598885-hhtc9","name":"predator","name":"RequestError","message":"Error: connect ECONNREFUSED 172.20.153.82:80","cause":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"172.20.153.82","port":80},"error":{"errno":"ECONNREFUSED","code":"ECONNREFUSED","syscall":"connect","address":"172.20.153.82","port":80},"options":{"json":true,"simple":false,"resolveWithFullResponse":true,"timeout":5000,"uri":"http://predator.utils:80/v1/config","method":"GET","transform2xxOnly":false},"stack":"RequestError: Error: connect ECONNREFUSED 172.20.153.82:80\n at new RequestError (/usr/node_modules/request-promise-core/lib/errors.js:14:15)\n at Request.plumbing.callback (/usr/node_modules/request-promise-core/lib/plumbing.js:87:29)\n at Request.RP$callback [as _callback] (/usr/node_modules/request-promise-core/lib/plumbing.js:46:31)\n at self.callback (/usr/node_modules/request/request.js:185:22)\n at Request.emit (events.js:310:20)\n at Request.onRequestError (/usr/node_modules/request/request.js:877:8)\n at ClientRequest.emit (events.js:322:22)\n at Socket.socketErrorListener (_http_client.js:426:9)\n at Socket.emit (events.js:310:20)\n at emitErrorNT (internal/streams/destroy.js:92:8)\n at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)\n at processTicksAndRejections (internal/process/task_queues.js:84:21)","type":"Error","msg":"Encountered an error during start up"}
Versions:
- Predator: 1.5.4
- Predator-runner: 1.5.4
- Database: mysql
Additional context If the readiness probe is removed from the deployment manifest, the pod starts correctly.
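For reference, a readiness probe of roughly this shape in the chart's deployment template triggers the behavior. This is an illustrative sketch, not the chart's exact manifest; the probe path and timings are assumptions.

```yaml
# Illustrative readiness probe (not the chart's exact template).
# While this probe fails, the pod is removed from the Service's
# endpoints, so Predator's self-check against its own Service
# address gets ECONNREFUSED.
readinessProbe:
  httpGet:
    path: /health   # assumed path
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```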
Issue Analytics
- Created: 3 years ago
- Comments: 5
Top GitHub Comments
Sorry @enudler, I didn't notice the change in the ConfigMap.
It should work perfectly.
Thank you for your time. (BTW: Amazing tool! You’re doing a great job!)
Hey @colandre, in 1.5.2 the template is the same, but I have added this to the ConfigMap:
SKIP_INTERNAL_ADDRESS_CHECK: {{ skipInternalAddressCheck | quote }}
with
skipInternalAddressCheck=true
by default. It should disable this check, which was intended mainly for Predator running with the Docker engine.
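Assuming a standard Helm values override, the fix above can be applied like this. The values key is the one the maintainer quoted; the rendered ConfigMap entry is a sketch of what the template line produces.

```yaml
# values.yaml override (the updated chart defaults this to true,
# per the comment above):
skipInternalAddressCheck: true

# Rendered ConfigMap entry (illustrative):
# SKIP_INTERNAL_ADDRESS_CHECK: "true"
```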