Liveness probe for REST Pod should be switched off
In the TP3 release of Syndesis, both the readiness and liveness probes continue to cause the REST pod to restart and redeploy. Below are the log entries arising from the probes.
5:26:57 PM | Warning | Unhealthy | Readiness probe failed: Get http://10.1.7.122:8181/health: dial tcp 10.1.7.122:8181: getsockopt: connection refused (28 times in the last 10 minutes)
5:25:07 PM | Warning | Unhealthy | Liveness probe failed: Get http://10.1.7.122:8080/api/v1/version: dial tcp 10.1.7.122:8080: getsockopt: connection refused
These health checks are unnecessary and only cause long wait times when creating and testing integrations.
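As a hedged sketch of one possible fix (the probe path and port are taken from the log entries above, but the delay and threshold values are illustrative assumptions, not Syndesis defaults), the liveness probe could be relaxed so the pod is not restarted while the REST service is still starting:

```yaml
# Sketch of a more forgiving liveness probe for the REST container.
# Path/port mirror the failing probe in the logs; the timing values
# are assumptions that would need tuning against real startup times.
livenessProbe:
  httpGet:
    path: /api/v1/version
    port: 8080
  initialDelaySeconds: 120   # give the Spring Boot app time to finish startup
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 5        # restart only after 5 consecutive failures
```

A longer `initialDelaySeconds` avoids the "connection refused" failures seen during startup, while `failureThreshold` keeps a single slow response from triggering a restart.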
Issue Analytics
- State:
- Created 6 years ago
- Comments: 5 (1 by maintainers)
Top Results From Across the Web
- Configure Liveness, Readiness and Startup Probes
  This page shows how to configure liveness, readiness and startup probes for containers. The kubelet uses liveness probes to know when to restart ...
- Kubernetes Liveness and Readiness Probes: How to Avoid ...
  For a liveness probe, giving up means the pod will be restarted. For a readiness probe, giving up means not routing traffic to ...
- Liveness and Readiness Probes - Red Hat Hybrid Cloud
  One of the obvious differences between a liveness probe and a readiness probe is that the pod is still running after a readiness ...
- Readiness vs liveliness probes: How to set them up and when ...
  As I mentioned above, a liveness probe failure causes the pod to restart. You need to make sure the probe doesn't start until ...
- Clear Some Doubts on Health Probes for Pod - Zhimin Wen
  The liveness probe handler is listed as below. It will log the request's remote address so that we will be able to know ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Liveness probes are important and essential for proper self-healing, so we should tackle this for sure (I suspect some Spring Boot peculiarities here).
With regard to the OOM error, we should (a) investigate our configured limits and (b) verify that we don't have a memory leak.
We’re getting an awful experience because of pod restarts due to liveness probes. We should either remove them or configure them correctly.
Let me dial in some OpenShift/Kubernetes support by calling @chirino, @iocanel and @rhuss for help.
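As a stopgap along the lines of the "remove them or configure them correctly" suggestion above, probe settings on a running OpenShift deployment can be adjusted with `oc set probe`. A minimal sketch, assuming the REST pod is managed by a deployment config named `syndesis-rest` (the name is an assumption for illustration):

```shell
# Relax the liveness probe: wait 2 minutes before the first check.
oc set probe dc/syndesis-rest --liveness --initial-delay-seconds=120

# Or remove the liveness probe entirely until startup behavior is reliable.
oc set probe dc/syndesis-rest --liveness --remove
```

Either command triggers a new rollout of the deployment config, so the change takes effect on the next pod.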