High Kube API Server Usage 600+ req/sec
What happened:
I’m seeing extremely high CPU usage across multiple clusters that have this deployed, driven by a high request rate to the Kube API Server for WATCH pods and WATCH namespaces.
I’m currently at 600-700 requests/sec; 8 weeks ago I was at 1.2k requests/sec, and 12 weeks ago over 2k requests/sec.
I’ve also noticed that fluentd CPU usage has been high over the same period, which is likely correlated.
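For reference, one rough way to confirm which verbs and resources are driving the load is to scrape the apiserver metrics endpoint directly. The metric name and labels vary by Kubernetes version, so treat this as a sketch rather than an exact query:

```sh
# Dump apiserver metrics and pull out WATCH counters for pods and namespaces.
# Older releases expose apiserver_request_count, newer ones apiserver_request_total;
# adjust the pattern to whichever your cluster reports.
kubectl get --raw /metrics \
  | grep -E 'apiserver_request_(count|total)\{.*verb="WATCH"' \
  | grep -E 'resource="(pods|namespaces)"'
```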
What you expected to happen:
I did not expect it to make this many requests per second.
How to reproduce it (as minimally and precisely as possible):
Unsure yet; still investigating. As far as I can tell my configuration is standard, with nothing out of the ordinary beyond the basic setup.
Anything else we need to know?:
A restart of the fluentd-hec container fixes the problem. It appears something gets stuck in a tight failure loop and constantly spawns requests to the kube-apiserver.
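As a stopgap, bouncing the logging pods clears the loop. A minimal sketch, assuming the default splunk-connect-for-kubernetes chart naming (the namespace placeholder and label selector below are assumptions; adjust to your install):

```sh
# Workaround until upgrading: delete the fluentd logging pods so the DaemonSet
# recreates them. The label selector is an assumption based on the default
# chart labels; substitute your own namespace.
kubectl -n <namespace> delete pods -l app=splunk-kubernetes-logging
```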
Environment:
- Kubernetes version (use `kubectl version`): 1.5.10
- Ruby version (use `ruby --version`):
- OS (e.g.: `cat /etc/os-release`): rancher-os
- Splunk version:
- Others:
Comments (9 total, 3 by maintainers):
Dupe of #359. 1.4.1 seems to have fixed it. Thanks.
Yep, known issue with 1.4.0; see https://github.com/splunk/splunk-connect-for-kubernetes/issues/359
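For anyone else hitting this, a sketch of the upgrade path via Helm; the repo URL, chart name, release name, and values file below are assumptions based on the project's published chart, so adapt them to your deployment:

```sh
# Upgrade Splunk Connect for Kubernetes to 1.4.1 (names below are assumptions;
# substitute your own Helm release name and values file)
helm repo add splunk https://splunk.github.io/splunk-connect-for-kubernetes/
helm repo update
helm upgrade <release-name> splunk/splunk-connect-for-kubernetes \
  --version 1.4.1 -f <your-values>.yaml
```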