Missing 'pod' tag from kube-state-metrics reported metrics
The `pod` tag is missing for some pod-related metrics reported by kube-state-metrics:
| Datadog metric | Based on KSM metric |
|---|---|
| kubernetes_state.container.status_report.count.terminated | kube_pod_container_status_terminated |
| kubernetes_state.container.status_report.count.waiting | kube_pod_container_status_waiting |
For reference, the list of tags for each pod metric is documented here: https://github.com/kubernetes/kube-state-metrics/blob/v1.3.1/Documentation/pod-metrics.md
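A quick way to confirm that the upstream series do carry the `pod` label is to inspect the raw kube-state-metrics output. The sketch below does exactly that; the service URL and port are assumptions (adjust them to your deployment), and it only requires the `requests` package.

```python
# Minimal sketch: print the raw Prometheus samples for the two KSM metrics so
# their labels (including `pod`) are visible. The endpoint URL is an assumption.
import requests

KSM_METRICS_URL = "http://kube-state-metrics.kube-system.svc:8080/metrics"

def show_samples(metric_name):
    """Print every sample line of `metric_name` from the KSM /metrics page."""
    body = requests.get(KSM_METRICS_URL, timeout=5).text
    for line in body.splitlines():
        if line.startswith(metric_name + "{"):
            # e.g. kube_pod_container_status_terminated{container="app",namespace="default",pod="app-1234",reason="Completed"} 1
            print(line)

for name in ("kube_pod_container_status_terminated", "kube_pod_container_status_waiting"):
    show_samples(name)
```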
Output of the info page
Warning: Known bug in Linux Kernel 3.18+ causes 'status' to fail.
Calling 'info', instead...
====================
Collector (v 5.23.0)
====================
Status date: 2018-04-18 12:23:39 (7s ago)
Pid: 37
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/collector.log
Clocks
======
NTP offset: -0.0016 s
System UTC time: 2018-04-18 12:23:47.222280
Paths
=====
conf.d: /etc/dd-agent/conf.d
checks.d: Not found
Hostnames
=========
socket-hostname: dd-agent-*****
hostname: gke-******.internal
socket-fqdn: dd-agent-*****
Checks
======
system_core (1.0.0)
-------------------
- instance #0 [OK]
- Collected 41 metrics, 0 events & 0 service checks
network (1.5.0)
---------------
- instance #0 [OK]
- Collected 98 metrics, 0 events & 0 service checks
kubernetes (1.5.0)
------------------
- instance #0 [OK]
- Collected 363 metrics, 0 events & 3 service checks
ntp (1.2.0)
-----------
- Collected 0 metrics, 0 events & 0 service checks
kubernetes_state (2.4.0)
------------------------
- instance #0 [OK]
- Collected 2618 metrics, 0 events & 199 service checks
disk (1.2.0)
------------
- instance #0 [OK]
- Collected 58 metrics, 0 events & 0 service checks
kube_proxy (Unknown Wheel)
--------------------------
- Collected 0 metrics, 0 events & 0 service checks
docker_daemon (1.9.0)
---------------------
- instance #0 [OK]
- Collected 467 metrics, 0 events & 1 service check
http_check (2.0.0)
------------------
- instance #0 [OK]
- instance #1 [OK]
- Collected 8 metrics, 0 events & 4 service checks
Emitters
========
- http_emitter [OK]
====================
Dogstatsd (v 5.23.0)
====================
Status date: 2018-04-18 12:23:41 (5s ago)
Pid: 34
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/dogstatsd.log
Flush count: 6794
Packet Count: 56905
Packets per second: 2.1
Metric count: 51
Event count: 0
Service check count: 0
====================
Forwarder (v 5.23.0)
====================
Status date: 2018-04-18 12:23:44 (3s ago)
Pid: 33
Platform: Linux-4.4.111+-x86_64-with-debian-9.4
Python Version: 2.7.14, 64bit
Logs: <stderr>, /var/log/datadog/forwarder.log
Queue Size: 0 bytes
Queue Length: 0
Flush Count: 22052
Transactions received: 16684
Transactions flushed: 16684
Transactions rejected: 0
API Key Status: API Key is valid
======================
Trace Agent (v 5.23.0)
======================
Pid: 32
Uptime: 68050 seconds
Mem alloc: 3045832 bytes
Hostname: gke-******.internal
Receiver: 0.0.0.0:8126
API Endpoint: https://trace.agent.datadoghq.com
--- Receiver stats (1 min) ---
--- Writer stats (1 min) ---
Traces: 0 payloads, 0 traces, 0 bytes
Stats: 0 payloads, 0 stats buckets, 0 bytes
Services: 0 payloads, 0 services, 0 bytes
Additional environment details (Operating System, Cloud provider, etc):
- kube-state-metrics version 1.3.1
- Kubernetes 1.8.10 on GKE
Additional information you deem important (e.g. issue happens only occasionally):
- kubernetes_state.container.status_report.count.terminated tag processing: https://github.com/DataDog/integrations-core/blob/424f23f27e3dd1949ab86ca277723018e4c7cbaa/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py#L332
- kubernetes_state.container.status_report.count.waiting tag processing: https://github.com/DataDog/integrations-core/blob/424f23f27e3dd1949ab86ca277723018e4c7cbaa/kubernetes_state/datadog_checks/kubernetes_state/kubernetes_state.py#L350 (an illustrative sketch of the requested tagging follows this list)
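For illustration only, here is a minimal sketch of the tagging behaviour being requested, assuming the check can see each sample's Prometheus labels as a plain dict. The function name and the reason whitelist below are hypothetical and are not the integration's actual code.

```python
# Illustrative sketch, not the integration's actual code: build Datadog tags
# for a kube_pod_container_status_* sample from its Prometheus labels.
WHITELISTED_TERMINATED_REASONS = {"oomkilled", "containercannotrun", "error"}  # hypothetical whitelist

def container_status_tags(labels, whitelisted_reasons):
    """`labels` is the sample's label dict as exposed by kube-state-metrics."""
    tags = []
    reason = labels.get("reason", "").lower()
    if reason in whitelisted_reasons:
        tags.append("reason:%s" % reason)
    if "namespace" in labels:
        tags.append("kube_namespace:%s" % labels["namespace"])
    # The missing piece reported in this issue: also forward the `pod` label.
    if "pod" in labels:
        tags.append("pod:%s" % labels["pod"])
    return tags

print(container_status_tags(
    {"namespace": "default", "pod": "app-1234", "container": "app", "reason": "OOMKilled"},
    WHITELISTED_TERMINATED_REASONS,
))
# ['reason:oomkilled', 'kube_namespace:default', 'pod:app-1234']
```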
Top GitHub Comments
@pdecat Thanks for your feedback! I added this to our backlog, and we'll get back to you here once we have a chance to look into it, hopefully soon.
I had a look, and it seems the pod label is indeed not gathered here, as the metric relates specifically to containers rather than pods. However, since the information is there (example) and would be useful, I'll check with the team about this.