
Cannot schedule PODs: NodeHasDiskPressure

See original GitHub issue

I am trying microk8s on Ubuntu 18.04 and it cannot run any Pod. These are the node statuses after each command:

After a fresh install (no dns, no dashboard):

Name:               monotop
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=monotop
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Sat, 26 May 2018 11:09:11 -0600
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:21 -0600   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.36.100
  Hostname:    monotop
Capacity:
 cpu:                4
 ephemeral-storage:  575354004Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8074920Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  530246249209
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             7972520Ki
 pods:               110
System Info:
 Machine ID:                 813e56ef1c4f171bda95b46b5448007c
 System UUID:                CA17CB86-CBF2-E054-A153-18E4E8C4154B
 Boot ID:                    d69b0b06-7267-40bd-80e8-b53992bf96c5
 Kernel Version:             4.15.0-12-generic
 OS Image:                   Ubuntu 18.04 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.10.3
 Kube-Proxy Version:         v1.10.3
ExternalID:                  monotop
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 33s                kube-proxy, monotop  Starting kube-proxy.
  Normal  Starting                 30s                kubelet, monotop     Starting kubelet.
  Normal  NodeHasSufficientPID     28s (x5 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  28s                kubelet, monotop     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure

After microk8s.enable dns, the node status is:

Name:               monotop
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=monotop
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Sat, 26 May 2018 11:09:11 -0600
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:11:10 -0600   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:11:10 -0600   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.36.100
  Hostname:    monotop
Capacity:
 cpu:                4
 ephemeral-storage:  575354004Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8074920Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  530246249209
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             7972520Ki
 pods:               110
System Info:
 Machine ID:                 813e56ef1c4f171bda95b46b5448007c
 System UUID:                CA17CB86-CBF2-E054-A153-18E4E8C4154B
 Boot ID:                    d69b0b06-7267-40bd-80e8-b53992bf96c5
 Kernel Version:             4.15.0-12-generic
 OS Image:                   Ubuntu 18.04 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.10.3
 Kube-Proxy Version:         v1.10.3
ExternalID:                  monotop
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  Type     Reason                   Age              From                 Message
  ----     ------                   ----             ----                 -------
  Normal   Starting                 3m               kube-proxy, monotop  Starting kube-proxy.
  Normal   Starting                 3m               kubelet, monotop     Starting kubelet.
  Normal   NodeAllocatableEnforced  3m               kubelet, monotop     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientPID     3m (x5 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientDisk    3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal   NodeHasNoDiskPressure    3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal   NodeNotReady             1m               kubelet, monotop     Node monotop status is now: NodeNotReady
  Normal   Starting                 1m               kubelet, monotop     Starting kubelet.
  Normal   NodeHasSufficientDisk    1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal   NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal   NodeAllocatableEnforced  1m               kubelet, monotop     Updated Node Allocatable limit across pods
  Normal   NodeHasDiskPressure      1m               kubelet, monotop     Node monotop status is now: NodeHasDiskPressure
  Normal   NodeReady                1m               kubelet, monotop     Node monotop status is now: NodeReady
  Warning  EvictionThresholdMet     39s              kubelet, monotop     Attempting to reclaim imagefs
  Warning  EvictionThresholdMet     9s (x8 over 1m)  kubelet, monotop     Attempting to reclaim nodefs

As you can see, it reports DiskPressure, but there is around 30 GB free on my system. The Pod statuses are:

$ microk8s.kubectl get pods --all-namespaces
NAMESPACE     NAME                        READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-598d7bf7d4-q26rl   0/3       Pending   0          2m

Any advice is appreciated.

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

7 reactions
ktsakalozos commented, May 27, 2018

Hi @edsiper ,

In microk8s your machine is also the node Kubernetes is using. Disk pressure means the node does not have enough free disk space for Kubernetes to schedule pods. You can either free disk space by deleting files, or set the pod eviction limits as described in [1].
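This can be counter-intuitive when tens of gigabytes are free: kubelet's default hard eviction threshold for the node filesystem is nodefs.available<10%, so the trigger point scales with disk size. A rough check, using the ephemeral-storage capacity reported in the node description above (the 10% figure is the upstream kubelet default, not something microk8s-specific):

```python
# Why ~30 GB free can still trigger DiskPressure on a large disk.
capacity_ki = 575_354_004                 # ephemeral-storage from `kubectl describe node`
capacity_gib = capacity_ki / (1024 * 1024)

threshold_gib = capacity_gib * 0.10       # default eviction-hard: nodefs.available<10%
free_gib = 30                             # roughly what is free on this machine

print(f"capacity ~ {capacity_gib:.0f} GiB, eviction threshold ~ {threshold_gib:.0f} GiB")
print("DiskPressure:", free_gib < threshold_gib)
```

With a ~549 GiB disk the threshold sits near 55 GiB, so 30 GB free is already below it and kubelet reports disk pressure.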

To set the pod eviction limits you have to set the --eviction-hard and --eviction-minimum-reclaim parameters in kubelet. The arguments used to start the daemons microk8s ships are under /var/snap/microk8s/current/args/. I suggest you append the following line to /var/snap/microk8s/current/args/kubelet:

 --eviction-hard="memory.available<500Mi,nodefs.available<1Gi,imagefs.available<1Gi"

Then restart kubelet: sudo systemctl restart snap.microk8s.daemon-kubelet
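A minimal sketch of those two steps, run here against a temporary file standing in for the real args file (on an actual microk8s install you would edit /var/snap/microk8s/current/args/kubelet directly):

```shell
# Stand-in for /var/snap/microk8s/current/args/kubelet.
ARGS_FILE="$(mktemp)"

# Append the relaxed eviction thresholds suggested above.
echo '--eviction-hard="memory.available<500Mi,nodefs.available<1Gi,imagefs.available<1Gi"' >> "$ARGS_FILE"

# Verify the flag landed in the file.
grep -- '--eviction-hard' "$ARGS_FILE"

# On a real node, restart kubelet so it picks up the new args:
#   sudo systemctl restart snap.microk8s.daemon-kubelet
```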

You bring up a valid point: the default eviction limits make sense for a server setup but may be too conservative for a standalone dev environment. We should set sensible defaults here: https://github.com/juju-solutions/microk8s/blob/master/microk8s-resources/default-args/kubelet

Thanks

[1] https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/

0 reactions
rzr commented, Aug 2, 2019

Can’t those arguments be passed via kubectl or the API?


