Pods stuck in ContainerCreating status, Failed create pod sandbox
When running "microk8s.enable dns dashboard", the pods stay in ContainerCreating status:
$ sudo snap install microk8s --beta --classic
microk8s (beta) v1.10.3 from 'canonical' installed
$ microk8s.kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 25s
$ microk8s.enable dns dashboard
Applying DNS manifest
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created
Restarting kubelet
Done
deployment.extensions "kubernetes-dashboard" created
service "kubernetes-dashboard" created
service "monitoring-grafana" created
replicationcontroller "monitoring-influxdb-grafana-v4" created
service "monitoring-influxdb" created
$ microk8s.kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-dns-598d7bf7d4-f8lbm 0/3 ContainerCreating 0 9s
kube-system pod/kubernetes-dashboard-545868474d-ltkg8 0/1 Pending 0 4s
kube-system pod/monitoring-influxdb-grafana-v4-5qxm6 0/2 Pending 0 4s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicationcontroller/monitoring-influxdb-grafana-v4 1 1 0 4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 1m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP 9s
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.204 <none> 80/TCP 4s
kube-system service/monitoring-grafana ClusterIP 10.152.183.115 <none> 80/TCP 4s
kube-system service/monitoring-influxdb ClusterIP 10.152.183.228 <none> 8083/TCP,8086/TCP 4s
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/kube-dns 1 1 1 0 9s
kube-system deployment.apps/kubernetes-dashboard 1 1 1 0 4s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/kube-dns-598d7bf7d4 1 1 0 9s
kube-system replicaset.apps/kubernetes-dashboard-545868474d 1 1 0 4s
The pods remain stuck in ContainerCreating.
$ microk8s.kubectl describe pod/kubernetes-dashboard-545868474d-ltkg8 --namespace kube-system
Name: kubernetes-dashboard-545868474d-ltkg8
Namespace: kube-system
Node: <hostname>/192.168.1.17
Start Time: Tue, 12 Jun 2018 14:33:39 -0400
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=1014240308
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Pending
IP:
Controlled By: ReplicaSet/kubernetes-dashboard-545868474d
Containers:
kubernetes-dashboard:
Container ID:
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
Image ID:
Port: 9090/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vxq5n (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vxq5n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vxq5n
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned kubernetes-dashboard-545868474d-ltkg8 to <hostname>
Normal SuccessfulMountVolume 13m kubelet, <hostname> MountVolume.SetUp succeeded for volume "default-token-vxq5n"
Warning FailedCreatePodSandBox 13m kubelet, <hostname> Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin kubenet failed to set up pod "kubernetes-dashboard-545868474d-ltkg8_kube-system" network: Error adding container to network: failed to Statfs "/proc/6763/ns/net": permission denied
Normal SandboxChanged 3m (x40 over 13m) kubelet, <hostname> Pod sandbox changed, it will be killed and re-created.
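A plausible way to dig further (these diagnostic commands are not part of the original report; the service and command names are assumed from the microk8s snap layout) is to collect the cluster diagnostics, pull the kubelet logs around the kubenet error, and check for AppArmor denials, since a permission-denied error on /proc/<pid>/ns/net from a snap-confined kubelet often shows up as an AppArmor audit message:
$ microk8s.inspect
$ journalctl -u snap.microk8s.daemon-kubelet | grep -i statfs
$ sudo dmesg | grep -i 'apparmor="DENIED"'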
Top GitHub Comments
I've got the same problem. How do I solve it?
After reinstalling microk8s, a different error started appearing upon first attempting to apply the deployment.yml:

I don't understand why this deployment would suddenly be identified as invalid; as mentioned above, I made no changes to the file, and microk8s was already up to date (I ensured that I had run snap refresh). I tried removing the offending syntax from the deployment, but that subsequently triggered an error about a taint. So I inspected the system pods and found that the dns pod itself is only 2/3 ready, with 66 restarts. Same goes for hostpath-provisioner. I made sure that I had done the appropriate sudo iptables -P FORWARD ACCEPT and disabled/re-enabled dns. Still the same problem. When I describe the pods, they're exhibiting the exact same symptoms (rpc errors followed by "Pod sandbox changed, it will be killed and re-created.").
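For reference, the recovery steps the commenter describes correspond roughly to the following commands (exact invocations are assumed here, not quoted from the issue):
$ sudo iptables -P FORWARD ACCEPT
$ microk8s.disable dns
$ microk8s.enable dns
$ microk8s.kubectl get pods --all-namespaces
$ microk8s.kubectl describe pod/<pod-name> --namespace kube-system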