
Power loss on node causes instability

See original GitHub issue

Currently running v1.23.3, and the node lost power. This has happened before, so it does not seem hard to reproduce: roughly 25%-50% of the time, a running node fails to start up again after you pull the power on it. It appears to be boot-looping, because the node never returns to the Ready state.
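
A quick way to observe the symptom and follow the failing service (a minimal sketch; the systemd unit name assumes snapd's standard snap.<snap>.<app> naming, which matches the microk8s.daemon-kubelite identifier in the log below):

# From another control-plane node: a boot-looping member stays NotReady
microk8s kubectl get nodes

# On the affected node, follow the kubelite service the snippet below comes from
sudo journalctl -f -u snap.microk8s.daemon-kubelite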

Here’s a snippet of journalctl -f:

Jan 31 22:00:33 atlas sudo[31520]:     root : TTY=unknown ; PWD=/var/snap/microk8s/2948 ; USER=root ; ENV=LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/2948/lib:/snap/microk8s/2948/usr/lib:/snap/microk8s/2948/lib/x86_64-linux-gnu:/snap/microk8s/2948/usr/lib/x86_64-linux-gnu ; COMMAND=/snap/microk8s/2948/bin/sed -i s;^--start-control-plane=.*;--start-control-plane=true; /var/snap/microk8s/2948/args/kubelite
Jan 31 22:00:33 atlas sudo[31520]: pam_unix(sudo:session): session opened for user root by (uid=0)
Jan 31 22:00:33 atlas sudo[31520]: pam_unix(sudo:session): session closed for user root
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + '[' -e /var/snap/microk8s/2948/var/lock/stopped.lock ']'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + n=0
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + '[' 0 -ge 20 ']'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31522]: + ip route
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31523]: + grep default
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + break
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + '[' -e /var/snap/microk8s/2948/args/ha-conf ']'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ get_opt_in_config --storage-dir k8s-dqlite
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ local opt=--storage-dir
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ local config_file=/var/snap/microk8s/2948/args/k8s-dqlite
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ val=
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31525]: +++ grep -qE '^--storage-dir=' /var/snap/microk8s/2948/args/k8s-dqlite
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31527]: +++ grep -E '^--storage-dir' /var/snap/microk8s/2948/args/k8s-dqlite
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31528]: +++ cut -d= -f2
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ val='${SNAP_DATA}/var/kubernetes/backend/'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31524]: ++ echo '${SNAP_DATA}/var/kubernetes/backend/'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + storage_param='${SNAP_DATA}/var/kubernetes/backend/'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31529]: ++ eval echo '${SNAP_DATA}/var/kubernetes/backend/'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31529]: +++ echo /var/snap/microk8s/2948/var/kubernetes/backend/
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + storage_dir=/var/snap/microk8s/2948/var/kubernetes/backend/
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31530]: ++ grep -qE '^failure-domain' /var/snap/microk8s/2948/args/ha-conf
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ get_opt_in_config failure-domain ha-conf
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ local opt=failure-domain
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ local config_file=/var/snap/microk8s/2948/args/ha-conf
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ val=
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31532]: +++ grep -qE '^failure-domain=' /var/snap/microk8s/2948/args/ha-conf
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31534]: +++ grep -E '^failure-domain' /var/snap/microk8s/2948/args/ha-conf
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31535]: +++ cut -d= -f2
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ val=1
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31531]: ++ echo 1
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + val=1
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + echo 1
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31537]: ++ cat /var/snap/microk8s/2948/args/kubelet
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31538]: ++ grep pod-cidr
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31539]: ++ tr = ' '
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31540]: ++ gawk '{print $2}'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + pod_cidr=
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + '[' -z '' ']'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31542]: ++ jq .Network /var/snap/microk8s/2948/args/flannel-network-mgr-config
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31543]: ++ tr -d '\"'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + pod_cidr=10.1.0.0/16
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + '[' -z 10.1.0.0/16 ']'
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + iptables -C FORWARD -s 10.1.0.0/16 -m comment --comment 'generated for MicroK8s pods' -j ACCEPT
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31491]: + ufw version
Jan 31 22:00:33 atlas microk8s.daemon-kubelite[31546]: ++ ufw status
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + ufw='Status: inactive'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31549]: + echo Status: inactive
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31550]: + grep -q 'Status: active'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep -e '--address ' /var/snap/microk8s/2948/args/containerd
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31553]: ++ grep -e '--address ' /var/snap/microk8s/2948/args/containerd
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31554]: ++ gawk '{print $2}'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + socket='${SNAP_COMMON}/run/containerd.sock'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31555]: ++ eval echo '${SNAP_COMMON}/run/containerd.sock'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31555]: +++ echo /var/snap/microk8s/common/run/containerd.sock
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + socket_file_expand=/var/snap/microk8s/common/run/containerd.sock
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + n=0
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' 0 -ge 10 ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + test -S /var/snap/microk8s/common/run/containerd.sock
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + break
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -e /proc/31491/cgroup ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31557]: ++ gawk -F '[:]' '(/cpu/ && !/cpuset/) || /memory/ {print $3}' /proc/31491/cgroup
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31558]: ++ uniq
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31559]: ++ wc -l
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + [[ 1 -eq 2 ]]
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -L /var/lib/kubelet ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + echo '`/var/lib/kubelet` is a symbolic link'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: `/var/lib/kubelet` is a symbolic link
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + ls -l /var/lib/kubelet
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31560]: lrwxrwxrwx 1 root root 41 Dec 22 02:30 /var/lib/kubelet -> /var/snap/microk8s/common/var/lib/kubelet
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep -E '(--advertise-address|--bind-address)' /var/snap/microk8s/2948/args/kube-apiserver
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + rm -f /var/snap/microk8s/2948/external_ip.txt
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + n=0
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' 0 -ge 20 ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31563]: + ip route
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31564]: + grep default
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + break
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -e /var/snap/microk8s/2948/args/cni-network/cni.yaml ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + ipvs='ipv4 ipv6'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + for ipv in $ipvs
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -e /proc/sys/net/ipv4/conf/all/forwarding ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep -e 1 /proc/sys/net/ipv4/conf/all/forwarding
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31565]: 1
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + for ipv in $ipvs
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -e /proc/sys/net/ipv6/conf/all/forwarding ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep -e 1 /proc/sys/net/ipv6/conf/all/forwarding
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31566]: 1
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -f /var/snap/microk8s/2948/var/lock/host-access-enabled ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep -E lxc /proc/1/environ
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -f /proc/sys/net/bridge/bridge-nf-call-iptables ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + '[' -f /proc/sys/net/bridge/bridge-nf-call-iptables ']'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + grep 0 /proc/sys/net/bridge/bridge-nf-call-iptables
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31569]: ++ cat /var/snap/microk8s/2948/args/kubelite
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + declare -a 'args=(--scheduler-args-file=$SNAP_DATA/args/kube-scheduler
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --controller-manager-args-file=$SNAP_DATA/args/kube-controller-manager
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --proxy-args-file=$SNAP_DATA/args/kube-proxy
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --kubelet-args-file=$SNAP_DATA/args/kubelet
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --apiserver-args-file=$SNAP_DATA/args/kube-apiserver
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --kubeconfig-file=$SNAP_DATA/credentials/client.config
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: --start-control-plane=true)'
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: + exec /snap/microk8s/2948/kubelite --scheduler-args-file=/var/snap/microk8s/2948/args/kube-scheduler --controller-manager-args-file=/var/snap/microk8s/2948/args/kube-controller-manager --proxy-args-file=/var/snap/microk8s/2948/args/kube-proxy --kubelet-args-file=/var/snap/microk8s/2948/args/kubelet --apiserver-args-file=/var/snap/microk8s/2948/args/kube-apiserver --kubeconfig-file=/var/snap/microk8s/2948/credentials/client.config --start-control-plane=true
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: Starting kubelite
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.155098   31491 daemon.go:73] Waiting for the API server
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.156777   31491 daemon.go:65] Starting API Server
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.158354   31491 server.go:568] external host was not specified, using 10.0.0.241
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:34.158400   31491 authentication.go:523] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.158984   31491 server.go:175] Version: v1.23.3-2+d441060727c463
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.965833   31491 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.965850   31491 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.966916   31491 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan 31 22:00:34 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:34.966938   31491 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.025143   31491 genericapiserver.go:538] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:35.027205   31491 instance.go:274] Using reconciler: lease
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.378615   31491 genericapiserver.go:538] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.380922   31491 genericapiserver.go:538] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.411103   31491 genericapiserver.go:538] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.413370   31491 genericapiserver.go:538] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.421288   31491 genericapiserver.go:538] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.425202   31491 genericapiserver.go:538] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.433407   31491 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.433424   31491 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.435328   31491 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.435342   31491 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.441313   31491 genericapiserver.go:538] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.447448   31491 genericapiserver.go:538] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.455412   31491 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.455428   31491 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.458110   31491 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:35.462787   31491 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:35.462807   31491 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan 31 22:00:35 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:35.493202   31491 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.409824   31491 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/snap/microk8s/2948/certs/front-proxy-ca.crt"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.409839   31491 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/snap/microk8s/2948/certs/ca.crt"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.410259   31491 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/snap/microk8s/2948/certs/server.crt::/var/snap/microk8s/2948/certs/server.key"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.410826   31491 secure_serving.go:266] Serving securely on [::]:16443
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.410939   31491 apf_controller.go:317] Starting API Priority and Fairness config controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.410929   31491 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411006   31491 available_controller.go:491] Starting AvailableConditionController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411035   31491 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411180   31491 customresource_discovery_controller.go:209] Starting DiscoveryController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411193   31491 establishing_controller.go:76] Starting EstablishingController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.410970   31491 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411273   31491 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411282   31491 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411333   31491 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411406   31491 autoregister_controller.go:141] Starting autoregister controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411437   31491 cache.go:32] Waiting for caches to sync for autoregister controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411492   31491 controller.go:83] Starting OpenAPI AggregationController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411489   31491 crd_finalizer.go:266] Starting CRDFinalizer
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411528   31491 naming_controller.go:291] Starting NamingConditionController
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411614   31491 crdregistration_controller.go:111] Starting crd-autoregister controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411636   31491 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411537   31491 controller.go:85] Starting OpenAPI controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411776   31491 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/snap/microk8s/2948/certs/front-proxy-client.crt::/var/snap/microk8s/2948/certs/front-proxy-client.key"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411854   31491 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/snap/microk8s/2948/certs/ca.crt"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.412045   31491 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/snap/microk8s/2948/certs/front-proxy-ca.crt"
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.411799   31491 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.412161   31491 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:37.446134   31491 lease.go:233] Resetting endpoints for master service "kubernetes" to [10.0.0.244 10.0.0.240 10.0.0.243 10.0.0.242]
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.448016   31491 controller.go:611] quota admission added evaluator for: endpoints
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.480967   31491 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.511415   31491 cache.go:39] Caches are synced for AvailableConditionController controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.511458   31491 apf_controller.go:322] Running API Priority and Fairness config worker
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.511637   31491 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.511845   31491 shared_informer.go:247] Caches are synced for crd-autoregister
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.511872   31491 cache.go:39] Caches are synced for autoregister controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:37.512253   31491 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Jan 31 22:00:37 atlas microk8s.daemon-kubelite[31491]: E0131 22:00:37.523931   31491 controller.go:155] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
Jan 31 22:00:38 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:38.410261   31491 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Jan 31 22:00:38 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:38.410317   31491 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Jan 31 22:00:38 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:38.423191   31491 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
Jan 31 22:00:38 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:38.497476   31491 lease.go:233] Resetting endpoints for master service "kubernetes" to [10.0.0.240 10.0.0.243 10.0.0.242 10.0.0.244 10.0.0.241]
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.165355   31491 daemon.go:55] Starting Kubelet
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --eviction-hard has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.165434   31491 daemon.go:44] Starting Proxy
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.165695   31491 server.go:225] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.165712   31491 daemon.go:33] Starting Scheduler
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --address has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --bind-address instead.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.165892   31491 daemon.go:22] Starting Controller Manager
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: Flag --address has been deprecated, This flag has no effect now and will be removed in v1.24.
Jan 31 22:00:39 atlas systemd[1]: Started Kubernetes systemd probe.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.181204   31491 server.go:451] "Kubelet version" kubeletVersion="v1.23.3-2+d441060727c463"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.183208   31491 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/snap/microk8s/2948/certs/ca.crt"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.193920   31491 node.go:163] Successfully retrieved node IP: 10.0.0.241
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.193951   31491 server_others.go:138] "Detected node IP" address="10.0.0.241"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.193987   31491 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
Jan 31 22:00:39 atlas systemd[1]: run-r7fceb91a56794a15b7cf1b402e58885c.scope: Succeeded.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.200332   31491 server_others.go:206] "Using iptables Proxier"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.200373   31491 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.200387   31491 server_others.go:214] "Creating dualStackProxier for iptables"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.200403   31491 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.200793   31491 server.go:656] "Version info" version="v1.23.3-2+d441060727c463"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.211949   31491 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=1048576
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.212169   31491 config.go:317] "Starting service config controller"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.212187   31491 shared_informer.go:240] Waiting for caches to sync for service config
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.212248   31491 config.go:226] "Starting endpoint slice config controller"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.212268   31491 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.215358   31491 controller.go:611] quota admission added evaluator for: events.events.k8s.io
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.312506   31491 shared_informer.go:247] Caches are synced for endpoint slice config
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.312566   31491 shared_informer.go:247] Caches are synced for service config
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: E0131 22:00:39.356775   31491 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30126: bind: address already in use" port={Description:nodePort for haproxy/haproxy-ingress:ssh-2222 IP: IPFamily:4 Port:30126 Protocol:TCP}
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: E0131 22:00:39.356911   31491 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31038: bind: address already in use" port={Description:nodePort for haproxy/haproxy-ingress:http-80 IP: IPFamily:4 Port:31038 Protocol:TCP}
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: E0131 22:00:39.357003   31491 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30101: bind: address already in use" port={Description:nodePort for haproxy/haproxy-ingress:https-443 IP: IPFamily:4 Port:30101 Protocol:TCP}
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: E0131 22:00:39.357201   31491 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31932: bind: address already in use" port={Description:nodePort for haproxy/haproxy-ingress:ssh-22 IP: IPFamily:4 Port:31932 Protocol:TCP}
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.523273   31491 serving.go:348] Generated self-signed cert in-memory
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:39.970206   31491 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:39.970234   31491 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:39.970259   31491 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.984660   31491 server.go:143] "Starting Kubernetes Scheduler" version="v1.23.3-2+d441060727c463"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.986362   31491 secure_serving.go:200] Serving securely on [::]:10259
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.986427   31491 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 31 22:00:39 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:39.995895   31491 serving.go:348] Generated self-signed cert in-memory
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:40.086709   31491 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:40.502725   31491 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:40.502746   31491 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: W0131 22:00:40.502761   31491 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:40.502777   31491 controllermanager.go:196] Version: v1.23.3-2+d441060727c463
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:40.504139   31491 secure_serving.go:200] Serving securely on [::]:10257
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:40.504258   31491 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Jan 31 22:00:40 atlas microk8s.daemon-kubelite[31491]: I0131 22:00:40.504363   31491 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...

The inspect tarball inspection-report-20220131_222640.tar.gz is attached.
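
For anyone reproducing this: such reports are generated with MicroK8s' built-in inspection tool, which bundles service logs and configuration into a tarball and prints the path it was written to:

# Collects service logs and configuration into an inspection-report tarball
sudo microk8s inspect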

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
jhughes2112 commented on Feb 1, 2022

The process seemed to be sudo snap stop microk8s, followed by rm -rf /var/snap/microk8s/common/run/containerd, then sudo snap start microk8s. Although that file was left behind, it seems to have started up okay.
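
Spelled out as steps (a sketch of the procedure described above; rm -rf is destructive, so double-check the path before running it):

# Stop all MicroK8s services so containerd releases its runtime state
sudo snap stop microk8s

# Clear containerd's runtime directory, where stale sockets and state survive a power loss
sudo rm -rf /var/snap/microk8s/common/run/containerd

# Start everything back up; the runtime directory is recreated as needed
sudo snap start microk8s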

For reference, the only things left in the /containerd folder after stopping were these:

.
./containerd
./containerd/last-start-date
./containerd.sock.ttrpc

So either the containerd.sock.ttrpc file was blocking startup, or it is an internal race condition in microk8s on startup?
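
One way to test the stale-socket theory (a hedged sketch; the path matches the socket_file_expand value in the startup log above, and test -S is the same check the startup script itself runs):

# With MicroK8s stopped, list what survives under the run directory
ls -la /var/snap/microk8s/common/run/

# A socket that still passes this check while containerd is down is stale
test -S /var/snap/microk8s/common/run/containerd.sock && echo "stale socket present"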

0 reactions
jhughes2112 commented on Apr 6, 2022

Ah, I figured it out, I think. In my config, I was trying to add mirrors for docker.io, since I have hit the pull limit before and been left in a bad state:

sudo vi /var/snap/microk8s/current/args/containerd-template.toml
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://mirror.gcr.io","https://registry-1.docker.io"]

However, this interacts with the two default lines above it that define a config_path. Once I removed the config_path, everything started up.
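
For context, a sketch of the conflict (the exact default lines differ by MicroK8s revision, so the config_path value here is illustrative): containerd treats config_path and the inline mirrors table as mutually exclusive ways of configuring registries, so leaving both in the template can prevent containerd from starting.

  [plugins."io.containerd.grpc.v1.cri".registry]
    # Default in the template; containerd refuses inline mirrors while this is
    # set, so it has to be removed when declaring mirrors directly:
    # config_path = "${SNAP_DATA}/args/certs.d"

    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://mirror.gcr.io","https://registry-1.docker.io"]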

Read more comments on GitHub >
