microk8s.kubectl should be used for internal health checking, as using the system kubectl may fail
Summary
`microk8s enable <addon>` may fail, especially when microk8s' Kubernetes version and the system's kubectl version are different.
What Should Happen Instead?
`microk8s enable <addon>` works without an error.
Reproduction Steps
1. Install microk8s pinned to 1.21 (for Kubeflow on microk8s):

   ```shell
   sudo snap install microk8s --classic --channel=1.21
   ```

2. Install kubectl at the latest version (for other Kubernetes clusters):

   ```shell
   sudo snap install kubectl --classic
   ```

3. Enable addons:

   ```shell
   microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
   ```

The enable command then fails:
```
$ microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
Traceback (most recent call last):
  File "/snap/microk8s/3202/scripts/wrappers/enable.py", line 43, in <module>
    enable(prog_name="microk8s enable")
  File "/snap/microk8s/3202/usr/lib/python3/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/snap/microk8s/3202/usr/lib/python3/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/snap/microk8s/3202/usr/lib/python3/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/snap/microk8s/3202/usr/lib/python3/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/snap/microk8s/3202/scripts/wrappers/enable.py", line 36, in enable
    enabled_addons, _ = get_status(get_available_addons(get_current_arch()), True)
  File "/snap/microk8s/3202/scripts/wrappers/status.py", line 157, in get_status
    kube_output = kubectl_get("all")
  File "/snap/microk8s/3202/scripts/wrappers/common/utils.py", line 169, in kubectl_get
    return run("kubectl", kubeconfig, "get", cmd, "--all-namespaces", die=False)
  File "/snap/microk8s/3202/scripts/wrappers/common/utils.py", line 39, in run
    result.check_returncode()
  File "/snap/microk8s/3202/usr/lib/python3.6/subprocess.py", line 389, in check_returncode
    self.stderr)
subprocess.CalledProcessError: Command '('kubectl', '--kubeconfig=/var/snap/microk8s/3202/credentials/client.config', 'get', 'all', '--all-namespaces')' returned non-zero exit status 1.
```
Running the same command manually with the system kubectl reproduces the failure:

```
$ kubectl --kubeconfig=/var/snap/microk8s/3202/credentials/client.config get all --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-node-t7wtm                         1/1     Running   0          17m
kube-system   pod/calico-kube-controllers-f7868dd95-9klz2   1/1     Running   0          17m

NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   17m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   17m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           17m

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-f7868dd95   1         1         1       17m

Error from server (NotFound): Unable to list "autoscaling/v2, Resource=horizontalpodautoscalers": the server could not find the requested resource
```
The error must come from the skew between the client version and the server version: a recent kubectl client tries to list HorizontalPodAutoscalers via `autoscaling/v2`, which a 1.21 apiserver does not serve (that API version only became stable in Kubernetes 1.23):

```
Error from server (NotFound): Unable to list "autoscaling/v2, Resource=horizontalpodautoscalers": the server could not find the requested resource
```
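For illustration, this kind of mismatch could be detected before shelling out to `kubectl` at all. The sketch below is a hypothetical helper (not part of the microk8s wrappers) that flags a client/server pair violating kubectl's documented one-minor-version skew policy:

```python
import re


def minor_version(version: str) -> int:
    """Extract the minor version from a string like 'v1.25.0' or '1.21.13'."""
    match = re.match(r"v?(\d+)\.(\d+)", version)
    if not match:
        raise ValueError(f"unrecognised version: {version!r}")
    return int(match.group(2))


def within_skew(client: str, server: str, max_skew: int = 1) -> bool:
    """kubectl is only supported within +/-1 minor version of the apiserver."""
    return abs(minor_version(client) - minor_version(server)) <= max_skew
```

With the versions from this report, `within_skew("v1.25.0", "v1.21.13")` returns `False`, which is exactly the situation that produces the `NotFound` error above.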
Introspection Report
Can you suggest a fix?
Isn't this section supposed to use `microk8s.kubectl` instead of the system `kubectl`?
https://github.com/canonical/microk8s/blob/eb95dc13b66be3980f1123948308c409a92d8584/scripts/wrappers/common/utils.py#L189-L199
Are you interested in contributing a fix?
Not immediately. It would be nice if somebody else picked this up.
Issue Analytics
- Created: a year ago
- Comments: 5 (4 by maintainers)
Top GitHub Comments

> hmm, after removing microk8s with `--purge` and then re-installing it, I can't reproduce it now.

> @ycliuhw - same thing happened to me, and I didn't think to try remove with purge until I read your comment. Worked fine after retrying.