The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
After joining the cluster and running microk8s kubectl get node, I get:
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
ENV:
- host: mac
- multipass ubuntu:20.04 x 2 (one: microk8s-vm-0, the other: microk8s-vm-1)
- microk8s 1.19/stable
OPERATION:
- multipass shell microk8s-vm-0, then run:
sudo snap install microk8s --classic --channel=1.19/stable
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
microk8s status:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
disabled:
ambassador # Ambassador API Gateway and Ingress
cilium # SDN, fast with full network policy
dashboard # The Kubernetes dashboard
dns # CoreDNS
fluentd # Elasticsearch-Fluentd-Kibana logging and monitoring
gpu # Automatic enablement of Nvidia CUDA
helm # Helm 2 - the package manager for Kubernetes
helm3 # Helm 3 - Kubernetes package manager
host-access # Allow Pods connecting to Host services smoothly
ingress # Ingress controller for external access
istio # Core Istio service mesh services
jaeger # Kubernetes Jaeger operator with its simple config
knative # The Knative framework on Kubernetes.
kubeflow # Kubeflow for easy ML deployments
linkerd # Linkerd is a service mesh for Kubernetes and other frameworks
metallb # Loadbalancer for your Kubernetes cluster
metrics-server # K8s Metrics Server for API access to service metrics
multus # Multus CNI enables attaching multiple network interfaces to pods
prometheus # Prometheus operator for monitoring and logging
rbac # Role-Based Access Control for authorisation
registry # Private image registry exposed on localhost:32000
storage # Storage class; allocates storage from host directory
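One note on the setup steps above: "sudo usermod -a -G microk8s $USER" only takes effect in a new login session. A small, generic sketch for checking whether the current shell already has the group (the check itself is standard POSIX tooling, not MicroK8s-specific):

```shell
# "usermod -a -G microk8s $USER" only affects new login shells; id -nG lists
# the groups of the *current* session, so it shows whether the change is live.
if id -nG | tr ' ' '\n' | grep -qx microk8s; then
    echo "microk8s group is active in this session"
else
    echo "not yet active: log out and back in, or run: newgrp microk8s"
fi
```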
microk8s inspect:
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-control-plane-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting juju
Inspect Juju
Inspecting kubeflow
Inspect Kubeflow
Building the report tarball
Report tarball is at /var/snap/microk8s/1856/inspection-report-20210120_111529.tar.gz
- Everything on the node looks fine:
microk8s kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-vsrjz 1/1 Running 0 17m
kube-system pod/calico-kube-controllers-847c8c99d-rgt4j 1/1 Running 0 17m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 18m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 17m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 17m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-847c8c99d 1 1 1 17m
- On microk8s-vm-0, executed microk8s add-node:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d
microk8s join 10.1.245.0:25000/d67362036a00d3d44a0040a34e2e4f9d
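Before running the join on the second VM, it can help to confirm that the cluster-agent port in the join string is actually reachable. A minimal sketch that splits the join string into host and port; the address and token below are illustrative stand-ins, not the real values:

```shell
# Hypothetical join string; the real one is printed by "microk8s add-node".
JOIN="192.168.64.3:25000/d67362036a00d3d44a0040a34e2e4f9d"
HOSTPORT="${JOIN%%/*}"    # strip the one-time token
HOST="${HOSTPORT%%:*}"    # cluster-agent address
PORT="${HOSTPORT##*:}"    # cluster-agent port (25000 by default)
echo "would probe $HOST on port $PORT"
# On the joining VM, one could then test reachability with:
#   nc -vz "$HOST" "$PORT"
```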
microk8s-vm-1 and microk8s-vm-0 can ping each other. On microk8s-vm-1, execute:
ubuntu@microk8s-vm-1:~$ microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d
Contacting cluster at 192.168.xx.xx
Waiting for this node to finish joining the cluster. .. .. .. .. .. .. .. .. .. ..
Execution results on microk8s-vm-1:
- microk8s status:
microk8s is not running. Use microk8s inspect for a deeper inspection.
- microk8s inspect: same output as on microk8s-vm-0 above, ending with:
Report tarball is at /var/snap/microk8s/1856/inspection-report-20210120_112705.tar.gz
- microk8s kubectl get node:
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
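When kubectl reports connection refused on 127.0.0.1:16443, a first sanity check is whether anything is listening on that port at all. A sketch using ss (part of iproute2, present on Ubuntu 20.04); 16443 is the MicroK8s default API server port:

```shell
PORT=16443   # default MicroK8s API server port
# List listening TCP sockets and look for the API server port.
if ss -lnt 2>/dev/null | grep -q ":$PORT"; then
    echo "a process is listening on port $PORT"
else
    echo "nothing is listening on port $PORT - the apiserver is down; check the inspect tarball"
fi
```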
- cluster.yaml included the IPs of both microk8s-vm-0 and microk8s-vm-1.
- Sorry that I did not attach any logs; it happened last week and I can no longer find them.
- The phenomenon is the same as before; you can reproduce it by having a node join itself:
ubuntu@microk8s-vm-1:~$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
microk8s join 10.1.245.0:25000/6f7b943c08560c366b7fc7ceaa66043a
ubuntu@microk8s-vm-1:~$ microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
Contacting cluster at 192.168.xx.xx
Waiting for this node to finish joining the cluster. .. .. .. .. .. .. .. .. .. ..
- Notably, cluster.yaml differs from the previous situation:
cat /var/snap/microk8s/1856/var/kubernetes/backend/cluster.yaml
- Address: 192.168.64.4:19001
ID: 0
Role: 0
cat /var/snap/microk8s/1856/var/kubernetes/backend/info.yaml
Address: 192.168.64.4:19001
ID: 13153073102629576865
Role: 0
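For comparison, here is a hedged sketch of what cluster.yaml would be expected to contain after a successful two-node join: one dqlite entry per voting member (in dqlite, Role 0 is a voter). The addresses and IDs below are illustrative, not real values; note also that in the output above, cluster.yaml reports ID 0 while info.yaml reports a different ID for the same address, which may itself be part of the inconsistency:

```yaml
# Hypothetical healthy two-node cluster.yaml (Role 0 = dqlite voter):
- Address: 192.168.64.4:19001
  ID: 13153073102629576865
  Role: 0
- Address: 192.168.64.5:19001
  ID: 6442450945
  Role: 0
```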
- Maybe the phenomenon is the same, but the underlying cause could be different. Once I reproduce the first situation again, I will update this issue.
- At present, I work around it by removing all the files except cluster.crt and cluster.key in /var/snap/microk8s/1856/var/kubernetes/backend/ and restarting microk8s.
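The workaround above can be sketched as follows. To keep the sketch harmless to run, it operates on a scratch copy rather than the live backend directory; the real path on this node is /var/snap/microk8s/1856/var/kubernetes/backend (the snap revision number will differ per install):

```shell
# Stand-in for the real dqlite backend directory, so this is safe to run anywhere.
BACKEND=$(mktemp -d)
touch "$BACKEND"/cluster.crt "$BACKEND"/cluster.key \
      "$BACKEND"/cluster.yaml "$BACKEND"/info.yaml
# Remove everything except the certificate pair...
find "$BACKEND" -maxdepth 1 -type f ! -name cluster.crt ! -name cluster.key -delete
ls "$BACKEND"   # only cluster.crt and cluster.key remain
# ...then, on the real node: microk8s stop && microk8s start
```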
Issue Analytics
- Created: 3 years ago
- Comments: 41
Top GitHub Comments
Running … seems to have helped … maybe?
Thanks @balchua
Seems like microk8s is not entirely compatible with the RPi3, so I gave up and installed portainer. Perhaps someone should update the Ubuntu.com instructions to note that one must use the RPi4. It's kinda frustrating as I have 2 x RPi3 and 4 x RPi4 boards just sitting here, and I wasted time following the instructions unaware it was never going to work.
I will try again when I have a master that is a bit more capable. Thanks everyone for your help though.