
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?


After joining the cluster and running microk8s kubectl get node, I get:

The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?

Environment:

  • host: macOS
  • guests: two Multipass Ubuntu 20.04 VMs (microk8s-vm-0 and microk8s-vm-1)
  • MicroK8s 1.19/stable

Steps performed:

  1. multipass shell microk8s-vm-0, then install MicroK8s and set up permissions:
sudo snap install microk8s --classic --channel=1.19/stable
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
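
(Not part of the original steps: the usermod change only takes effect in a new login session, so the group membership may need refreshing before moving on. A minimal sketch:)

# Pick up the new microk8s group without leaving the VM...
newgrp microk8s
# ...or simply exit and re-run: multipass shell microk8s-vm-0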
  2. microk8s status:
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
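
(A small aside, not in the original report: instead of re-running microk8s status by hand after an install or restart, the command can block until the services are up.)

# Wait until all MicroK8s services report ready before continuing.
microk8s status --wait-ready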
  3. microk8s inspect:
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-control-plane-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting juju
  Inspect Juju
Inspecting kubeflow
  Inspect Kubeflow

Building the report tarball
  Report tarball is at /var/snap/microk8s/1856/inspection-report-20210120_111529.tar.gz
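
(Since the report tarball lives inside the VM, here is a hedged sketch for pulling it to the macOS host for inspection. The path is the one printed above; the layout inside the report may vary between MicroK8s versions.)

# Copy the report somewhere the default 'ubuntu' user can read it, then transfer it out.
multipass exec microk8s-vm-0 -- sudo cp /var/snap/microk8s/1856/inspection-report-20210120_111529.tar.gz /home/ubuntu/
multipass exec microk8s-vm-0 -- sudo chown ubuntu:ubuntu /home/ubuntu/inspection-report-20210120_111529.tar.gz
multipass transfer microk8s-vm-0:/home/ubuntu/inspection-report-20210120_111529.tar.gz ./inspection-report.tar.gz
tar -xzf inspection-report.tar.gz
# The apiserver journal is usually the first place to look for "connection refused"
# errors (extracted directory name assumed, may differ by version):
less inspection-report*/snap.microk8s.daemon-apiserver/journal.log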
  4. All resources look healthy. microk8s kubectl get all --all-namespaces:
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-node-vsrjz                         1/1     Running   0          17m
kube-system   pod/calico-kube-controllers-847c8c99d-rgt4j   1/1     Running   0          17m

NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   18m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   17m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           17m

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-847c8c99d   1         1         1       17m
  5. On microk8s-vm-0, run microk8s add-node:
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d

If the node you are adding is not reachable through the default interface you can use one of the following:
 microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d
 microk8s join 10.1.245.0:25000/d67362036a00d3d44a0040a34e2e4f9d

microk8s-vm-1 and microk8s-vm-0 can ping each other. On microk8s-vm-1, run the join command:

ubuntu@microk8s-vm-1:~$ microk8s join 192.168.xx.xx:25000/d67362036a00d3d44a0040a34e2e4f9d
Contacting cluster at 192.168.xx.xx
Waiting for this node to finish joining the cluster. .. .. .. .. .. .. .. .. .. ..  
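
(When the join hangs like this, a quick check from the joining node against the cluster-agent port can rule out a network problem. This is a sketch, not from the original report; curl is assumed to be available in the Ubuntu cloud image, and the redacted IP is kept as shown above.)

# From microk8s-vm-1: the cluster agent on the first node listens on 25000.
# A completed TLS handshake (even with an untrusted certificate) means the port is reachable.
curl -vk https://192.168.xx.xx:25000/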
  6. Results after the join attempt:
  • microk8s status: microk8s is not running. Use microk8s inspect for a deeper inspection.

  • microk8s inspect:

    Inspecting Certificates
    Inspecting services
      Service snap.microk8s.daemon-cluster-agent is running
      Service snap.microk8s.daemon-containerd is running
      Service snap.microk8s.daemon-apiserver is running
      Service snap.microk8s.daemon-apiserver-kicker is running
      Service snap.microk8s.daemon-control-plane-kicker is running
      Service snap.microk8s.daemon-proxy is running
      Service snap.microk8s.daemon-kubelet is running
      Service snap.microk8s.daemon-scheduler is running
      Service snap.microk8s.daemon-controller-manager is running
      Copy service arguments to the final report tarball
    Inspecting AppArmor configuration
    Gathering system information
      Copy processes list to the final report tarball
      Copy snap list to the final report tarball
      Copy VM name (or none) to the final report tarball
      Copy disk usage information to the final report tarball
      Copy memory usage information to the final report tarball
      Copy server uptime to the final report tarball
      Copy current linux distribution to the final report tarball
      Copy openSSL information to the final report tarball
      Copy network configuration to the final report tarball
    Inspecting kubernetes cluster
      Inspect kubernetes cluster
    Inspecting juju
      Inspect Juju
    Inspecting kubeflow
      Inspect Kubeflow
    
    Building the report tarball
      Report tarball is at /var/snap/microk8s/1856/inspection-report-20210120_112705.tar.gz
    
  • microk8s kubectl get node :

    The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?

  • cluster.yaml contained the IPs of both microk8s-vm-0 and microk8s-vm-1.
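
(Given that every snap service claims to be running while kubectl still gets connection refused on 16443, it can help to check whether the apiserver actually holds the port and what its journal says. A diagnostic sketch, not from the original report; the service name is taken from the inspect output above.)

# On the affected node: is anything listening on the API server port?
sudo ss -lntp | grep 16443
# Recent apiserver logs often show why it keeps exiting.
sudo journalctl -u snap.microk8s.daemon-apiserver -n 100 --no-pager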

  1. Sorry that I did not attach the logs; this happened last week and I can no longer find them.

  2. The symptom is the same as before; it can be reproduced by having a node join itself.

  3. On microk8s-vm-1, run add-node and then join the node to itself:

    ubuntu@microk8s-vm-1:~$ microk8s add-node
    From the node you wish to join to this cluster, run the following:
    microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
    
    If the node you are adding is not reachable through the default interface you can use one of the following:
     microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
     microk8s join 10.1.245.0:25000/6f7b943c08560c366b7fc7ceaa66043a
     
     
     
     ubuntu@microk8s-vm-1:~$ microk8s join 192.168.xx.xx:25000/6f7b943c08560c366b7fc7ceaa66043a
    Contacting cluster at 192.168.xx.xx
    Waiting for this node to finish joining the cluster. .. .. .. .. .. .. .. .. .. ..  
    
  4. Notably, this time cluster.yaml differs from the previous situation:

cat /var/snap/microk8s/1856/var/kubernetes/backend/cluster.yaml 
- Address: 192.168.64.4:19001
  ID: 0
  Role: 0

cat /var/snap/microk8s/1856/var/kubernetes/backend/info.yaml 
Address: 192.168.64.4:19001
ID: 13153073102629576865
Role: 0
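
(For comparing the dqlite backend state on both VMs from the host, something like the following can be used. A sketch only; the snap revision 1856 comes from the paths above and may differ on other installs.)

# Print cluster.yaml and info.yaml from each VM to spot mismatched addresses.
for vm in microk8s-vm-0 microk8s-vm-1; do
  echo "== $vm =="
  multipass exec "$vm" -- sudo cat /var/snap/microk8s/1856/var/kubernetes/backend/cluster.yaml
  multipass exec "$vm" -- sudo cat /var/snap/microk8s/1856/var/kubernetes/backend/info.yaml
done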

  1. The two cases may show the same symptom, but the underlying causes seem to be different. Once I manage to reproduce the first situation again, I will update this issue.
  2. For now, I resolved it by removing every file except cluster.crt and cluster.key in /var/snap/microk8s/1856/var/kubernetes/backend/ and then restarting MicroK8s, as sketched below.
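
(Spelled out as commands, the workaround above looks roughly like this. A sketch only: it assumes the same snap revision 1856, and a backup is taken first because this wipes the local dqlite state.)

# Stop MicroK8s, back up the dqlite backend, keep only the cluster keypair,
# then start again so the node re-initialises its datastore.
sudo microk8s stop
sudo cp -a /var/snap/microk8s/1856/var/kubernetes/backend /root/backend-backup
sudo find /var/snap/microk8s/1856/var/kubernetes/backend -maxdepth 1 -type f ! -name cluster.crt ! -name cluster.key -delete
sudo microk8s start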


Top GitHub Comments

davidbarratt commented, Nov 21, 2021 (4 reactions)

Running

sudo microk8s refresh-certs

seems to have helped … maybe?
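
(Before refreshing, it may be worth confirming that certificate expiry is actually the problem. A sketch; the certs directory path assumes the default snap layout.)

# Check when the CA and server certificates expire.
sudo openssl x509 -noout -enddate -in /var/snap/microk8s/current/certs/ca.crt
sudo openssl x509 -noout -enddate -in /var/snap/microk8s/current/certs/server.crt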

jamesgeddes commented, Apr 17, 2021 (3 reactions)

Thanks @balchua

It seems MicroK8s is not entirely compatible with the RPi 3, so I gave up and installed Portainer. Perhaps someone should update the Ubuntu.com instructions to note that an RPi 4 is required. It’s kind of frustrating, as I have 2 x RPi 3 and 4 x RPi 4 boards just sitting here, and I wasted time following the instructions unaware it was never going to work.

I will try again when I have a master that is a bit more capable. Thanks everyone for your help though.


