Fixing “The connection to the server 127.0.0.1:16443 was refused – did you specify the right host or port?” in Canonical MicroK8s

21-Feb-2023
Lightrun Team

Explanation of the problem

Upon joining the cluster and executing microk8s kubectl get node, the following error message was received: “The connection to the server 127.0.0.1:16443 was refused – did you specify the right host or port?”. The environment consists of a Mac host running two Ubuntu 20.04 multipass virtual machines, microk8s-vm-0 and microk8s-vm-1, with MicroK8s version 1.19/stable.

The following operations were performed: multipass shell microk8s-vm-0, followed by sudo snap install microk8s --classic --channel=1.19/stable, sudo usermod -a -G microk8s $USER, and sudo chown -f -R $USER ~/.kube. Running microk8s status reported that MicroK8s was running with high availability disabled, and that multiple addons were disabled, including dashboard, dns, fluentd, gpu, helm, ingress, istio, jaeger, knative, kubeflow, linkerd, metallb, metrics-server, multus, prometheus, rbac, and registry.

Running microk8s inspect revealed the certificates and services in use and gathered system information, including process and disk usage, memory usage, and network configuration. Running microk8s kubectl get all --all-namespaces returned information on the pods, services, daemonsets, deployments, and replicasets running in the cluster. Finally, microk8s add-node was executed from microk8s-vm-0 to add a new node to the cluster.
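For convenience, the setup steps above can be collected into a small script. The run wrapper below is my addition (not part of the original report): it is a dry-run guard, so the commands are only printed unless you explicitly opt in to executing them.

```shell
#!/bin/sh
# Reproduction steps from the report, wrapped in a dry-run guard so the
# script can be previewed safely. Set DRY_RUN=0 to actually execute.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # preview the command only
  else
    "$@"               # execute it for real
  fi
}

# Inside the first VM (entered via: multipass shell microk8s-vm-0):
run sudo snap install microk8s --classic --channel=1.19/stable
run sudo usermod -a -G microk8s "$USER"
run sudo chown -f -R "$USER" ~/.kube
run microk8s status
# Prints the join command to run on the second VM:
run microk8s add-node
```

Running the script as-is prints each command prefixed with `+`; running it with DRY_RUN=0 on a machine with multipass and snapd performs the actual installation.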

Troubleshooting with the Lightrun Developer Observability Platform

Getting a sense of what’s actually happening inside a live application is a frustrating experience, one that relies mostly on querying and observing whatever logs were written during development.
Lightrun is a Developer Observability Platform, allowing developers to add telemetry to live applications in real-time, on-demand, and right from the IDE.

  • Instantly add logs to, set metrics in, and take snapshots of live applications
  • Insights delivered straight to your IDE or CLI
  • Works where you do: dev, QA, staging, CI/CD, and production

Start for free today

Problem solution for The connection to the server 127.0.0.1:16443 was refused – did you specify the right host or port? in canonical microk8s

The error message “The connection to the server 127.0.0.1:16443 was refused – did you specify the right host or port?” typically indicates that there is an issue with connecting to the Kubernetes API server. Here are a few steps you can try to resolve the issue:

1. Check if the Kubernetes API server is running: You can do this by running the following command in your terminal:

sudo microk8s kubectl cluster-info

If the API server is running, you should see a message that says “Kubernetes control plane is running at https://127.0.0.1:16443”. If the API server is not running, you may need to start it using the following command:

sudo microk8s start

2. Check if your kubeconfig file is configured correctly: The kubeconfig file specifies the location of the Kubernetes API server, as well as any authentication credentials that are required. You can check whether your kubeconfig file is configured correctly by running the following command:

sudo microk8s kubectl config view

This will display the contents of your kubeconfig file. Make sure that the “server” field is set to “https://127.0.0.1:16443”.
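That check can be automated with a small shell function. This is a sketch I am adding for illustration (it is not from the original article); it assumes the default MicroK8s endpoint and a standard kubeconfig layout:

```shell
# Sketch: verify that a kubeconfig file points at the default MicroK8s
# API server endpoint. Adjust expected_server if you changed the port.
expected_server="https://127.0.0.1:16443"

check_kubeconfig_server() {
  # $1: path to a kubeconfig file
  actual=$(grep -m1 'server:' "$1" | awk '{print $2}')
  if [ "$actual" = "$expected_server" ]; then
    echo "OK: $actual"
  else
    echo "MISMATCH: got '$actual', expected '$expected_server'"
  fi
}

# Typical usage on a MicroK8s node (requires microk8s; commented out):
# sudo microk8s kubectl config view --raw > /tmp/kubeconfig
# check_kubeconfig_server /tmp/kubeconfig
```

Any output other than “OK” tells you the client is pointed at the wrong endpoint, which produces exactly the “connection refused” symptom described above.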

3. Check if the Kubernetes API server is reachable: You can test reachability with curl. Note the -k flag, which skips verification of MicroK8s’ self-signed certificate:

curl -k https://127.0.0.1:16443

If the API server is up, you should get a JSON response (typically a 401 Unauthorized or 403 Forbidden status, since the request carries no credentials), which still confirms that the server is reachable. A “connection refused” error here means nothing is listening on that port.

4. Check if any firewall rules are blocking the connection: The error message could be caused by a firewall rule that is blocking the connection to the Kubernetes API server. You may need to configure your firewall to allow traffic on port 16443.
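A quick way to test that from a script is a TCP probe. The helper below is a sketch of my own (not from the original article); it uses bash's /dev/tcp pseudo-device, so it needs bash and the coreutils timeout command, but no extra tooling:

```shell
# Sketch: a minimal TCP reachability probe using bash's /dev/tcp device.
port_open() {
  # $1: host, $2: port; prints "open" or "closed"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# On a healthy MicroK8s node you would expect:
#   port_open 127.0.0.1 16443    -> open
# If it prints "closed" while the API server is running, inspect the
# firewall, e.g. with ufw:  sudo ufw allow 16443/tcp
```

If the probe reports “closed” locally but the API server process is running, a firewall rule (or the server binding to a different address) is the most likely culprit.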

If none of the above steps resolve the issue, check the logs of the Kubernetes API server itself for error messages that could provide additional information. In MicroK8s the API server runs as a snap service rather than as a pod, so its logs are read with journalctl:

sudo journalctl -u snap.microk8s.daemon-apiserver

On newer MicroK8s releases the control-plane services are consolidated into a single daemon, so use sudo journalctl -u snap.microk8s.daemon-kubelite instead.

Other popular problems with canonical microk8s

Problem: “The connection to the server 127.0.0.1:16443 was refused – did you specify the right host or port?” error message

This error message indicates that the connection to the Kubernetes API server has been refused, likely due to a problem with the Kubernetes API server itself or a misconfiguration of the MicroK8s installation.

Solution:

One possible solution is to check the status of the MicroK8s services by running the command sudo microk8s status --wait-ready. If the services are not running, start them with sudo microk8s start. If the problem persists, you may need to remove and reinstall MicroK8s (sudo snap remove microk8s followed by sudo snap install microk8s --classic).

Problem: Difficulty accessing the Kubernetes dashboard

MicroK8s includes a built-in Kubernetes dashboard that can be accessed through a web browser. However, some users may encounter difficulty accessing the dashboard, either due to a problem with the dashboard itself or a network configuration issue.

Solution:

One possible solution is to ensure that the dashboard is actually exposed and that the correct ports are open. On recent MicroK8s releases, running microk8s dashboard-proxy serves the dashboard at https://<node-ip>:10443/; alternatively, you can port-forward the kubernetes-dashboard service in the kube-system namespace. Ensure that port 10443 is open on the node running MicroK8s. If the problem persists, you may need to re-enable the dashboard by running microk8s disable dashboard followed by microk8s enable dashboard.

Problem: Issues with pod networking

MicroK8s uses containerd as its container runtime and a CNI plugin (Calico on recent releases, flannel on older ones) for pod networking, which can sometimes result in networking issues. One common problem is that containers in different pods cannot communicate with each other, either because they are not on the same network or because of a misconfiguration.

Solution:

One possible solution is to check the network policies in the Kubernetes cluster by running the command microk8s kubectl get networkpolicies; an overly restrictive policy can silently drop pod-to-pod traffic, and policies can be created or adjusted with microk8s kubectl apply. Another possible solution is to verify the CNI configuration under /var/snap/microk8s/current/args/cni-network/ (the containerd runtime settings live separately in /var/snap/microk8s/current/args/containerd-template.toml) and make any necessary modifications.
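For reference, a minimal NetworkPolicy looks like the following. This is a hypothetical example (the labels app=web and app=frontend are illustrative, not from the article): it admits traffic to web pods only from frontend pods on port 80, and implicitly blocks all other ingress to the selected pods. Note that NetworkPolicies only take effect when the CNI plugin enforces them (Calico does; flannel does not).

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
spec:
  podSelector:
    matchLabels:
      app: web              # the policy applies to pods labeled app=web
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 80
```

Apply it with microk8s kubectl apply -f policy.yaml, then re-test pod-to-pod connectivity.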

A brief introduction to canonical microk8s

Canonical MicroK8s is a lightweight, fast, and simplified Kubernetes distribution designed for small-scale deployments, local development, and edge computing. It provides a complete and secure Kubernetes environment with all the necessary components, including the Kubernetes API server, kubelet, and a built-in datastore (Dqlite rather than etcd), plus optional add-ons such as Prometheus, Grafana, and Istio that can be enabled on demand. Unlike many Kubernetes distributions, MicroK8s is packaged as a single snap and can be installed and run on any Linux distribution that supports snapd, without additional dependencies. This makes it an excellent choice for developers, hobbyists, and small businesses looking for an easy-to-use and efficient Kubernetes platform.

MicroK8s is built with simplicity and speed in mind. It exposes the familiar Kubernetes command-line interface (as microk8s kubectl), allowing users to easily deploy, scale, and manage Kubernetes applications. It also offers a range of add-ons and plug-ins that can be enabled with a single command, making it easy to get started with Kubernetes development without dealing with complex installation and configuration processes. With MicroK8s, users can spin up a complete Kubernetes environment in minutes, making it a great choice for local development and testing of Kubernetes applications. Additionally, MicroK8s provides a range of security features, including container isolation, network policies, and role-based access control, so users can deploy and manage Kubernetes applications with confidence.

Most popular use cases for canonical microk8s

  1. Container Orchestration: Canonical MicroK8s is a lightweight, self-contained Kubernetes distribution designed for developers and IoT use cases. It can be used to orchestrate containers in production or development environments with minimal overhead, making it ideal for small-scale projects that require efficient management of microservices.
  2. Local Development and Testing: MicroK8s provides a local development environment for Kubernetes, enabling developers to create, test, and debug their applications in a containerized environment without requiring a full-blown Kubernetes cluster. With MicroK8s, developers can create a cluster in a matter of minutes, deploy their code, and test it locally before deploying to a production environment.
  3. Integration with Other Technologies: MicroK8s is a versatile technology that can be integrated with other open-source tools and technologies to build complex applications. For example, you can use MicroK8s with Istio to manage microservices, or with Prometheus to monitor your Kubernetes cluster. Here’s an example of deploying a simple NGINX web server using MicroK8s:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80

This YAML manifest defines a Kubernetes Pod with a single container running the latest version of the NGINX web server. The container listens on port 80, which is exposed by the Pod. You can deploy this manifest to your MicroK8s cluster with microk8s kubectl apply -f, and then reach the web server at the Pod’s IP address. Note that Pod IPs are only routable from inside the cluster, so to reach the server externally you would expose it through a Service.
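A NodePort Service is the simplest way to expose such a Pod outside the cluster. The sketch below is my addition and assumes you also add a matching label (for example app: nginx) to the Pod's metadata, which the manifest above does not define; the nodePort value 30080 is likewise an arbitrary choice from the allowed range.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort            # exposes the service on a port of every node
  selector:
    app: nginx              # assumes the Pod carries this label
  ports:
    - port: 80              # service port inside the cluster
      targetPort: 80        # container port on the Pod
      nodePort: 30080       # externally reachable port (30000-32767 range)
```

After microk8s kubectl apply -f service.yaml, the server is reachable at http://<node-ip>:30080 from outside the cluster.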
