I’m a newbie to k8s and this got me up and running super quick, so thanks for that! I wasn’t able to get hostPort working, and came across this comment:

https://github.com/ubuntu/microk8s/blob/11fe17a5c52055eca1959b65d48510eb488ecd3a/microk8s-resources/actions/ingress.yaml#L82

I can’t use hostNetwork or nodePort for my particular use case. Is that comment still correct? Digging around, it seems hostPort can work in newer versions of Calico, but it requires the portmap plugin, and I don’t really know how to go about installing/enabling that.

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 8 (2 by maintainers)

Top GitHub Comments

5 reactions
ktsakalozos commented, Aug 23, 2018

That was really helpful, thank you for providing us with some context.

I am sure you have already read that the use of hostPort and hostNetwork is not recommended because (among other reasons) they limit the pod management options you have with Kubernetes. For example, it is not clear to me how you would upgrade without downtime.

Here is a suggestion that is more aligned with the Kubernetes way. You put the container in a deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: mine
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mine
  template:
    metadata:
      labels:
        app: mine
    spec:
      containers:
      - name: mycontainer
        image: myimage:latest
        ports:
        - containerPort: 80
          name: reliable
        - containerPort: 6883
          name: unreliable

You can run as many replicas as you want, provided the hosted services are stateless.

In front of the Deployment you put a Service:

apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: NodePort
  ports:
  - name: reliable
    port: 80
    protocol: TCP
    nodePort: 30080
  - name: unreliable
    port: 6883
    protocol: UDP
    nodePort: 36883
  selector:
    app: mine
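
With the Service in place, the application should be reachable on any node’s IP at the chosen NodePorts. A quick smoke test might look like this (192.0.2.10 is a placeholder for a node’s IP):

# TCP port via NodePort
curl http://192.0.2.10:30080/
# UDP port via NodePort
echo ping | nc -u 192.0.2.10 36883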

At this point you can manage the deployment without any restrictions. What we do not like are the ports. You have two options here:

  1. Adjust the NodePort port range. This is done with the --service-node-port-range argument to kube-apiserver (https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/). For microk8s you will need to append the argument to /var/snap/microk8s/current/args/kube-apiserver and restart the API server with sudo systemctl restart snap.microk8s.daemon-apiserver.service. The drawback of this approach is that you may have port conflicts with already running services.

  2. Use iptables to forward traffic to the right ports. This is what Kubernetes expects from a load balancer. I am sure you are familiar with this approach; just for reference, see https://www.cyberciti.biz/faq/linux-port-redirection-with-iptables/. With this approach you can even skip the NodePort entirely: expose the service on a fixed ClusterIP (something like 10.152.183.X) and then use iptables rules to forward traffic accordingly. A rough sketch of both options follows after the note below.

Important note: port 8080 is already in use by the API server.
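
For reference, here is a rough sketch of both options on microk8s; the port numbers match the Service above, and these are illustrative commands rather than a tested recipe:

# Option 1 (sketch): widen the NodePort range, then restart the API server.
echo '--service-node-port-range=80-32767' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver
sudo systemctl restart snap.microk8s.daemon-apiserver.service

# Option 2 (sketch): keep the NodePorts from the Service above and redirect
# the well-known ports to them (iptables rules do not persist across reboots).
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 30080
sudo iptables -t nat -A PREROUTING -p udp --dport 6883 -j REDIRECT --to-ports 36883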

3 reactions
jakecobb commented, May 1, 2020

This may have changed since this issue was opened a couple of years ago, but it seems to work today. I have microk8s 1.18.2 and it is using --network=cni with flannel.
The issue linked in the snippet @steveh showed in the original report indicates hostPort support in the major CNI plugins via the portmap capability, and this is present in /var/snap/microk8s/1379/args/cni-network/flannel.conflist:

{
    "name": "microk8s-flannel-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "flannel",
        "name": "flannel-plugin",
        "subnetFile": "/var/snap/microk8s/common/run/flannel/subnet.env",
        "dataDir": "/var/snap/microk8s/common/var/lib/cni/flannel",
        "delegate": {
          "hairpinMode": true,
          "isDefaultGateway": true
        }
      },
      {
        "type": "portmap",
        "capabilities": {"portMappings": true},
        "snat": true
      }
    ]
}

I deployed a pod using hostPort and was able to reach the container from the host at 127.0.0.1.
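
For anyone who wants to reproduce this, a minimal pod along these lines should do; the name, image, and port numbers here are placeholders rather than the exact manifest I used:

apiVersion: v1
kind: Pod
metadata:
  name: hostport-test
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
    - containerPort: 80
      hostPort: 8081

With this applied, curl http://127.0.0.1:8081 on the node should reach the container. (8081 is used instead of 8080, which the API server may already occupy, as noted above.)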
