consider exposing kubernetes services across the cluster
Right now, Kubernetes services can be reached from control nodes and kubeworker nodes but cannot be accessed from any other node type.
Example service:
$ kubectl run service-test --image=nginx
deployment "service-test" created
$ kubectl expose deployment service-test --type=NodePort --port=80
service "service-test" exposed
$ kubectl describe svc service-test
Name: service-test
Namespace: default
Labels: run=service-test
Selector: run=service-test
Type: NodePort
IP: 10.254.189.112
Port: <unset> 80/TCP
NodePort: <unset> 31929/TCP
Endpoints: 192.168.1.2:80
Session Affinity: None
No events.
From a control node:
# cluster ip
$ curl -sI 10.254.189.112 | head -n 1
HTTP/1.1 200 OK
# pod endpoint
$ curl -sI 192.168.1.2:80 | head -n 1
HTTP/1.1 200 OK
# node port
$ curl -sI $HOSTNAME:31929 | head -n 1
HTTP/1.1 200 OK
All of the above are reachable from kubeworkers as well. However, from worker or edge nodes these endpoints are unavailable. This means applications running on workers (via Mesos or otherwise) cannot communicate with apps running on Kubernetes.
A simple way to enable this is to install the Kubernetes components (the kubernetes and kubernetes-node roles) on all nodes, but set --register-schedulable=false on the kubelet for all nodes except kubeworkers. With this, k8s workloads will only be scheduled on kubeworkers, but kube-proxy on every node will set up the iptables rules to enable connectivity. It may be possible to install just a subset of the Kubernetes components (maybe only kube-proxy?), but more investigation would be needed.
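As a rough sketch of what this would look like on a worker or edge node (the API server address is a placeholder, and the flags are from the kubelet of that era; --register-schedulable was later superseded by --register-with-taints):

```shell
# Hypothetical setup for worker/edge nodes (not kubeworkers).
# The kubelet registers with the API server but is marked unschedulable,
# so no k8s workloads land here:
kubelet \
  --api-servers=https://controller.example:6443 \
  --register-schedulable=false &

# kube-proxy still runs, watching Services and Endpoints and programming
# the local iptables rules, so ClusterIPs and NodePorts resolve from
# this node too:
kube-proxy --master=https://controller.example:6443 &
```

With that in place, the curl checks shown above against the ClusterIP, pod endpoint, and NodePort should succeed from these nodes as well.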
Issue Analytics: Created 7 years ago · Reactions: 2 · Comments: 9 (9 by maintainers)
Yes, I could see one. Right now people write their own schedulers in Mesos. But I could see the use case for both.
@KaGeN101 You are correct, we just have to verify this is how it is currently working.