Support spawning to different clusters
Proposed change
Right now, the Kubernetes pod is spawned in the same cluster as the hub pod. It would be great if we could configure it to be spawned in other, remote clusters. One hub could then spawn into different cloud regions, which is very helpful when dealing with cloud datasets.
The Kubernetes API can easily be accessed remotely, but the hub and proxy pods need a way to send traffic to the user pod. We can find ways to tunnel this traffic through without much work. My favorite way is to use `kubectl port-forward`, which was also used by my earlier experiments with accessing dask-kubernetes remotely, and is now used by dask-kubernetes itself.
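As a rough sketch of that tunnelling idea (the `kubectl` flags are standard, but the function names, context, namespace, and pod names below are illustrative), the hub side could pick a free local port and build a `port-forward` command against the remote cluster's kubeconfig context:

```python
import socket


def free_local_port():
    # Ask the OS for a currently unused TCP port by binding to port 0.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


def port_forward_cmd(context, namespace, pod_name, local_port, pod_port):
    # kubectl resolves `context` against the local kubeconfig, so only the
    # remote cluster's API server needs to be reachable from the hub.
    return [
        "kubectl", "--context", context, "--namespace", namespace,
        "port-forward", f"pod/{pod_name}", f"{local_port}:{pod_port}",
    ]


local_port = free_local_port()
cmd = port_forward_cmd("remote-cluster", "jhub", "jupyter-alice", local_port, 8888)
# A real spawner would run this with subprocess.Popen(cmd), keep the
# process handle around, and terminate it when the user server stops.
```

The tunnel process is long-lived, so whatever starts it also has to monitor and restart it if it dies.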
Alternative options

- Deploy one hub per cluster users want to spawn into. This is more complicated logistically, and for the user.
- Make a `Service` object for each pod, and expose it to the internet via a `LoadBalancer`. This can receive traffic from the hub and proxy pod.
Who would use this feature?
Anyone interested in accessing compute near datasets stored across multiple cloud providers or regions.
(Optional): Suggest a solution

- Override `get_pod_url` to start a `kubectl port-forward` on a free port, to the pod IP on the remote cluster
- Make sure that `c.JupyterHub.hub_connect_url` is something that the pod can connect to. This could be over HTTPS on the public internet, or something else.
- Figure out how to specify which Kubernetes cluster the API will need to connect to
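The steps above might look roughly like this in `jupyterhub_config.py`. This is only an illustrative sketch: `RemoteClusterSpawner` is a hypothetical subclass (not part of KubeSpawner) standing in for an implementation that overrides `get_pod_url` to return the local tunnel endpoint, while `c.JupyterHub.hub_connect_url` is real JupyterHub configuration:

```python
# jupyterhub_config.py -- illustrative sketch, not a working configuration.

# Hypothetical KubeSpawner subclass whose get_pod_url() starts a
# `kubectl port-forward --context <remote>` tunnel on a free local port
# and returns http://127.0.0.1:<port> instead of the pod IP.
c.JupyterHub.spawner_class = "mymodule.RemoteClusterSpawner"

# The user pod runs in another cluster, so it cannot reach the hub on a
# cluster-internal Service address; point it at something routable, e.g.
# the hub exposed over HTTPS on the public internet.
c.JupyterHub.hub_connect_url = "https://hub.example.com"
```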
Issue Analytics

- State:
- Created: 2 years ago
- Comments: 6
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@nreith I actually ended up building a separate spawner for this, and it works fairly well - https://github.com/yuvipanda/jupyterhub-multicluster-kubespawner.
@yuvipanda, thanks for your great work, I appreciate it very much!
Currently KubeSpawner is only able to spawn in its own namespace (due to the reflectors). Is the multicluster spawner related to multiple namespaces in any way, or only to clusters?
I remember there is a configuration that gives full cluster permissions to the hub, allowing it to create a namespace per user, but that is not my case.
I would like to have a single hub which can spawn into multiple Kubernetes namespaces (not the same one as the hub's). I have a fork of KubeSpawner that changes how the reflectors work, and I added permissions for each namespace I want to the JupyterHub serviceAccount.
I was curious whether your sub-repo has a way to implement the above scenario, or whether my implementation would have a use case for others, in which case I could open a PR and an issue about it.
We did it for multiple reasons:
Thanks for your time!