Support ExternalIP for traefik with JupyterHub auth?
Standard disclaimer that I'm probably misunderstanding something basic about kubernetes / helm.
Over in pangeo, we're set up with jupyterhub-based auth. We might also want to allow access from outside the hub. I can connect to the gateway and make a cluster by generating a jupyterhub token:
import os
from dask_gateway import Gateway
from dask_gateway.auth import JupyterHubAuth
auth = JupyterHubAuth(os.environ["PANGEO_TOKEN"])
auth
gateway = Gateway(
    address="https://us-central1-b.gcp.pangeo.io/services/dask-gateway/",
    auth=auth,
)
cluster = gateway.new_cluster()
However, I can't connect a client to the cluster object, at least with the default proxy_address.
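(The exact call isn't shown above, but the timeout comes from the usual client-connection step, along these lines:)

from dask.distributed import Client

# Connecting a distributed client to the GatewayCluster; this is the step
# that produces the timeout below when the scheduler proxy isn't reachable.
client = Client(cluster)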
OSError: Timed out trying to connect to 'gateway://us-central1-b.gcp.pangeo.io:443/prod.1905285f1b2c457a9a045b2ff16e723c' after 10 s: Timed out trying to connect to 'gateway://us-central1-b.gcp.pangeo.io:443/prod.1905285f1b2c457a9a045b2ff16e723c' after 10 s: [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:1091)
The thing I can't figure out: what should the proxy address be? When you aren't using jupyterhub auth, it's just the external IP associated with traefik-<RELEASE>-dask-gateway.
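For reference, passing the proxy address explicitly would look something like the following; the host and port are placeholders for whatever the traefik service actually exposes, not values known to work here:

# Placeholder endpoint: substitute an externally reachable traefik address.
gateway = Gateway(
    address="https://us-central1-b.gcp.pangeo.io/services/dask-gateway/",
    proxy_address="tls://<traefik-external-ip-or-dns>:8786",
    auth=auth,
)
cluster = gateway.new_cluster()
client = cluster.get_client()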
Here are my services
bash-5.0$ kubectl -n prod get service
NAME                                            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
api-gcp-uscentral1b-prod-dask-gateway           ClusterIP      10.39.248.72    <none>           8000/TCP                     39d
dask-gateway-1905285f1b2c457a9a045b2ff16e723c   ClusterIP      None            <none>           8786/TCP,8787/TCP,8788/TCP   3m25s
hub                                             ClusterIP      10.39.254.53    <none>           8081/TCP                     39d
proxy-api                                       ClusterIP      10.39.253.144   <none>           8001/TCP                     39d
proxy-http                                      ClusterIP      10.39.245.173   <none>           8000/TCP                     39d
proxy-public                                    LoadBalancer   10.39.245.204   35.238.103.127   443:32108/TCP,80:32559/TCP   39d
traefik-gcp-uscentral1b-prod-dask-gateway       ClusterIP      10.39.248.0     <none>           80/TCP                       39d
And traefik specifically
bash-5.0$ kubectl -n prod get service traefik-gcp-uscentral1b-prod-dask-gateway -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-07-01T22:24:33Z"
  labels:
    app.kubernetes.io/instance: gcp-uscentral1b-prod
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dask-gateway
    app.kubernetes.io/version: 0.7.1
    gateway.dask.org/instance: gcp-uscentral1b-prod-dask-gateway
    helm.sh/chart: dask-gateway-0.7.1
  name: traefik-gcp-uscentral1b-prod-dask-gateway
  namespace: prod
  resourceVersion: "3211164"
  selfLink: /api/v1/namespaces/prod/services/traefik-gcp-uscentral1b-prod-dask-gateway
  uid: be9cda1f-f0e2-4c45-a1f1-a02ed67daea5
spec:
  clusterIP: 10.39.248.0
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app.kubernetes.io/component: traefik
    app.kubernetes.io/instance: gcp-uscentral1b-prod
    app.kubernetes.io/name: dask-gateway
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I haven’t been able to find anything in the helm chart that controls the type of the traefik service based on jupyterhub auth, but I’m still looking. Opening this on the off chance that someone knows the simple solution.
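(As an aside, a sketch of the kind of values override that would expose traefik externally; the traefik.service.type key is taken from newer versions of the dask-gateway chart and I haven't verified that 0.7.1 exposes it, so treat this as an assumption:)

# Assumed key from newer dask-gateway chart versions; may not exist in 0.7.1
traefik:
  service:
    type: LoadBalancer   # expose the traefik proxy externally instead of ClusterIP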
Thanks for any insights!
Thanks again! I think I just got it by updating my dask gateway proxy address in the helm chart and redeploying.
I’m not sure what I was doing wrong earlier, but that seemed to do it.
Glad I could help in some small way!
Confirmed. There are some discrepancies in configuration that could explain the differences in outcome. Our Dask Gateway URL does not include a path. In addition to spinning up an ELB dynamically with the Traefik service, we also create an AWS Route 53 entry that points to the dynamically-generated ELB using annotations in conjunction with external-dns.
The Traefik block in our values.yaml looks something like the sketch below. The external-dns.alpha.kubernetes.io/hostname annotation works in conjunction with external-dns to create the Route 53 entry, while the remaining tags work in conjunction with cloud-provider-aws to create the ELB that the Route 53 entry points to.
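A minimal illustration of what that block might look like; the hostname and the specific AWS annotations are placeholders chosen for the example rather than the values from the actual deployment, and the traefik.service.annotations key is assumed from the current dask-gateway chart layout:

traefik:
  service:
    type: LoadBalancer
    annotations:
      # external-dns watches this annotation and creates the Route 53 record
      external-dns.alpha.kubernetes.io/hostname: dask-gateway.example.org
      # illustrative cloud-provider-aws annotations shaping the generated ELB;
      # the real deployment uses its own set of annotations and tags
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod"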
As I'm sure you've noticed, the dynamically-generated load balancer has an inconsistent external DNS name, which makes for a problematic user experience. The value of generating a CNAME alongside the ELB is that the CNAME tracks the moving target and presents a consistent endpoint for the client to consume. I see you're on GCP rather than AWS; external-dns is compatible with GCP as well.
We actually copy a gateway configuration YAML into the client container so that the gateway URL argument, the proxy_address kwarg, and the auth specification can all be omitted from the Gateway instantiation and a bare gateway = Gateway() suffices. We find this makes for a more accessible user experience.
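For anyone wanting to replicate that setup, here is a sketch of the kind of Dask config file that can supply those defaults; the path and every value are placeholders rather than the actual configuration:

# e.g. /etc/dask/gateway.yaml baked into the client image (illustrative)
gateway:
  address: "https://us-central1-b.gcp.pangeo.io/services/dask-gateway/"
  proxy-address: "tls://dask-gateway.example.org:8786"  # placeholder endpoint
  auth:
    type: jupyterhub  # JupyterHubAuth typically picks up JUPYTERHUB_API_TOKEN from the environment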