QHub Extension Mechanism
Description
There are many projects that we would like to integrate into QHub, as well as other projects that we may not have the time/resources to support in QHub itself. We need an extension mechanism to integrate projects that are loosely connected to QHub. For me, Prefect and ClearML are good examples of this. I would also like to see if this approach could be adopted within QHub for some of its own parts.
Suggestion
Features that the extension mechanism should have:
- allows us to expose the service at https://<custom-domain>/<custom-prefix>/
- optionally sets up an OAuth2 client for the service
- provides an "overrides" section so that we can easily control the helm chart values
- helm chart information (chart location and version)
Example
extensions:
  - name: prefect
    chart: <path-to-chart-url>
    version: <chart-version>
    domain: <optional domain name>
    prefix: <prefix-url>
    oauth2_client: true/false
    overrides:
      ...
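As a sketch of how qhub might represent one entry of this schema internally (the Extension dataclass, its defaults, and the example chart URL/version are assumptions for illustration, not existing qhub code):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Extension:
    """Hypothetical model for one entry under extensions: in qhub-config.yaml."""
    name: str
    chart: str                       # path/URL to the helm chart
    version: str                     # chart version to pin
    prefix: str                      # exposed at https://<domain>/<prefix>/
    domain: Optional[str] = None     # optional; falls back to the main qhub domain
    oauth2_client: bool = False      # whether to provision an OAuth2 client
    overrides: Dict[str, Any] = field(default_factory=dict)  # raw helm values

# Example entry mirroring the YAML above (chart URL and version are placeholders):
prefect = Extension(
    name="prefect",
    chart="https://example.org/charts/prefect-0.1.0.tgz",
    version="0.1.0",
    prefix="prefect",
    oauth2_client=True,
)
```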
Let's start a discussion about this. We need to use this approach for a few existing components as well.
Issue Analytics
- State:
- Created 2 years ago
- Comments: 11 (11 by maintainers)
Let me talk about these pieces in a bit more detail.
OAuth2 Provisioning
We should provision an OAuth2 client per service with keycloak; this is both possible and encouraged.

It should take as input:
- callback_url, which should be roughly calculated from domain/prefix

This should return:
- endpoint
- user url information
- client id
- secret
This is complex … I am not sure how to expose these values to the helm chart. Possibly we could have some way to reference these variables within the overrides? Ideas welcome.
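One way to picture the inputs/outputs described above (a sketch only; the function, the OAuth2Client fields, and the callback path are assumptions, and no real keycloak API is called here):

```python
from dataclasses import dataclass

@dataclass
class OAuth2Client:
    """Hypothetical outputs an extension would receive after provisioning."""
    endpoint: str       # keycloak realm/token endpoint
    userinfo_url: str   # user info URL
    client_id: str
    client_secret: str

def callback_url(domain: str, prefix: str) -> str:
    """Derive the OAuth2 callback URL roughly from the service's
    domain and prefix, as described above (path suffix is assumed)."""
    return f"https://{domain}/{prefix.strip('/')}/oauth_callback"

# Example derivation for a hypothetical prefect extension:
url = callback_url("qhub.example.org", "/prefect/")
# url == "https://qhub.example.org/prefect/oauth_callback"
```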
Ingress Provisioning
Typically helm charts already have ingress hooks. How do we make sure that we are integrating this properly? I'd prefer if we hook into the existing chart for this, since traefik will do the work of provisioning certs. I need to think about this more.
Monitoring Integration
We should apply annotations that allow prometheus monitoring to easily connect.
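For example, the conventional prometheus.io/* pod annotations could be generated per extension (a sketch; the helper name is an assumption and the port is a placeholder):

```python
def prometheus_annotations(port: int, path: str = "/metrics") -> dict:
    """Return the conventional annotations that an annotation-based
    Prometheus scrape config uses to discover a pod's metrics endpoint."""
    return {
        "prometheus.io/scrape": "true",
        "prometheus.io/port": str(port),
        "prometheus.io/path": path,
    }

# These would be merged into the extension's pod template metadata, e.g.:
annotations = prometheus_annotations(9090)
```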
From the development of the stages in PR #1003, I would like to propose a general "extension" mechanism that makes all parts of QHub an extension. Take for example the following from deploy.py.

Later, when we look at stages/02-infrastructure/gcp, we see a terraform module/directory that is called by this function. There is a lot being done in the terraform.deploy(...) function, but there are important sections seen here:
- input_vars: how do values in qhub_config.yaml and outputs from prior stages (stage_outputs) map to input variables for the terraform module.
- state_imports: these are ids/attrs that can be used so that terraform can "import" these resources if they already exist. This allows qhub to be more stateless. We have used this in the past to import terraform state in stages/01-terraform-state.
- terraform_objects: a hook to allow "dynamic" terraform resources to be rendered to the stage. This should only be a function of the qhub_config.yaml and NOT previous stage_outputs, so that the render command can be separated from the deploy command.

Right now this information is written redundantly in qhub/deploy.py and qhub/destroy.py; instead, I suggest that this information be included in the terraform module itself and that qhub search for it. My suggestion is config.py.
The structure I am suggesting: a minimal QHub extension would be as follows. config.py would be an optional file, and if nothing is provided we would assume sane defaults:
- input_vars would be set to {"qhub_config": config, "stage_outputs": state_outputs}
- state_imports would be empty []
- terraform_outputs would be empty []
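A config.py that spells out those sane defaults explicitly might look like the following (a sketch following the hook names in this proposal; the exact signatures are assumptions, not existing qhub code):

```python
# config.py for a hypothetical qhub extension -- every function is optional

def input_vars(qhub_config: dict, stage_outputs: dict) -> dict:
    """Map qhub-config.yaml values and prior stage outputs to terraform
    input variables (this body is the proposed default behaviour)."""
    return {"qhub_config": qhub_config, "stage_outputs": stage_outputs}

def state_imports(qhub_config: dict, stage_outputs: dict) -> list:
    """ids/attrs terraform should import when the resources already exist."""
    return []

def terraform_outputs(qhub_config: dict) -> list:
    """Called at the render stage; assumed to depend only on qhub_config,
    never on stage_outputs, so render stays separable from deploy."""
    return []
```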
The optional functions within config.py would be: terraform_outputs, which would be called within the render stage, while state_imports and input_vars would be called in the deploy stage. The advantage here is that we could fully adopt this model for all of our components, making qhub fully use its own extension mechanism.

Questions
We would add an extensions key to qhub-config.yaml that takes a list of paths. These paths can be directories, and github repo paths as well. After the core qhub stages have been run, qhub will iterate through each extension, using the config.py in the extension to determine the mapping of qhub-config.yaml values to terraform input variables. Extensions can also assume that certain providers, e.g. keycloak and kubernetes, are already configured via environment variables.

The main reason I suggest this approach is that it allows for easy abstraction and makes qhub-enterprise just another extension to qhub.