[BUG] - Configure conda-store allowed channels via nebari config
Describe the bug
I'd like to configure `CondaStore.conda_allowed_channels` via the nebari configuration file. This should be possible as described here:

```yaml
conda_store:
  extra-settings:
    CondaStore:
      conda_allowed_channels: []
```
However, this does not validate against the nebari schema, which requires `extra_settings` (with an underscore).
Trying this configuration instead:

```yaml
conda_store:
  extra_settings:
    CondaStore:
      conda_allowed_channels: []
```

…does validate but may result in a broken conda-store service.
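For context, the `extra_settings` block is ultimately rendered into conda-store's traitlets-style configuration (`c.<Class>.<trait> = <value>`). The sketch below illustrates that flattening with a hypothetical helper; it is not nebari's actual code, just a way to see what the nested YAML above is meant to become:

```python
def flatten_traitlets(settings: dict) -> list[str]:
    """Render a nested extra_settings mapping into traitlets-style
    `c.<Class>.<trait> = <value>` assignment lines (illustrative only)."""
    lines = []
    for cls, traits in settings.items():
        for trait, value in traits.items():
            lines.append(f"c.{cls}.{trait} = {value!r}")
    return lines

# The extra_settings block from the config above, as a Python dict:
extra_settings = {"CondaStore": {"conda_allowed_channels": []}}
print(flatten_traitlets(extra_settings))
# ['c.CondaStore.conda_allowed_channels = []']
```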
Expected behavior
In general, it should be possible to pass any conda-store configuration parameter through via nebari-config.yaml.
OS and architecture in which you are running Nebari
GKE
How to Reproduce the problem?
Try the example configs given above and deploy.
Command output
No response
Versions and dependencies used.
Nebari version 2022.11.1.
Compute environment
GCP
Integrations
conda-store
Anything else?
Originally discussed here: https://github.com/nebari-dev/nebari/discussions/1567.
Issue Analytics
- Created 9 months ago
- Comments: 5 (5 by maintainers)
Hi @alimanfoo. Thanks for reporting this. If I understood correctly, you got a problem with `extra-config` not being a standard key for `conda_store`, right? Have you encountered any issues with using `extra_config` instead? If so, could you post them, as @costrouc commented above?

Just to give some context, the reason why `extra-config` failed is most probably due to a typo here: https://github.com/nebari-dev/nebari/blob/49bca7153d14965f06340bf50f24e054fc0ca4d7/nebari/stages/input_vars.py#L288-L292 and here: https://github.com/nebari-dev/nebari/blob/49bca7153d14965f06340bf50f24e054fc0ca4d7/nebari/schema.py#L125-L126. It should be `extra-settings` and `extra-config` to comply with the conda-store docs.

This also means that for Nebari (at least the current versions) the standard keys will be `extra_config`/`extra_settings`. They should not break conda-store, because they are correctly passed on to conda-store with the correct syntax here: https://github.com/nebari-dev/nebari/blob/49bca7153d14965f06340bf50f24e054fc0ca4d7/nebari/template/stages/07-kubernetes-services/conda-store.tf#L63-L64

I do agree that having these different spellings for the same keys is confusing and unnecessary, so an action item would be to rename them to the correct syntax as per the conda-store docs.
Hi folks, thanks so much for following this up, much appreciated. Just wanted to say apologies: I'm unlikely to have time to do more on this in the short term, and I'm afraid the logs from when I tried this before will be long gone, having been through a number of cycles of destroying and redeploying clusters since.