Configuring a Library Variable Group for Azure correctly (PR #1206 follow-up)
This is a question seeking clarification on the contents of the README file, specifically the information added in PR #1206, which says:
> If this is your first build on Azure, make sure to add Library Variable Group containing your BINSTAR_TOKEN for automated anaconda uploads.
I am trying to replicate conda-forge's build pipeline on Azure for custom test builds. I am not sure whether I understand the above correctly, or whether additional clarification is needed in the README:
- Is `BINSTAR_TOKEN` equivalent to an Anaconda Cloud "API / Authorization Token"? (I obtained an unrestricted API token for my Anaconda organization through Anaconda Cloud's web interface.)
- Is there any requirement on the name or configuration of the "Library Variable Group"? (I just named it "anaconda_upload" and activated "allow access to all pipelines" for this group.)
- Do I need to treat / encrypt / encode the token somehow? (I just created a variable named `BINSTAR_TOKEN` inside the Library Variable Group and gave it the plain, unmodified API token.)
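For context, Azure Pipelines only makes a Library Variable Group available to a pipeline whose YAML explicitly references it by name (or when it is linked in the classic pipeline editor). A minimal sketch, assuming the group name from my setup:

```yaml
# Sketch: linking a Library Variable Group in azure-pipelines.yml.
# "anaconda_upload" matches the group name I created; the group's variables
# are not available to the pipeline unless it is referenced like this.
variables:
  - group: anaconda_upload
```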
When `build_steps.sh` runs on Azure, I can see the following output at the end:
```text
TEST END: /home/conda/feedstock_root/build_artifacts/linux-64/qgis-3.12.2-py38h0e74f0e_0.tar.bz2
Renaming work directory, /home/conda/feedstock_root/build_artifacts/qgis_1593461144392/work to /home/conda/feedstock_root/build_artifacts/qgis_1593461144392/work_moved_qgis-3.12.2-py38h0e74f0e_0_linux-64_main_build_loop
# Automatic uploading is disabled
# If you want to upload package(s) to anaconda.org later, type:
anaconda upload /home/conda/feedstock_root/build_artifacts/linux-64/qgis-3.12.2-py38h0e74f0e_0.tar.bz2
# To have conda build upload to anaconda.org automatically, use
# $ conda config --set anaconda_upload yes
anaconda_upload is not set. Not uploading wheels: []
####################################################################################
Resource usage summary:
Total time: 1:41:11.1
CPU usage: sys=0:10:38.1, user=1:47:17.1
Maximum memory usage observed: 3.8G
Total disk usage observed (not including envs): 813.9M
+ validate_recipe_outputs qgis-feedstock
Cloning into '/tmp/tmps2hz2o4q_recipe/feedstock-outputs'...
validation results:
{
"linux-64/qgis-3.12.2-py38h0e74f0e_0.tar.bz2": true
}
NOTE: Any outputs marked as False are not allowed for this feedstock.
+ [[ True != \F\a\l\s\e ]]
+ upload_package --validate --feedstock-name=qgis-feedstock /home/conda/feedstock_root /home/conda/recipe_root /home/conda/feedstock_root/.ci_support/linux_.yaml
Found git SHA 978eb5e2dc42fc03a5b2d7168be96f01bc110ce4 for this build!
Using BINSTAR_TOKEN for anaconda.org uploads to qgist.
No numpy version specified in conda_build_config.yaml. Falling back to default numpy value of 1.11
Adding in variants from internal_defaults
Adding in variants from /home/conda/recipe_root/conda_build_config.yaml
Adding in variants from /home/conda/feedstock_root/.ci_support/linux_.yaml
Distribution /home/conda/feedstock_root/build_artifacts/linux-64/qgis-3.12.2-py38h0e74f0e_0.tar.bz2 is new for qgist, but no upload is taking place because the BINSTAR_TOKEN/STAGING_BINSTAR_TOKEN is missing or empty.
+ touch /home/conda/feedstock_root/build_artifacts/conda-forge-build-done-linux_
+ test -f /home/vsts/work/1/s/build_artifacts/conda-forge-build-done-linux_
```
The key line is this one: "no upload is taking place because the BINSTAR_TOKEN/STAGING_BINSTAR_TOKEN is missing or empty." What am I doing wrong or misunderstanding?
Ping @hmaarrfk: Thanks for the original PR, it helped me a lot.
Issue analytics: created 3 years ago; 6 comments (5 by maintainers).
Top GitHub Comments
Ahhhh yes. We have a step in our staged-recipes CI where we do a bunch of uploading and rotating of tokens. It may be that we need to write some docs on those steps.
After a tip from my colleague, I now understand these comments. I got confused by all the Azure-specific terminology.
For anyone else trying to upload binaries to a personal channel from their own feedstock repo, ignore the README instructions about the Library Group Variable. That is the right solution for conda-forge that has thousands of pipelines within the build-feedstocks project (since they can in theory be shared across pipelines within the same project). But as the instructions currently are, the CI scripts don’t access the Library Variable Group. There must be some undocumented step. Instead, add a
BINSTAR_TOKEN
as a secret pipeline variable to each of the feedstock-specific pipelines.To be really pedantic, go to
https://dev.azure.com/<your-azure-account>/feedstock-builds/_build
, click on the pipeline named after your feedstock repo, and then click “Edit” in the top right of the UI:Then click “Variables”:
Then add
BINSTAR_TOKEN
, making sure to tick the box “Keep this value secret”The Azure YAML files already properly add the secret variable as an environment variable:
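For reference, the explicit mapping that makes this work looks roughly like the following in a pipeline step (a sketch; the script name is illustrative, but the `env:` block is how Azure Pipelines exposes secret variables, since secrets are never passed to scripts automatically):

```yaml
# Sketch of a feedstock build step; the script path is illustrative.
steps:
  - script: ./.scripts/run_docker_build.sh
    displayName: Run docker build
    env:
      # Secret pipeline variables are NOT exported to scripts automatically;
      # they must be mapped explicitly, like this.
      BINSTAR_TOKEN: $(BINSTAR_TOKEN)
```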
Note that I also set `conda_forge_output_validation` to `False`. I didn't test whether this is required, but it does remove the env vars `FEEDSTOCK_TOKEN` and `STAGING_BINSTAR_TOKEN`, so you don't have to worry about the build trying to submit to official conda-forge repos/channels.