Pipeline node output artifacts should not be overridden if a node is executed multiple times
See original GitHub issue.

Scenario:
- Create a notebook pipeline.
- Create a notebook (abc.ipynb) that produces a unique result during each execution.
- Add the same notebook node twice to the pipeline.
- Run the pipeline.
- Inspect the output artifacts abc.ipynb and abc.html.

The outputs of notebook execution 1 are overwritten by the outputs of notebook execution 2.
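A workaround, until the runtime itself disambiguates artifacts, is to give each node execution a unique output file name. The sketch below is illustrative only, not Elyra's actual implementation; the `unique_artifact_name` helper and the run IDs are hypothetical:

```python
import uuid
from pathlib import Path

def unique_artifact_name(path: str, node_id: str = None) -> str:
    """Return an output file name suffixed with a per-execution ID,
    so two executions of the same node do not overwrite each other."""
    p = Path(path)
    # Use the caller-supplied node/run ID if given, else a random suffix.
    suffix = node_id or uuid.uuid4().hex[:8]
    return str(p.with_name(f"{p.stem}-{suffix}{p.suffix}"))

# Two executions of the same notebook node produce distinct artifacts:
first = unique_artifact_name("abc.ipynb", node_id="run1")   # abc-run1.ipynb
second = unique_artifact_name("abc.ipynb", node_id="run2")  # abc-run2.ipynb
assert first != second
```

The same suffixing would apply to every artifact a node emits (here, both abc.ipynb and abc.html), so that repeated executions remain inspectable side by side.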
Issue Analytics
- State:
- Created 3 years ago
- Comments: 6 (5 by maintainers)
Top Results From Across the Web
- Pipeline: Basic Steps - Jenkins
  fileExists: Verify if file exists in workspace. Checks if the given file exists on the current node. Returns true | false ...
- node.js - Gitlab: Passing artefacts through jobs and stages
  1 Answer · artifacts.name is not defined so the default "artifacts" string would be used. · If you store report. · Seems...
- Troubleshoot pipeline runs - Azure DevOps - Microsoft Learn
  If a pipeline doesn't start at all, check the following common trigger-related issues. UI settings override YAML trigger setting; Pull request triggers...
- Build specification reference for CodeBuild
  To override the default buildspec file name, location, or both, do one of the following: Run the AWS CLI create-project or update-project command...
- `.gitlab-ci.yml` keyword reference - GitLab Docs
  Zero-downtime upgrades for multi-node instances ... Choose when jobs run · CI/CD job token ... Pipeline artifacts .gitlab-ci.yml.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yes, that’s what I was trying to raise.
@ptitzler yes, hyperparameter tuning will generally have the same input data, with the only difference in runs being the input parameters to the ML pipeline / model. The results (trained model) would generally be different.
And yes, random initialization of models will also lead to slightly different results on each training run (unless a fixed random seed is used, which is not uncommon for reproducibility of pipelines).
But yes this would cover any case where the output of the node depends on the input parameters / env variables.
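The fixed-seed point above can be illustrated with a minimal stdlib-only sketch; `seed_everything` is a hypothetical helper, and a real ML pipeline would also seed numpy, torch, etc.:

```python
import random

def seed_everything(seed: int = 42) -> None:
    """Fix the random seed so repeated pipeline runs are reproducible."""
    random.seed(seed)

# Same seed => identical draws across two simulated "runs":
seed_everything(7)
run_a = [random.random() for _ in range(3)]
seed_everything(7)
run_b = [random.random() for _ in range(3)]
assert run_a == run_b
```

Without the seed, each run draws different values, which is exactly why two executions of the same node can legitimately produce different output artifacts.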