[Feature Request] Pass run index when performing multi-run
When running a multi-run job (say, training the same model with different hyperparameters), it would help to have each DictConfig contain additional data (hydra.multi_run_id?) that identifies in which of the multiple runs the code is being executed.
The issue is that, as of now, resources cannot easily be allocated. If, for instance, we have 4 GPUs available, we would like the first run to use GPU:0, the second run to use GPU:1, and so on.
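For illustration, here is a minimal sketch of the requested behaviour, assuming the sweep index is exposed as hydra.job.num (readable through the HydraConfig singleton in recent Hydra versions); the 4-GPU round-robin and the script layout are illustrative assumptions, not part of the original request:

```python
# Minimal sketch: round-robin GPU assignment from the multi-run job index.
# Assumes a recent Hydra where HydraConfig.get().job.num holds the sweep index
# (it is only meaningful when launched with --multirun / -m).
import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig

NUM_GPUS = 4  # illustrative: number of local GPUs to spread runs over


@hydra.main(config_path=None, config_name=None)
def main(cfg: DictConfig) -> None:
    job_num = HydraConfig.get().job.num  # index of this run within the sweep
    gpu_id = job_num % NUM_GPUS          # run 0 -> GPU:0, run 1 -> GPU:1, ...
    print(f"multi-run job {job_num} would train on cuda:{gpu_id}")


if __name__ == "__main__":
    main()
```

Running it with something like `python my_app.py -m +model.lr=1e-2,1e-3,1e-4,1e-5` would then map each of the four runs to a different GPU index.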
Issue Analytics
- Created 4 years ago
- Comments: 6 (5 by maintainers)
Top GitHub Comments
Apologies for reviving an old issue.
I am trying to use the Optuna plugin for HPT (using PyTorch with PyTorch-Lightning). I understand that it is possible to use ${hydra:job.num} to tell PL which GPU to use. In this case, how does one set up the launcher to run 4 parallel jobs so as to always make use of all 4 GPUs on a local server? Thanks

see #325