TPU auto_optim barrier is always True
🐛 Bug description
In the `auto_optim` method, when using TPUs, should `barrier=True` always be set when overriding the optimizer's step? Unless I am understanding it incorrectly, it should be changed depending on the number of TPU devices. Looking through the PyTorch/XLA 1.5 docs, it should only be set to True when using a single XLA device.
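For illustration, here is a minimal sketch of what a device-count-aware override could look like. This is not Ignite's actual implementation; it assumes `torch_xla` is installed and uses `xm.xrt_world_size()` (the XLA API of that era) to detect the number of participating devices:

```python
import torch_xla.core.xla_model as xm


def make_step(optimizer):
    # With multiple XLA devices, the gradient all-reduce performed
    # inside xm.optimizer_step already synchronizes the replicas,
    # so the explicit barrier is only needed on a single device.
    single_device = xm.xrt_world_size() == 1

    def step(closure=None):
        xm.optimizer_step(optimizer, barrier=single_device)

    return step


# Hypothetical usage: swap in the wrapper for the optimizer's step.
# optimizer.step = make_step(optimizer)
```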
EDIT: It looks like something changed from 1.5 to 1.7 in how XLA works. PyTorch/XLA 1.7 doesn't use `barrier` but `xm.mark_step()` for single XLA devices.
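For context, the single-device pattern in the PyTorch/XLA 1.7 docs steps the optimizer plainly and then cuts the lazy graph explicitly. A self-contained sketch (the toy model and data are only for illustration):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 4, device=device)
loss = model(x).sum()
loss.backward()

# PyTorch/XLA >= 1.7 single-device pattern: no barrier on the step;
# xm.mark_step() materializes the pending lazy computation graph.
optimizer.step()
xm.mark_step()
```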
Environment
- PyTorch Version (e.g., 1.4):
- Ignite Version (e.g., 0.3.0):
- OS (e.g., Linux):
- How you installed Ignite (`conda`, `pip`, source):
- Python version:
- Any other relevant information:
Thanks! Maybe a good start can be on Google Colab and then on Kaggle 😃
Sure, I agree, I am happy to close the issue for now. I will try to test the performance impact of `barrier` some time when I have the TPU resources, as currently my 30 hours per week in Kaggle notebooks runs out pretty quickly.