[QUESTION][REQUEST] Way to move model to a specific device
In PyTorch you can move a model to a specific device via
my_model.to(device)
Is there currently a way to do that with Darts?
It would be especially useful on M1 Macs, which do not support CUDA and instead need the "mps" backend. In pure PyTorch this is very easy to do, but I couldn't find a way to do it with Darts.
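Darts' Torch-based models don't expose a `.to(device)` method directly, but their constructors accept a `pl_trainer_kwargs` dict that is forwarded to the underlying PyTorch Lightning Trainer, which is where the accelerator is chosen. A minimal sketch, assuming a recent Darts version; the model choice and chunk lengths are illustrative, not from the original question:

```python
from darts.models import NBEATSModel  # any TorchForecastingModel takes the same kwargs

# pl_trainer_kwargs is passed through to pytorch_lightning.Trainer,
# so any Trainer argument can be set here.
model = NBEATSModel(
    input_chunk_length=24,
    output_chunk_length=12,
    pl_trainer_kwargs={
        "accelerator": "mps",  # Apple Silicon GPU; use "gpu" for CUDA, "cpu" otherwise
        "devices": 1,
    },
)
```

This is effectively a config fragment: the dict keys are Lightning Trainer arguments, not Darts-specific ones.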
Issue Analytics
- State:
- Created a year ago
- Comments: 11 (3 by maintainers)
Got this working. The solution to get num_loader_workers working was to wrap the code-execution logic in an
if __name__ == '__main__' guard.
However, even with num_workers > 0 it is still very slow on GPU: 2 seconds on CPU, while I killed the still-running process after 30 minutes on GPU. I can only conclude there is some sort of issue with the current state of the M1 PyTorch implementation. Not worth bothering with in the current state of support, IMO. Hope this helps someone in the future.
Complete code below.
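The poster's complete code isn't reproduced here, but the `__main__`-guard fix can be sketched in isolation with the standard library. On macOS and Windows, multiprocessing uses the `spawn` start method, which re-imports the main module in every worker process; DataLoader workers (`num_workers > 0`) are subject to the same requirement. The `double` helper is a hypothetical stand-in for real work:

```python
import multiprocessing as mp

def double(x):
    # Stand-in for per-worker work; in the Darts case the workers are
    # DataLoader processes created when num_loader_workers > 0.
    return x * 2

def main():
    with mp.Pool(2) as pool:
        return pool.map(double, [1, 2, 3])

if __name__ == "__main__":
    # Without this guard, each spawned worker would re-execute the
    # module-level code on import and try to start workers recursively.
    print(main())  # [2, 4, 6]
```

Any module-level code that starts workers (model fitting included) must sit under the guard, not at the top level of the script.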
Using the latest nightly build of PyTorch. I've tried digging deeper, but no luck. If I ever resolve it, I'll post.
Wondering if this is related: https://github.com/Lightning-AI/lightning/issues/4289