Run samples cleanly on GPU with one line addition
@gbaydin I’m just wondering what your expectation/desire is here for the programming model.
I’m looking at the VAE.fsx sample and trying to run it on GPU. Is the intent that I do this by setting GPU as the default device, or is it expected to instruct things via
`model.move(Device.GPU)`? In that case, how do we specify the move of the data and related tensors to the GPU?
When I make the GPU the default, then “Saving samples” takes a very long time. Adding this helps:

```fsharp
samples.move(Device.CPU).saveImage(sprintf "samples_%A_%A.png" epoch i)
```
We should probably always move to the CPU before doing things like `saveImage`.
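For reference, a minimal sketch of the two approaches discussed above. This assumes DiffSharp’s `dsharp.config` and `move` APIs; the model constructor and `data`/`samples` values are hypothetical stand-ins for what VAE.fsx actually defines, and the exact configuration call may differ between versions:

```fsharp
open DiffSharp

// Option 1: make the GPU the default device for all newly created tensors.
// (With this set, results still need moving back to the CPU before I/O.)
dsharp.config(device=Device.GPU)

// Option 2: keep CPU as the default and move things explicitly.
// model.move moves the parameters; input tensors must be moved too.
model.move(Device.GPU)
let batch = data.move(Device.GPU)

// Either way, move back to the CPU before host-side operations like saveImage:
samples.move(Device.CPU).saveImage("samples.png")
```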
- Created 3 years ago
- Comments: 13 (7 by maintainers)
Top GitHub Comments
About this, will the situation be simplified when DiffSharp 1.0 is released on NuGet? I mean, can the user simply write a couple of `#r "nuget: ..."` lines and be good to go, without the `System.Runtime.InteropServices.NativeLibrary.Load` workaround?
I’m hopeful but it’s not certain. I’m slowly working through these issues, trying to understand what’s going on with the native library loading and package delivery. There are lots of quirky issues here.
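For what it’s worth, the hoped-for experience would presumably reduce to something like the following in an .fsx script. The package name and the `Backend.Torch` configuration are illustrative assumptions about how the NuGet delivery might look, not a confirmed setup:

```fsharp
// Pull the package (and, ideally, its native binaries) straight from NuGet;
// no manual NativeLibrary.Load call needed.
#r "nuget: DiffSharp-cpu"

open DiffSharp

// Select the Torch backend and do a trivial computation to confirm loading.
dsharp.config(backend=Backend.Torch)
let t = dsharp.tensor [ 1.0; 2.0; 3.0 ]
printfn "%A" (t * 2.0)
```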
I’m closing this as addressed.