
Run samples cleanly on GPU with one line addition

See original GitHub issue

@gbaydin I’m just wondering what your expectation/desire is here for the programming model.

I’m looking at the VAE.fsx sample and trying to run it on the GPU. Is the intent that I do this by setting the GPU as the default device?

dsharp.config(backend=Backend.Torch, device=Device.GPU)

or is it expected to instruct things via model.move(Device.GPU)? If so, how do we move the data and related tensors to the GPU?
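For reference, a minimal sketch of the two styles side by side (the Linear model, dsharp.randn, and the forward call below are my assumptions about the DiffSharp API, standing in for the actual VAE from VAE.fsx):

open DiffSharp
open DiffSharp.Model

// Style A: make the GPU the global default, so everything created afterwards
// (model parameters, fresh tensors) lands on the GPU.
dsharp.config(backend=Backend.Torch, device=Device.GPU)

// Style B (an alternative, not meant to be combined with the above): keep the
// CPU default and move the model and the data explicitly.
dsharp.config(backend=Backend.Torch, device=Device.CPU)
let model = Linear(10, 2)                    // stand-in for the VAE model
model.move(Device.GPU)                       // moves the parameters
let x = dsharp.randn([4; 10])                // created on the CPU default
let y = model.forward (x.move(Device.GPU))   // the data has to be moved too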

When I make the GPU the default, the “Saving samples” step takes a very long time. Adding this helps:

samples.move(Device.CPU).saveImage(sprintf "samples_%A_%A.png" epoch i)

We should probably always move to the CPU before doing things like saveImage.
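One way to bake that in is a tiny helper, so the saving step no longer depends on which device training runs on; this is only a sketch, with Tensor.move and Tensor.saveImage taken from the call above and the helper name made up:

open DiffSharp

// Hypothetical helper: always hop back to the CPU before doing image IO.
let saveImageOnCpu (fileName: string) (t: Tensor) =
    t.move(Device.CPU).saveImage(fileName)

// Usage, mirroring the line above:
// saveImageOnCpu (sprintf "samples_%A_%A.png" epoch i) samples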

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 13 (7 by maintainers)

Top GitHub Comments

1 reaction
dsyme commented, Oct 1, 2020

About this, will this situation be simplified when DiffSharp 1.0 is released on nuget? I mean, can the user simply do a couple of #r nuget lines and be good to go without the System.Runtime.InteropServices.NativeLibrary.Load?

I’m hopeful but it’s not certain. I’m slowly working through these issues, trying to understand what’s going on with the native library loading and package delivery. There are lots of quirky issues here.
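For context, the hoped-for setup would look something like this in a script; the package name below is my guess at a backend-specific DiffSharp package, and the library path in the workaround is purely a placeholder:

// Hoped-for setup: a couple of #r lines and a config call, with no manual
// native-library loading.
#r "nuget: DiffSharp-cpu"   // assumed package name; a CUDA variant would be used for GPU runs

open DiffSharp
dsharp.config(backend=Backend.Torch)

// The workaround under discussion: pre-load the native libtorch binary before
// DiffSharp touches it (the path here is a placeholder, not a real location).
// System.Runtime.InteropServices.NativeLibrary.Load("/path/to/libtorch.so")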

0 reactions
gbaydin commented, Sep 13, 2021

I’m closing this as addressed.

Read more comments on GitHub >

Top Results From Across the Web

How to set specific gpu in tensorflow?
You can do this in python by having a line os.environ["CUDA_VISIBLE_DEVICES"]="0,1" after importing os package. Using with tf.device('/gpu ...
Read more >
Could a GPU alone run a complete system? : r/hardware
If what you're looking for is a single piece of hardware that can "morph" cleanly between GPU and CPU, I think that that...
Read more >
MLOps for Batch Processing: Running Airflow on GPUs
From there, the external containers will execute their code on the target GPUs as a single-run script. This requires an external container with ......
Read more >
Multi-GPU TensorFlow on Saturn Cloud
We'll train a model serially, on one GPU, and then walk through exactly how ... neatly separated out training, test, and validation samples....
Read more >
Vector Processing on CPUs and GPUs Compared
Modern CPUs and GPUs can all process a lot of data in parallel so what exactly makes them different? This question is getting...
Read more >
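Picking up the CUDA_VISIBLE_DEVICES tip from the first result above, the same trick should carry over to a .NET process, assuming libtorch reads the variable when it initialises CUDA (that part is my assumption, not something stated in the result):

open System
open DiffSharp

// Restrict which GPUs the process can see before the Torch backend starts up.
Environment.SetEnvironmentVariable("CUDA_VISIBLE_DEVICES", "0")
dsharp.config(backend=Backend.Torch, device=Device.GPU)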
