[QST] How to use multiple threads per GPU worker?
While running a large job, I noticed with watch -n 1 nvidia-smi
that my GPUs were relatively underutilized.
I attempted to give each GPU worker more threads on which to process tasks simultaneously with threads_per_worker=2:
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import dask, dask_cudf
cluster = LocalCUDACluster(ip='0.0.0.0', threads_per_worker=2)
client = Client(cluster)
# print client info
client
The cluster starts up fine, and even begins processing tasks with twice as many streams, as expected. However, progress as reported by the Dask dashboard locks up shortly afterwards on a DAG that completes successfully with the typical single thread per worker.
nvidia-smi shows plenty of memory remaining on each card, and Jupyter shows no errors or warnings.
Is this expected behavior? Any suggestions for how to diagnose the freeze?
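A minimal diagnostic sketch, assuming the cluster/client setup from the snippet above; it checks what each worker has in flight once the dashboard stalls and confirms the thread count actually took effect. The aggregation step is a placeholder for the real workload, and the key name in scheduler_info() has varied across distributed releases ('nthreads' vs. the older 'ncores').

# Diagnostic sketch (same cluster/client as in the question)
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

cluster = LocalCUDACluster(threads_per_worker=2)
client = Client(cluster)

# ... submit the real workload here, then, once the dashboard stalls:

# Confirm each worker really has two threads
info = client.scheduler_info()
print({addr: w.get("nthreads", w.get("ncores")) for addr, w in info["workers"].items()})

# Tasks currently assigned to each worker
print(client.processing())

# Call stacks of the tasks executing right now; if every thread is parked in
# the same cuDF/CUDA call, the hang is more likely device-side contention
# than scheduler starvation
print(client.call_stack())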
Issue Analytics
- Created: 4 years ago
- Comments: 11 (10 by maintainers)
Top Results From Across the Web
- How many threads can run on a GPU? (StreamHPC): Intel CPUs have two threads per physical core for one main reason: optimising usage of the full core. …
- Concurrency with multithreading (OLCF): … Streams allow multiple threads to submit kernels for concurrent execution on a single GPU.
- How to differentiate GPU threads in a single GPU for different tasks: The solution for this very simple case is to divide up your array into pieces, one piece per thread. …
- Threading — NVIDIA PhysX SDK 3.4.0 Documentation: This chapter explains how to use PhysX in multithreaded applications. There are three main aspects to using PhysX with multiple threads: …
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Good point. The tests above were against gzipped files, so more processes reduced the overhead of host-side decompression.
When I switched to pre-decompressed data, some of the improvement dropped, but it is still significant, and run-to-run variation is only about 10 seconds instead of 1 minute:
- 2 processes per worker, chunksize='512 MiB': Wall time: 4min 46s
- 3 processes per worker, chunksize='512 MiB': Wall time: 3min 55s
- 3 processes per worker, chunksize='1024 MiB': Wall time: 3min 35s - 3min 46s
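For reference, a hedged sketch of how that kind of chunk-size comparison might be timed. The file glob and column names are placeholders, the chunksize= argument mirrors the dask_cudf option referenced in the timings (newer releases call it blocksize=), and the multi-process-per-GPU setup itself is omitted; the loop simply runs against whatever Dask cluster/client is already active.

# Hedged timing sketch; PATHS and the column names are placeholders
import time
import dask_cudf

PATHS = "data/*.csv"  # hypothetical input files

for chunk in ("512 MiB", "1024 MiB"):
    start = time.time()
    ddf = dask_cudf.read_csv(PATHS, chunksize=chunk)
    ddf.groupby("key")["value"].sum().compute()  # placeholder aggregation
    print(f"chunksize={chunk}: {time.time() - start:.1f}s")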
Should add that Peter has been doing a lot of work adding support for PTDS (per-thread default streams), so that may be something worth trying out at some point.
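Purely as a conceptual illustration of what PTDS buys (this is not the dask-cuda/cuDF work referred to above): with per-thread default streams, each host thread launches kernels on its own CUDA stream, so two Dask worker threads can overlap GPU work instead of serializing on the single legacy default stream. The sketch below mimics that idea explicitly with CuPy streams.

# Each thread issues GPU work on its own non-blocking stream
import threading
import cupy as cp

def gpu_task(n):
    stream = cp.cuda.Stream(non_blocking=True)  # this thread's own stream
    with stream:
        x = cp.random.random((n, n))
        y = x @ x  # kernel launched on this thread's stream
    stream.synchronize()

threads = [threading.Thread(target=gpu_task, args=(2048,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()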