gpu/cpu mutex naming
As we are adding more packages, we have to decide on the naming. Currently most packages use `<pkg>-proc=*=cpu` and `<pkg>-proc=*=gpu`. While it is great to have a uniform naming scheme, I don't particularly like the `-proc` name, since it doesn't mean anything. We should decide on the naming soon. If we settle on a new name, we can update the already existing packages to work with both the new name and the old one.
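For context, this kind of CPU/GPU mutex is usually a metapackage whose variants differ only in their build string, so that pinning one build string excludes the other. A minimal recipe sketch, with a hypothetical package name and version (not taken from the issue), might look like:

```yaml
# Hypothetical meta.yaml sketch of a "<pkg>-proc"-style mutex metapackage.
# One recipe is built twice, producing two packages that differ only in
# build string; depending on "somepkg-proc=*=cpu" then makes the gpu
# variant uninstallable in the same environment, and vice versa.
package:
  name: somepkg-proc   # illustrative name
  version: "1.0.0"

build:
  number: 0
  string: cpu          # "gpu" in the other variant
```

A consumer would then select a variant with a spec such as `somepkg-proc=*=cpu`, which is exactly the pattern the naming discussion above is about.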
Issue Analytics
- State:
- Created 3 years ago
- Comments: 14 (14 by maintainers)
Top GitHub Comments
I would prefer the usage of `cuda` instead of `gpu`, to reflect the technology used. Since https://github.com/conda-forge/pytorch-cpu-feedstock/pull/22, that feedstock has the following build strings:
I had asked in that PR:
Copying this here because I’d like to have an idea of what should be used before I go forward with the adaptation in https://github.com/conda-forge/faiss-split-feedstock/pull/19.