
gpu/cpu mutex naming

See original GitHub issue

cc @conda-forge/core

As we add more packages, we have to decide on a naming scheme.

Currently, most packages use <pkg>-proc=*=cpu and <pkg>-proc=*=gpu. While it is great to have a uniform naming scheme, I don’t particularly like the -proc name, since it doesn’t convey anything about what the package selects. We should decide on the naming soon.
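
For readers unfamiliar with the pattern, here is a minimal sketch of how such a -proc mutex is typically wired up in a recipe. The package name "example" is hypothetical and the details vary between feedstocks; the point is only that conda can install a single build of the mutex output at a time, so pinning its build string selects the variant for everything that depends on it:

# Hypothetical meta.yaml sketch; not any particular feedstock's recipe.
outputs:
  # The mutex metapackage: one output name, two mutually exclusive build strings.
  - name: example-proc
    build:
      number: 0
      string: cpu   # [cuda_compiler_version == "None"]
      string: gpu   # [cuda_compiler_version != "None"]

  # The actual package constrains itself to the matching mutex build.
  - name: example
    requirements:
      run:
        - example-proc * cpu   # [cuda_compiler_version == "None"]
        - example-proc * gpu   # [cuda_compiler_version != "None"]

A user (or a downstream recipe) then requests example-proc=*=cpu alongside example to force the CPU variant, which is exactly the <pkg>-proc=*=cpu spelling above.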

If we decide on a new name, we can update the already existing packages to work with both the new name and the old one.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 14 (14 by maintainers)

Top GitHub Comments

7 reactions
xhochy commented, May 9, 2020

I would prefer the usage of cuda instead of gpu to reflect the technology used.
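
For concreteness, under that suggestion only the mutex build string (and the corresponding pins) in the sketch above would change, e.g. (names still hypothetical):

  - name: example-proc
    build:
      number: 0
      string: cpu    # [cuda_compiler_version == "None"]
      string: cuda   # [cuda_compiler_version != "None"]

and the user-facing spec would become example-proc=*=cuda rather than example-proc=*=gpu.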

2 reactions
h-vetinari commented, Jan 9, 2021

Since https://github.com/conda-forge/pytorch-cpu-feedstock/pull/22, that feedstock has the following build strings:

outputs:
  - name: pytorch
    build:
      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [cuda_compiler_version != "None"]
      string: cpu_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                      # [cuda_compiler_version == "None"]

I had asked in that PR:

@h-vetinari: Just checking what the thought was for the build strings here, since I stumbled over them and tracked them down to this PR - I had been thinking about adding the cuda version to the faiss build strings as well: conda-forge/faiss-split-feedstock@53422a1

Perhaps - like for the mutex naming (cf. conda-forge/conda-forge.github.io#1059) - there should be a standard way to do this? Speaking of mutexes: if I understand correctly, there’s now no “proc” package for pytorch to select the type of installation? Does conda install pytorch=*=cpu still work with the qualifier in front? (wanted to try, but there are no CF packages for Windows yet…)

Copying this here because I’d like to have an idea of what should be used before I go forward with the adaptation in https://github.com/conda-forge/faiss-split-feedstock/pull/19.
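
On the build-string question quoted above: as far as I understand conda's match specs (not verified against these exact packages), a spec like pytorch=*=cpu would no longer match, because the build strings now start with cpu_py rather than being exactly cpu, while a trailing glob should still work. A minimal environment sketch, assuming conda's usual build-string globbing:

# Hypothetical environment.yml sketch; assumes standard MatchSpec globbing.
name: example-cpu
channels:
  - conda-forge
dependencies:
  # The builds above are named "cpu_py<ver>h<hash>_<num>", so an exact
  # "=*=cpu" spec would not match; glob the tail instead.
  - pytorch=*=cpu_*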

Read more comments on GitHub >

