
pynvml does not support looking up process info

See original GitHub issue

When I call nvmlDeviceGetGraphicsRunningProcesses, it raises the exception below.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~/gitProject/venv/siren/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
    759         try:
--> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
    761             return _nvmlGetFunctionPointer_cache[name]

/usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
    355             raise AttributeError(name)
--> 356         func = self.__getitem__(name)
    357         setattr(self, name, func)

/usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
    360     def __getitem__(self, name_or_ordinal):
--> 361         func = self._FuncPtr((name_or_ordinal, self))
    362         if not isinstance(name_or_ordinal, int):

AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2

During handling of the above exception, another exception occurred:

NVMLError_FunctionNotFound                Traceback (most recent call last)
<ipython-input-5-6d9d0902fdc2> in <module>
----> 1 nvmlDeviceGetGraphicsRunningProcesses(handle)

~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses(handle)
   2179
   2180 def nvmlDeviceGetGraphicsRunningProcesses(handle):
-> 2181     return nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
   2182
   2183 def nvmlDeviceGetAutoBoostedClocksEnabled(handle):

~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
   2147     # first call to get the size
   2148     c_count = c_uint(0)

AttributeError                            Traceback (most recent call last)
~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
    759         try:
--> 760             _nvmlGetFunctionPointer_cache[name] = getattr(nvmlLib, name)
    761             return _nvmlGetFunctionPointer_cache[name]

/usr/lib64/python3.6/ctypes/__init__.py in __getattr__(self, name)
    355             raise AttributeError(name)
--> 356         func = self.__getitem__(name)
    357         setattr(self, name, func)

/usr/lib64/python3.6/ctypes/__init__.py in __getitem__(self, name_or_ordinal)
    360     def __getitem__(self, name_or_ordinal):
--> 361         func = self._FuncPtr((name_or_ordinal, self))
    362         if not isinstance(name_or_ordinal, int):

AttributeError: /lib64/libnvidia-ml.so.1: undefined symbol: nvmlDeviceGetGraphicsRunningProcesses_v2

During handling of the above exception, another exception occurred:

NVMLError_FunctionNotFound                Traceback (most recent call last)
<ipython-input-6-85e61951ad1d> in <module>
----> 1 nvmlDeviceGetGraphicsRunningProcesses_v2(handle)

~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in nvmlDeviceGetGraphicsRunningProcesses_v2(handle)
   2147     # first call to get the size
   2148     c_count = c_uint(0)
-> 2149     fn = _nvmlGetFunctionPointer("nvmlDeviceGetGraphicsRunningProcesses_v2")
   2150     ret = fn(handle, byref(c_count), None)
   2151

~/gitProject/venv/hstk/lib64/python3.6/site-packages/pynvml/nvml.py in _nvmlGetFunctionPointer(name)
    761             return _nvmlGetFunctionPointer_cache[name]
    762         except AttributeError:
--> 763             raise NVMLError(NVML_ERROR_FUNCTION_NOT_FOUND)
    764     finally:
    765         # lock is always freed

NVMLError_FunctionNotFound: Function Not Found

So I guess maybe a change in pynvml led to this problem; see #72.
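
For anyone hitting the same error, here is a minimal reproduction sketch (mine, not from the issue) that checks whether the failure really is the missing _v2 symbol in libnvidia-ml.so.1, i.e. a driver older than what the pynvml 11.x bindings expect. The exact driver/pynvml version pairing is an assumption to verify against your own setup.

# Minimal sketch: confirm the failure is the missing
# nvmlDeviceGetGraphicsRunningProcesses_v2 symbol in libnvidia-ml.so.1
# rather than a broken pynvml install.
import pynvml

pynvml.nvmlInit()
try:
    # The driver version hints whether libnvidia-ml.so.1 predates the *_v2
    # entry points that recent pynvml releases bind to.
    print("NVIDIA driver:", pynvml.nvmlSystemGetDriverVersion())

    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        procs = pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle)
        print("graphics processes:", [(p.pid, p.usedGpuMemory) for p in procs])
    except pynvml.NVMLError_FunctionNotFound:
        # Same failure as the traceback above: the shared library does not
        # export the _v2 symbol, so the binding cannot resolve it.
        print("driver too old for nvmlDeviceGetGraphicsRunningProcesses_v2")
finally:
    pynvml.nvmlShutdown()

If the symbol is indeed missing, upgrading the NVIDIA driver, or using a pynvml/nvidia-ml-py release that still binds the v1 symbol, are the usual ways out.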

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 12 (5 by maintainers)

Top GitHub Comments

2 reactions
hstk30 commented, Aug 4, 2021

I guess maybe it's because HanLP depends on pynvml; I'm not sure.

0 reactions
wookayin commented, Aug 4, 2021

@hstk30 That seems correct. It should never use pynvml as a dependency; actually this package should never have existed.

Read more comments on GitHub >
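
Following up on the dependency discussion above: if a downstream package does pull in pynvml, one way to avoid crashing on older drivers is to treat the missing function the same way gpustat treats "Not Supported". A hedged sketch follows; the helper name safe_running_processes is hypothetical and not part of pynvml or HanLP.

# Sketch of a defensive wrapper for downstream consumers of pynvml: degrade
# gracefully when the driver lacks the process-query symbols.
import pynvml

def safe_running_processes(handle):
    """Return GPU process info, or None when the driver cannot provide it."""
    try:
        return pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle)
    except pynvml.NVMLError_FunctionNotFound:
        # libnvidia-ml.so.1 lacks the *_v2 symbol (driver older than the bindings).
        return None
    except pynvml.NVMLError_NotSupported:
        # Some device/driver combinations do not expose process info at all.
        return None

pynvml.nvmlInit()
try:
    procs = safe_running_processes(pynvml.nvmlDeviceGetHandleByIndex(0))
    print(procs if procs is not None else "process info unavailable")
finally:
    pynvml.nvmlShutdown()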

Top Results From Across the Web

Not supported: no process information #72 - GitHub
But, my nvidia-smi can show process info, but gpustat is show Not Supported about process info. I upgrade gpustat 0.4.1 to 0.6 still...

NVML Device Query API - NVIDIA Documentation Center
This function returns information only about compute running processes (e.g. CUDA application which have active context). Any graphics ...

NVML cannot load methods "NVMLError_FunctionNotFound"
I have a 1080Ti GPU with CUDA 10.2, NVIDIA driver 440.59 and pynvml version 11.4.1 running on Ubuntu 16.04.

Welcome to nvitop's documentation! — nvitop: the one-stop ...
An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution ... Python 3.6+ is required, and Python versions lower than 3.6 is not...

emfacilities.protocols.pynvml — Scipion 3.0.0 documentation
On Windows with the WDDM driver, usedGpuMemory is reported as None # Code that processes this structure should check for None, I.E. #...
