
python backend error: c_python_backend_utils.TritonModelException: Tensor is stored in GPU and cannot be converted to NumPy

See original GitHub issue

Description: I am using the Python backend BLS feature and called another TensorRT model through the pb_utils.InferenceRequest interface. The call succeeds, but the result tensor is stored on the GPU, and I cannot find an interface to copy it back to the CPU.

Triton Information: 22.01

Are you using the Triton container or did you build it yourself? No.

Expected behavior: Can the Python backend copy InferenceRequest results directly to the CPU?

Here is my debugging information:

(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(60)face_detect()
-> inputs=[images],
(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(61)face_detect()
-> requested_output_names=self.outputs_0)
(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(58)face_detect()
-> infer_request = pb_utils.InferenceRequest(
(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(62)face_detect()
-> infer_response = infer_request.exec()
(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(65)face_detect()
-> confs = pb_utils.get_output_tensor_by_name(infer_response, 'class')
(Pdb)
> /fas_repo/bls_model/1/fas_pipe.py(66)face_detect()
-> locs = pb_utils.get_output_tensor_by_name(infer_response, 'bbox')
(Pdb) p confs
<c_python_backend_utils.Tensor object at 0x7f08f1716130>
(Pdb) dir(confs)
['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'as_numpy', 'from_dlpack', 'is_cpu', 'name', 'to_dlpack', 'triton_dtype']
(Pdb) p confs.is_cpu()
False
(Pdb) p confs.as_numpy()
*** c_python_backend_utils.TritonModelException: Tensor is stored in GPU and cannot be converted to NumPy.
(Pdb)

This is the code that sends the request:

        ......
        import pdb
        pdb.set_trace()
        images = pb_utils.Tensor("images", preprocessed_imgs)
        infer_request = pb_utils.InferenceRequest(
            model_name=self.model_name0,
            inputs=[images],
            requested_output_names=self.outputs_0)
        infer_response = infer_request.exec()
        #if infer_response.has_error():
        #    return False
        confs = pb_utils.get_output_tensor_by_name(infer_response, 'class')
        locs = pb_utils.get_output_tensor_by_name(infer_response, 'bbox')
        ......
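A common workaround for GPU-resident BLS output tensors is to hand the tensor to a framework that understands DLPack (for example PyTorch) and copy it to the host there, since `pb_utils.Tensor` exposes `to_dlpack()` alongside `is_cpu()` and `as_numpy()`, as the `dir()` output above shows. A minimal sketch, assuming PyTorch with CUDA is installed in the backend's Python environment; `tensor_to_numpy` is a hypothetical helper name, not part of the Triton API:

```python
import numpy as np

try:
    import torch
    from torch.utils.dlpack import from_dlpack
except ImportError:  # torch is optional; only the GPU path needs it
    torch = None


def tensor_to_numpy(pb_tensor):
    """Return a NumPy copy of a pb_utils.Tensor, wherever it resides."""
    if pb_tensor.is_cpu():
        # CPU tensor: as_numpy() works directly.
        return pb_tensor.as_numpy()
    # GPU tensor: exchange via DLPack, move to host memory, then to NumPy.
    gpu_tensor = from_dlpack(pb_tensor.to_dlpack())
    return gpu_tensor.cpu().numpy()
```

With such a helper, the failing lines above would become `confs = tensor_to_numpy(pb_utils.get_output_tensor_by_name(infer_response, 'class'))`, avoiding the direct `as_numpy()` call on a GPU tensor.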

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 13 (4 by maintainers)

Top GitHub Comments

4 reactions
Tabrizian commented on Jun 27, 2022

Looks like the DLPack protocol has changed a bit since we designed this interface in Python backend and Numpy is using a newer version. I’ll file a ticket for improving the DLPack support in Python backend.
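For context, the protocol change referred to here is the standardized `__dlpack__` interface that NumPy adopted in version 1.22 (`np.from_dlpack`). A small CPU-only illustration of that newer exchange path, assuming NumPy >= 1.22:

```python
import numpy as np

# NumPy >= 1.22 implements the standardized DLPack protocol
# (__dlpack__ / np.from_dlpack) mentioned in the comment above.
a = np.arange(6, dtype=np.float32)
b = np.from_dlpack(a)  # zero-copy exchange through DLPack

# Both arrays view the same underlying buffer, so no data was copied.
assert np.shares_memory(a, b)
```

The same protocol is what frameworks such as PyTorch and CuPy use to exchange GPU buffers without copies, which is why a version mismatch between producer and consumer can surface as conversion errors.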

2 reactions
harish-headroom commented on Aug 29, 2022

Any update on this? I am currently blocked by it.
