
CuPy (De)serialization error


I’m encountering the following exception when performing a custom tree-reduce on a set of large CuPy objects. The exception does not occur with smaller objects, so I have not been able to reproduce it with the normal pytests. It can be reproduced, however, by raising n_features to 8M in the HashingVectorizer that feeds the Naive Bayes pytest.

Traceback (most recent call last):
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/core.py", line 124, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 255, in deserialize
    deserializers=deserializers,
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 268, in deserialize
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cuda.py", line 28, in cuda_loads
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 608, in deserialize
    v = deserialize(h, f)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 268, in deserialize
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cuda.py", line 28, in cuda_loads
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cupy.py", line 63, in cuda_deserialize_cupy_ndarray
    frame = PatchedCudaArrayInterface(frame)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cupy.py", line 26, in __init__
    self.__cuda_array_interface__ = ary.__cuda_array_interface__
AttributeError: 'bytes' object has no attribute '__cuda_array_interface__'
distributed.utils - ERROR - 'bytes' object has no attribute '__cuda_array_interface__'
Traceback (most recent call last):
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/utils.py", line 665, in log_errors
    yield
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/comm/ucx.py", line 207, in read
    frames, deserialize=self.deserialize, deserializers=deserializers
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/comm/utils.py", line 73, in from_frames
    res = await offload(_from_frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/utils.py", line 1458, in offload
    return await loop.run_in_executor(_offload_executor, lambda: fn(*args, **kwargs))
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/utils.py", line 1458, in <lambda>
    return await loop.run_in_executor(_offload_executor, lambda: fn(*args, **kwargs))
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/comm/utils.py", line 61, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/core.py", line 124, in loads
    value = _deserialize(head, fs, deserializers=deserializers)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 255, in deserialize
    deserializers=deserializers,
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 268, in deserialize
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cuda.py", line 28, in cuda_loads
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 608, in deserialize
    v = deserialize(h, f)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 268, in deserialize
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cuda.py", line 28, in cuda_loads
    return loads(header, frames)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cupy.py", line 63, in cuda_deserialize_cupy_ndarray
    frame = PatchedCudaArrayInterface(frame)
  File "/raid/cnolet/miniconda3/envs/cuml_dev_013/lib/python3.7/site-packages/distributed/protocol/cupy.py", line 26, in __init__
    self.__cuda_array_interface__ = ary.__cuda_array_interface__
AttributeError: 'bytes' object has no attribute '__cuda_array_interface__'
distributed.worker - ERROR - 'bytes' object has no attribute '__cuda_array_interface__'
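The AttributeError itself is mechanical: by the time cuda_deserialize_cupy_ndarray runs, a frame that should still be device memory (something exposing __cuda_array_interface__) has arrived as plain host bytes, which never carry that attribute. A minimal illustration, no GPU required:

```python
# Plain host `bytes` never expose the CUDA Array Interface, which is
# what PatchedCudaArrayInterface requires of every frame it wraps.
frame = b"\x00" * 16  # stand-in for a device frame that got copied to host

print(hasattr(frame, "__cuda_array_interface__"))  # False

try:
    frame.__cuda_array_interface__
except AttributeError as err:
    print(err)  # 'bytes' object has no attribute '__cuda_array_interface__'
```

That the failure appears only for large objects is consistent with large frames taking a different split/merge path through the comm layer than small ones.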

I believe this is the same issue as https://github.com/rapidsai/ucx-py/issues/421. While it only occurs when protocol=ucx, the stack trace consists entirely of dask.distributed frames, so I’ve opted to start a fresh thread here.

cc @jakirkham

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 23 (14 by maintainers)

Top GitHub Comments

1 reaction
quasiben commented, Mar 17, 2020

@cjnolet, yup I am seeing that as well.

@mrocklin @jakirkham seems like you two both want the same thing 😃

1 reaction
mrocklin commented, Mar 17, 2020

We can avoid this by adding a check at the end of the merge_frames routine.

Alternatively, maybe with UCX we shouldn’t be splitting and merging frames at all?
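The check Matt describes might look roughly like the following. This is a standalone sketch, not the actual distributed.protocol.utils.merge_frames: the name merge_frames_safely, the frame/length layout, and FakeDeviceFrame are all hypothetical, and a real fix would have to match distributed's frame headers.

```python
# Hedged sketch of the proposed guard: refuse to coalesce device frames
# into host bytes. Everything here is illustrative only; the real
# routine lives in distributed.protocol.utils.

def merge_frames_safely(frames, lengths):
    """Merge consecutive host frames until each reaches its target
    length, but pass device frames (anything exposing
    __cuda_array_interface__) through untouched."""
    out = []
    it = iter(frames)
    for target in lengths:
        frame = next(it)
        if hasattr(frame, "__cuda_array_interface__"):
            # Merging device memory into `bytes` would strip the CUDA
            # Array Interface and reproduce the AttributeError above.
            out.append(frame)
            continue
        buf = bytearray(frame)
        while len(buf) < target:
            buf.extend(next(it))  # assumes host frames are contiguous
        out.append(bytes(buf))
    return out


class FakeDeviceFrame:
    """Stand-in for a CuPy/Numba device buffer (hypothetical)."""
    __cuda_array_interface__ = {"shape": (4,), "typestr": "<f4",
                                "data": (0, False), "version": 2}


merged = merge_frames_safely([b"ab", b"cd", FakeDeviceFrame()], [4, 4])
print(merged[0])                                       # b'abcd'
print(hasattr(merged[1], "__cuda_array_interface__"))  # True
```

The key design point is simply that device frames are never round-tripped through host bytearrays, so they keep their CUDA Array Interface all the way to deserialization.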
