"can't pickle thread.lock objects" when calling array.store with distributed
I am trying to store a dask array using distributed. When I call store, I get the error "can't pickle thread.lock objects".
I was originally trying this in a much more complex context involving netCDF, xarray, etc., but I managed to come up with the following minimal example.
import numpy as np
import dask.array as da
from distributed import Client

def create_and_store_dask_array():
    shape = (10000, 1000)
    chunks = (1000, 1000)
    data = da.zeros(shape, chunks=chunks)
    store = np.memmap('test.memmap', mode='w+',
                      dtype=data.dtype, shape=data.shape)
    data.store(store)
    print("Success!")

create_and_store_dask_array()
client = Client()
create_and_store_dask_array()
The first call works, but the second fails. The output is:
Success!
/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/pickle.pyc - INFO - Failed to serialize (<function store at 0x7f0ee802f488>, (<functools.partial object at 0x7f0ec84f1418>, (1000, 1000)), (slice(2000, 3000, None), slice(0, 1000, None)), <thread.lock object at 0x7f0f2c715af0>)
Traceback (most recent call last):
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/pickle.py", line 43, in dumps
return cloudpickle.dumps(x, protocol=pickle.HIGHEST_PROTOCOL)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 706, in dumps
cp.dump(obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 146, in dump
return Pickler.dump(self, obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 568, in save_tuple
save(element)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 306, in save
rv = reduce(self.proto)
TypeError: can't pickle thread.lock objects
/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/core.pyc - CRITICAL - Failed to Serialize
Traceback (most recent call last):
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/core.py", line 43, in dumps
for key, value in data.items()
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/core.py", line 44, in <dictcomp>
if type(value) is Serialize}
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/serialize.py", line 106, in serialize
header, frames = {}, [pickle.dumps(x)]
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/distributed/protocol/pickle.py", line 43, in dumps
return cloudpickle.dumps(x, protocol=pickle.HIGHEST_PROTOCOL)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 706, in dumps
cp.dump(obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/site-packages/cloudpickle/cloudpickle.py", line 146, in dump
return Pickler.dump(self, obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 568, in save_tuple
save(element)
File "/home/rpa/.conda/envs/lagrangian_vorticity/lib/python2.7/pickle.py", line 306, in save
rv = reduce(self.proto)
TypeError: can't pickle thread.lock objects
Versions:
import dask
print dask.__version__
import distributed
print distributed.__version__
>>> 0.12.0
>>> 1.14.3
Issue Analytics
- Created 7 years ago
- Comments: 20 (13 by maintainers)
Top GitHub Comments
Can you try the following
I’m not sure that there is anything we can do here. In principle SerializableLock assumes that the place you’re storing to can be written to from separate processes without coordination. That might not be true.
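A minimal sketch of that workaround, assuming the `lock=` keyword of `dask.array.store` and `SerializableLock` from `dask.utils` (the exact import path may differ across dask versions): the default `threading.Lock` used by `store` cannot be pickled for shipping to workers, so either disable locking or substitute a lock that can be reconstructed on the other side.

```python
import numpy as np
import dask.array as da
from dask.utils import SerializableLock

shape = (10000, 1000)
chunks = (1000, 1000)
data = da.zeros(shape, chunks=chunks)
store = np.memmap('test.memmap', mode='w+',
                  dtype=data.dtype, shape=data.shape)

# Option 1: no lock at all. Safe only if the target supports
# uncoordinated concurrent writes to disjoint regions (memmaps do).
data.store(store, lock=False)

# Option 2: a lock that pickles cleanly and is rebuilt on workers.
data.store(store, lock=SerializableLock())
```

Note that, as the comment above says, `SerializableLock` only coordinates threads within a single process; across worker processes it effectively assumes the storage target tolerates unsynchronized writes.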
It would be good to improve error reporting, to point users to this option if it’s right for them. I don’t know of a good way of doing that though. If anyone has suggestions on how to improve this that would be helpful.