TypeError: can't pickle _thread._local objects
Thank you for your code. When I run train.py, I hit the following problem:
C:\Users\Administrator\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "C:/image_captioning/code/a-PyTorch-Tutorial-to-Image-Captioning-master/train.py", line 328, in <module>
    main()
  File "C:/image_captioning/code/a-PyTorch-Tutorial-to-Image-Captioning-master/train.py", line 116, in main
    epoch=epoch)
  File "C:/image_captioning/code/a-PyTorch-Tutorial-to-Image-Captioning-master/train.py", line 162, in train
    for i, imgs, caps, caplens in enumerate(train_loader):
  File "C:\Users\Administrator\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\Administrator\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
    w.start()
  File "C:\Users\Administrator\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\Administrator\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Administrator\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Administrator\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Administrator\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread._local objects
Could you give me some advice?
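For reference, the failure is easy to reproduce in isolation. On Windows, `DataLoader` workers are started with the `spawn` method, which pickles the whole dataset object to each child process; h5py keeps per-thread state in a `_thread._local` object, and such objects refuse to pickle (a minimal stdlib-only illustration, not the actual h5py internals):

```python
import pickle
import threading

# Any object that (directly or indirectly) holds a _thread._local fails
# to pickle, which is exactly what the DataLoader worker spawn hits.
state = threading.local()

try:
    pickle.dumps(state)
except TypeError as exc:
    print(exc)  # e.g. "cannot pickle '_thread._local' object"
```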
Issue Analytics
- Created 5 years ago
- Comments: 5 (2 by maintainers)
Yes, it happens because h5py won't read from multiple processes. By omitting num_workers, you're setting it to the default of 0, which uses only the main process. You could also set it to 1 (on Linux, at least).

I have run into a similar issue. However, I have to use more than one worker to load my dataset (one process is too slow in my case). Any suggestion? Thanks in advance.
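When multiple workers really are needed, a common workaround is to not open the HDF5 file in the dataset's `__init__` at all, and instead open it lazily inside `__getitem__`, so each worker process opens its own handle after being spawned. A minimal sketch of the pattern (stdlib-only for illustration; `LazyFileDataset` is a hypothetical name, and in the real case you would replace `open()` with `h5py.File(path, "r")` and subclass `torch.utils.data.Dataset`):

```python
import pickle
import tempfile

class LazyFileDataset:
    """Map-style dataset that opens its data file lazily.

    Because no file handle exists until first access, the object pickles
    cleanly when DataLoader spawns its workers; each worker then opens
    its own handle on first __getitem__ call.
    """

    def __init__(self, path, length):
        self.path = path
        self.length = length  # known up front, so __len__ needs no I/O
        self._fh = None       # opened lazily, once per process

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        if self._fh is None:              # first access in this process
            self._fh = open(self.path)    # h5py.File(self.path, "r") in practice
            self._lines = self._fh.read().splitlines()
        return self._lines[idx]

# Demonstration: the unopened dataset survives pickling, as spawn
# requires; once a handle is open, pickling would fail again.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("cat\ndog\nbird\n")

ds = LazyFileDataset(f.name, length=3)
clone = pickle.loads(pickle.dumps(ds))  # works: no handle exists yet
print(clone[1])  # -> dog
```

The key design point is that `__len__` must not touch the file either, since the `DataLoader` calls it in the main process before the workers start; here the length is simply passed in at construction time.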