TypeError: can't pickle Environment objects when num_workers > 0 for LSUN
The program fails to create an iterator for a DataLoader object when the dataset is LSUN and the number of workers is greater than zero. I do not see this error when working with other datasets, which suggests the issue is caused by lmdb. I am running Windows 10 with CUDA 10.
Code:

import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms

dataset = dset.LSUN(root='D:/bedroom_train_lmdb', classes=['bedroom_train'],
                    transform=transforms.Compose([
                        transforms.Resize((64, 64)),
                        transforms.ToTensor(),
                        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                    ]))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128,
                                         shuffle=True, num_workers=4)

for data in dataloader:
    print(data)
Error:
Traceback (most recent call last):
File "C:/Users/x/.PyCharm2018.3/config/scratches/scratch.py", line 15, in <module>
for data in dataloader:
File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle Environment objects
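A quick stopgap reported in related discussions is to set num_workers=0, which loads data entirely in the main process so nothing has to be pickled. A minimal demonstration, using a dummy TensorDataset as a stand-in since the LSUN data is not assumed available:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the LSUN dataset; no LMDB involved.
dataset = TensorDataset(torch.arange(12).float())

# num_workers=0 keeps loading in the main process, so no worker
# subprocess ever tries to pickle the dataset's state.
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)

for (batch,) in dataloader:
    print(batch.shape)
```

This avoids the error entirely but sacrifices parallel data loading; the lazy-open fix described further below keeps multiple workers.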
Issue Analytics
- Created: 5 years ago
- Comments: 15 (3 by maintainers)

A possible solution is similar to the one for HDF5: do not open the LMDB environment in __init__.

Explanation: the multi-processing actually happens when you create the data iterator (e.g., when calling for datum in dataloader:): https://github.com/pytorch/pytorch/blob/461014d54b3981c8fa6617f90ff7b7df51ab1e85/torch/utils/data/dataloader.py#L712-L720

In short, the DataLoader creates multiple worker processes which "copy" the state of the current process. This copy involves pickling the LMDB Environment, which causes the error. In our solution, we open the LMDB file at the first data iteration instead, so each subprocess ends up with its own dedicated handle.

Note that this issue also appears on Linux; the underlying reason is the same: an opened lmdb environment cannot be pickled.