It seems that there are some problems with the --chop argument
Here are the steps to reproduce the problem.
I set the DIV2K test data_range to 801-802 and used the original pretrained model EDSR_x2.pt.
python main.py --model edsr --scale 2 --save test --data_train DIV2K --dir_data . --save_results --epoch 2 --data_range 1-800/801-802 --data_test DIV2K --batch_size 16 --chop --patch_size 96 --test_only --pre_train D:\EDSR-PyTorch-master\models\downloaed_models\EDSR_x2.pt
Then a RuntimeError occurs:
Traceback (most recent call last):
  File "D:/EDSR-PyTorch-master/src/main.py", line 35, in <module>
    main()
  File "D:/EDSR-PyTorch-master/src/main.py", line 28, in main
    while not t.terminate():
  File "D:\EDSR-PyTorch-master\src\trainer.py", line 160, in terminate
    self.test()
  File "D:\EDSR-PyTorch-master\src\trainer.py", line 109, in test
    sr = self.model(lr, idx_scale)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 57, in forward
    return forward_function(x)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 135, in forward_chop
    y = self.forward_chop(*p, shave=shave, min_size=min_size)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 126, in forward_chop
    y = P.data_parallel(self.model, *x, range(n_GPUs))
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\parallel\data_parallel.py", line 204, in data_parallel
    return module(*inputs[0], **module_kwargs[0])
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\EDSR-PyTorch-master\src\model\edsr.py", line 56, in forward
    x = self.sub_mean(x)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead
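For context: the failing layer, sub_mean in edsr.py, is in this codebase a 1x1 Conv2d (the MeanShift layer), so it expects a 4-D NCHW batch. The mismatch can be reproduced in isolation; the input shape below is copied from the error message, everything else is an illustrative stand-in:

import torch
import torch.nn as nn

# A 1x1 conv standing in for EDSR's sub_mean (MeanShift) layer.
conv = nn.Conv2d(3, 3, kernel_size=1)

# A 3-D input with the batch dimension missing reproduces the error
# (on the 1.x-era PyTorch from the traceback; very recent versions
# instead treat a 3-D input as an unbatched image).
try:
    conv(torch.randn(1, 184, 270))
except RuntimeError as e:
    print(e)

# A 4-D NCHW input works as expected.
print(conv(torch.randn(1, 3, 184, 270)).shape)  # torch.Size([1, 3, 184, 270])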
Then I checked the color space of DIV2K images 801-802 using MATLAB and found that they have 3 color channels (RGB), as expected.
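For anyone without MATLAB, the same check takes a few lines of Python; the path below assumes the standard DIV2K validation layout and is only illustrative:

from PIL import Image

# Check the channel count of the two validation images used above.
for idx in (801, 802):
    img = Image.open(f'DIV2K/DIV2K_valid_HR/{idx:04d}.png')
    print(idx, img.mode, len(img.getbands()))  # expected: RGB 3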
I removed the --chop argument and reran; the dimension error was gone this time, but the run failed with CUDA out of memory.
In conclusion, there seem to be some problems with the --chop argument when testing on the DIV2K 801-900 images; my guess is that it is related to the image sizes.
Hope somebody else can reproduce this issue and figure it out.
Top GitHub Comments
I solved this issue by adding the line

p1 = [p[0].unsqueeze(0)]

and then changing the recursive call to

y = self.forward_chop(*p1, shave=shave, min_size=min_size)

in the forward_chop() function of src/model/__init__.py. Thanks a lot for the code. I hope this helps you.
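To see why this works: the recursive branch of forward_chop() iterates for p in zip(*x_chops), and iterating a batched tensor yields 3-D (C, H, W) slices with the batch dimension dropped. Below is a minimal standalone demonstration with illustrative shapes; x_chops here is only a stand-in for the list built inside forward_chop():

import torch

# Stand-in for the x_chops list built inside forward_chop():
# four chopped patches stacked along the batch dimension (NCHW).
x_chops = [torch.randn(4, 3, 184, 270)]

for p in zip(*x_chops):
    print(p[0].shape)         # torch.Size([3, 184, 270]) -- batch dim lost
    p1 = [p[0].unsqueeze(0)]  # the fix: restore the batch dimension
    print(p1[0].shape)        # torch.Size([1, 3, 184, 270])
    break

Without the unsqueeze(0), those 3-D slices eventually reach sub_mean and trigger the RuntimeError above.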
I solved the same problem following @m732367606's fix. Thanks very much!