
It seems that there are some problems with --chop argument

See original GitHub issue

Referenced #128 #138

It seems that there are some problems with the --chop argument; here are the steps to reproduce the problem.

I set the DIV2K test data_range to 801-802 and used the original model EDSR_x2.pt.

python main.py --model edsr --scale 2 --save test --data_train DIV2K --dir_data . --save_results --epoch 2 --data_range 1-800/801-802 --data_test DIV2K --batch_size 16 --chop --patch_size 96 --test_only --pre_train D:\EDSR-PyTorch-master\models\downloaed_models\EDSR_x2.pt

Then a RuntimeError occurs:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead.

Traceback (most recent call last):
  File "D:/EDSR-PyTorch-master/src/main.py", line 35, in <module>
    main()
  File "D:/EDSR-PyTorch-master/src/main.py", line 28, in main
    while not t.terminate():
  File "D:\EDSR-PyTorch-master\src\trainer.py", line 160, in terminate
    self.test()
  File "D:\EDSR-PyTorch-master\src\trainer.py", line 109, in test
    sr = self.model(lr, idx_scale)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 57, in forward
    return forward_function(x)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 135, in forward_chop
    y = self.forward_chop(*p, shave=shave, min_size=min_size)
  File "D:\EDSR-PyTorch-master\src\model\__init__.py", line 126, in forward_chop
    y = P.data_parallel(self.model, *x, range(n_GPUs))
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\parallel\data_parallel.py", line 204, in data_parallel
    return module(*inputs[0], **module_kwargs[0])
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\EDSR-PyTorch-master\src\model\edsr.py", line 56, in forward
    x = self.sub_mean(x)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "D:\Anaconda3\envs\python3.6SRDenseNet\lib\site-packages\torch\nn\modules\conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 3 3 1, but got 3-dimensional input of size [1, 184, 270] instead

Then I tried to determine the color space of DIV2K's 801-802 pictures using MATLAB, and found that they have 3 color channels (RGB) as expected.
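For context, the error points at a missing batch dimension rather than the color channels: nn.Conv2d expects a 4-D (N, C, H, W) input, and during chopping a patch can reach the network as a 3-D (C, H, W) tensor. A minimal sketch of the mismatch and the unsqueeze fix — the layer and shapes here are illustrative stand-ins, not taken from the repository:

```python
import torch
import torch.nn as nn

# Stand-in for EDSR's sub_mean layer: a 1x1 conv over 3 RGB channels.
conv = nn.Conv2d(3, 3, kernel_size=1)

# A chopped patch without a batch dimension: 3-D (C, H, W).
patch = torch.randn(3, 184, 270)

# unsqueeze(0) prepends the missing batch axis, giving 4-D (N, C, H, W).
batched = patch.unsqueeze(0)
out = conv(batched)
print(batched.shape)  # torch.Size([1, 3, 184, 270])
print(out.shape)      # torch.Size([1, 3, 184, 270])
```

This is the same shape fix that the accepted workaround below applies inside forward_chop().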

I removed the --chop argument and reran; it worked this time, but then CUDA ran out of memory.

In conclusion, it seems there are some problems with the --chop argument when testing with the DIV2K 801-900 images; my guess is that the images are too large to process without chopping.
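For background on why --chop matters here: it splits the input into overlapping quadrants so each forward pass fits in GPU memory, which is exactly what fails without it. A rough sketch of the quadrant split, with a hypothetical helper name — EDSR's actual forward_chop also recurses and stitches the upscaled quadrants back together:

```python
import torch

def chop_quadrants(x, shave=10):
    """Split an (N, C, H, W) tensor into four overlapping quadrants.

    Illustrative only: the overlap ('shave') lets the border pixels of
    each quadrant be discarded after upscaling, hiding seam artifacts.
    """
    _, _, h, w = x.size()
    h_size, w_size = h // 2 + shave, w // 2 + shave
    return [
        x[..., :h_size, :w_size],              # top-left
        x[..., :h_size, (w - w_size):],        # top-right
        x[..., (h - h_size):, :w_size],        # bottom-left
        x[..., (h - h_size):, (w - w_size):],  # bottom-right
    ]

x = torch.randn(1, 3, 184, 270)
for q in chop_quadrants(x):
    print(q.shape)  # torch.Size([1, 3, 102, 145])
```

Each quadrant is roughly a quarter of the image plus the shave margin, so peak memory per forward pass drops accordingly.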

Hope somebody else can reproduce this issue and figure it out.

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 4
  • Comments: 7

Top GitHub Comments

10 reactions
miguelprades commented, Jun 17, 2019

I solved this issue by adding the line p1 = [p[0].unsqueeze(0)]

and then calling y = self.forward_chop(*p1, shave=shave, min_size=min_size)

in the forward_chop() function in model/__init__.py:

    for p in zip(*x_chops):
        p1 = [p[0].unsqueeze(0)]
        y = self.forward_chop(*p1, shave=shave, min_size=min_size)
        if not isinstance(y, list): y = [y]
        if not y_chops:
            y_chops = [[_y] for _y in y]
        else:
            for y_chop, _y in zip(y_chops, y): y_chop.append(_y)

Thanks a lot for the code. I hope it helps you.

0 reactions
wanglixilinx commented, Oct 28, 2020

I solved the same problem by following @m732367606's suggestion. Thanks very much!

Read more comments on GitHub >
