
Bug in testing Set5 X3 with `args.chop=True`

When I set `args.chop=False`, everything works fine. When I set it to `True`, it fails with the following error:

Evaluation:################################################   2019-10-10-16:38:03
 40%|██████████████████                           | 2/5 [00:02<00:04,  1.36s/it]
Traceback (most recent call last):
  File "/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3325, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-8439669b053c>", line 1, in <module>
    runfile('/Code/EDSR-PyTorch-master/src/main.py', wdir='/Code/EDSR-PyTorch-master/src')
  File "/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Code/EDSR-PyTorch-master/src/main.py", line 41, in <module>
    main()
  File "/Code/EDSR-PyTorch-master/src/main.py", line 32, in main
    while not t.terminate():
  File "/Code/EDSR-PyTorch-master/src/trainer.py", line 141, in terminate
    self.test()
  File "/Code/EDSR-PyTorch-master/src/trainer.py", line 91, in test
    sr = self.model(lr, idx_scale)
  File "/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/Code/EDSR-PyTorch-master/src/model/__init__.py", line 62, in forward
    return forward_function(x)
  File "/Code/EDSR-PyTorch-master/src/model/__init__.py", line 175, in forward_chop
    _y[..., top, right] = y_chop[1][..., top, right_r]
RuntimeError: The expanded size of the tensor (127) must match the existing size (128) at non-singleton dimension 3.  Target sizes: [1, 3, 127, 127].  Tensor sizes: [3, 127, 128]

Note that I have no problem with training. In testing, `args.chop=True` works fine for X2 and X4; for X3, some images in Set5 and Set14 fail. By the way, I downloaded the pre-processed datasets directly from the link in your README.
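
A hedged aside on the cause (my reading of the merge step in forward_chop, not something stated in the thread): in the traceback line `_y[..., top, right] = y_chop[1][..., top, right_r]`, the destination slice `right = slice(w - w//2, w)` and the source slice `right_r = slice(w//2 - w, None)` only have equal length when the upscaled width `w` is even. X2 and X4 always yield even output sizes, but X3 preserves parity, so an odd LR width gives an odd `w` and the two slices differ by one pixel:

for scale in (2, 3, 4):
    w = 85 * scale                     # output width; 85 is an assumed LR width,
                                       # consistent with the 127 vs. 128 above
    right = slice(w - w // 2, w)       # destination columns in the merged output
    right_r = slice(w // 2 - w, None)  # source columns taken from the SR patch
    dst = right.stop - right.start     # destination slice length
    src = -right_r.start               # source slice length (counted from the end)
    print(scale, dst, src, "OK" if dst == src else "MISMATCH")
# prints: 2 85 85 OK / 3 127 128 MISMATCH / 4 170 170 OK

That mismatch is exactly the 127 vs. 128 in the RuntimeError above.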

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 7

Top GitHub Comments

4 reactions
HolmesShuan commented, Feb 4, 2021

Edit `EDSR-PyTorch-master/src/model/__init__.py` as follows:

def forward_chop(self, *args, shave=10, min_size=160000):
    # `torch` and `P` (torch.nn.parallel) are already imported at the top of
    # model/__init__.py, where this method lives.
    scale = 1 if self.input_large else self.scale[self.idx_scale]
    n_GPUs = min(self.n_GPUs, 4)
    # height, width
    h, w = args[0].size()[-2:]

    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave

    top = slice(0, h_size)
    bottom = slice(h - h_size, h)
    left = slice(0, w_size)
    right = slice(w - w_size, w)
    x_chops = [torch.cat([
        a[..., top, left],
        a[..., top, right],
        a[..., bottom, left],
        a[..., bottom, right]
    ]) for a in args]

    y_chops = []
    if h * w < 4 * min_size:
        for i in range(0, 4, n_GPUs):
            x = [x_chop[i:(i + n_GPUs)] for x_chop in x_chops]
            y = P.data_parallel(self.model, *x, range(n_GPUs))
            if not isinstance(y, list): y = [y]
            if not y_chops:
                y_chops = [[c for c in _y.chunk(n_GPUs, dim=0)] for _y in y]
            else:
                for y_chop, _y in zip(y_chops, y):
                    y_chop.extend(_y.chunk(n_GPUs, dim=0))
    else:
        for p in zip(*x_chops):
            p1 = [p[0].unsqueeze(0)]
            y = self.forward_chop(*p1, shave=shave, min_size=min_size)
            if not isinstance(y, list): y = [y]
            if not y_chops:
                y_chops = [[_y] for _y in y]
            else:
                for y_chop, _y in zip(y_chops, y): y_chop.append(_y)

    # Scale the merge coordinates from the unscaled halves, so every
    # destination slice below has the same length as its source slice,
    # even when the scaled h or w is odd (the X3 failure case).
    h, w = scale * h, scale * w
    h_half, w_half = scale * h_half, scale * w_half
    h_size, w_size = scale * h_size, scale * w_size
    shave *= scale

    # Old slice-based merge, kept for reference:
    # h *= scale
    # w *= scale
    # top = slice(0, h_half)
    # bottom = slice(h - h_half, h)
    # bottom_r = slice(h//2 - h, None)
    # left = slice(0, w_half)
    # right = slice(w - w_half, w)
    # right_r = slice(w//2 - w, None)

    # batch size, number of color channels
    b, c = y_chops[0][0].size()[:-2]
    y = [y_chop[0].new(b, c, h, w) for y_chop in y_chops]
    for y_chop, _y in zip(y_chops, y):
        _y[..., 0:h_half, 0:w_half] = y_chop[0][..., 0:h_half, 0:w_half]
        _y[..., 0:h_half, w_half:w] = y_chop[1][..., 0:h_half, (w_size - w + w_half):w_size]
        _y[..., h_half:h, 0:w_half] = y_chop[2][..., (h_size - h + h_half):h_size, 0:w_half]
        _y[..., h_half:h, w_half:w] = y_chop[3][..., (h_size - h + h_half):h_size, (w_size - w + w_half):w_size]

    if len(y) == 1: y = y[0]

    return y
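
For what it's worth, my reading of why this works: the old merge derived its destination slices from the scaled `h//2`/`w//2` but its source slices from the negative offsets `h//2 - h`/`w//2 - w`, and those lengths disagree by one pixel whenever the scaled height or width is odd. The patched version computes `h_half`, `w_half`, `h_size`, `w_size` once from the unscaled input, then writes each quadrant with explicit index arithmetic, so every destination range and its source range have the same length by construction.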
2 reactions
Senwang98 commented, Feb 4, 2021

@HolmesShuan, thanks for your reply! I have solved this issue by replacing `forward_chop` in EDSR with the `forward_chop` from RCAN (PyTorch 0.4.0). In my runs this change doesn't affect accuracy. But thank you very much anyway!
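
If you want to convince yourself that a replacement `forward_chop` is safe, a quick smoke test (a hypothetical snippet, not from the thread; it assumes `model` is the loaded `model.Model` wrapper, a GPU setup as in the repo's defaults, and `idx_scale` pointing at X3):

import torch

lr = torch.randn(1, 3, 85, 85).cuda()  # odd LR size of the kind that crashed at X3
model.eval()
with torch.no_grad():
    sr = model.forward_chop(lr)        # raised the RuntimeError before the patch
print(sr.shape)                        # expect torch.Size([1, 3, 255, 255]) at X3

Comparing benchmark PSNR with `args.chop` on and off, as Senwang98 did, is the stronger check.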
