
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 256, 1, 1]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

See original GitHub issue

When I run DSGAN/train.py with `python train.py`, I get the error above. Can anyone help me solve this problem? Thank you. I have already tried what some blog posts suggested and set `inplace=False` on the ReLU and LeakyReLU layers, but the problem is not solved. This is the code of the model:

```python
from torch import nn
import torch


class Generator(nn.Module):
    def __init__(self, n_res_blocks=8):
        super(Generator, self).__init__()
        self.block_input = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.PReLU()
        )
        self.res_blocks = nn.ModuleList([ResidualBlock(64) for _ in range(n_res_blocks)])
        self.block_output = nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, x):
        block = self.block_input(x)
        for res_block in self.res_blocks:
            block = res_block(block)
        block = self.block_output(block)
        return torch.sigmoid(block)


class Discriminator(nn.Module):
    def __init__(self, recursions=1, stride=1, kernel_size=5, gaussian=False, wgan=False, highpass=True):
        super(Discriminator, self).__init__()
        if highpass:
            self.filter = FilterHigh(recursions=recursions, stride=stride, kernel_size=kernel_size,
                                     include_pad=False, gaussian=gaussian)
        else:
            self.filter = None
        self.net = DiscriminatorBasic(n_input_channels=3)
        self.wgan = wgan

    def forward(self, x, y=None):
        if self.filter is not None:
            x = self.filter(x)
        x = self.net(x)
        if y is not None:
            x -= self.net(self.filter(y)).mean(0, keepdim=True)
        if not self.wgan:
            x = torch.sigmoid(x)
        return x


class DiscriminatorBasic(nn.Module):
    def __init__(self, n_input_channels=3):
        super(DiscriminatorBasic, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_input_channels, 64, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(128, 256, kernel_size=5, padding=2),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=False),

            nn.Conv2d(256, 1, kernel_size=1)
        )

    def forward(self, x):
        return self.net(x)


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.prelu = nn.PReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        residual = self.conv1(x)
        residual = self.prelu(residual)
        residual = self.conv2(residual)
        return x + residual


class GaussianFilter(nn.Module):
    def __init__(self, kernel_size=5, stride=1, padding=4):
        super(GaussianFilter, self).__init__()
        # initialize gaussian kernel
        mean = (kernel_size - 1) / 2.0
        variance = (kernel_size / 6.0) ** 2.0
        # create an (x, y) coordinate grid of shape (kernel_size, kernel_size, 2)
        x_coord = torch.arange(kernel_size)
        x_grid = x_coord.repeat(kernel_size).view(kernel_size, kernel_size)
        y_grid = x_grid.t()
        xy_grid = torch.stack([x_grid, y_grid], dim=-1).float()

        # calculate the 2-dimensional gaussian kernel
        gaussian_kernel = torch.exp(-torch.sum((xy_grid - mean) ** 2., dim=-1) / (2 * variance))

        # make sure the values in the gaussian kernel sum to 1
        gaussian_kernel = gaussian_kernel / torch.sum(gaussian_kernel)

        # reshape to 2d depthwise convolutional weight
        gaussian_kernel = gaussian_kernel.view(1, 1, kernel_size, kernel_size)
        gaussian_kernel = gaussian_kernel.repeat(3, 1, 1, 1)

        # create the gaussian filter as a convolutional layer
        self.gaussian_filter = nn.Conv2d(3, 3, kernel_size, stride=stride, padding=padding,
                                         groups=3, bias=False)
        self.gaussian_filter.weight.data = gaussian_kernel
        self.gaussian_filter.weight.requires_grad = False

    def forward(self, x):
        return self.gaussian_filter(x)


class FilterLow(nn.Module):
    def __init__(self, recursions=1, kernel_size=5, stride=1, padding=True, include_pad=True, gaussian=False):
        super(FilterLow, self).__init__()
        if padding:
            pad = int((kernel_size - 1) / 2)
        else:
            pad = 0
        if gaussian:
            self.filter = GaussianFilter(kernel_size=kernel_size, stride=stride, padding=pad)
        else:
            self.filter = nn.AvgPool2d(kernel_size=kernel_size, stride=stride, padding=pad,
                                       count_include_pad=include_pad)
        self.recursions = recursions

    def forward(self, img):
        for i in range(self.recursions):
            img = self.filter(img)
        return img


class FilterHigh(nn.Module):
    def __init__(self, recursions=1, kernel_size=5, stride=1, include_pad=True, normalize=True, gaussian=False):
        super(FilterHigh, self).__init__()
        self.filter_low = FilterLow(recursions=1, kernel_size=kernel_size, stride=stride,
                                    include_pad=include_pad, gaussian=gaussian)
        self.recursions = recursions
        self.normalize = normalize

    def forward(self, img):
        if self.recursions > 1:
            for i in range(self.recursions - 1):
                img = self.filter_low(img)
        img = img - self.filter_low(img)
        if self.normalize:
            return 0.5 + img * 0.5
        else:
            return img
```
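For context on where this class of error comes from: it is usually not the model definition (note that `inplace=False` is already set on every LeakyReLU above, which is why that common fix did not help). Autograd saves tensors, including module weights, when it builds a graph, and raises this error if any saved tensor is modified in place before `backward()` runs. The following minimal sketch is illustrative code, not part of DSGAN; it reproduces the same error with the same `[1, 256, 1, 1]` weight shape (the exact version numbers in the message may differ):

```python
import torch
from torch import nn

d = nn.Conv2d(256, 1, kernel_size=1)     # weight shape [1, 256, 1, 1], as in the error
opt_d = torch.optim.SGD(d.parameters(), lr=0.1)

fake = torch.randn(1, 256, 4, 4, requires_grad=True)  # stands in for a generator output

d_loss = d(fake.detach()).mean()
d_loss.backward()                        # populates d's gradients

g_loss = d(fake).mean()                  # this graph saves d.weight at its current version

opt_d.step()                             # in-place parameter update bumps the version counter

g_loss.backward()                        # RuntimeError: ... [torch.FloatTensor [1, 256, 1, 1]]
                                         # is at version N; expected version N-1 instead
```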


Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 15

Top GitHub Comments

1 reaction
Flyooofly commented, Dec 24, 2020

The cropped HR and the unsatisfactory LR obtained by resizing presumably do not belong to the same domain. I think the generator and discriminator are needed precisely to translate this LR from its own domain into the HR domain; that is the point of this GAN: to let the unsatisfactory LR learn the characteristics it "should" have. Of course, this is just my understanding.

(Quoted from the email thread:) You can see from the code of the data loader that Z is the set of images cropped from the input HR. During training, the cropped input (Z) and the unsatisfactory LR image (the resized input HR) are both produced by the data loader.

(—Original— From: hcleung3325) I think the CUDA version is 10.0. As long as it is compatible with the torch version it should be fine; torch 1.1.0 is also compatible with CUDA 9.0, I think. I am trying to train a generator using the provided code with the DIV2K dataset. However, I don't know where I need to input the source images Z for the discriminator. If some part of the code deals with this, may I know where it is? Thank you very much.

Thanks Flyooofly. Does that mean the cropped HR image is not necessarily from the same region as the generated LR images fed to the discriminator? If so, won't the discriminator have difficulty learning the mapping, since the cropped HR image and the generated LR image are not from the same region at all? Thanks again.


0 reactions
canornot commented, Dec 21, 2021

Oh, this problem has been solved. I remember my method was to downgrade the version of PyTorch: it was 1.6.0 before, and I went to 1.4.0 or maybe lower, I don't remember clearly, sorry.

It is not necessary to reinstall an older PyTorch version. Simply placing optimizer_d.step() after g_loss.backward() and before optimizer_g.step() solves the problem. Since fake_tex is involved in calculating both the discriminator loss and g_loss, calling optimizer_d.step() first modifies the discriminator's parameters in place while the graph of g_loss still needs their old versions, resulting in the corresponding error. (The [1, 256, 1, 1] tensor named in the error matches the weight of the final Conv2d(256, 1, kernel_size=1) in DiscriminatorBasic.)
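In code, the reordering looks roughly like this. This is a minimal runnable sketch with placeholder modules and losses (`generator`, `discriminator`, `bce`, `real_tex`, and so on are stand-ins, not the actual identifiers in DSGAN/train.py); the point is only the order of the `backward()` and `step()` calls:

```python
import torch
from torch import nn

# tiny stand-ins so the sketch runs end to end (placeholders, not the DSGAN models)
generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)
discriminator = nn.Sequential(nn.Conv2d(3, 256, 1), nn.Conv2d(256, 1, 1))
optimizer_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
optimizer_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

inputs = torch.randn(4, 3, 32, 32)
real_tex = torch.randn(4, 3, 32, 32)

fake_tex = generator(inputs)                 # used by BOTH losses below

# --- discriminator loss: compute gradients, but do NOT step yet ---
optimizer_d.zero_grad()
d_real = discriminator(real_tex)
d_fake = discriminator(fake_tex.detach())    # detach: d_loss must not backprop into G
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
d_loss.backward()

# --- generator loss: its graph runs through D's CURRENT weights ---
optimizer_g.zero_grad()
d_out = discriminator(fake_tex)
g_loss = bce(d_out, torch.ones_like(d_out))
g_loss.backward()      # must run BEFORE optimizer_d.step() ...

optimizer_d.step()     # ... because stepping D modifies its weights in place
optimizer_g.step()
```

Both backward passes run against the same, untouched discriminator weights; only afterwards are the parameters updated, so no tensor saved by autograd is modified before its gradient is computed.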

