
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation


Hi, I tried to use your s2conv/so3conv in a model that runs two forward passes, like the following (the model includes your s2conv/so3conv):

def train(epoch):
    model.train()
    for batch_idx, (image, target) in enumerate(train_loader):
        image = image.to(device)
        optimizer.zero_grad()

        # two forward passes through the same model
        re_image1 = model(image)
        re_image2 = model(image)
        loss = re_image1.abs().mean() + re_image2.abs().mean()

        loss.backward()
        optimizer.step()

Then I got the following error:

  File "main.py", line 66, in <module>
    main()
  File "main.py", line 62, in main
    train(epoch)
  File "main.py", line 53, in train
    loss.backward()
  File "/home/hayashi/.python-venv/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/hayashi/.python-venv/lib/python3.5/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

There is no error when I use a single forward pass, like the following:

def train(epoch):
    model.train()
    for batch_idx, (image, target) in enumerate(train_loader):
        image = image.to(device)
        optimizer.zero_grad()

        # single forward pass
        image1 = model(image)
        loss = image1.abs().mean()

        loss.backward()
        optimizer.step()

So I think this error is not caused by an in-place operation. Do you know the details of this error?

P.S. I found that this error doesn’t occur when I use a past version of your s2conv/so3conv (perhaps the one for PyTorch v0.3.1). If you can, please republish the past version of s2cnn (for PyTorch v0.3.1).
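For context, this class of RuntimeError always means that some operation mutated a tensor autograd had saved for the backward pass; the in-place op can be hidden inside a library function rather than visible in the training loop. A minimal sketch (unrelated to s2cnn, assuming any recent PyTorch) that triggers the same message:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)  # autograd saves y: the backward of exp reuses its output
y.add_(1)         # in-place edit invalidates the saved tensor

try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation"
    print("backward failed:", type(e).__name__)
```

Calling the model twice can expose such a problem even when a single pass works, because the second pass may mutate a tensor that the first pass's graph still needs.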

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5

Top GitHub Comments

2 reactions
mariogeiger commented, May 8, 2018

I can fix it with torch.einsum("ij,jk->ik", (x.clone(), torch.randn(3, 3)))
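The fix works because clone() hands the in-place code a private copy, leaving the tensor autograd saved untouched. A sketch of the same pattern with a simpler op (assumptions: any recent PyTorch; the toy tensors stand in for the einsum operands):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)    # autograd saves y for the backward of exp
y_safe = y.clone()  # in-place edits now hit the copy, not the saved tensor
y_safe.add_(1)

y_safe.sum().backward()  # succeeds: exp's saved output is intact
print(torch.allclose(x.grad, torch.exp(x)))  # True
```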

1 reaction
mariogeiger commented, May 8, 2018

The problem comes from s2_rft, where we use torch.einsum. It can be reproduced with the following code:

import torch

x = torch.randn(3, 3, requires_grad=True)
z1 = torch.einsum("ij,jk->ik", (x, torch.randn(3, 3)))
z2 = torch.einsum("ij,jk->ik", (x, torch.randn(3, 3)))
z1.sum().backward()
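For reference, reusing the same leaf tensor in two branches of the graph is normally fine; it only fails when one of the ops mutates a tensor autograd saved, as the einsum implementation of that era apparently did. With a non-mutating einsum, the multi-branch pattern the user wanted works as expected (a sketch assuming a current PyTorch, where einsum accepts operands as separate arguments):

```python
import torch

x = torch.randn(3, 3, requires_grad=True)
w = torch.randn(3, 3)

# two branches reading the same leaf tensor
z1 = torch.einsum("ij,jk->ik", x, w)
z2 = torch.einsum("ij,jk->ik", x, w)

loss = z1.sum() + z2.sum()
loss.backward()      # gradients from both branches accumulate into x.grad
print(x.grad.shape)  # torch.Size([3, 3])
```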