RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Hi, I tried to use your s2conv/so3conv in a model that is called twice per training step ("multi model"), as follows. (The model includes your s2conv/so3conv layers.)
def train(epoch):
    model.train()
    for batch_idx, (image, target) in enumerate(train_loader):
        image = image.to(device)
        optimizer.zero_grad()
        # multi model: two forward passes through the same model
        re_image1 = model(image)
        re_image2 = model(image)
        loss = re_image1.abs().mean() + re_image2.abs().mean()
        loss.backward()
        optimizer.step()
Then I got the following error:
Traceback (most recent call last):
  File "main.py", line 66, in <module>
    main()
  File "main.py", line 62, in main
    train(epoch)
  File "main.py", line 53, in train
    loss.backward()
  File "/home/hayashi/.python-venv/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/hayashi/.python-venv/lib/python3.5/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
There is no error when I use a single forward pass ("mono model"), like the following:
def train(epoch):
    model.train()
    for batch_idx, (image, target) in enumerate(train_loader):
        image = image.to(device)
        optimizer.zero_grad()
        # mono model: a single forward pass
        image1 = model(image)
        loss = image1.abs().mean()
        loss.backward()
        optimizer.step()
So I think this error is not caused by an inplace operation in my own code. Do you know the details of this error?
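For context, here is a minimal standalone sketch (my own example, not taken from s2cnn) of how autograd produces this error: an operation saves a tensor for its backward pass, and a later in-place edit of that tensor invalidates the saved copy.

```python
import torch

a = torch.randn(3, requires_grad=True)
b = a.exp()   # exp() saves its output b to compute the gradient later
b.add_(1)     # in-place edit of a tensor autograd still needs

try:
    b.sum().backward()
except RuntimeError as e:
    print("autograd detected the in-place edit:", e)
```

Autograd tracks a version counter on each tensor; `backward()` raises as soon as a saved tensor's version no longer matches the version recorded at save time.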
P.S. I found this error doesn't occur when I use a past version of your s2conv/so3conv (maybe the one for PyTorch v0.3.1). If you can, please republish the past version of s2cnn (for PyTorch v0.3.1).
Issue Analytics
- State:
- Created 5 years ago
- Comments: 5
I can fix it with:

    torch.einsum("ij,jk->ik", (x.clone(), torch.randn(3, 3)))
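As a minimal illustration of why the `.clone()` helps (my own sketch, not the s2cnn code): `einsum` saves its operands for the backward pass, so an in-place edit of an operand after the call breaks `backward()`, whereas cloning first gives `einsum` a private copy that the later edit cannot touch.

```python
import torch

w = torch.randn(3, 3, requires_grad=True)

# Without clone: einsum saves x to compute w's gradient, so an
# in-place edit of x afterwards invalidates the saved tensor.
x = torch.randn(3, 3, requires_grad=True) + 0  # non-leaf, so in-place is allowed
y = torch.einsum("ij,jk->ik", x, w)
x.mul_(2)
try:
    y.sum().backward()
except RuntimeError:
    print("backward fails when a saved operand is modified in place")

# With clone: einsum saves the clone, which the in-place
# edit of x does not touch, so backward succeeds.
x = torch.randn(3, 3, requires_grad=True) + 0
y = torch.einsum("ij,jk->ik", x.clone(), w)
x.mul_(2)
y.sum().backward()
print("backward succeeds with x.clone()")
```

The trade-off is an extra copy of the operand, which is usually cheap compared to silently corrupting the graph.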
The problem comes from s2_rft when we use torch.einsum. The problem can be reproduced by the following code: