Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

RuntimeError when using scatter_std

See original GitHub issue

Hi @rusty1s,

I am trying to update some PyG code written a year ago to make it work with newer versions of the packages. While testing my code (which used to work), I encountered the following error when calling scatter_std:

Traceback (most recent call last):
  File "", line 1, in <module>
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/tformal/wisard_debug/lib/python3.7/site-packages/torch_scatter/composite/std.py", line 29, in scatter_std
    index = broadcast(index, src, dim)
    tmp = scatter_sum(src, index, dim, dim_size=dim_size)
    count = broadcast(count, tmp, dim).clamp_(1)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    mean = tmp.div(count)
RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.

I have the same error with the toy example:

import torch
from torch_scatter import scatter_std

data = torch.rand(5, 4)
index = torch.tensor([0, 0, 0, 1, 1])
scatter_std(src=data, index=index, dim=0)

Package versions:

>>> torch_scatter.__version__
'2.0.4'
>>> torch.__version__
'1.5.0'

What I want to do is normalize node features in my graph, so I need to compute the scatter standard deviation. Did something change regarding this function?

thanks in advance

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
rusty1s commented on Jun 24, 2020

Just released a new version of the package 😃

1 reaction
rusty1s commented on Jun 12, 2020

I can confirm this, really sorry. It is fixed in master. For a quick fix, you can replace clamp_ with clamp in the std.py file.
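For anyone curious why swapping clamp_ for clamp helps: broadcast returns an expanded view in which many output elements alias a single storage location, and PyTorch rejects in-place operations on such views. A minimal reproduction in plain PyTorch, with made-up tensor values for illustration:

```python
import torch

# expand() yields a view where all four columns of a row alias a single
# memory location, so PyTorch forbids writing to it in place.
count = torch.tensor([0., 2.]).unsqueeze(-1).expand(2, 4)

try:
    count.clamp_(min=1)       # in-place write through the aliased view
except RuntimeError as e:
    print("clamp_ failed:", e)

# The out-of-place variant allocates fresh memory, so it is safe; the
# clamp to 1 is what protects scatter_std from dividing by zero counts.
safe = count.clamp(min=1)
print(safe.min().item())      # 1.0
```

Cloning the view first (count.clone().clamp_(1)) would work for the same reason: the write then targets freshly allocated, non-aliased memory.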


Top Results From Across the Web

  • Boost not working over network - Stack Overflow
    Here's the entire error (it's a runtime error, so no line numbers): libc++abi.dylib: terminating with uncaught exception of type ...
  • "std::bad_alloc" doing forward pass on CPU during deployment ...
    For the app I have a test suite which involves running multiple forward passes through this. The app only runs on the CPU....
  • RuntimeError: The following operation failed in the TorchScript ...
    Hi, I'm having issues with the jit_compile option on the NUTS sampler, hoping someone has a workaround! I get the runtime error when I...
  • Cuda runtime error (59) - Part 1 (2018) - fast.ai Course Forums
    Fighting with this for the last hour... The error message: RuntimeError: cuda runtime error (59): device-side assert triggered at ...
  • torch/lib/c10d/ProcessGroupNCCL.cpp - GitCode
    This is to prevent overflow issues with sum, since we use uint8 to // represent a ... argument is NULL which will //...
