
indices should be either on cpu or on the same device as the indexed tensor (cpu)


Describe the bug

`image_to_image.py`, line 92, throws the error above:

```python
init_latents = self.scheduler.add_noise(init_latents, noise, timesteps)
```

I've tried adding `.to(self.device)` to all three arguments. `device` should be `'cuda'`, though.

Reproduction

```python
import io

import requests
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"

pipei2i = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,
).to(device)

response = requests.get('https://pbs.twimg.com/media/Fa1_7_vWYAEwfX-.png')

init_image = Image.open(io.BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image = preprocess(init_image)  # `preprocess` helper defined elsewhere

outputs = pipei2i(
    prompt=prompt,  # `prompt` defined elsewhere
    init_image=init_image,
    strength=0.75,
    num_inference_steps=75,
    guidance_scale=0.75,
)
```

Logs

No response

System Info

diffusers==0.2.4
nvidia/cuda:11.3.0-cudnn8-devel-ubuntu20.04

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 2
  • Comments: 21 (8 by maintainers)

Top GitHub Comments

3 reactions
andydhancock commented, Aug 25, 2022

Solved: maybe I should listen to the error message sometime... it should all be on the CPU.

```python
init_latents = self.scheduler.add_noise(init_latents, noise, timesteps)
```

becomes

```python
init_latents = self.scheduler.add_noise(init_latents, noise, timesteps.cpu())
```

and all is well.

Is `.to(device)` not supported for `timesteps`, then?

I think it was supported, but `device` was `cuda`, and the scheduler does something with numpy, which runs on the CPU.
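To make the failure mode concrete, here is a minimal PyTorch sketch of the indexing rule behind the error message. It does not use diffusers; it only illustrates that an index tensor must live on the CPU or on the same device as the tensor it indexes, which is why moving `timesteps` to CUDA breaks lookups into the scheduler's CPU-resident arrays, and why `.cpu()` fixes it:

```python
import torch

# Stand-in for a scheduler array (e.g. alphas_cumprod), which in older
# diffusers versions is numpy-backed and therefore lives on the CPU.
alphas = torch.linspace(0.9, 0.1, 1000)

# Timesteps used as indices into that array.
timesteps = torch.tensor([10, 500, 999])

# Works: both the indexed tensor and the indices are on the CPU.
picked = alphas[timesteps]

# If `timesteps` had been moved to CUDA (timesteps.to("cuda")), the same
# lookup would raise: "indices should be either on cpu or on the same
# device as the indexed tensor (cpu)". The fix from this thread:
picked_fixed = alphas[timesteps.cpu()]  # .cpu() is a no-op if already on CPU
```

The `.cpu()` call is cheap for a small 1-D timestep tensor, which is why it is a reasonable workaround even inside the sampling loop.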

