Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.FloatTensor) should be the same

See original GitHub issue

I tried running a training session following the example on the site, but I keep getting this error.

I also tried launching the webui with the --no-half option, but it does not change anything.

Any ideas?

Training at rate of 0.003 until step 100000
Preparing dataset...
100%|██████████| 10/10 [00:02<00:00,  3.80it/s]
  0%|                                                                                                                                   | 0/100000 [00:00<?, ?it/s]
Applying xformers cross attention optimization.
Error completing request
Arguments: ('test-jungho_lee', '0.003', 1, 'D:\\dreambooth\\train_jungho_lee\\portrait-pp', 'textual_inversion', 512, 640, 100000, 500, 500, 'D:\\stable-diffusion-webui\\textual_inversion_templates\\style.txt', True, False, '', '', 20, 1, 7, -1.0, 448, 640, 5.0, '', True, True, 1, 1) {}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\", line 185, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\", line 54, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\DreamArtist\scripts\dream_artist\", line 30, in train_embedding
    embedding, filename = dream_artist.cptuning.train_embedding(*args)
  File "D:\stable-diffusion-webui\extensions\DreamArtist\scripts\dream_artist\", line 413, in train_embedding
    x_samples_ddim = shared.sd_model.decode_first_stage.__wrapped__(shared.sd_model, output[2])  # forward with grad
  File "D:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\", line 763, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "D:\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\", line 331, in decode
    z = self.post_quant_conv(z)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.FloatTensor) should be the same
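For context, the failing `F.conv2d` call at the bottom of the trace is a plain dtype mismatch between the conv layer's weights and its input. A minimal sketch outside of webui reproduces it and shows the generic fix (run on CPU here, so the message names `torch.HalfTensor` rather than the CUDA variant in the issue):

```python
import torch
import torch.nn as nn

# A conv layer's weights default to float32, like post_quant_conv above.
conv = nn.Conv2d(4, 4, kernel_size=1)
# A half-precision input, like the latent coming out of the sampler.
x = torch.randn(1, 4, 8, 8, dtype=torch.float16)

try:
    conv(x)
except RuntimeError as e:
    # "Input type (torch.HalfTensor) and weight type (torch.FloatTensor)
    # should be the same ..."
    print(e)

# Generic fix: make the dtypes agree, e.g. cast the input to the
# layer's weight dtype before calling it.
y = conv(x.to(conv.weight.dtype))
print(y.dtype)  # torch.float32
```

In webui's case the mismatch happens inside the extension's code path (it calls `decode_first_stage.__wrapped__`, bypassing the autocast wrapper that normally handles this cast), which is why the launch flags alone don't fix it.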

Issue Analytics

  • State: closed
  • Created: 10 months ago
  • Comments: 17 (5 by maintainers)

Top GitHub Comments

CodeExplode commented, Nov 12, 2022

This happens for me if I select ‘Train with reconstruction’. It also seems to break regular textual inversion permanently, though I only noticed that once and haven’t tried again.

To fix it, I had to:

  1. Replace /modules/ with the original file, since this extension makes minor changes to it, after which it no longer works without the extension present, which is a big problem.

  2. Remove the DreamArtist folder from /extensions/ (not just disable it).

  3. Start the web-ui.

  4. Then with the ui running, redownload or move the DreamArtist folder back into extensions.

  5. Then with the ui still open, use the refresh button in the extensions tab.

  6. Ensure the extension is checked as enabled.

  7. Close and restart the web-ui completely.

It should work again; just make sure not to click ‘Train with reconstruction’.

If you only want to remove the extension permanently, steps 1 and 2 are enough.
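Steps 1 and 2 above can be sketched as a small script (a sketch only: the install path is the one from this issue, and it assumes the webui folder is a git checkout so the patched file under modules/ can be restored with `git checkout` rather than guessing which file it was):

```python
import shutil
import subprocess
from pathlib import Path

# Path from this issue's Windows install; adjust to your own.
webui = Path(r"D:\stable-diffusion-webui")

if webui.exists():
    # 1. Restore the original file(s) under modules/ that the extension patched.
    subprocess.run(["git", "checkout", "--", "modules/"], cwd=webui, check=True)
    # 2. Remove the DreamArtist folder entirely (not just disable it).
    shutil.rmtree(webui / "extensions" / "DreamArtist", ignore_errors=True)
else:
    print("Set `webui` to your stable-diffusion-webui folder first.")
```

Steps 3–7 (restarting, re-adding the folder, refreshing the extensions tab) are done through the web-ui itself.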

IdiotSandwichTheThird commented, Nov 13, 2022

Try reducing the resolution of the images used for training. Good results can also be produced without adding reconstruction losses.

It works at 384x384 - do I really have to stick with that?

It’s either that, or remove --no-half --precision full from the command-line arguments; with those removed it works at 512x512 again.
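The reason those flags matter: --no-half / --precision full keep the model's weights in float32, while other parts of the pipeline may still hand it float16 tensors. A defensive pattern (sketched here with a hypothetical `call_matching_dtype` helper and a plain conv standing in for the VAE's `post_quant_conv`; none of this is webui API) is to cast the input to the module's own weight dtype and device before calling it:

```python
import torch
import torch.nn as nn

def call_matching_dtype(module: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Cast the input to the device/dtype of the module's weights, then call it."""
    p = next(module.parameters())
    return module(x.to(device=p.device, dtype=p.dtype))

# float32 "decoder" (as with --no-half) fed a half-precision latent:
decoder = nn.Conv2d(4, 3, kernel_size=3, padding=1)
z = torch.randn(1, 4, 16, 16, dtype=torch.float16)

img = call_matching_dtype(decoder, z)  # no RuntimeError
print(img.dtype)  # torch.float32
```

Calling `decoder(z)` directly would raise the same RuntimeError as in the issue, since the input and weight dtypes differ.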

Read more comments on GitHub >

Top Results From Across the Web

  • RuntimeError: Input type (torch.cuda.FloatTensor) and weight ...
    FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same. I tried several ways by changing transforms, changing device type, ...

  • RuntimeError: Input type (torch.FloatTensor) and weight type ...
    You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send...

  • RuntimeError: Input type (torch.cuda.HalfTensor) and ... - GitHub
    RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same #44.

  • HugginFace dataset error: RuntimeError: Input type (torch ...
    FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor...

  • RuntimeError: Input type (torch.cuda ... - forums
    I get the following error: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same.
