
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same

See original GitHub issue

I’ve encountered the following error when trying to fine-tune on mel-spectrograms from Tacotron2:

Traceback (most recent call last):
  File "train.py", line 276, in <module>
    main()
  File "train.py", line 272, in main
    train(0, a, h)
  File "train.py", line 127, in train
    y_g_hat = generator(x)
  File "/home/kynh/anaconda3/envs/nguyenlm_hifigan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home2/nguyenlm/Projects/hifi-gan-clone/models.py", line 101, in forward
    x = self.conv_pre(x)
  File "/home/kynh/anaconda3/envs/nguyenlm_hifigan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/kynh/anaconda3/envs/nguyenlm_hifigan/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 202, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
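
In short, the generator's weights are still in float32 while the input mel-spectrogram has been cast to float16 on the GPU. Below is a minimal sketch of two common ways to make the dtypes agree, assuming generator and x are as in the traceback above; this is illustrative only, not the fix adopted in this thread.

import torch

# Option 1: keep everything in float32 by casting the half-precision input back.
x = x.float()
y_g_hat = generator(x)

# Option 2: run the forward pass under autocast so mixed float16/float32
# types are handled automatically (requires PyTorch >= 1.6).
with torch.cuda.amp.autocast():
    y_g_hat = generator(x)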

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments: 12 (1 by maintainers)

Top GitHub Comments

1 reaction
leminhnguyen commented, Jan 4, 2021

@Miralan Thank you. After reading the code in the meldataset.py file, I think there are some bugs at the line audio = audio[:, mel_start * self.hop_size:(mel_start + frames_per_seg) * self.hop_size]. When I run the code I see some potential problems.

  1. Assume the audio size is torch.Size([1, 36206]), frames_per_seg=32 and a random mel_start=113. We will slice audio[28928:37120], so the shape of the audio after slicing is torch.Size([1, 7278]) (because the upper bound is 36206). That leads to RuntimeError: mismatch dimension. So, as you said, we can pad zeros.
  2. Assume the audio size is torch.Size([1, 72972]), frames_per_seg=32 and a random mel_start=300. We will slice audio[76800:84992], so the shape of the audio after slicing is torch.Size([1, 0]) (because we slice entirely out of range of the audio). That leads to an empty tensor. Again, we can overcome this by padding zeros, but I wonder: if there are many empty tensors like that, filled with zeros, does it significantly affect the quality of training or fine-tuning? (A rough padding sketch follows below.)
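
To illustrate the zero-padding idea from the two cases above, here is a rough sketch, assuming audio is a [1, T] tensor and hop_size / frames_per_seg are as in meldataset.py; this is not the exact upstream code.

import torch.nn.functional as F

segment_len = frames_per_seg * self.hop_size
start = mel_start * self.hop_size
audio = audio[:, start:start + segment_len]
# Cases 1 and 2 above: the slice can come back shorter than segment_len
# (or even empty), so right-pad with zeros to the expected length.
if audio.size(1) < segment_len:
    audio = F.pad(audio, (0, segment_len - audio.size(1)), 'constant')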
0 reactions
yulijun1220 commented, Nov 29, 2021

@leminhnguyen I’ve encountered the same error. Do you have any suggestions on this?

Traceback (most recent call last):
  File "train.py", line 238, in <module>
    val_percent=args.val / 100)
  File "train.py", line 128, in train_net
    loss.backward()
  File "C:\ProgramData\Anaconda3\envs\pytorch-py3.7\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\ProgramData\Anaconda3\envs\pytorch-py3.7\lib\site-packages\torch\autograd\__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: Expected tensor for argument #1 'grad_output' to have the same type as tensor for argument #2 'weight'; but type torch.cuda.HalfTensor does not equal torch.cuda.FloatTensor (while checking arguments for cudnn_convolution_backward_input)

The source code of train.py is at https://github.com/milesial/Pytorch-UNet/blob/master/train.py
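
This backward-pass dtype mismatch typically shows up when half-precision tensors reach a float32 layer (or vice versa) outside of autocast. For reference, the usual torch.cuda.amp training pattern that keeps the dtypes consistent looks roughly like the sketch below; net, criterion, optimizer, images and masks are illustrative names, not necessarily those used in that train.py.

import torch

scaler = torch.cuda.amp.GradScaler()

optimizer.zero_grad()
# Forward pass and loss under autocast, so all ops agree on dtype;
# backward() stays outside autocast and goes through the scaler.
with torch.cuda.amp.autocast():
    masks_pred = net(images)
    loss = criterion(masks_pred, masks)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()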

Read more comments on GitHub >

Top Results From Across the Web

  • RuntimeError: Input type (torch.cuda.FloatTensor) and weight ...
    FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same. I tried several ways by changing transforms, changing device type, ...
  • RuntimeError: Input type (torch.FloatTensor) and weight type ...
    You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send...
  • HugginFace dataset error: RuntimeError: Input type (torch ...
    FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor.
  • RuntimeError: Input type (torch.cuda ... - Fast.ai forums
    I get the following error: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same.
  • Runtime error Input type (torch.cuda.FloatTensor) and weight ...
    FloatTensor) should be the same. I'm trying to train my model but I encounter this error message ...
