Output of RandomGhosting is not float32
As I am still trying to wrap my head around the `SubjectsDataset` class, I find myself confused by an error I keep getting when I train with a `batch_size` set on the `DataLoader` (`torch.utils.data.DataLoader`):
RuntimeError: Expected object of scalar type Float but got scalar type Double for sequence element 1 in sequence argument at position #1 'tensors'
If I don’t set a `batch_size`, the error disappears. I found the exact same issue on the PyTorch forums; the suggested solution is a `ToTensor` transform that explicitly sets the data type of the loaded tensors: https://discuss.pytorch.org/t/runtimeerror-expected-object-of-scalar-type-double-but-got-scalar-type-long-for-sequence-element-1-in-sequence-argument-at-position-1-tensors/46952
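To make the failure mode concrete: the default collate function stacks the samples of a batch into one tensor, which only works when every sample has the same dtype. A minimal sketch in plain PyTorch (the dataset here is contrived, standing in for one whose transforms return mixed `float32`/`float64` samples; casting each sample in `__getitem__` is one way to make collation succeed):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class MixedDtypeDataset(Dataset):
    """Stand-in for a dataset whose samples come back with mixed dtypes."""

    def __init__(self):
        # One float32 sample and one float64 sample: stacking these
        # directly is what triggers the RuntimeError during collation.
        self.samples = [
            torch.zeros(3, dtype=torch.float32),
            torch.zeros(3, dtype=torch.float64),
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        # Casting every sample to float32 avoids the collate error.
        return self.samples[index].float()

loader = DataLoader(MixedDtypeDataset(), batch_size=2)
batch = next(iter(loader))
assert batch.dtype == torch.float32
```

Without the `.float()` cast in `__getitem__`, `torch.stack` inside the default collate function raises the `RuntimeError` quoted above; with `batch_size` unset (batch size 1) there is nothing to stack, which is why the error disappears in that case.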
I thought I could do this with the `Lambda` transform; however, it’s not clear to me how, since the images are of type `ScalarImage`.
For example, this is not possible:
subject['gt'].tensor = subject['gt'].tensor.type(torch.DoubleTensor)
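The cast being attempted there is, at bottom, just a tensor-level operation. Outside of torchio it can be sketched as a small callable that a transform pipeline would apply to each sample (the name `to_float32` and the volume shape are my own, purely illustrative):

```python
import torch

def to_float32(tensor: torch.Tensor) -> torch.Tensor:
    # Cast to float32 so the sample collates with the rest of the batch.
    return tensor.to(torch.float32)

# A double-precision volume, like the one RandomGhosting was producing:
volume = torch.zeros(1, 8, 8, 8, dtype=torch.float64)
assert to_float32(volume).dtype == torch.float32
```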
Thoughts?
Issue Analytics
- Created: 3 years ago
- Comments: 5 (5 by maintainers)
Top GitHub Comments
Fixed in v0.18.1. Thanks for reporting!

Yeah, I guess it wasn’t that minimal, sorry hahah. But since we found the issue it’s fine. Thanks for your quick replies.