[BUG] Raise Exception when Int Tensor
I was trying to use the ColorJitter augmentation and mistakenly passed a torch.uint8 tensor as input. This didn't raise any warning or error but instead crashed the session abruptly, and I only realized the cause after 2-3 attempts. Since the majority of augmentations work on float types, it would be great to introduce a type check at the beginning of the forward pass to avoid these unnecessary crashes.
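For illustration, here is a minimal sketch of the kind of check being proposed, wrapped in a hypothetical module (the class name, accepted dtypes, and error message are assumptions for this sketch, not the library's actual code):

```python
import torch
import torch.nn as nn


class DtypeCheckedAugmentation(nn.Module):
    """Hypothetical augmentation that rejects non-float inputs up front."""

    _accepted_dtypes = (torch.float16, torch.float32, torch.float64)

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # Fail fast with a readable error instead of crashing deep inside the op.
        if input.dtype not in self._accepted_dtypes:
            raise TypeError(
                f"Expected a float tensor (one of {self._accepted_dtypes}), "
                f"got {input.dtype}; convert with input.float() first."
            )
        # ... the actual augmentation math would go here ...
        return input


# Passing a torch.uint8 tensor now raises a TypeError instead of crashing:
# DtypeCheckedAugmentation()(torch.zeros(1, 3, 8, 8, dtype=torch.uint8))
```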
Issue Analytics
- Created: 3 years ago
- Comments: 5 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Yes, I have added a type check:

_validate_input_dtype(input, accepted_dtypes=[torch.float16, torch.float32, torch.float64])

Only float inputs are accepted now.

Sorry, but I wasn't working on this. I just skimmed through the commit that referenced this issue and thought it would be fixed by that, so I'm not sure about the progress.
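For context, a validation helper with that signature could look roughly like the sketch below; this is an assumption for illustration only, not the library's actual implementation of _validate_input_dtype:

```python
from typing import Sequence

import torch


def _validate_input_dtype(input: torch.Tensor, accepted_dtypes: Sequence[torch.dtype]) -> None:
    """Raise early if the tensor dtype is not one of the accepted dtypes.

    Hypothetical re-implementation for illustration; the real helper may differ.
    """
    if input.dtype not in accepted_dtypes:
        raise TypeError(
            f"Expected input dtype to be one of {list(accepted_dtypes)}, got {input.dtype}."
        )
```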