Stop backprop when all gradients are None
We need to stop backprop when all gradient variables are None. For example:
```python
import numpy
import chainer

# The output has two elements, so backward() does not auto-initialize its
# grad to ones (that only happens for size-1 outputs); the upstream grad
# is still None when it reaches the multiplication's backward pass.
(chainer.Variable(numpy.array([1, 2], 'f')) * 1).backward()
```
Running this raises a `TypeError`:

```
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
```
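Until that is fixed, a user-side workaround is to seed the output gradient explicitly before calling `backward()`. A minimal sketch using standard Chainer usage (not an official fix):

```python
import numpy
import chainer

y = chainer.Variable(numpy.array([1, 2], 'f')) * 1
# Seeding the grad by hand keeps the upstream gradient from being None.
y.grad = numpy.ones_like(y.data)
y.backward()
```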
Issue Analytics
- Created: 6 years ago
- Comments: 22 (15 by maintainers)
Top Results From Across the Web
- Why tape.gradient returns all none in my Sequential model?
  "Your model is not being recorded by the tape. You have to put the computations into the context of the tape if you..." (see the tape sketch after this list)
- Introduction to gradients and automatic differentiation
  "In this guide, you will explore ways to compute gradients with TensorFlow, especially in ... Conversely, to disable the default behavior of watching..."
- None gradients with nn.Parameter - autograd - PyTorch Forums
  "The warning seems to indicate the issue: if nothing requires gradients, then nothing will be computed. You need to find where the Tensors..." (see the requires_grad sketch after this list)
- Autograd tutorial - Pytorch中文手册
  "The autograd package provides automatic differentiation for all operations on Tensors. It is a define-by-run framework, which means that your backprop is ..."
- Understanding Autograd: 5 Pytorch tensor functions - Medium
  "So how does, during backpropagation, PyTorch (or any other DL library for that ... The signature for backward is backward(gradient=None, ..."
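To make the first result concrete: in TensorFlow, `tape.gradient` returns `None` for anything computed outside the tape's context. A minimal sketch of that behavior (my example, not taken from the linked answer):

```python
import tensorflow as tf

x = tf.Variable(3.0)

# Recorded: y is computed inside the tape's context manager.
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)

# Not recorded: z is computed before the tape exists, so its gradient is None.
z = x * x
with tf.GradientTape() as tape:
    pass
print(tape.gradient(z, x))  # None
```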
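Likewise for the PyTorch results: a `None` grad typically means the tensor never required gradients. A minimal sketch (again my own illustration, not from the linked threads):

```python
import torch

w = torch.nn.Parameter(torch.ones(2))  # requires_grad=True by default
x = torch.ones(2)                      # requires_grad=False

(w * x).sum().backward()
print(w.grad)  # tensor([1., 1.])
print(x.grad)  # None, because x never required gradients
```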
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
In order to avoid that, there could be another option: `.grad` is manually set. If no grad is set, raise an error; if `None` is set, just do nothing. `Variable` already has a `._grad_valid` flag. Currently it can be `False` only in `Parameter`. We could set it to `False` by default for variables created as outputs of `FunctionNode.apply()`.

In #5709, a related topic is discussed.
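A hypothetical sketch of those proposed semantics (the `._grad_valid` flag is real, but this dispatch logic is an illustration, not Chainer's implementation):

```python
class Variable:
    """Toy model of the proposed grad-validity rule (not Chainer's code)."""

    def __init__(self, data):
        self.data = data
        self._grad = None
        # Proposed default: outputs of FunctionNode.apply() start invalid.
        # (Today, only Parameter ever has _grad_valid == False.)
        self._grad_valid = False

    @property
    def grad(self):
        return self._grad

    @grad.setter
    def grad(self, g):
        self._grad = g
        self._grad_valid = True  # counts as "set", even if g is None

    def backward(self):
        if not self._grad_valid:
            # Never set: fail loudly instead of crashing later on None * int.
            raise RuntimeError('grad is not set; assign .grad before backward()')
        if self._grad is None:
            return  # explicitly None: nothing to propagate, silently stop
        ...  # run the usual backprop from self._grad
```

With this rule, the snippet at the top would fail with a clear error instead of the opaque `TypeError` from multiplying `None` by an `int`.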