Overriding ReLU gradients using gradient registry

See original GitHub issue

TensorFlow.js version

1.5.0

Describe the problem or feature request

I’m attempting to override the gradient computation of certain Ops, say ReLUs, in a loaded GraphModel. The gradient registry seems like the right place, but tf.getKernelsForBackend doesn’t currently return a ReLU kernel; it looks like most core Ops don’t yet use the kernel and gradient registry.

Does my description here seem correct? Is there currently already a different way to override Op gradients of loaded models? If not, could and should I try porting the specific ops I’d like to be able to override to the new kernel and gradient registry system?
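To make the ask concrete, here is a rough sketch of the kind of override I have in mind, assuming a build where ReLU is wired into the new registry (registerGradient/GradConfig from tfjs-core; the 'Relu' kernel name and the step-based gradient are my assumptions, not something that works on 1.5.0 today):

```ts
import * as tf from '@tensorflow/tfjs';
import {registerGradient, GradConfig} from '@tensorflow/tfjs-core';

// Hypothetical override: re-register the gradient for the 'Relu' kernel.
// This body reproduces the standard ReLU gradient (pass dy through where
// x > 0), but the gradFunc is where a custom backward rule would go.
const reluGradOverride: GradConfig = {
  kernelName: 'Relu',
  inputsToSave: ['x'],
  gradFunc: (dy, saved) => {
    const [x] = saved;
    return {x: () => (dy as tf.Tensor).mul(tf.step(x))};
  }
};
registerGradient(reluGradOverride);
```

The idea is that a loaded GraphModel would then pick up the overridden gradient for every ReLU node, without having to rewrap individual ops with tf.customGrad.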

Thank you for all your work on TFJS!

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
rthadur commented, Jan 9, 2020

@ludwigschubert can we close this issue?

1 reaction
ludwigschubert commented, Jan 9, 2020

Exciting to hear! I mainly need ReLU and MaxPool at the moment. I’m experimenting on a fork right now to see if I can port those two ops to use the registry. I’m not sure I’ll know enough about the inner workings of TFJS to make it PR-ready, but I’ll try. If there’s a commit/PR/branch that ports an existing op, I’d appreciate any pointers. (But no pressure, I’ll explore on my own, too. 😄)
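For what it’s worth, this is roughly how I’m checking whether the two kernels I need are in the registry yet — just a small sketch using the getKernelsForBackend call mentioned above (the 'Relu' and 'MaxPool' kernel names are my guesses):

```ts
import * as tf from '@tensorflow/tfjs';
import {getKernelsForBackend} from '@tensorflow/tfjs-core';

// List the kernels registered for the active backend and check whether
// the two ops I want to override show up in the registry yet.
const registered = getKernelsForBackend(tf.getBackend()).map(k => k.kernelName);
for (const name of ['Relu', 'MaxPool']) {
  console.log(`${name} registered:`, registered.includes(name));
}
```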

Thanks for giving me extra context! 😃

Read more comments on GitHub

Top Results From Across the Web

  • Replicating RegisterGradient and gradient_override_map in ...
    So, I had created a new Graph and Session before calling "gradient_override_map" and succeeded to change gradient function from "Relu" to " ...
  • How to Fix the Vanishing Gradients Problem Using the ReLU
    Neural networks are trained using stochastic gradient descent. This involves first calculating the prediction error made by the model and using ...
  • Autograd mechanics — PyTorch 1.13 documentation
    Gradients for non-differentiable functions. The gradient computation using Automatic Differentiation is only valid when each elementary function being used ...
  • Introduction to gradients and automatic differentiation
    In this guide, you will explore ways to compute gradients with TensorFlow, ... Once you've recorded some operations, use GradientTape.gradient(target, ...
  • Activation Functions and Their Gradients
    This matters when computing the gradient of our activation function with respect to an input vector $\textbf{x}$. So how do we compute gradients...
