
Handling unconnected gradients in Scipy optimizer

See original GitHub issue

Feature request

Our Scipy optimizer wrapper should explicitly handle unconnected gradients.

Motivation

Currently, the GPflow Scipy optimizer wrapper (gpflow.optimizers.Scipy) simply errors out (with one of TensorFlow's typically unhelpful error messages) when one of the passed-in variables is not connected to the loss being minimized. This can easily happen in a valid case, for example when using an SVGP model with a Constant kernel (which might serve as a very simple baseline) while leaving the inducing locations trainable rather than explicitly setting them to non-trainable. It can also happen erroneously in a bug case, e.g. when calling Scipy().minimize(model1.training_loss, model2.trainable_variables).
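To make the failure mode concrete, here is a minimal sketch in plain TensorFlow (independent of GPflow) of where the problematic gradients come from:

import tensorflow as tf

# One variable participates in the loss, the other does not.
x = tf.Variable(1.0)
y = tf.Variable(2.0)  # unconnected to the loss below

with tf.GradientTape() as tape:
    loss = x ** 2

# By default, tape.gradient() returns None for unconnected variables;
# such a None entry is what the Scipy wrapper later trips over.
grads = tape.gradient(loss, [x, y])
print(grads)  # [<tf.Tensor: ... numpy=2.0>, None]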

Proposal

Describe the solution you would like
An explicit error message when the variables are not connected to the loss (and hence the grads list contains None entries).
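One possible shape for such a check (the helper name and error wording here are illustrative, not GPflow's actual code):

def _assert_all_connected(grads, variables):
    # Hypothetical helper: report which variables received no gradient,
    # instead of letting the None entries crash deeper inside the wrapper.
    unconnected = [v.name for g, v in zip(grads, variables) if g is None]
    if unconnected:
        raise ValueError(
            "The following variables are not connected to the loss: "
            f"{unconnected}. Mark them as non-trainable, or request zero "
            "gradients for unconnected variables."
        )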

What alternatives have you considered?
We could also simply change the gradient computation call to use unconnected_gradients=tf.UnconnectedGradients.ZERO. However, this might silently hide bug cases (see above), so perhaps this option could be an extra flag to the minimize() call?
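For reference, this is what the alternative looks like in plain TensorFlow (a self-contained sketch, not the wrapper's actual code):

import tensorflow as tf

x = tf.Variable(1.0)
y = tf.Variable(2.0)  # still unconnected to the loss

with tf.GradientTape() as tape:
    loss = x ** 2

# ZERO replaces the missing gradient with 0.0, so optimization can proceed,
# at the cost of silently masking a genuinely buggy variable list.
grads = tape.gradient(
    loss, [x, y], unconnected_gradients=tf.UnconnectedGradients.ZERO
)
print(grads)  # [2.0, 0.0]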

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
st-- commented, Mar 26, 2021

Would you like to open a PR so we can discuss more concretely on the code? 😃

0 reactions
antonykamp commented, Mar 23, 2021

Ah! You mean to give a warning, yet continue running?

As you pointed out, sometimes it's desired behavior, e.g. in bug cases. Am I right? 😃

Moving the check was a great idea, of course 😄 I would add the new flag as a simple attribute of Scipy objects (and uncon_gradients_handle stays?)
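A rough sketch of that idea (the attribute name below is hypothetical, just to illustrate the shape):

import gpflow
import tensorflow as tf

opt = gpflow.optimizers.Scipy()
# Hypothetical attribute controlling how unconnected gradients are handled;
# the actual name and default would be settled in the PR discussion.
opt.unconnected_gradients = tf.UnconnectedGradients.ZERO
# opt.minimize(model.training_loss, model.trainable_variables)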


Top Results From Across the Web

Optimization (scipy.optimize) — SciPy v1.9.3 Manual
Unconstrained minimization of multivariate scalar functions (minimize). The minimize function provides a common interface to unconstrained and constrained ...

scipy.optimize.minimize — SciPy v1.9.3 Manual
Method TNC uses a truncated Newton algorithm [5], [8] to minimize a function with variables subject to bounds. This algorithm uses gradient information; ...

scipy.optimize.check_grad — SciPy v1.9.3 Manual
If 'random', then gradients along a random vector are used to check grad against a forward difference approximation using func. By ...

Introduction to gradients and automatic differentiation
In this guide, you will explore ways to compute gradients with TensorFlow, especially in eager execution. ...

python - Tensorflow GradientTape "Gradients does not exist ...
GradientTape's gradient method has an unconnected_gradients parameter that allows you to specify whether unconnected gradients should be ...
