
Get Learning Rate SPSA & QNSPSA

See original GitHub issue

What is the expected enhancement?

For SPSA as well as QNSPSA you can set learning_rate=None. Reading the .settings property of the optimizer after running the optimization then returns learning_rate=None again, even though the effective learning rate was actually set by the calibrate function. Can self.learning_rate be updated accordingly?

@Cryoris @des137 @mishmash
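The behaviour the issue describes can be illustrated with a minimal sketch. This is not the actual Qiskit implementation; the class, its calibrate heuristic, and all coefficient values below are hypothetical stand-ins chosen only to show why a settings property that echoes the constructor argument still reports None after calibration.

```python
class SketchSPSA:
    """Toy optimizer mimicking the reported pattern (not Qiskit's SPSA)."""

    def __init__(self, maxiter=10, learning_rate=None):
        self.maxiter = maxiter
        self.learning_rate = learning_rate  # stays None unless set by the user

    def calibrate(self, loss, initial_point):
        # Hypothetical calibration: build power-series learning-rate
        # coefficients a_k = a / (A + k + 1) ** alpha.
        a, A, alpha = 0.2, 0.1 * self.maxiter, 0.602
        return [a / (A + k + 1) ** alpha for k in range(self.maxiter)]

    def minimize(self, loss, initial_point):
        eta = self.learning_rate
        if eta is None:
            # The effective learning rate is computed here, but the
            # attribute self.learning_rate is never updated -- so the
            # settings property below still reports None after the run.
            eta = self.calibrate(loss, initial_point)
        # (actual SPSA update loop omitted in this sketch)
        return initial_point

    @property
    def settings(self):
        return {"maxiter": self.maxiter, "learning_rate": self.learning_rate}


opt = SketchSPSA(learning_rate=None)
opt.minimize(lambda x: x * x, 1.0)
print(opt.settings["learning_rate"])  # still None, despite calibration
```

Updating self.learning_rate inside minimize (or logging the calibrated values, as the maintainers suggest below) would make the post-run settings reflect what was actually used.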

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
Cryoris commented, Oct 7, 2021

@des137 could you see if the info in #7109 matches your expectations? 🙂

1 reaction
Cryoris commented, Oct 7, 2021

Ok, then I’ll add the logger for now 👍🏻

Read more comments on GitHub >

Top Results From Across the Web

SPSA - qiskit.algorithms.optimizers
Calibrate SPSA parameters with a powerseries as learning rate and perturbation coeffs. Estimate the standard deviation of the loss function. Get the support …
Read more >
Optimization using SPSA — PennyLane documentation
SPSA is an optimization method that involves approximating the gradient of the cost function at each iteration step.
Read more >
SPSA (Simultaneous Perturbation Stochastic Approximation ...
A is a stabilisation factor of the learning rate (it avoids a bigger difference between the learning rate in the first and in…
Read more >
SPSA Algorithm
This contrasts with algorithms requiring direct measurements of the gradient of the objective function (which are often difficult or impossible to obtain).
Read more >
Is it possible to combine SPSA and Adam? - Cross Validated
Gradient free methods such as SPSA do basically the same, they make a bad estimate of the gradient, by randomly perturbing the parameter…
Read more >
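The power-series learning-rate schedule mentioned in the results above can be sketched directly. This is an illustrative example, not Qiskit's internal code: the values a = 0.2 and A = 10 are arbitrary, while alpha = 0.602 is the exponent commonly used in the SPSA literature. It shows the role of the stability constant A, which damps the gap between the first and later learning rates.

```python
import itertools


def powerseries(a=0.2, A=10.0, alpha=0.602):
    """Yield learning rates a_k = a / (A + k + 1) ** alpha for k = 0, 1, ..."""
    k = 0
    while True:
        yield a / (A + k + 1) ** alpha
        k += 1


# Compare the first five rates with and without the stability constant A.
rates_no_A = list(itertools.islice(powerseries(A=0.0), 5))
rates_with_A = list(itertools.islice(powerseries(A=10.0), 5))

# With A = 0 the first rate dwarfs the fifth (ratio roughly 2.6);
# with A = 10 the rates start out much closer together (ratio roughly 1.2).
print(rates_no_A[0] / rates_no_A[4])
print(rates_with_A[0] / rates_with_A[4])
```

In SPSA's calibration, coefficients of this shape are chosen automatically when learning_rate=None, which is exactly the effective schedule the issue asks to have reflected back in the optimizer's settings.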
