Optimizers improvement: Move preprocessing to initialization
Feature details
Implementing optimization methods as classes offers more advantages than PennyLane currently exploits.
The proposed refactor is to reduce the `step` and `step_and_cost` methods to their absolute minimum, since they are the part that is called repeatedly during an optimization. Other parts, like preprocessing and validation of the hyperparameters or of the function to be optimized, can be moved into the instantiation of the optimizer, if we move task-specific hyperparameters and the function input from `step` to `__init__`. The main restriction would be that an optimizer instance could no longer be reused across several cost functions. This is not too severe, however, as we could implement a method that allows an instance to be copied easily.
The overhead in the (rather rare) cases where reuse across multiple cost functions is required seems much smaller than the overhead that currently affects all users, namely performing validation and similar steps at every iteration.
This refactor would move optimizers closer to being stateful entities, which seems like a good fit to me, in particular for optimizers that use data from previous steps as bias/surrogate data (think reusing the metric tensor in `QNGOptimizer`) or that inherently have a memory (like BFGS-type optimizers).
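As a rough illustration, the pattern could look like the following minimal sketch. The class name `StatefulGradientDescent` and the `for_cost` helper are hypothetical, not part of PennyLane's API:

```python
from copy import deepcopy


class StatefulGradientDescent:
    """Hypothetical optimizer that binds the cost function and validates
    hyperparameters once, at construction time."""

    def __init__(self, cost, grad_fn, stepsize=0.01):
        # Validation runs exactly once here, instead of at every step.
        if not callable(cost):
            raise ValueError("cost must be callable")
        if stepsize <= 0:
            raise ValueError("stepsize must be positive")
        self.cost = cost
        self.grad_fn = grad_fn
        self.stepsize = stepsize
        # A stateful optimizer could also cache data from previous steps
        # here, e.g. a metric tensor or BFGS curvature estimates.

    def step(self, params):
        # The hot loop only performs the actual update.
        return params - self.stepsize * self.grad_fn(params)

    def for_cost(self, new_cost, new_grad_fn):
        """Return a copy of this optimizer bound to a new cost function."""
        new = deepcopy(self)
        new.cost = new_cost
        new.grad_fn = new_grad_fn
        # Any cached per-cost state would need to be reset here as well.
        return new
```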
Implementation
Move validation steps and the cost function input from `step`/`step_and_cost` to the `__init__` of the optimizers.
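In terms of the user-facing call pattern, the change would look roughly like this. The first half uses PennyLane's existing interface; `StatefulGradientDescent` is the hypothetical class from the sketch above:

```python
import pennylane as qml
from pennylane import numpy as np


def cost(params):
    return np.sum(params ** 2)


params = np.array([0.5, -0.3], requires_grad=True)

# Current interface: the cost function is passed to every step call,
# so validation/preprocessing is repeated at each iteration.
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(100):
    params = opt.step(cost, params)

# Proposed interface (hypothetical): the cost function is bound and
# validated once, and step only performs the update.
opt = StatefulGradientDescent(cost, qml.grad(cost), stepsize=0.1)
for _ in range(100):
    params = opt.step(params)
```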
How important would you say this feature is?
1: Not important. Would be nice to have.
Additional information
Happy to work on this.

Comments
Yes, exactly. Accounting for the fact that successive steps should update the latest parameters, change the last line to
Note that if we give a `num_steps` hyperparameter to `GradientDescentOptimizer`, we also could do …, making a full optimization readily available without users having to write a loop.
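For illustration, such a `num_steps` hyperparameter might look like this in the hypothetical stateful interface sketched above; the `optimize` method name is likewise an assumption:

```python
class StatefulGradientDescent:
    """Hypothetical stateful optimizer with a built-in optimization loop."""

    def __init__(self, cost, grad_fn, stepsize=0.01, num_steps=100):
        self.cost = cost
        self.grad_fn = grad_fn
        self.stepsize = stepsize
        self.num_steps = num_steps  # hypothetical new hyperparameter

    def step(self, params):
        return params - self.stepsize * self.grad_fn(params)

    def optimize(self, params):
        # Full optimization in a single call; users write no explicit loop.
        for _ in range(self.num_steps):
            params = self.step(params)
        return params
```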
Great!