Regression in qml.gradients.param_shift_hessian() [BUG]
See original GitHub issue

Expected behavior
The expected behavior is that the Hessian is computed correctly, which is what pennylane == 0.23.0 does.
Actual behavior
From pennylane 0.24.0 onwards (the problem is still present on the current master), something goes wrong when param_shift_hessian(), after being called with argnum=None, invokes _process_argnum(): https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/gradients/parameter_shift_hessian.py#L463
The latter function
https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/gradients/parameter_shift_hessian.py#L38-L63
sets argnum = tape.trainable_params and then, surprisingly, raises an exception because qml.math.max(argnum) >= tape.num_params evaluates to True.
I have not yet managed to cook up a minimal example, but it seems to only happen for some QNodes and work for others.
Regardless, I do not understand how qml.math.max(tape.trainable_params) can ever be greater than or equal to tape.num_params, so there is clearly a bug in PennyLane somewhere.
Any help, thoughts or suggestions are welcome. I will try to provide more information.
Additional information
No response
Source code
No response
Tracebacks
No response
System information
Name: PennyLane
Version: 0.25.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: None
Author-email: None
License: Apache License 2.0
Location: /home/cvjjm/src/covqcstack/qcware/pennylane
Requires: numpy, scipy, networkx, retworkx, autograd, toml, appdirs, semantic-version, autoray, cachetools, pennylane-lightning
Required-by: pytket-pennylane, PennyLane-Qchem, PennyLane-Lightning, covvqetools
Platform info: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Python version: 3.8.8
Numpy version: 1.20.1
Scipy version: 1.8.0
Installed devices:
- default.gaussian (PennyLane-0.25.0.dev0)
- default.mixed (PennyLane-0.25.0.dev0)
- default.qubit (PennyLane-0.25.0.dev0)
- default.qubit.autograd (PennyLane-0.25.0.dev0)
- default.qubit.jax (PennyLane-0.25.0.dev0)
- default.qubit.tf (PennyLane-0.25.0.dev0)
- default.qubit.torch (PennyLane-0.25.0.dev0)
- pytket.pytketdevice (pytket-pennylane-0.1.0)
- lightning.qubit (PennyLane-Lightning-0.24.0)
Existing GitHub issues
- I have searched existing GitHub issues to make sure the issue does not already exist.
Issue Analytics
- Created a year ago
- Comments:6 (5 by maintainers)

The root cause seems to be that, as the docstring says https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/tape/tape.py#L1283-L1286
num_params is really the number of trainable parameters. Of course, there can be non-trainable parameters before the trainable ones, so the test if qml.math.max(argnum) >= tape.num_params: in line 57 above is wrong. It looks right because the name num_params suggests that it returns the total number of parameters, but that is not the case.
Here is a minimal example that works with 0.23.0 and breaks with 0.24.0:
output:
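The minimal example did not survive here, but the flawed check can be illustrated with a plain-Python sketch (the concrete values below are hypothetical): trainable_params holds indices into *all* gate parameters, while num_params counts only the trainable ones, so the two are not comparable.

```python
# Sketch of the faulty validation in _process_argnum (hypothetical values).
# trainable_params stores indices into all gate parameters; when
# non-trainable parameters precede the trainable ones, those indices can
# exceed the count of trainable parameters.
trainable_params = {2, 3}           # e.g. params 0 and 1 are non-trainable
num_params = len(trainable_params)  # tape.num_params counts trainable only -> 2

argnum = trainable_params           # the argnum=None default in 0.24.0

# The bound check from parameter_shift_hessian.py, reproduced in plain Python:
raises = max(argnum) >= num_params
print(raises)  # True: the perfectly valid default argnum is wrongly rejected
```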
Path 1 and the proposed fix by @dwierichs look perfectly fine to me.
In the future one might consider allowing argnum to be as large as the total number of parameters and then computing the full Hessian, but I agree that the default for argnum=None should be as described by @dwierichs above.
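As a sketch of the direction such a fix could take (hypothetical, not the actual patch): let the argnum=None default enumerate positions among the trainable parameters rather than reuse the raw gate-parameter indices, so that the bound check against num_params becomes consistent.

```python
# Hypothetical sketch: default argnum as positions 0..n-1 over the
# trainable parameters, instead of the raw indices in trainable_params.
trainable_params = {2, 3}           # raw gate-parameter indices
num_params = len(trainable_params)  # 2

argnum = list(range(num_params))    # [0, 1] -- positional default

# The same bound check now passes for the default case:
print(max(argnum) >= num_params)  # False: no spurious exception
```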