
Regression in qml.gradients.param_shift_hessian() [BUG]


Expected behavior

The expected behavior is that the Hessian is computed correctly, which is what pennylane == 0.23.0 does.

Actual behavior

From PennyLane 0.24.0 onwards (the problem is still present in the current master), something goes wrong when param_shift_hessian(), called with argnum=None, calls _process_argnum() at https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/gradients/parameter_shift_hessian.py#L463. The latter function, https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/gradients/parameter_shift_hessian.py#L38-L63, sets argnum = tape.trainable_params and then, surprisingly, raises an exception because qml.math.max(argnum) >= tape.num_params evaluates to True.

I have not yet managed to cook up a minimal example, but it seems to happen only for some QNodes and work for others.

Regardless, I do not understand how qml.math.max(tape.trainable_params) can ever be greater than or equal to tape.num_params, so clearly there is some bug in PennyLane.
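
To make this concrete, here is a rough sketch (assuming the 0.24-era tape API; it only illustrates the two attributes and is not a reproduction of the failure itself) of how the maximum of tape.trainable_params can reach tape.num_params:

import pennylane as qml

# Build a tape with three gate parameters.
with qml.tape.QuantumTape() as tape:
    qml.RY(0.1, wires=0)   # parameter index 0
    qml.RX(0.2, wires=0)   # parameter index 1
    qml.RZ(0.3, wires=0)   # parameter index 2
    qml.expval(qml.PauliZ(0))

# Mark only the last two parameters as trainable. trainable_params holds
# indices into *all* gate parameters, while num_params counts only the
# trainable ones.
tape.trainable_params = [1, 2]
print(tape.num_params)                                          # 2
print(qml.math.max(tape.trainable_params) >= tape.num_params)   # True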

Any help, thoughts or suggestions are welcome. I will try to provide more information.

Additional information

No response

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.25.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: None
Author-email: None
License: Apache License 2.0
Location: /home/cvjjm/src/covqcstack/qcware/pennylane
Requires: numpy, scipy, networkx, retworkx, autograd, toml, appdirs, semantic-version, autoray, cachetools, pennylane-lightning
Required-by: pytket-pennylane, PennyLane-Qchem, PennyLane-Lightning, covvqetools

Platform info:           Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Python version:          3.8.8
Numpy version:           1.20.1
Scipy version:           1.8.0
Installed devices:
- default.gaussian (PennyLane-0.25.0.dev0)
- default.mixed (PennyLane-0.25.0.dev0)
- default.qubit (PennyLane-0.25.0.dev0)
- default.qubit.autograd (PennyLane-0.25.0.dev0)
- default.qubit.jax (PennyLane-0.25.0.dev0)
- default.qubit.tf (PennyLane-0.25.0.dev0)
- default.qubit.torch (PennyLane-0.25.0.dev0)
- pytket.pytketdevice (pytket-pennylane-0.1.0)
- lightning.qubit (PennyLane-Lightning-0.24.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.

cvjjm commented, Jul 14, 2022

The root cause seems to be that, as the doc string at https://github.com/PennyLaneAI/pennylane/blob/6019194744d8357e382e5019265ebc70db37d87f/pennylane/tape/tape.py#L1283-L1286 says, num_params is really the number of trainable parameters; of course there can be non-trainable parameters before the trainable ones, so the test if qml.math.max(argnum) >= tape.num_params: in line 57 above is wrong. It looks right because the name num_params suggests that it returns the total number of parameters, but this is not the case.

Here is a minimal example that works with 0.23.0 and breaks with 0.24.0:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def qnode(params):
    qml.RY(0.1, wires=0)          # non-trainable parameter at tape index 0
    qml.RX(params[0], wires=0)    # trainable parameter at tape index 1
    qml.RZ(params[1], wires=0)    # trainable parameter at tape index 2
    return qml.expval(qml.PauliZ(0))

hess = qml.gradients.param_shift_hessian(qnode)

params = np.array([0.2, 0.3])

# Construct the tape, then show that max(trainable_params) equals num_params,
# which is exactly the condition _process_argnum rejects.
qnode(params)
print(qnode.tape.trainable_params, qnode.tape.num_params, qml.math.max(qnode.tape.trainable_params) >= qnode.tape.num_params)

print(hess(params))

Output:

[1, 2] 2 True
Traceback (most recent call last):
  File "pl_test13.py", line 20, in <module>
    print(hess(params))
  File ".../pennylane/pennylane/gradients/hessian_transform.py", line 125, in hessian_wrapper
    qhess = _wrapper(*args, **kwargs)
  File ".../pennylane/pennylane/transforms/batch_transform.py", line 289, in _wrapper
    tapes, processing_fn = self.construct(qnode.qtape, *targs, **tkwargs)
  File ".../pennylane/pennylane/transforms/batch_transform.py", line 403, in construct
    tapes, processing_fn = self.transform_fn(tape, *args, **kwargs)
  File ".../pennylane/pennylane/gradients/parameter_shift_hessian.py", line 463, in param_shift_hessian
    bool_argnum = _process_argnum(argnum, tape)
  File ".../pennylane/pennylane/gradients/parameter_shift_hessian.py", line 58, in _process_argnum
    raise ValueError(
ValueError: The index 2 exceeds the number of trainable tape parameters (2).

cvjjm commented, Jul 15, 2022

Path 1 and the proposed fix of @dwierichs look perfectly fine to me.

In the future, one might consider allowing argnum to be as large as the total number of parameters and then computing the full Hessian, but I agree that the default for argnum=None should be as described by @dwierichs above.
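
For illustration, here is a rough sketch of how the argnum=None default could be made consistent with the bounds check, by indexing into the trainable parameters rather than using their absolute tape indices. This is only my reading of the discussion; it is not necessarily the fix that was merged into PennyLane, and the function name is hypothetical:

import pennylane as qml

def _process_argnum_sketch(argnum, tape):
    # Sketch only: treat argnum as indices into the trainable parameters.
    if argnum is None:
        # Default: differentiate with respect to all trainable parameters,
        # referred to by their positions 0, ..., num_params - 1.
        argnum = list(range(tape.num_params))
    if qml.math.max(argnum) >= tape.num_params:
        raise ValueError(
            f"The index {qml.math.max(argnum)} exceeds the number of "
            f"trainable tape parameters ({tape.num_params})."
        )
    return argnum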
