
Classical post-processing prior to computing expval

See original GitHub issue

Feature details

For good reason, the autograd features of PennyLane don’t work with circuits that conclude with qml.sample(). However, there are many circumstances in which one would want to perform some form of classical post-processing, such as denoising via a neural network or symmetry post-selection, prior to computing expectation values.

Presently, there seems to be no way to perform classical post-processing on measurement samples while still using PennyLane’s built-in optimization libraries. Ideally, there would be an abstraction that lets one evaluate a custom loss function (i.e., a post-processed expval) and use its result to optimize the underlying parameterized quantum circuit from which the samples originated.

In particular, I am interested in the case where a neural network is trained on the sample data and then used to compute the expectation value. This seems like an important abstraction for near-term devices, where noise/error-mitigation schemes can significantly improve algorithm performance.
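As a concrete example of the kind of post-processing meant here, symmetry post-selection on raw samples might look like the following plain-NumPy sketch. The function name and the (shots, wires) bit-array layout are assumptions on my part, chosen to match what qml.sample() returns for computational-basis measurements:

```python
import numpy as np

def postselected_expval(samples):
    """Symmetry post-selection: keep only shots with even total parity,
    then estimate <Z_0> from the surviving samples.

    `samples` is assumed to be a (shots, wires) array of 0/1 bits, as
    returned by qml.sample() on a computational-basis measurement.
    """
    parity = samples.sum(axis=1) % 2   # total parity of each shot
    kept = samples[parity == 0]        # discard symmetry-violating shots
    if len(kept) == 0:
        raise ValueError("all shots were discarded by post-selection")
    z0 = 1 - 2 * kept[:, 0]            # map bit 0/1 -> eigenvalue +1/-1
    return z0.mean()

# four shots on two wires; [0, 1] and [1, 0] have odd parity and are dropped
samples = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(postselected_expval(samples))  # kept shots [0,0] and [1,1] -> 0.0
```

Nothing in this pipeline is differentiable, which is exactly the problem: the post-selection mask depends discontinuously on the samples, so an autodiff-based optimizer cannot see through it.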

Implementation

It seems like the implementation of expval is handled entirely by the Device object. As a first thought, we could write a custom_loss method in Device that takes a user-specified lambda (the custom loss function) together with the arguments to pass to it. But I am not sure whether this would work.
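A minimal sketch of that idea, with a stand-in Device class — the name custom_loss and everything about its signature are hypothetical, not PennyLane’s actual API:

```python
import numpy as np

class Device:
    """Stand-in for PennyLane's Device class; `custom_loss` below is the
    hypothetical hook proposed above, not a real PennyLane method."""

    def sample(self):
        # placeholder for real device samples: 100 shots on 2 wires
        rng = np.random.default_rng(0)
        return rng.integers(0, 2, size=(100, 2))

    def custom_loss(self, loss_fn, *args):
        # run the circuit, then hand the raw samples (plus any extra
        # arguments) to the user-specified loss function
        samples = self.sample()
        return loss_fn(samples, *args)

dev = Device()
# user-defined post-processing: fraction of shots in the |00> state
loss = dev.custom_loss(lambda s: float(np.mean(s.sum(axis=1) == 0)))
```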

I’d be interested in learning more about the framework and implementing this as a feature if this is a feasible thing to do.

How important would you say this feature is?

3: Very important! Blocking work.

Additional information

No response

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 8 (5 by maintainers)

Top GitHub Comments

3 reactions
Jaybsoni commented, Oct 8, 2021

Even using qml.probs, I would still need to train an ML model before the expectation value is computed. So is the only problem with my approach that I’m using qml.sample? How does the circuit optimization work when the loss isn’t a differentiable function? If the cost function is computed after qml.probs, how does the optimizer know what “direction” to step in?

One idea you could use to temporarily get around this issue is to include the QNode execution inside your loss function; then you could (manually or otherwise) compute the gradient of the loss function with respect to the weights for your ML application. This could look something like the following:

import pennylane as qml
import scipy.optimize as opt
import numpy as np

num_shots = 5
dev = qml.device("default.qubit", wires=2, shots=num_shots)


@qml.qnode(dev)
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RX(weights[0], wires=1)
    qml.RY(weights[1], wires=0)
    qml.RY(weights[1], wires=1)
    return qml.probs(wires=[0, 1])


def histogram(weights):
    # scale the probabilities up to expected counts per basis state
    return circuit(weights) * num_shots


def loss_function(weights):
    result_hist = np.array(histogram(weights))
    # target histogram: all shots in the |11> state
    opt_hist = np.array([0., 0., 0., num_shots])

    # sum of squared differences between observed and target counts
    diff_squares = np.sum((result_hist - opt_hist)**2)
    return diff_squares


def main():
    init_weights = np.random.uniform(0, 2*np.pi, 2)
    print("Init Params: {}".format(init_weights))
    print("Init loss value: {}".format(loss_function(init_weights)))

    res = opt.minimize(loss_function, init_weights)
    print("Optim Params: {}".format(res.x))
    print("Optim Loss value: {}".format(loss_function(res.x)))

    # QNode.draw() was removed in later PennyLane releases; qml.draw is
    # the current equivalent
    print(qml.draw(circuit)(res.x))
    return


if __name__ == "__main__":
    main()
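On the question above of how the optimizer knows which “direction” to step: by default, scipy.optimize.minimize estimates the gradient of a black-box loss via finite differences. A minimal NumPy sketch of that estimate follows (with a finite number of shots this estimate becomes noisy, which is one reason to prefer exact qml.probs here):

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-3):
    """Central-difference gradient estimate -- roughly what a
    gradient-based optimizer falls back on when it only sees the loss
    as a black-box function of the parameters."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

# sanity check on a known function: the gradient of sum(w^2) is 2w
g = finite_diff_grad(lambda w: float((w ** 2).sum()), np.array([1.0, -2.0]))
print(g)  # close to [2.0, -4.0]
```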

I think it would be cool if an ML post-processing model could be included in the autodiff stack. It seems like @albi3ro’s idea would lend itself better to this sort of use case. How difficult would it be to include this?

I would be interested in helping to develop a post-processing stack. I have been very impressed with the PennyLane library, and I think this use case would be an extremely useful abstraction for near-term algorithm development. I’m not sure any other framework has classical ML integration in a post-processing layer.
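To illustrate what including an ML post-processing model in the autodiff stack would mean, here is a toy end-to-end gradient written out by hand: an analytic stand-in for qml.probs of RX(theta)|0> feeds a linear “model”, and the parameter gradient comes from the chain rule combined with the parameter-shift rule. Every function name here is illustrative; none of this is PennyLane’s actual API:

```python
import numpy as np

def probs(theta):
    # analytic stand-in for a QNode returning qml.probs for RX(theta)|0>
    return np.array([np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2])

def model(p, w):
    # trivial "classical model": a linear layer on the probabilities
    return float(w @ p)

def grad_theta(theta, w):
    # chain rule: dL/dtheta = (dL/dp) . (dp/dtheta), where dp/dtheta
    # follows from the parameter-shift rule and dL/dp = w for a linear layer
    dp = (probs(theta + np.pi / 2) - probs(theta - np.pi / 2)) / 2
    return float(w @ dp)

theta, w = 0.7, np.array([1.0, -1.0])
g = grad_theta(theta, w)
# here model(probs(theta), w) = cos(theta), so the exact gradient is -sin(theta)
```

An autodiff framework would perform exactly this composition automatically if the post-processing layer were part of the differentiable graph.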

It’s hard to estimate how difficult such an implementation would be because the scope is still unclear. For example, the change suggested by @albi3ro would require a refactor of the MeasurementProcess class which in itself would be multiple PRs. There are certain internal design decisions which need to be made to facilitate this functionality in a ‘clean’ manner into PennyLane.

That being said, we really appreciate the enthusiasm to contribute! This is a great feature request, but it will likely take some time to address. In the meantime you could create a community demo which presents the use case. Once we decide how we want to implement it, we will definitely reach out to you to help contribute!

0 reactions
CatalinaAlbornoz commented, Oct 12, 2021

Hi @kharazity, it’s great that you’re participating in Hacktoberfest! You can find other issues here and remember that you can contribute to other participating projects too.

Thank you for contributing to PennyLane!
