Classical post-processing prior to computing expval
Feature details
For good reason, the autograd features of PennyLane don’t work with circuits that conclude with qml.sample(). However, there are many circumstances in which one would want to do some form of classical post-processing, such as denoising via a neural network or symmetry post-selection, prior to computing expectation values.
Presently, there seems to be no way to perform classical post-processing on measurement samples while still using PennyLane’s built-in optimization libraries. Ideally, there would be an abstraction that lets one evaluate a custom loss function (i.e., a post-processed expval) and use the result of that loss function to optimize the underlying parameterized quantum circuit from which the samples originated.
In particular, I am interested in the case where a neural network is trained on the sample data and is used to compute the expectation value. This seems like an important abstraction for near-term devices, where noise/error mitigation schemes can significantly improve algorithm performance.
Implementation
It seems that the implementation of expval is handled entirely by the Device object. As a first thought, we could write a custom_loss method on Device that takes a lambda corresponding to the user-specified custom loss function, along with a set of arguments for that lambda. But I am not sure whether this would work.
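To make the proposed interface concrete, here is a purely hypothetical sketch: PennyLane’s Device class has no custom_loss method today, and the toy class below (with fake random samples) is a stand-in used only to illustrate the shape of the API being suggested.

```python
import numpy as np

# Hypothetical sketch only: PennyLane's Device class has no custom_loss
# method today; this toy class is a stand-in, not real PennyLane code.
class PostProcessingDevice:
    def __init__(self, shots=100, wires=2, seed=0):
        self._rng = np.random.default_rng(seed)
        self.shots = shots
        self.wires = wires

    def sample(self):
        # Fake +/-1 measurement samples standing in for real circuit output.
        bits = self._rng.integers(0, 2, size=(self.shots, self.wires))
        return 1 - 2 * bits

    def custom_loss(self, fn, *args):
        # Apply a user-supplied post-processing function to the raw samples.
        return fn(self.sample(), *args)

dev = PostProcessingDevice()
# Example: estimate <Z_0> after a trivial "post-processing" step (a mean).
loss_value = dev.custom_loss(lambda samples: float(samples[:, 0].mean()))
print(loss_value)
```

The design question this raises is where differentiation would hook in: the user-supplied function sees raw samples, so gradients with respect to circuit parameters would still have to be handled elsewhere.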
I’d be interested in learning more about the framework and implementing this as a feature if this is a feasible thing to do.
How important would you say this feature is?
3: Very important! Blocking work.
Issue Analytics
- State:
- Created 2 years ago
- Comments: 8 (5 by maintainers)

One way to temporarily get around this issue would be to include the QNode execution inside your loss function; you could then (manually or otherwise) compute the gradient of the loss with respect to the weights for your ML application. This could look something like this:
It’s hard to estimate how difficult such an implementation would be because the scope is still unclear. For example, the change suggested by @albi3ro would require a refactor of the MeasurementProcess class, which would itself span multiple PRs. Certain internal design decisions need to be made in order to integrate this functionality into PennyLane in a clean manner.
That being said, we really appreciate the enthusiasm to contribute! This is a great feature request, but it will likely take some time to address. In the meantime, you could create a community demo that presents the use case. Once we decide how we want to implement it, we will definitely reach out to you to help contribute!
Hi @kharazity, it’s great that you’re participating in Hacktoberfest! You can find other issues here and remember that you can contribute to other participating projects too.
Thank you for contributing to PennyLane!