Gradient framework does not bind parameters correctly
Environment
- Qiskit Terra version: main
- Python version: 3.9.12
- Operating system: macOS 12.3.1
What is happening?
I noticed that QuantumCircuit.decompose ignores some parameter expressions when I modify them via Instruction.params. I'm not sure that this is a valid way to update parameter expressions, but it is used in the gradient framework:
https://github.com/Qiskit/qiskit-terra/blob/b510d6a65d0c64b9543b4d31c2789b07a4cd75c4/qiskit/opflow/gradients/circuit_gradients/param_shift.py#L228
How can we reproduce the issue?
In the following example, decompose generates Ry(θ[0]) even though the first parameter of RealAmplitudes was updated to θ[0] + 1.
```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes

qc = RealAmplitudes(num_qubits=2, reps=1)
qc.rx(2 * qc.parameters[0], 0)

# Mutate the first parameter of the RealAmplitudes instruction in place,
# the way the gradient framework does.
gate = qc[0][0]  # the RealAmplitudes Instruction
param = qc.parameters[0]
gate.params[0] = param + 1

print('original')
print(qc)
print('decompose')
print(qc.decompose())
```
output
```
original
     ┌──────────────────────────────────────────┐┌────────────┐
q_0: ┤0                                         ├┤ Rx(2*θ[0]) ├
     │  RealAmplitudes(θ[0] + 1,θ[1],θ[2],θ[3]) │└────────────┘
q_1: ┤1                                         ├──────────────
     └──────────────────────────────────────────┘
decompose
     ┌──────────┐     ┌──────────┐┌─────────────┐
q_0: ┤ Ry(θ[0]) ├──■──┤ Ry(θ[2]) ├┤ R(2*θ[0],0) ├
     ├──────────┤┌─┴─┐├──────────┤└─────────────┘
q_1: ┤ Ry(θ[1]) ├┤ X ├┤ Ry(θ[3]) ├───────────────
     └──────────┘└───┘└──────────┘
```
What should happen?
The expected result is as follows.
```
original
     ┌──────────────────────────────────────────┐┌────────────┐
q_0: ┤0                                         ├┤ Rx(2*θ[0]) ├
     │  RealAmplitudes(θ[0] + 1,θ[1],θ[2],θ[3]) │└────────────┘
q_1: ┤1                                         ├──────────────
     └──────────────────────────────────────────┘
decompose
     ┌──────────────┐     ┌──────────┐┌──────────────┐
q_0: ┤ Ry(θ[0] + 1) ├──■──┤ Ry(θ[2]) ├┤ R(2*θ[0], 0) ├
     └─┬──────────┬─┘┌─┴─┐├──────────┤└──────────────┘
q_1: ──┤ Ry(θ[1]) ├──┤ X ├┤ Ry(θ[3]) ├────────────────
       └──────────┘  └───┘└──────────┘
```
Any suggestions?
No idea. If this is the wrong way to update parameter expressions, could you let me know the right way? I want to update the gradient framework accordingly.
Issue Analytics
- Created: a year ago
- Comments: 9 (9 by maintainers)
Top GitHub Comments
Alright, I'll reopen this then.
Sounds great!