
Gradient framework does not bind parameters correctly


Environment

  • Qiskit Terra version: main
  • Python version: 3.9.12
  • Operating system: macOS 12.3.1

What is happening?

I noticed that QuantumCircuit.decompose ignores some parameter expressions when I modify them via Instruction.params. I'm not sure this is a valid way to update parameter expressions, but it is what the gradient framework does: https://github.com/Qiskit/qiskit-terra/blob/b510d6a65d0c64b9543b4d31c2789b07a4cd75c4/qiskit/opflow/gradients/circuit_gradients/param_shift.py#L228

How can we reproduce the issue?

In the following example, decompose generates Ry(θ[0]) even though the first parameter of RealAmplitudes has been updated to θ[0] + 1.

from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes

qc = RealAmplitudes(num_qubits=2, reps=1)
qc.rx(2 * qc.parameters[0], 0)

# Mutate the first angle of the RealAmplitudes instruction in place,
# the same way the gradient framework does.
gate = qc.data[0][0]
param = qc.parameters[0]
gate.params[0] = param + 1

print('original')
print(qc)
print('decompose')
print(qc.decompose())

Output

original
     ┌──────────────────────────────────────────┐┌────────────┐
q_0: ┤0                                         ├┤ Rx(2*θ[0]) ├
     │  RealAmplitudes(θ[0] + 1,θ[1],θ[2],θ[3]) │└────────────┘
q_1: ┤1                                         ├──────────────
     └──────────────────────────────────────────┘
decompose
     ┌──────────┐     ┌──────────┐┌─────────────┐
q_0: ┤ Ry(θ[0]) ├──■──┤ Ry(θ[2]) ├┤ R(2*θ[0],0) ├
     ├──────────┤┌─┴─┐├──────────┤└─────────────┘
q_1: ┤ Ry(θ[1]) ├┤ X ├┤ Ry(θ[3]) ├───────────────
     └──────────┘└───┘└──────────┘

What should happen?

The expected result is as follows.

original
     ┌──────────────────────────────────────────┐┌────────────┐
q_0: ┤0                                         ├┤ Rx(2*θ[0]) ├
     │  RealAmplitudes(θ[0] + 1,θ[1],θ[2],θ[3]) │└────────────┘
q_1: ┤1                                         ├──────────────
     └──────────────────────────────────────────┘
decompose
     ┌──────────────┐     ┌──────────┐┌─────────────┐
q_0: ┤ Ry(θ[0] + 1) ├──■──┤ Ry(θ[2]) ├┤ R(2*θ[0],0) ├
     └─┬──────────┬─┘┌─┴─┐├──────────┤└─────────────┘
q_1: ──┤ Ry(θ[1]) ├──┤ X ├┤ Ry(θ[3]) ├───────────────
       └──────────┘  └───┘└──────────┘

Any suggestions?

No idea.

If this is the wrong way to update parameter expressions, could you let me know the right way? I would like to update the gradient framework accordingly.

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 9 (9 by maintainers)

Top GitHub Comments

1 reaction
t-imamichi commented, Apr 6, 2022

Alright. I'll reopen this, then.

0 reactions
t-imamichi commented, Apr 6, 2022

Sounds great!
