`run_and_measure` with `define_noisy_gate` gives biased results

Using `define_noisy_gate` to simulate a channel, I get the expected results if I use `run`, but not if I use `run_and_measure`. More specifically, if I run:
```python
%matplotlib inline
import matplotlib.pyplot as plt
from pyquil import get_qc, Program
import numpy as np


def kraus_ops_bit_flip(prob):
    # define flip (X) and no-flip (I) Kraus operators
    I_ = np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]])
    X_ = np.sqrt(prob) * np.array([[0, 1], [1, 0]])
    return [I_, X_]


def random_unitary(n):
    # draw a complex matrix from the Ginibre ensemble
    z = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    # QR-decompose this complex matrix
    q, r = np.linalg.qr(z)
    # make the decomposition unique
    d = np.diagonal(r)
    l = np.diag(d) / np.abs(d)
    return np.matmul(q, l)


# pick a flip probability
prob = 0.2

# noisy program
p = Program()
p.defgate("DummyGate", random_unitary(2))
p += ("DummyGate", 0)
p.define_noisy_gate("DummyGate", [0], kraus_ops_bit_flip(prob))

qc = get_qc('1q-qvm')
num_expts = 1000
num_shots = 1000

results = np.zeros(num_expts)
for i in range(num_expts):
    results_tmp = qc.run_and_measure(p, trials=num_shots)
    results[i] = np.count_nonzero(results_tmp[0]) / num_shots

plt.figure(figsize=(10, 8))
plt.hist(results, bins=50)
plt.show()
```
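As a quick sanity check on the two helpers above (this check is my addition, not part of the original report): the Kraus operators should satisfy the completeness relation Σₖ Kₖ†Kₖ = I, and `random_unitary` should return a matrix with U†U = I, so the bias cannot be blamed on the channel or gate definitions themselves:

```python
import numpy as np


def kraus_ops_bit_flip(prob):
    # flip (X) and no-flip (I) Kraus operators, as in the snippet above
    I_ = np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]])
    X_ = np.sqrt(prob) * np.array([[0, 1], [1, 0]])
    return [I_, X_]


def random_unitary(n):
    # Ginibre draw + unique QR decomposition, as in the snippet above
    z = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    l = np.diag(d) / np.abs(d)
    return np.matmul(q, l)


# completeness: sum_k K_k^dagger K_k must equal the identity
ks = kraus_ops_bit_flip(0.2)
assert np.allclose(sum(k.conj().T @ k for k in ks), np.eye(2))

# unitarity: U^dagger U must equal the identity
u = random_unitary(2)
assert np.allclose(u.conj().T @ u, np.eye(2))
```

Both assertions pass, so the definitions fed to `defgate` and `define_noisy_gate` are valid.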
I get the following plot, which should really be centered at 0.2 but is instead centered on some other value:
If I instead run the following code using `run`:
```python
%matplotlib inline
import matplotlib.pyplot as plt
from pyquil import get_qc, Program
from pyquil.gates import MEASURE
import numpy as np


def kraus_ops_bit_flip(prob):
    # define flip (X) and no-flip (I) Kraus operators
    I_ = np.sqrt(1 - prob) * np.array([[1, 0], [0, 1]])
    X_ = np.sqrt(prob) * np.array([[0, 1], [1, 0]])
    return [I_, X_]


def random_unitary(n):
    # draw a complex matrix from the Ginibre ensemble
    z = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    # QR-decompose this complex matrix
    q, r = np.linalg.qr(z)
    # make the decomposition unique
    d = np.diagonal(r)
    l = np.diag(d) / np.abs(d)
    return np.matmul(q, l)


# pick a flip probability
prob = 0.2

# noisy program
p = Program()
p.defgate("DummyGate", random_unitary(2))
p += ("DummyGate", 0)
p.define_noisy_gate("DummyGate", [0], kraus_ops_bit_flip(prob))
ro = p.declare('ro')
p += MEASURE(0, ro)

qc = get_qc('1q-qvm')
num_expts = 1000
num_shots = 1000
p.wrap_in_numshots_loop(num_shots)

results = np.zeros(num_expts)
for i in range(num_expts):
    results_tmp = qc.run(p)
    results[i] = np.count_nonzero(results_tmp) / num_shots

plt.figure(figsize=(10, 8))
plt.hist(results, bins=50)
plt.show()
```
I get the output image centered on the correct value.
Issue Analytics
- State:
- Created 5 years ago
- Comments: 8 (8 by maintainers)
Top GitHub Comments
I’m not sure this is what’s going wrong, but here’s my guess.
Because the (pure state mode) QVM maintains a pure state, it can’t hold a single true noisy distribution. To get around this, it uses the noise model to select from a family of pure states in such a way that pulling a random pure state from that family + pulling a random bit string from the pure state’s distribution joins to give the correct overall noisy distribution.
`run` re-runs both selection steps each time it pulls a bit string (and so gives the desired result), whereas `run_and_measure` chooses the pure state only once and then repeatedly samples from it, which breaks this guarantee.

This has been a source of confusion for so long that I don't understand why we don't throw an error if someone tries to use noisy `run_and_measure`. I don't see how it will ever give a useful answer.

You could probably detect this behavior here by asking whether the center of the `run_and_measure` distribution changes appreciably as you repeatedly run it. If so, I'll bet this is the explanation.
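The hypothesis above can be checked without a QVM at all. The sketch below (my own illustration, not pyQuil code) models the experiment abstractly: a fixed gate leaves the qubit with some probability `q0` of reading 1, and a bit flip (probability `prob`) swaps that to `q1 = 1 - q0`. It then compares resampling the Kraus branch on every shot, as `run` effectively does, with choosing the branch once per batch and reusing it for all shots, as `run_and_measure` is suspected to do. The per-shot version concentrates every batch mean near the true noisy probability; the once-per-batch version scatters batch means out to `q0` or `q1`.

```python
import numpy as np

rng = np.random.default_rng(0)
prob = 0.2            # bit-flip probability of the channel
q0 = 0.1              # P(measure 1) for the un-flipped pure state (assumed value)
q1 = 1 - q0           # the bit flip swaps |0> and |1> amplitudes
num_expts, num_shots = 1000, 1000

# true noisy probability of measuring 1
p_true = (1 - prob) * q0 + prob * q1


def run_like():
    # "run"-like: a fresh Kraus branch is sampled for every shot
    flips = rng.random(num_shots) < prob
    q = np.where(flips, q1, q0)
    return (rng.random(num_shots) < q).mean()


def ram_like():
    # "run_and_measure"-like: one branch is chosen, then all shots reuse it
    q = q1 if rng.random() < prob else q0
    return (rng.random(num_shots) < q).mean()


run_means = np.array([run_like() for _ in range(num_expts)])
ram_means = np.array([ram_like() for _ in range(num_expts)])

# run: batch means cluster tightly around p_true
# run_and_measure: batch means are bimodal, near q0 or q1
print(p_true, run_means.std(), ram_means.std())
```

Both estimators are unbiased on average over many batches, but any single `run_and_measure` batch lands near `q0` or `q1` rather than `p_true`, which matches the shifted histogram in the report.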
It is a horrible idea to have the QVM and the QPU use different abstractions. All generic code in pyQuil that "consumes" a quantum computer should treat both the same; otherwise we run into problems like this.