numpy.linalg.eig decomposes unitary matrix to non-unitary eigenvector matrix
Every complex unitary matrix is a normal matrix, so it follows from the spectral theorem that every unitary matrix should be diagonalizable by a unitary matrix:
# for U unitary
d, V = np.linalg.eig(U)
np.testing.assert_allclose(V @ np.diag(d) @ V.conj().T, U)
However, for certain classes of unitaries the returned V is not unitary, causing the assertion to fail. Note that np.testing.assert_allclose(V @ np.diag(d) @ np.linalg.inv(V), U)
still works. However, I rely on the generated matrix being unitary. Currently in my prototype I'm using closest_unitary
from Michael Goerz’s blogpost to make V
unitary, and that seems to still work for me.
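For reference, the repair I'm using is the standard SVD (polar) projection onto the unitary group; this is a sketch of the usual construction, not a verbatim copy of the blogpost:

```python
import numpy as np

def closest_unitary(A):
    """Return the unitary matrix closest to A in the Frobenius norm.

    If A = W @ diag(s) @ Vh is the SVD, the closest unitary is W @ Vh
    (the unitary factor of the polar decomposition of A).
    """
    W, _, Vh = np.linalg.svd(A)
    return W @ Vh
```

Applied to the V returned by np.linalg.eig, this always yields an exactly unitary matrix, though how well the result still diagonalizes the input depends on how far V was from unitary to begin with.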
I would like to understand, though, whether I'm doing the right thing and things are working as intended, or whether these rounding errors can be circumvented somehow.
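One alternative worth mentioning (my suggestion, not something from the NumPy docs, and assuming SciPy is available): for a normal matrix, the complex Schur decomposition U = Z T Z† returns a Z that is unitary by construction, and T is diagonal up to rounding, so it can serve as a numerically unitary eigendecomposition:

```python
import numpy as np
from scipy.linalg import schur

def unitary_eig(U):
    """Eigendecomposition of a normal matrix via the complex Schur form.

    scipy.linalg.schur returns U = Z @ T @ Z.conj().T with Z unitary up
    to rounding; for a normal input T is numerically diagonal, so its
    diagonal holds the eigenvalues.
    """
    T, Z = schur(U, output="complex")
    return np.diagonal(T), Z
```

Unlike np.linalg.eig, the orthogonality of the returned basis here does not degrade when eigenvalues cluster; only the off-diagonal residue of T reflects the conditioning of the problem.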
Reproducing code example:
import numpy as np

u1 = np.array(
[[ 1.0000000000000000+0.0000000000000000e+00j, 0.0000000000000000+0.0000000000000000e+00j, 0.0000000000000000+0.0000000000000000e+00j, 0.0000000000000000+0.0000000000000000e+00j],
[ 0.0000000000000000+0.0000000000000000e+00j, 0.2273102773349319-9.8180175613134565e-02j, -0.1200906687385194-9.6072534990815506e-01j, -0.0357172823753281+8.6736173798840355e-19j],
[ 0.0000000000000000+0.0000000000000000e+00j, 0.2009831761518099+4.5199492504062982e-01j, -0.1562881664727747+2.3112095861637594e-02j, -0.0595517189636592-8.5252553243605544e-01j],
[ 0.0000000000000000+0.0000000000000000e+00j, -0.6773506763478407-4.8496711520170011e-01j, 0.1531920008715559-1.1898354609028339e-01j, -0.2923244362763149-4.2769674888704595e-01j]]
)
u2=np.array(
[[ 6.3616870250576751e-01+7.7154998668403085e-01j, 7.4915966813003955e-18+4.2843403189465048e-17j, -1.0670468446714549e-16+1.1538031985852377e-16j, 7.2296961594819082e-19+5.4038459512872594e-18j],
[ 8.4978997301802701e-17+1.1893895609896763e-16j, 2.2035859738533992e-01+1.1292208651931387e-01j, 6.6484970601421378e-01-7.0383935318155111e-01j, -2.2722217185744742e-02-2.7557668741074088e-02j],
[ 1.6640839446779151e-17-3.2997465454912499e-17j, -2.2087747199836227e-01+4.4261359188593208e-01j, -1.1725777733627676e-01-1.0588094072445392e-01j, 6.1988112341373613e-01-5.8829718979630630e-01j],
[ 1.0141542958775636e-17+6.1509405247093862e-17j, -5.6732929637545129e-02-8.3113080575242637e-01j, 1.8925770983029677e-01+4.2501678096757978e-02j, 1.4402176357197022e-01-4.9763020072181474e-01j]])
u1u2 = u1 @ u2.conj().T
# unitary tests
np.testing.assert_allclose(u2 @ u2.conj().T, np.eye(4), atol=1e-8, rtol=1e-5)
np.testing.assert_allclose(u1 @ u1.conj().T, np.eye(4), atol=1e-8, rtol=1e-5)
np.testing.assert_allclose(u1u2 @ u1u2.conj().T, np.eye(4), atol=1e-8, rtol=1e-5)
d, V = np.linalg.eig(u1u2)
# fails - V is not unitary!
np.testing.assert_allclose(V @ V.conj().T, np.eye(4), atol=1e-5)
# fails as V is not unitary!
np.testing.assert_allclose(V @ np.diag(d) @ V.conj().T, u1u2, atol=1e-5)
u1u2 is a diagonal matrix with all diagonal values equal; however, it also has tiny (1e-16, 1e-17) perturbations that have a big impact on the unitarity of the resulting eigenvector matrix.
Based on my experiments, it is not the diagonality itself that causes the issue, but how close the eigenvalues (the diagonal entries) are to each other.
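A minimal way to see this (my own sketch; the eigenvalue exp(0.3i), the 1e-15 perturbation scale, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# A unitary diagonal with all four eigenvalues identical (maximally clustered).
D = np.exp(0.3j) * np.eye(4)

# A tiny generic (non-normal) perturbation at the level of rounding noise.
E = 1e-15 * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

d, V = np.linalg.eig(D + E)

# D + E is unitary to machine precision, yet the eigenvectors that eig
# returns are essentially those of the perturbation E, which is generically
# non-normal, so V ends up far from unitary.
deviation = np.linalg.norm(V @ V.conj().T - np.eye(4))
```

The longer experiment below scans this effect systematically over eigenvalue spreads.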
import random
import numpy as np
import matplotlib.pyplot as plt
dim = 4
def binary_search(a, b, f, tol=1e-300):
    """
    Find, to within tolerance `tol`, the largest point in the interval
    [a, b] at which `f` is False, assuming f(a) is False and f(b) is True.
    """
    if f(a) == f(b):
        raise ValueError("Incorrect input, f(a) == f(b): {}, a={}, b={}".format(f(a), a, b))
    while np.abs(a - b) > tol:
        mid = (a + b) / 2
        if f(mid):
            b = mid
        else:
            a = mid
    return a
def sensitive(M, noise, max_err=1e-15, num_samples=100):
    """
    A unitary is considered sensitive to a given level of `noise` if more than
    90% of `num_samples` noisy samples at that level fail to satisfy the
    unitarity assumption of diagonalization. Each sample is M + E, where E is
    a random matrix whose elements are scaled by `noise`.
    """
    num_sensitive = 0
    for s in range(num_samples):
        E = (1 - np.random.rand(dim, dim) * 2 + 1.9j * np.random.rand(dim, dim)) * noise
        d, V = np.linalg.eig(M + E)
        if not np.allclose(V @ np.diag(d) @ V.conj().T, M + E, atol=max_err):
            num_sensitive += 1
    return num_sensitive > 0.9 * num_samples
print("stdev\t\t\tsensitivity")
res = 8
num_samples = 3
xs = []
ys = []
for i in range(res, -1, -1):
    for s in range(num_samples):
        # artificially create a unitary diagonal with random eigenvalues
        # chosen to have a stdev of roughly 10^(-i)
        r = lambda: np.exp(1j * 2 * np.pi * random.gauss(1, 10**(-i)))
        r1 = r()
        DD = np.diag([r(), r(), r(), r()])
        sensitivity = binary_search(0, 1, lambda noise: sensitive(DD, noise, 1e-8))
        x = np.std(np.angle(np.diagonal(DD)))
        y = sensitivity
        print("{}\t{}".format(x, y))
        xs.append(x)
        ys.append(y)
ax = plt.axes(xlabel="std dev of eigenvalues", ylabel="sensitivity")
ax.set_xlim([1e-8, 1e-1])
ax.set_ylim([1e-25, 1e-8])
plt.yscale("log")
plt.xscale("log")
plt.gca().invert_yaxis()
plt.scatter(xs, ys, alpha=0.2, s=200)
plt.show()
This results in a scatter plot of sensitivity against the standard deviation of the eigenvalues, showing that the closer the values in the diagonal are to each other, the higher the sensitivity is to errors.
Error message:
There is no error message for this scenario, just a “seemingly wrong” answer.
Numpy/Python version information:
1.17.3 3.7.5 (default, Nov 1 2019, 02:16:32)
[Clang 11.0.0 (clang-1100.0.33.8)]
Thank you, any guidance appreciated.
Issue Analytics: created 4 years ago, 1 reaction, 12 comments (8 by maintainers).
The Notes section of the documentation for np.linalg.eig states that the returned matrix of eigenvectors is unitary when the input matrix is normal. Therefore, either the documentation is wrong, or this is actually a bug.
It looks like the original question was answered in the thread and the documentation issue was addressed in #15550. If there are further related problems, please open a new issue and reference this one.