Prettier interface for LU decomposition
**Is your feature request related to a problem? Please describe.**
The appropriate way to apply an inverse to a matrix or a vector right now is the following:
```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

a = np.random.randn(5, 5)
inv = lu_factor(a)
lu_solve(inv, a)  # applies inv(a) to the matrix a
```
This is really quite alright; however, an inexperienced user may be tempted to use `linalg.inv` instead, and the formulas are less readable.
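For contrast, a sketch of the `linalg.inv` route this is meant to discourage; both give the same solution, but forming the inverse explicitly is generally slower and less accurate:

```python
import numpy as np
from scipy import linalg

a, b = np.random.randn(5, 5), np.random.randn(5)

# Tempting but discouraged: forms inv(a) explicitly.
x_inv = linalg.inv(a) @ b

# Preferred: factor once, then reuse for any number of solves.
lu_piv = linalg.lu_factor(a)
x_lu = linalg.lu_solve(lu_piv, b)

assert np.allclose(x_inv, x_lu)
```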
**Describe the solution you'd like**
I realized that `scipy.sparse.linalg.LinearOperator` implements most of the required functionality to provide an interface that would look like the following, while not introducing any overhead over `lu_factor` + `lu_solve`:
```python
inv = Inverse(a)
np.allclose(a @ inv, inv @ a)
```
The prototype implementation is pretty straightforward; here it is with some bells and whistles (functionality not accessible with the output of `lu_factor`):

Rough implementation:
```python
from functools import cache
from copy import deepcopy

import numpy as np
from scipy import linalg
from scipy.sparse.linalg import LinearOperator


class Inverse(LinearOperator):
    # Make ndarray defer binary operators to this class.
    __array_ufunc__ = None

    def __init__(
        self, a, overwrite_a=False, check_finite=True, transpose=False
    ):
        if isinstance(a, Inverse):
            # Copying an existing Inverse reuses its factorization.
            self._decomp = deepcopy(a._decomp)
        else:
            self._decomp = linalg.lu_factor(a, overwrite_a, check_finite)
        self._transposed = transpose
        self.dtype = a.dtype
        # No need to check for transpose because only square matrices are
        # invertible.
        self.shape = a.shape

    def _matvec(self, b):
        return linalg.lu_solve(self._decomp, b, int(self._transposed))

    # lu_solve accepts 2-d right-hand sides, so matmat is the same code.
    _matmat = _matvec

    def _rmatvec(self, b, conj=True):
        if not self._transposed:
            # M = inv(a): M^T b solves a^T x = b (trans=1),
            # M^H b solves a^H x = b (trans=2).
            return linalg.lu_solve(self._decomp, b, 2 if conj else 1)
        # M = inv(a)^T: M^T b solves a x = b; for the adjoint,
        # conj(inv(a)) @ b == conj(inv(a) @ conj(b)).
        if conj:
            return linalg.lu_solve(self._decomp, np.conj(b), 0).conj()
        return linalg.lu_solve(self._decomp, b, 0)

    _rmatmat = _rmatvec

    @cache
    def toarray(self):
        return self.matmat(np.identity(self.shape[0], dtype=self.dtype))

    def conj(self):
        other = type(self)(self, transpose=self._transposed)
        # P a = L U implies P conj(a) = conj(L) conj(U), so conjugating
        # the factors factorizes the conjugate matrix.
        lu, piv = other._decomp
        other._decomp = (lu.conj(), piv)
        return other

    def transpose(self):
        return Inverse(self, transpose=(not self._transposed))

    def _adjoint(self):
        other = self.conj()
        other._transposed = not self._transposed
        return other

    def __rmul__(self, x):
        result = super().__rmul__(x)
        if result is not NotImplemented:
            return result
        if isinstance(x, LinearOperator):
            # Defer to the other operator to implement this.
            raise NotImplementedError
        # Adapted from LinearOperator.dot: x @ M == (M^T @ x^T)^T.
        x = np.asarray(x)
        if x.ndim == 1:
            return self._rmatvec(x, conj=False)
        elif x.ndim == 2:
            return self._rmatmat(x.T, conj=False).T
        else:
            raise ValueError('expected 1-d or 2-d array or matrix, got %r'
                             % x)
```
```python
a, b = np.random.randn(5, 5), np.random.randn(5)
inv = Inverse(a)

inv.toarray()  # dense inv(a), computed once and cached
inv.T @ b      # transposed solve, no new factorization
inv @ a        # ~ identity
a @ inv        # ~ identity, via the reflected matmul path
```
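A quick consistency check of the prototype against an explicitly formed inverse (a sketch reusing `a`, `b`, and `inv` from the snippet above):

```python
dense = np.linalg.inv(a)

assert np.allclose(inv.toarray(), dense)
assert np.allclose(inv.T @ b, dense.T @ b)
assert np.allclose(inv @ a, np.eye(5))
assert np.allclose(a @ inv, np.eye(5))
```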
Would this be a desired feature? I'd be happy to give implementing it a shot if it is.
---
About the matrix division, `A / B` is almost unanimously `A * inv(B)`; that's just the numerical linear algebra standard following the scalar hint `x/y = x * (1/y)`. There is no discussion on that one. `A \ B` is `inv(A) @ B`; after 4 decades of MATLAB it is now practically a standard too. In fact it is called the backslash operator in every linear algebra suite due to this choice. But Guido killed that possibility right at the outset in Python; we barely got a matmul infix operator after a long negotiation. So we are left with some operator choices.

`solve(A, B)` is to me fine compared to `SomeOp(a) @ B`, but that's subjective, or say a personal choice. As for the other parts, I don't know whether they are even feasible. To me it seems like a lot of work for not an equally large benefit, but maybe I am wrong. You might want to pitch for that on the mailing list too; there is a long-awaited overhaul expectation in almost everyone, and perhaps you can tip the scales.
---

Indeed, I don't propose to implement all the operator computations with the new object. Many array operations could be supported via `__array__` and `__array_ufunc__`, though. In particular, adding `Inverse(S)` to an array, like in the code snippet above, should be within reach.
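A hypothetical sketch of that direction (the mixin and its methods are illustrative additions, not part of the prototype above): since `Inverse` sets `__array_ufunc__ = None`, NumPy defers binary operators to the operator's reflected methods, so addition with a dense array can densify on demand through `toarray()`:

```python
import numpy as np

class ArrayInteropMixin:
    """Hypothetical dense-array interop for the Inverse prototype."""

    def __array__(self, dtype=None):
        # Lets np.asarray(inv) densify through the cached toarray().
        return np.asarray(self.toarray(), dtype=dtype)

    def __add__(self, other):
        # Because __array_ufunc__ is None, `ndarray + inv` falls back
        # here (via __radd__) instead of raising a ufunc error.
        return self.toarray() + other

    __radd__ = __add__


class DenseInverse(ArrayInteropMixin, Inverse):
    pass


a = np.random.randn(5, 5)
inv = DenseInverse(a)
assert np.allclose(a + inv, a + np.linalg.inv(a))
```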
This is a very interesting point. Neither the concept of linear operators nor its implementation in `scipy.sparse.linalg.interface` has much to do with sparsity. Perhaps this suggests a restructuring:

- make the `LinearOperator` interface independent from sparse linear algebra;
- make `spmatrix` a subclass of a `LinearOperator`.

At a cursory glance, this would have a minimal changeset: move `scipy.sparse.linalg.interface` out of the sparse package, and probably reimport it in the sparse package for backwards compatibility.

---

I fear you are overlooking the subtlety with left- vs right-solve. In your proposal you're assuming that
`A / B == A @ inv(B)`, whereas usually matrix division should be interpreted the other way around. This makes the operator ordering more straightforward, but prevents the user from expressing `inv(B) @ A`. Using matmul allows taking care of both cases.

More broadly, I think that arrays don't benefit too much from becoming `LinearOperator`s, whereas more complex objects, such as factorizations, do benefit from the syntactic sugar.
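As a concrete illustration of both orderings with matmul (a sketch assuming the `Inverse` prototype above):

```python
import numpy as np

A, B = np.random.randn(4, 4), np.random.randn(4, 4)

left = Inverse(B) @ A   # inv(B) @ A, MATLAB's B \ A
right = A @ Inverse(B)  # A @ inv(B), MATLAB's A / B, via reflected matmul

assert np.allclose(left, np.linalg.inv(B) @ A)
assert np.allclose(right, A @ np.linalg.inv(B))
```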