
Pseudoinverse solves


xref https://github.com/google/jax/pull/2794

If we’re willing to define a “pseudo-inverse solve” (which, as far as I can tell, does not exist in NumPy or SciPy) for computing A⁺b rather than A⁺ directly, we can potentially speed up gradients of pseudo-inverse solves by a large factor by defining a custom gradient rule, using tricks similar to those used by lax.custom_linear_solve.

This will be most relevant for computation on CPUs (where the cost of a matrix multiplication is comparable to that of computing an SVD/eigendecomposition) and where we only use a single right-hand-side vector.
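
To make this concrete, here is a minimal sketch of what such a helper could look like (the name pinv_solve and its rcond default are hypothetical choices, not an existing JAX API). It does one SVD plus two matrix-vector products, never materializing A⁺:

import jax.numpy as jnp

def pinv_solve(a, b, rcond=1e-15):
    # Compute pinv(a) @ b without forming pinv(a) explicitly.
    # jnp.linalg.svd returns vh (= V.T); s is sorted in descending order.
    u, s, vh = jnp.linalg.svd(a, full_matrices=False)
    cutoff = rcond * s[0]
    # Double-where keeps the division well-defined under autodiff.
    s_safe = jnp.where(s > cutoff, s, 1.0)
    s_inv = jnp.where(s > cutoff, 1.0 / s_safe, 0.0)
    return vh.T @ (s_inv * (u.T @ b))

Note this sketch only addresses the forward pass; its gradient still flows through the SVD, so realizing the gradient speed-up described above would additionally require the custom JVP/VJP rule.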

Should we go ahead and add a helper function for this somewhere? Maybe jax.ops.linalg?? If the performance gap is large enough, we can add a loud warning to the docstring for jnp.linalg.pinv.

I suspect it simply doesn’t exist in NumPy/SciPy because there isn’t much to be gained from such a function if you only care about the forward solve.

EDIT NOTE: removed incorrect benchmark that only worked for invertible matrices.

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 4
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
mattjj commented, Apr 23, 2020

By the way, should the solves just form U, s, V = svd(a) and then multiply by U @ (np.where(s > cutoff, np.divide(1, s), 0.) * V) and V.T @ (np.where(s > cutoff, np.divide(1, s), 0.) * U.T), respectively?
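
Written against jnp.linalg.svd’s convention (it returns vh = V.T rather than V) and applied to a single right-hand-side vector b, those two expressions become, roughly:

u, s, vh = jnp.linalg.svd(a, full_matrices=False)
s_inv = jnp.where(s > cutoff, 1.0 / s, 0.0)  # cutoff as in the sketch above
x = vh.T @ (s_inv * (u.T @ b))   # applies pinv(a) to b
y = u @ (s_inv * (vh @ b))       # applies pinv(a).T to b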

1 reaction
shoyer commented, Apr 23, 2020

Yes, that intuition looks about right to me, at least for inverses – you turn matrix-matrix multiplications into matrix-vector multiplications.

The JVP rules for svd and pinv involve lots of dense matrix-matrix multiplication, which is slow (on CPU). When I profile things on a GPU, sgemm operations only take ~25% of the runtime (the rest is inside the SVD), so the potential speed-up is much smaller. This makes sense because GPUs are faster for matmuls but not really faster for matrix decomposition.
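
For anyone who wants to check the forward-pass gap on their own hardware, here is a rough timing sketch (it reuses the hypothetical pinv_solve from above; absolute numbers will vary with machine, BLAS, and matrix shape):

import timeit
import jax
import jax.numpy as jnp

key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(key_a, (2000, 1000))
b = jax.random.normal(key_b, (2000,))

pinv_matvec = jax.jit(lambda a, b: jnp.linalg.pinv(a) @ b)
svd_solve = jax.jit(pinv_solve)  # hypothetical helper sketched above

# Warm up (compile), then time steady-state execution.
pinv_matvec(a, b).block_until_ready()
svd_solve(a, b).block_until_ready()
print(timeit.timeit(lambda: pinv_matvec(a, b).block_until_ready(), number=10))
print(timeit.timeit(lambda: svd_solve(a, b).block_until_ready(), number=10))

This measures only the forward solve; the gradient comparison discussed above would require timing jax.grad through both paths as well.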
