
BUG: initial approximations to eigenvectors in svds should not have all-positive components

See original GitHub issue

Describe your issue.

https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/_eigen/_svds.py in at least one place sets the initial approximations to eigenvectors with all components drawn uniformly at random from [0, 1]. This is a poor choice: common sense suggests that vectors with only positive components cover only the positive orthant, not the whole vector space, and are therefore generally poor approximations to eigenvectors. For example, all examples in the original MATLAB/Octave code written for the paper, https://github.com/lobpcg/blopex/blob/master/blopex_tools/matlab/lobpcg/lobpcg.m, use randn (e.g., line 97 of that file). Poor initial approximations may make the solver run longer or even fail.
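As a quick illustration of this point (not part of the original issue): in high dimensions, vectors drawn uniformly from [0, 1] all cluster around the all-ones direction, while zero-mean Gaussian vectors are nearly orthogonal to one another, which is what one wants from independent starting vectors. A small sketch:

import numpy as np

rng = np.random.default_rng(0)
n = 1000  # dimension

def mean_abs_cosine(sampler, trials=100):
    # Average |cos angle| between independent pairs of random vectors.
    total = 0.0
    for _ in range(trials):
        u, v = sampler(n), sampler(n)
        total += abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return total / trials

print(mean_abs_cosine(lambda k: rng.uniform(0.0, 1.0, k)))  # ~0.75: uniform vectors cluster together
print(mean_abs_cosine(rng.standard_normal))                 # ~0.03: Gaussian vectors are nearly orthogonal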

The simplest solution is to replace all calls to the uniform random generator with the normal (Gaussian) random generator.
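A minimal sketch of that change (assuming, for illustration only, a starting block named X; the actual variable names in _svds.py may differ):

import numpy as np

rng = np.random.default_rng()
n, k = 100, 5  # matrix dimension and number of requested singular triplets (illustrative)

# Before: components uniform on [0, 1), so X lies entirely in the positive orthant
X = rng.uniform(size=(n, k))

# After (proposed): zero-mean Gaussian components, spanning all orthants
X = rng.standard_normal(size=(n, k))

Such a block X would then be passed as the initial approximation to the underlying lobpcg call.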

Reproducing Code Example

N/A

Error message

N/A

SciPy/NumPy/Python version information

N/A

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Reactions: 2
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
rgommers commented, Dec 8, 2021

Apologies, done.

1 reaction
lobpcg commented, Dec 8, 2021

gh-15154 is very minor, since it only changes the documentation, and only in one file. The present issue is bigger, affecting the actual default initialization choices of the lobpcg calls in https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/_eigen/_svds.py, although it is still easy to fix. I would, however, wait for the more important and long-awaited https://github.com/scipy/scipy/pull/11829 to merge first before making a PR for this issue.

