Feature request: sklearn.decomposition.PCA using the covariance matrix
Hi all,
I am trying to compute PCA using sklearn.decomposition.PCA, which seems to use the correlation matrix; I would like to use the covariance matrix instead to compute the principal components.
I asked a question on StackOverflow two years ago but got no answer.
I think this is a very important feature; see also this question on the Cross Validated StackExchange site.
Thanks and all the best! Giacomo
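
For context, a minimal sketch (not part of the original issue) of how the two behaviours can be obtained with current scikit-learn: PCA centers but does not scale the data, so fitting it on raw data behaves like covariance-matrix PCA, while standardizing the features to unit variance first (here via StandardScaler, as one possible choice) behaves like correlation-matrix PCA.

```python
# Sketch only: covariance- vs correlation-style PCA with current scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Features with very unequal scales, so the two analyses differ visibly.
X = rng.normal(size=(100, 3)) * np.array([1.0, 10.0, 100.0])

cov_pca = PCA(n_components=2).fit(X)  # centered only: covariance-style
corr_pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))  # correlation-style

print(cov_pca.explained_variance_ratio_)   # dominated by the large-scale feature
print(corr_pca.explained_variance_ratio_)  # variance spread more evenly
```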
Issue Analytics
- Created 4 years ago
- Reactions: 1
- Comments: 7 (4 by maintainers)

This StackExchange question discusses the relationship between SVD and PCA. TL;DR: if the features of your X matrix are mean-centered, you can find the principal components either as the eigenvectors of the covariance matrix or from the SVD of X. To my knowledge, the PCA algorithm centers the features first.
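
A minimal sketch of that equivalence (not from the thread): on mean-centered data, the eigenvectors of the covariance matrix agree, up to sign, with the components scikit-learn's PCA recovers via SVD.

```python
# Sketch: covariance eigendecomposition vs sklearn's SVD-based PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)  # mean-center the features

# Route 1: eigendecomposition of the covariance matrix.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]  # sort by decreasing variance
eigvecs = eigvecs[:, order]

# Route 2: sklearn's PCA (SVD of the centered data).
pca = PCA(n_components=3).fit(X)

# The components agree up to sign.
for k in range(3):
    assert np.allclose(np.abs(pca.components_[k]), np.abs(eigvecs[:, k]), atol=1e-8)
```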
Hi, I would like to claim this issue and look into adding this to PCA if there is interest. Would there also be interest in implementing this via the Gramian matrix? A sketch of that route follows.
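
For illustration only, a hedged sketch of the Gramian-matrix route mentioned above, not an existing scikit-learn API: when there are many more features p than samples n, one can eigendecompose the small n x n Gram matrix X Xᵀ instead of the p x p covariance matrix, then map its eigenvectors back to feature space.

```python
# Sketch: PCA via the Gram matrix X Xᵀ when p >> n.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 500  # many more features than samples
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)

# Eigendecompose the small n x n Gram matrix.
gram = Xc @ Xc.T
lam, U = np.linalg.eigh(gram)
order = np.argsort(lam)[::-1][: n - 1]  # drop the zero eigenvalue from centering
lam, U = lam[order], U[:, order]

# Map Gram eigenvectors back to unit-norm principal axes in feature space:
# if (X Xᵀ) u = λ u, then Xᵀ u / sqrt(λ) is an eigenvector of Xᵀ X.
V = Xc.T @ U / np.sqrt(lam)  # columns are the principal axes

# Cross-check against the SVD of the centered data.
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
assert np.allclose(np.abs(V.T), np.abs(Vt[: n - 1]), atol=1e-8)
assert np.allclose(lam, s[: n - 1] ** 2, atol=1e-8)
```

The design point is cost: the eigendecomposition runs on an n x n matrix rather than a p x p one, which is the same trick kernel PCA relies on.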