Principal Component Analysis (PCA)
- AI Solver
- ...find outliers
- Clustering
- Manifold Hypothesis
- Anomaly Detection
- Dimensional Reduction
- Unsupervised Learning
- T-Distributed Stochastic Neighbor Embedding (t-SNE) ...non-linear
- How to Calculate Principal Component Analysis (PCA) from Scratch in Python | Jason Brownlee - Machine Learning Mastery
- Data Science Concepts Explained to a Five-year-old | Megan Dibble - Toward Data Science
- Causation vs. Correlation - Multivariate Additive Noise Model (MANM)
- Independent Component Analysis (ICA) | University of Helsinki
- Linear Non-Gaussian Acyclic Model (ICA-LiNGAM) | S. Shimizu, P. Hoyer, A. Hyvärinen, and A. Kerminen - University of Helsinki
- Greedy DAG Search (GDS) | Alain Hauser and Peter Bühlmann
- Feature-to-Feature Regression for a Two-Step Conditional Independence Test | Q. Zhang, S. Filippi, S. Flaxman, and D. Sejdinovic
- A Beginner's Guide to Eigenvectors, Eigenvalues, PCA, Covariance and Entropy Learning | Chris Nicholson - A.I. Wiki pathmind
- Everything you did and didn't know about PCA | Alex Williams - It's Neuronal
Principal Component Analysis (PCA) is a data reduction technique that simplifies multidimensional data sets to 2 or 3 dimensions for plotting purposes and visual variance analysis. The procedure is outlined in the steps below; a short code sketch follows the list.
- Center (and standardize) data
- First principal component axis
- Passes through the centroid of the data cloud
- The summed squared distance of each point to the line is minimized, so the line runs along the maximum variation of the data cloud
- Second principal component axis
- Orthogonal to first principal component
- Along maximum variation in the data
- First PCA axis becomes x-axis and second PCA axis y-axis
- Continue process until the necessary number of principal components is obtained
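A minimal NumPy sketch of the steps above (the function name pca_2d and the example data are illustrative, not from the source):

<pre>
import numpy as np

def pca_2d(X):
    """Reduce X (n_samples x n_features) to its first two principal components."""
    # 1. Center (and standardize) the data
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2-3. The principal component axes are the eigenvectors of the covariance
    # matrix, ordered by eigenvalue (the variance captured along each axis)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]      # largest variance first
    components = eigvecs[:, order[:2]]     # first and second PCA axes
    # 4. Project onto those axes: new x- and y-coordinates for plotting
    # 5. For more components, keep order[:k] instead of order[:2]
    return X @ components

# Example: reduce 5-dimensional points to 2 dimensions
X = np.random.default_rng(0).normal(size=(100, 5))
print(pca_2d(X).shape)  # (100, 2)
</pre>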
NumXL