Dimensional Reduction
Dimensional reduction algorithms are used to identify the most important features in a dataset.
Algorithms:
- Principal Component Analysis (PCA) (see the sketch after this list)
- Independent Component Analysis (ICA)
- Canonical Correlation Analysis (CCA)
- Linear Discriminant Analysis (LDA)
- Multidimensional Scaling (MDS)
- Non-Negative Matrix Factorization (NMF)
- Partial Least Squares Regression (PLSR)
- Principal Component Regression (PCR)
- Projection Pursuit
- Sammon Mapping/Projection
- Pooling / Sub-sampling: Max, Mean
- Kernel Trick
- Isomap
- Local Linear Embedding (LLE)
- t-Distributed Stochastic Neighbor Embedding (t-SNE)
- Softmax
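
As a concrete illustration of how these methods are applied, here is a minimal PCA sketch; the use of scikit-learn and the iris dataset are assumptions for the example, not something specified on this page. It projects the 4-feature iris data onto its 2 strongest components.

```python
# Minimal PCA sketch (assumes scikit-learn is installed).
# Projects the 4-feature iris dataset onto its 2 strongest components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)             # 150 samples, 4 features
X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

pca = PCA(n_components=2)                     # keep the 2 directions of highest variance
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                        # (150, 2)
print(pca.explained_variance_ratio_)          # fraction of variance each component explains
```

The same fit/transform pattern applies to most of the algorithms listed above (ICA, NMF, Isomap, LLE, t-SNE, etc.), which scikit-learn exposes through the same interface.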
Related:
- (Deep) Convolutional Neural Network (DCNN/CNN)
- Factor analysis
- Feature extraction
- Feature selection
- Seven Techniques for Dimensionality Reduction | KNIME
- Nonlinear dimensionality reduction | Wikipedia
Some datasets contain so many variables that they become very hard to handle. Modern systems collect data at a very detailed level, so a dataset can easily contain thousands of variables, many of them unnecessary. In such cases it is almost impossible to identify by inspection which variables have the most impact on a prediction. Dimensional reduction algorithms are used in these situations; they can also draw on other algorithms, such as Random Forest and Decision Tree, to identify the most important variables. 10 Machine Learning Algorithms You need to Know | Sidath Asir @ Medium
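
A minimal sketch of the Random Forest approach mentioned above; scikit-learn, the breast-cancer dataset, and the "top 10" cutoff are illustrative assumptions. The fitted forest's feature importances rank the variables, and only the top-ranked ones are kept.

```python
# Sketch: rank variables by Random Forest feature importance and
# keep only the top ones, as described in the paragraph above.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)    # 569 samples, 30 features
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

order = np.argsort(forest.feature_importances_)[::-1]  # most important first
top10 = order[:10]
X_reduced = X[:, top10]                       # keep the 10 strongest predictors
print(X_reduced.shape)                        # (569, 10)
```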