Dimensional Reduction
- Embedding ... Fine-tuning ... Search ... Clustering ... Recommendation ... Anomaly Detection ... Classification ... Dimensional Reduction. ...find outliers
- Math for Intelligence ... Finding Paul Revere ... Social Network Analysis (SNA) ... Dot Product ... Kernel Trick
- Hyperdimensional Computing (HDC)
- Pooling / Sub-sampling: Max, Mean
- Backpropagation ... FFNN ... Forward-Forward ... Activation Functions ... Softmax ... Loss ... Boosting ... Gradient Descent ... Hyperparameter ... Manifold Hypothesis ... PCA
- Seven Techniques for Dimensionality Reduction | KNIME
- Dimensionality Reduction Techniques Jupyter Notebook | Jon Tupitza
- (Deep) Convolutional Neural Network (DCNN/CNN)
- Factor analysis
- Feature extraction
- Feature selection
- Nonlinear dimensionality reduction | Wikipedia
To identify the most important Features to address, because:
- it reduces the amount of computing resources required
- 2D & 3D intuition often fails in higher dimensions
- pairwise distances tend to become relatively the 'same' as the number of dimensions increases (the 'curse of dimensionality')
- Algorithms:
- Principal Component Analysis (PCA) is an unsupervised linear transformation technique that helps us identify patterns in data based on the correlation between the features. PCA aims to find the directions of maximum variance in high-dimensional data and project the data onto a lower-dimensional feature space (a minimal sketch of several of these algorithms appears after this list).
- Independent Component Analysis (ICA)
- Canonical Correlation Analysis (CCA)
- Linear Discriminant Analysis (LDA) is a supervised linear transformation technique that finds the feature subspace which optimizes class separability.
- Multidimensional Scaling (MDS)
- Non-Negative Matrix Factorization (NMF)
- Partial Least Squares Regression (PLSR)
- Principal Component Regression (PCR)
- Projection Pursuit
- Sammon Mapping/Projection
- Locally Linear Embedding (LLE) creates an embedding of the dataset and tries to preserve the relationships between neighborhoods in the dataset. LLE can be thought of as a series of local PCAs that are globally compared to find the best non-linear embedding.
- Isomap Embedding is a non-linear dimensionality reduction technique that creates an embedding of the dataset and tries to preserve the relationships in the dataset. Isomap looks for a lower-dimensional embedding which maintains geodesic distances between all points.
- T-Distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear technique in which similar objects are modeled by nearby points and dissimilar objects by distant points.
- Singular Value Decomposition (SVD) is a linear dimensionality reduction technique.
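Most of the algorithms above are implemented in scikit-learn. Below is a minimal sketch, assuming the scikit-learn digits dataset (1797 images, 64 pixel features each) and illustrative hyperparameters (n_components=2, n_neighbors=10, perplexity=30); none of these settings are prescribed by this page. It only prints the reduced shapes, but the same 2-D embeddings could be plotted for visual comparison.

```python
# A minimal sketch: run several of the listed reducers on the same dataset.
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF, PCA, FastICA, TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import MDS, TSNE, Isomap, LocallyLinearEmbedding

X, y = load_digits(return_X_y=True)

# Unsupervised reducers: each maps the 64 features down to 2.
reducers = {
    "PCA": PCA(n_components=2),
    "ICA": FastICA(n_components=2, max_iter=1000),
    "NMF": NMF(n_components=2, max_iter=1000),  # pixel counts are non-negative
    "SVD": TruncatedSVD(n_components=2),
    "MDS": MDS(n_components=2),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "Isomap": Isomap(n_components=2, n_neighbors=10),
    "t-SNE": TSNE(n_components=2, perplexity=30),
}
for name, reducer in reducers.items():
    print(f"{name}: {X.shape} -> {reducer.fit_transform(X).shape}")

# LDA is supervised: it uses the class labels and can produce at most
# (n_classes - 1) components -- up to 9 for the 10 digit classes.
lda = LinearDiscriminantAnalysis(n_components=2)
print(f"LDA: {X.shape} -> {lda.fit_transform(X, y).shape}")
```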
Dimensional Reduction techniques reduce the number of input variables in training data while capturing the "essence" of the data.
Some datasets contain so many variables that they become very hard to handle, especially now that systems collect data at a very detailed level thanks to abundant resources. Such datasets may contain thousands of variables, many of them unnecessary, and it is almost impossible to identify by hand the variables which have the most impact on our prediction. Dimensional Reduction algorithms are used in this kind of situation; they utilize other algorithms, such as Random Forest and Decision Tree, to identify the most important variables, as sketched below. 10 Machine Learning Algorithms You need to Know | Sidath Asir @ Medium
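A minimal sketch of that idea, assuming a synthetic scikit-learn dataset and an arbitrary "keep the top 10" cutoff (both are illustrative choices, not part of this page): a Random Forest ranks the variables by importance, and only the strongest ones are kept.

```python
# A minimal sketch: use a Random Forest's impurity-based feature
# importances to rank variables and keep only the strongest ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 100 features, only 8 of which are actually informative.
X, y = make_classification(n_samples=1000, n_features=100,
                           n_informative=8, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by importance and keep the top 10.
top = np.argsort(forest.feature_importances_)[::-1][:10]
print("most important features:", top)
X_reduced = X[:, top]          # the reduced training data
print(X.shape, "->", X_reduced.shape)
```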
Projection
- Autoencoder (AE) / Encoder-Decoder (see the sketch after this list)
- Unsupervised
- Privacy
- Manifold Hypothesis
- Uniform Manifold Approximation and Projection (UMAP) | L. McInnes, J. Healy, and J. Melville ... a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction (see the sketch after this list)
- UMAP...Python version
- UMAP-JS ...Javascript version
- Uncovering High-dimensional Structures of Projections from Dimensionality Reduction Methods | Michael Thrun & Alfred Ultsch - ScienceDirect
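A minimal sketch of the Autoencoder approach listed above, assuming Keras and the scikit-learn digits dataset; the layer sizes, the 2-D bottleneck, and the training settings are illustrative choices, not part of this page. Once trained, the encoder half alone performs the dimensional reduction.

```python
# A minimal sketch: compress 64-dimensional digits into a 2-D code.
from sklearn.datasets import load_digits
from tensorflow import keras

X, _ = load_digits(return_X_y=True)
X = X / 16.0                                  # scale pixel values to [0, 1]

encoder = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(2),                    # 2-D bottleneck code
])
decoder = keras.Sequential([
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(64, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Train to reconstruct the input, then use only the encoder to reduce it.
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)
codes = encoder.predict(X, verbose=0)
print(X.shape, "->", codes.shape)
```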
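And a minimal sketch for UMAP, assuming the umap-learn package (the Python version linked above); n_neighbors=15 and min_dist=0.1 are simply the library defaults written out explicitly.

```python
# A minimal sketch: non-linear dimension reduction with umap-learn.
import umap
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)

reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1)
embedding = reducer.fit_transform(X)
print(X.shape, "->", embedding.shape)   # (1797, 64) -> (1797, 2)
```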