Principal Component Analysis (PCA)

[https://www.youtube.com/results?search_query=Principal+Component+Analysis+PCA YouTube]
[https://www.quora.com/search?q=Principal%20Component%20Analysis%20PCA ... Quora]
[https://www.google.com/search?q=Principal+Component+Analysis+PCA ...Google search]
[https://news.google.com/search?q=Principal+Component+Analysis+PCA ...Google News]
[https://www.bing.com/news/search?q=Principal+Component+Analysis+PCA&qft=interval%3d%228%22 ...Bing News]
  
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
* [[AI Solver]] ... [[Algorithms]] ... [[Algorithm Administration|Administration]] ... [[Model Search]] ... [[Discriminative vs. Generative]] ... [[Train, Validate, and Test]]
* [[Embedding]] ... [[Fine-tuning]] ... [[Retrieval-Augmented Generation (RAG)|RAG]] ... [[Agents#AI-Powered Search|Search]] ... [[Clustering]] ... [[Recommendation]] ... [[Anomaly Detection]] ... [[Classification]] ... [[Dimensional Reduction]] ... [[...find outliers]]
* [[Optimization Methods]]
* [[Supervised|Supervised Learning]] ... [[Semi-Supervised]] ... [[Self-Supervised]] ... [[Unsupervised]]
** [[T-Distributed Stochastic Neighbor Embedding (t-SNE)]] ... non-linear
* [http://machinelearningmastery.com/calculate-principal-component-analysis-scratch-python/ How to Calculate Principal Component Analysis (PCA) from Scratch in Python | Jason Brownlee - Machine Learning Mastery]
* [http://towardsdatascience.com/data-science-concepts-explained-to-a-five-year-old-ad440c7b3cbd Data Science Concepts Explained to a Five-year-old | Megan Dibble - Towards Data Science]
* [[Perspective]] ... [[Context]] ... [[In-Context Learning (ICL)]] ... [[Transfer Learning]] ... [[Out-of-Distribution (OOD) Generalization]]
* [[Causation vs. Correlation]] ... [[Autocorrelation]] ... [[Convolution vs. Cross-Correlation (Autocorrelation)]]
** [[Causation vs. Correlation#Multivariate Additive Noise Model (MANM)|Multivariate Additive Noise Model (MANM)]]
** [http://www.cs.helsinki.fi/u/ahyvarin/whatisica.shtml Independent Component Analysis (ICA) | University of Helsinki]
** [http://www.cs.helsinki.fi/u/ahyvarin/papers/JMLR06.pdf Linear Non-Gaussian Acyclic Model (ICA-LiNGAM) | S. Shimizu, P. Hoyer, A. Hyvarinen, and A. Kerminen - University of Helsinki]
** [http://archive.org/details/arxiv-1104.2808/page/n15 Greedy DAG Search (GDS) | Alain Hauser and Peter Bühlmann]
** [http://auai.org/uai2017/proceedings/papers/250.pdf Feature-to-Feature Regression for a Two-Step Conditional Independence Test | Q. Zhang, S. Filippi, S. Flaxman, and D. Sejdinovic]
* [http://pathmind.com/wiki/eigenvector A Beginner's Guide to Eigenvectors, Eigenvalues, PCA, Covariance and Entropy | Chris Nicholson - A.I. Wiki pathmind]
* [http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/#some-things-you-maybe-didnt-know-about-pca Everything you did and didn't know about PCA | Alex Williams - It's Neuronal]
  
<youtube>g8D5YL6cOSE</youtube>
The goal of Principal Component Analysis (PCA) is to reduce the dimensionality of a data set consisting of a large number of interrelated variables while retaining as much as possible of the variation present in the data. This is accomplished by linearly transforming the data into a new coordinate system where most of the variation can be described with fewer dimensions than the original data. The new dimensions are called principal components; they are uncorrelated and ordered by the amount of variance they explain. PCA can help you simplify large data tables, visualize multidimensional data, and identify hidden patterns in your data. This data reduction technique allows simplifying multidimensional data sets to 2 or 3 dimensions for plotting purposes and visual variance analysis.
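
A brief NumPy sketch of this idea (an illustration added here, not drawn from the links above; the toy data and variable names are assumptions for this example): it centers a small synthetic data set, eigendecomposes its covariance matrix, orders the components by the variance they explain, and projects onto the first two components for plotting.

<syntaxhighlight lang="python">
import numpy as np

# Toy data (assumed for this sketch): 200 samples, 3 correlated features
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
X = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)),
               2.0 * latent + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])

# Center the data so the principal axes pass through the origin
X_centered = X - X.mean(axis=0)

# Eigendecomposition of the covariance matrix gives the principal axes
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh: for symmetric matrices

# Order the components by the variance they explain (largest first)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained_ratio = eigvals / eigvals.sum()

# Project onto the first two (uncorrelated) principal components
X_2d = X_centered @ eigvecs[:, :2]

print("Explained variance ratio:", np.round(explained_ratio, 3))
print("Reduced shape:", X_2d.shape)           # (200, 2)
</syntaxhighlight>

Library implementations such as scikit-learn's PCA follow essentially the same recipe, typically via a singular value decomposition for numerical stability.
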
<youtube>foWkxFlaigM</youtube>

<youtube>N1vOgolbjSc</youtube>

<youtube>zErT-VtYOHk</youtube>
<hr>
<youtube>y8J6ggsLSfw</youtube>

# Center (and standardize) the data
# First principal component axis
## Passes through the centroid of the data cloud
## The distance of each point to that line is minimized, so the line runs along the maximum variation of the data cloud
# Second principal component axis
## Orthogonal to the first principal component
## Along the maximum remaining variation in the data
# The first PCA axis becomes the x-axis and the second PCA axis the y-axis
# Continue the process until the necessary number of principal components is obtained (see the sketch below)

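A minimal sketch of these steps, assuming NumPy and a hypothetical synthetic data set (the helper name pca_2d is invented for this example): the standardized data is factored with a singular value decomposition, whose right singular vectors are the orthogonal principal axes ordered by the variation they capture.

<syntaxhighlight lang="python">
import numpy as np

def pca_2d(X):
    """Sketch of the steps above: center/standardize, find orthogonal
    axes of maximum variation, then use PC1/PC2 as the new x/y axes."""
    # Step 1: center and standardize each column
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

    # Steps 2-3: the right singular vectors of the standardized data are
    # orthogonal principal axes, ordered by the variation they capture
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)

    # Step 4: coordinates along PC1 (new x-axis) and PC2 (new y-axis)
    scores = Z @ Vt[:2].T
    explained = (S ** 2) / (S ** 2).sum()
    return scores, explained[:2]

# Hypothetical example: 100 samples with 5 correlated measurements
rng = np.random.default_rng(1)
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 5)) + 0.2 * rng.normal(size=(100, 5))

scores, explained = pca_2d(X)
print("Variance explained by PC1 and PC2:", np.round(explained, 3))
# scores[:, 0] vs. scores[:, 1] can now be scatter-plotted
</syntaxhighlight>

Scatter-plotting scores[:, 0] against scores[:, 1] gives a two-dimensional view of the data along its directions of greatest variation, similar in spirit to the scatter plot below.
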
http://www.sthda.com/sthda/RDoc/figure/factor-analysis/principal-component-analysis-basics-scatter-plot-data-mining-1.png


<youtube>HMOI_lkzW08</youtube>
<youtube>_UVHneBUBW0</youtube>
<youtube>FgakZw6K1QQ</youtube>
<youtube>rng04VJxUt4</youtube>
<youtube>u6A-rnsj8sg</youtube>
<youtube>kw9R0nD69OU</youtube>
<youtube>4zbUcgfycTU</youtube>
<youtube>YEDOSOd44bU</youtube>

== NumXL ==
<youtube>WCigXTRVH78</youtube>
<youtube>I29Ga2iRb0w</youtube>
<youtube>G7pZzsFVaYg</youtube>
<youtube>yQSAzHNfnDY</youtube>
