Representation Learning
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, Tensorflow, Google, Nvidia, Microsoft, Azure, Amazon, AWS
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
 
[http://www.youtube.com/results?search_query=Representation+Learning YouTube search...]
[http://www.google.com/search?q=Representation+Learning+deep+machine+learning+ML+artificial+intelligence ...Google search]

* [[Reinforcement Learning (RL)]]
* [[Feature Exploration/Learning]]
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, [[Autoencoder (AE) / Encoder-Decoder | auto-encoders]], manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.  [http://arxiv.org/abs/1206.5538 Representation Learning: A Review and New Perspectives | Y. Bengio, A. Courville, and P. Vincent]
* <b>Feature Learning</b> or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, [[Video|video]], and sensor data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. [http://en.wikipedia.org/wiki/Feature_learning Wikipedia]
* <b>[[Manifold Hypothesis#Manifold Learning|Manifold Learning]]</b> is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high. [http://scikit-learn.org/stable/modules/manifold.html Manifold learning | SciKitLearn]
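The feature-learning idea above — a representation discovered from the data itself rather than hand-engineered — can be sketched in its simplest linear form, PCA via the SVD. Everything below (the synthetic data, dimensions, and noise level) is an illustrative assumption, not taken from the cited sources:

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw 20-dimensional observations that secretly vary along only 2 factors.
factors = rng.normal(size=(500, 2))             # hidden explanatory factors
mixing = rng.normal(size=(2, 20))               # unknown "rendering" process
X = factors @ mixing + 0.01 * rng.normal(size=(500, 20))

# "Learn" a 2-dimensional representation: centre the data and keep the
# top-2 right singular vectors as the discovered feature extractor.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:2].T                                    # learned projection, 20 -> 2

codes = Xc @ W                                  # the learned representation
X_hat = codes @ W.T                             # reconstruction from 2 numbers

# Two learned features suffice to reconstruct the 20-dim data almost exactly.
rel_error = np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc)
```

No feature was designed by hand here; the projection `W` falls out of the data, which is the essence of representation learning in its most basic linear setting.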
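The manifold-learning bullet can likewise be sketched with a minimal Isomap-style pipeline (k-nearest-neighbour graph, geodesic distances, classical MDS) in plain NumPy. The toy helix, neighbour count, and sizes below are illustrative assumptions; scikit-learn's `manifold` module linked above provides production implementations:

```python
import numpy as np

t = np.linspace(0, 3 * np.pi, 60)                     # intrinsic 1-D coordinate
X = np.column_stack([np.cos(t), np.sin(t), 0.2 * t])  # helix embedded in 3-D

# 1. Euclidean distances, kept only between each point's k nearest neighbours.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
k = 6
G = np.full_like(D, np.inf)
for i in range(len(X)):
    nn = np.argsort(D[i])[: k + 1]   # the k nearest (plus the point itself)
    G[i, nn] = D[i, nn]
G = np.minimum(G, G.T)               # make the neighbour graph symmetric

# 2. Geodesic (along-the-manifold) distances via Floyd-Warshall.
for m in range(len(X)):
    G = np.minimum(G, G[:, m, None] + G[None, m, :])

# 3. Classical MDS on the geodesics: double-centre, take the top eigenvector.
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (G ** 2) @ J
vals, vecs = np.linalg.eigh(B)
embedding = vecs[:, -1] * np.sqrt(vals[-1])  # recovered 1-D coordinate

# The recovered coordinate should be (anti-)monotone in the true parameter t,
# i.e. the "artificially" 3-D data is mapped back to its one real dimension.
corr = abs(np.corrcoef(embedding, t)[0, 1])
```

On this toy helix the recovered coordinate correlates almost perfectly with the intrinsic parameter, which is exactly the claim in the bullet: the apparent dimensionality (3) is artificially high, the real dimensionality is 1.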
  
 
<youtube>e3GaXeqrG9I</youtube>

<youtube>Yr1mOzC93xs</youtube>

<img src="http://lilianweng.github.io/lil-log/assets/images/grasp2vec.png" width="700" height="300">
== Representation Learning and Deep Learning ==

Yoshua Bengio | Institute for Pure & Applied Mathematics (IPAM)
 
 
<youtube>O6itYc2nnnM</youtube>

<youtube>VUixoO0jCIQ</youtube>
  
 
= Self-Supervised =

* [[Self-Supervised]]

<youtube>aGhYitrOJRc</youtube>
  
 
= Semi-Supervised =

* [[Semi-Supervised]]

<youtube>1tB7lALJ3ew</youtube>
  
 
= Unsupervised =

* [[Unsupervised]]

<youtube>y-SrsyckRbo</youtube>

<youtube>ceD736_Fknc</youtube>

<youtube>2vxZbZC21Gg</youtube>
  
 
= Supervised Learning of Rules for Unsupervised =
  
 
= Large-Scale Graph =

* [[Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning]]

<youtube>oQL4E1gK3VU</youtube>

Latest revision as of 14:18, 16 September 2023