A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all of the features for a single row into a numerical vector. Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis to convert correlated variables into a set of linearly uncorrelated variables. Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract Year of Birth from Year of Death and you construct Age at Death, which is a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious. [http://www.infoworld.com/article/3394399/machine-learning-algorithms-explained.html Machine learning algorithms explained | Martin Heller - InfoWorld]
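
As a concrete illustration of the two ideas above (constructing a new feature from raw columns, and using principal component analysis to decorrelate a highly correlated pair), here is a minimal Python sketch. It assumes pandas and scikit-learn; the dataset and column names are hypothetical.

<syntaxhighlight lang="python">
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical records; column names are illustrative only.
df = pd.DataFrame({
    "year_of_birth": [1901, 1923, 1940, 1955],
    "year_of_death": [1985, 2001, 2010, 2020],
    "height_cm":     [170, 165, 180, 175],
    "weight_kg":     [68, 62, 85, 77],  # strongly correlated with height_cm
})

# Feature construction: Age at Death = Year of Death - Year of Birth.
df["age_at_death"] = df["year_of_death"] - df["year_of_birth"]

# Inspect the correlation between candidate predictors.
print(df[["height_cm", "weight_kg"]].corr())

# PCA: replace the correlated pair with linearly uncorrelated components.
scaled = StandardScaler().fit_transform(df[["height_cm", "weight_kg"]])
df[["body_pc1", "body_pc2"]] = PCA(n_components=2).fit_transform(scaled)
</syntaxhighlight>

If the pair is nearly redundant, one would typically keep only the first component, or simply drop one of the raw columns.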
  
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>b8f_gTTKm2U</youtube>
<b>AI Explained: Feature Importance
</b><br>Fiddler Labs - Learn more about feature importance, the different techniques, and the pros and cons of each. #ExplainableAI
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>WVclIFyCCOo</youtube>
<b>HH2
</b><br>BB2
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>KvZ2KSxlWBY</youtube>
<b>HH3
</b><br>BB3
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>yQsOFWqpjkE</youtube>
<b>HH4
</b><br>BB4
|}
|}<!-- B -->
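
The Fiddler Labs video above surveys feature-importance techniques. One widely used, model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model's held-out score drops. The sketch below illustrates it with scikit-learn on synthetic data; it is one possible example, not a method prescribed by the video.

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 6 features, only 3 of which are informative.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large drop in test accuracy means the
# model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean:.3f}")
</syntaxhighlight>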
  
 
= <span id="Feature Selection"></span>Feature Selection =
 
== <span id="Sparse Coding - Feature Extraction"></span>Sparse Coding - Feature Extraction ==
  
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>7a0_iEruGoM</youtube>
<b>HH1
</b><br>BB1
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>FL81zSjAEEg</youtube>
<b>HH2
</b><br>BB2
|}
|}<!-- B -->
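
As a rough sketch of sparse coding for feature extraction, the following uses scikit-learn's DictionaryLearning: it learns an overcomplete dictionary of atoms and represents each sample as a sparse combination of them, so the sparse coefficients become the extracted features. The data here is random and purely illustrative.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(100, 20)  # 100 samples in 20 raw dimensions (illustrative)

# Learn 32 dictionary atoms (overcomplete: more atoms than dimensions).
# transform() expresses each sample as a sparse combination of atoms.
dico = DictionaryLearning(n_components=32,
                          transform_algorithm="lasso_lars",
                          transform_alpha=0.1,
                          random_state=0)
codes = dico.fit_transform(X)  # shape (100, 32), mostly zeros

print(codes.shape, "fraction of zero coefficients:", np.mean(codes == 0))
</syntaxhighlight>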
