Feature Exploration/Learning
= <span id="Feature Inspection"></span>Feature Inspection =
<youtube>WVclIFyCCOo</youtube>
<youtube>KvZ2KSxlWBY</youtube>
<youtube>yQsOFWqpjkE</youtube>
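The clips above walk through getting familiar with a dataset's features before modeling. As a rough companion, here is a minimal sketch of a first inspection pass in pandas; the toy columns and values are assumptions made up for this example, not part of the article.

<syntaxhighlight lang="python">
import pandas as pd

# Toy table standing in for a real dataset; the column names are illustrative.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 43],
    "income": [52000, 64000, 58000, None, 71000],
    "segment": ["a", "b", "a", "c", "b"],
})

print(df.dtypes)                          # schema: what type is each feature?
print(df.describe(include="all"))         # summary statistics per feature
print(df.isna().sum())                    # missing values per feature
print(df.select_dtypes("number").corr())  # pairwise correlations, numeric only
</syntaxhighlight>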
= <span id="Feature Selection"></span>Feature Selection =
<youtube>V0u6bxQOUJ8</youtube>
<youtube>_XOKz5VlTQY</youtube>
<youtube>arhdVDsPLVI</youtube>
<youtube>TsqTuwTKFSs</youtube>
<youtube>YaKMeAlHgqQ</youtube>
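The videos above survey approaches to choosing a smaller, more informative set of features. As one concrete, hedged illustration, the sketch below uses Recursive Feature Elimination (RFE), which is also linked in the reading list on this page; the synthetic dataset and the logistic-regression estimator are assumptions made for the example.

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 features, only 4 of which carry signal.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           n_redundant=2, random_state=0)

# Recursively drop the weakest feature until 4 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4)
selector.fit(X, y)

print("Selected feature mask:", selector.support_)
print("Feature ranking (1 = kept):", selector.ranking_)
</syntaxhighlight>

Any estimator that exposes coefficients or feature importances could stand in for the logistic regression here.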
== <span id="Sparse Coding - Feature Extraction"></span>Sparse Coding - Feature Extraction ==
YouTube search... ...Google search
- Feature selection | Wikipedia
- Notes on Feature Preprocessing: The What, the Why, and the How | Matthew Mayo - KDnuggets
- Evaluating Machine Learning Models
- Automated Machine Learning (AML) - AutoML
- Recursive Feature Elimination (RFE)
- Principal Component Analysis (PCA)
- Representation Learning
- Feature Engineering and Selection: A Practical Approach for Predictive Models | Max Kuhn and Kjell Johnson
- Jon Tupitza's Famous Jupyter Notebooks:
- AI Governance
- Data Science / Data Governance
- Benchmarks
- Data Preprocessing
- Feature Exploration/Learning ...inspection, data profiling, selection
- Data Quality ...validity, accuracy, cleaning, completeness, consistency, encoding, padding, augmentation, labeling, auto-tagging, normalization, standardization, and imbalanced data
- Bias and Variances
- Master Data Management (MDM) / Feature Store / Data Lineage / Data Catalog
- Privacy in Data Science
- Data Interoperability
- Excel - Data Analysis
- Visualization
- Tools: Paxata, Trifacta, alteryx, databricks, Qubole
A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all of the features for a single row into a numerical vector. Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature or one should be dropped. Sometimes people perform Principal Component Analysis (PCA) to convert correlated variables into a set of linearly uncorrelated variables.

Some of the transformations used to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract Year of Birth from Year of Death and you construct Age at Death, which is a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious.

Machine learning algorithms explained | Martin Heller - InfoWorld
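As a hedged illustration of the two ideas above, the sketch below constructs the Age at Death feature from Year of Birth and Year of Death, and uses PCA to turn two highly correlated features into linearly uncorrelated components; the column names and synthetic values are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Derived feature: Age at Death = Year of Death - Year of Birth.
df = pd.DataFrame({
    "year_of_birth": [1901, 1915, 1923, 1940],
    "year_of_death": [1978, 1990, 2001, 2010],
})
df["age_at_death"] = df["year_of_death"] - df["year_of_birth"]
print(df)

# Two nearly identical (highly correlated) features, decorrelated with PCA.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.95 * x1 + rng.normal(scale=0.1, size=200)
components = PCA(n_components=2).fit_transform(np.column_stack([x1, x2]))
print("Correlation between components:", round(np.corrcoef(components.T)[0, 1], 6))
</syntaxhighlight>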
Contents
- Feature Inspection
- Data Profiling
- Feature Selection
- Sparse Coding - Feature Extraction