Feature Exploration/Learning
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.youtube.com/results?search_query=ai+Feature+Exploration+Learning YouTube]
[https://www.quora.com/search?q=ai%20Feature%20Exploration%20Learning ... Quora]
[https://www.google.com/search?q=ai+Feature+Exploration+Learning ...Google search]
[https://news.google.com/search?q=ai+Feature+Exploration+Learning ...Google News]
[https://www.bing.com/news/search?q=ai+Feature+Exploration+Learning&qft=interval%3d%228%22 ...Bing News]
* [[Data Science]] ... [[Data Governance|Governance]] ... [[Data Preprocessing|Preprocessing]] ... [[Feature Exploration/Learning|Exploration]] ... [[Data Interoperability|Interoperability]] ... [[Algorithm Administration#Master Data Management (MDM)|Master Data Management (MDM)]] ... [[Bias and Variances]] ... [[Benchmarks]] ... [[Datasets]]
* [[Data Quality]] ...[[AI Verification and Validation|validity]], [[Evaluation - Measures#Accuracy|accuracy]], [[Data Quality#Data Cleaning|cleaning]], [[Data Quality#Data Completeness|completeness]], [[Data Quality#Data Consistency|consistency]], [[Data Quality#Data Encoding|encoding]], [[Data Quality#Zero Padding|padding]], [[Data Quality#Data Augmentation, Data Labeling, and Auto-Tagging|augmentation, labeling, auto-tagging]], [[Data Quality#Batch Norm(alization) & Standardization| normalization, standardization]], and [[Data Quality#Imbalanced Data|imbalanced data]]
* [[Evaluating Machine Learning Models]]
* [[Algorithm Administration#Automated Learning|Automated Learning]]
* [[Recursive Feature Elimination (RFE)]]
* [[Principal Component Analysis (PCA)]]
* [[Representation Learning]]
* [[Natural Language Processing (NLP)#Managed Vocabularies |Managed Vocabularies]]
* [[Excel - Data Analysis]]
* [[Development]] ...[[Development#AI Pair Programming Tools|AI Pair Programming Tools]] ... [[Analytics]] ... [[Visualization]] ... [[Diagrams for Business Analysis]]
* [https://en.wikipedia.org/wiki/Feature_selection Feature selection | Wikipedia]
* [https://www.kdnuggets.com/2018/10/notes-feature-preprocessing-what-why-how.html Notes on Feature Preprocessing: The What, the Why, and the How | Matthew Mayo - KDnuggets]
* [https://bookdown.org/max/FES/ Feature Engineering and Selection: A Practical Approach for Predictive Models | Max Kuhn and Kjell Johnson]
* [https://github.com/jontupitza Jon Tupitza's Famous Jupyter Notebooks:]
** [https://github.com/JonTupitza/Data-Science-Process/blob/master/05-Feature-Selection.ipynb Feature Selection Techniques]
* [[AI Governance]] / [[Algorithm Administration]]
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
* Tools:
** [https://www.qubole.com/solutions/by-project/ What’s Your Project? | Qubole]
** [https://www.paxata.com/ The Data Prep for AI Toolkit: Smarter ML Models Through Faster, More Accurate Data Prep | Paxata]
** [https://www.alteryx.com/e-book/age-badass-analyst The Age of The Badass Analyst | Alteryx]
A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all of the features for a single row into a numerical vector. Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis (PCA) to convert correlated variables into a set of linearly uncorrelated variables.

Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtracting Year of Birth from Year of Death constructs Age at Death, a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious. [https://www.infoworld.com/article/3394399/machine-learning-algorithms-explained.html Machine learning algorithms explained | Martin Heller - InfoWorld]
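The two ideas above — constructing a new feature by simple arithmetic on existing columns, and using PCA to turn a pair of highly correlated features into uncorrelated components — can be sketched in a few lines of Python. This is a toy illustration using scikit-learn; the sample values are invented, not drawn from any of the linked resources:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy records: year of birth and year of death for a handful of people.
year_of_birth = np.array([1900, 1915, 1932, 1940, 1955])
year_of_death = np.array([1975, 1980, 2001, 2018, 2020])

# Simple feature construction: Age at Death = Year of Death - Year of Birth.
age_at_death = year_of_death - year_of_birth

# Two highly correlated features (height in cm and the same height in inches).
height_cm = np.array([160.0, 172.0, 168.0, 181.0, 175.0])
height_in = height_cm / 2.54

# PCA converts the correlated pair into linearly uncorrelated components;
# because the two columns are perfectly correlated, virtually all of the
# variance collapses onto the first component.
X = np.column_stack([height_cm, height_in])
pca = PCA(n_components=2).fit(X)

print(age_at_death)                   # the constructed feature
print(pca.explained_variance_ratio_)  # first component carries nearly all variance
```

In practice one would keep only the first component (or simply drop one of the redundant columns), exactly the "combine or drop" choice the paragraph describes.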
Revision as of 21:03, 1 May 2023
Feature Selection
YouTube search... ...Google search
* Beginner's Guide to Feature Selection in Python | Sayak Paul ...Learn about the basics of feature selection and how to implement and investigate various feature selection techniques in Python
* Feature Selection For Machine Learning in Python | Jason Brownlee
* How to Perform Feature Selection with Categorical Data | Jason Brownlee
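Several of the guides above walk through feature selection in Python with scikit-learn. As an illustrative sketch of one such technique, Recursive Feature Elimination (RFE) repeatedly fits an estimator and discards the weakest feature until the requested number remains; the synthetic dataset and the choice of logistic regression here are assumptions for the example, not taken from the linked articles:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 features, only a few of which actually carry signal.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=2, random_state=0)

# Recursive Feature Elimination: fit, drop the feature with the smallest
# coefficient magnitude, refit, and repeat until 3 features remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3)
selector.fit(X, y)

print(selector.support_)   # boolean mask of the 3 retained features
print(selector.ranking_)   # rank 1 = selected; higher = eliminated earlier
```

The same `selector` can then be used as a transformer (`selector.transform(X)`) to reduce the dataset to the selected columns before training a final model.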
Sparse Coding - Feature Extraction