Feature Exploration/Learning


A feature is an individual measurable property or characteristic of a phenomenon being observed. The concept of a “feature” is related to that of an explanatory variable, which is used in statistical techniques such as linear regression. Feature vectors combine all of the features for a single row into a numerical vector.

Part of the art of choosing features is to pick a minimum set of independent variables that explain the problem. If two variables are highly correlated, either they need to be combined into a single feature, or one should be dropped. Sometimes people perform principal component analysis to convert correlated variables into a set of linearly uncorrelated variables. Some of the transformations that people use to construct new features or reduce the dimensionality of feature vectors are simple. For example, subtract Year of Birth from Year of Death and you construct Age at Death, which is a prime independent variable for lifetime and mortality analysis. In other cases, feature construction may not be so obvious. Machine learning algorithms explained | Martin Heller - InfoWorld
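The two transformations mentioned above are easy to make concrete. Here is a minimal sketch in Python; the column names and data are made up purely for illustration:

 import pandas as pd
 from sklearn.decomposition import PCA

 df = pd.DataFrame({
     "year_of_birth": [1900, 1925, 1950],
     "year_of_death": [1970, 2001, 2020],
     "height_cm": [170, 180, 165],
     "weight_kg": [70, 85, 60],  # roughly correlated with height
 })

 # Feature construction: Age at Death = Year of Death - Year of Birth
 df["age_at_death"] = df["year_of_death"] - df["year_of_birth"]

 # Correlated variables -> linearly uncorrelated principal components
 pca = PCA(n_components=2)
 components = pca.fit_transform(df[["height_cm", "weight_kg"]])
 print(df["age_at_death"].tolist())      # [70, 76, 70]
 print(pca.explained_variance_ratio_)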

Feature Examples

For example, if you were building a machine learning model to predict whether someone would like a particular movie, you might use features like the person's age, gender, and favorite genres of movies. You might also use features about the movie itself, such as the genre, director, and rating. Features are important because they allow machine learning models to learn about the world. By providing models with features, we can teach them to identify patterns and make predictions.

Here is an example of a feature in AI that a 7th grader might understand:

Imagine you are building a machine learning model to predict whether a student will pass or fail a math test. You might use the following features:

  • The student's grades on previous math tests
  • The student's attendance record in math class
  • The student's homework completion rate
  • The student's score on the math portion of the standardized test

Your machine learning model would learn to identify patterns in this data. For example, the model might learn that students who have high grades on previous math tests and good attendance are more likely to pass the test. The model could also learn that students who miss a lot of class or have incomplete homework are more likely to fail the test. Once your machine learning model is trained, you can use it to predict whether a new student is likely to pass or fail the math test. You can do this by providing the model with the student's features, such as their grades on previous math tests and their attendance record. The model will then use this information to make a prediction.
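A hedged sketch of that workflow using scikit-learn's logistic regression; the numbers are invented for illustration, and a real model would need many more students:

 from sklearn.linear_model import LogisticRegression

 # One row per student: [avg previous test grade, attendance rate,
 # homework completion rate, standardized math score]
 X = [
     [92, 0.98, 0.95, 88],
     [55, 0.70, 0.40, 51],
     [78, 0.90, 0.85, 74],
     [48, 0.60, 0.30, 45],
 ]
 y = [1, 0, 1, 0]  # 1 = passed, 0 = failed

 model = LogisticRegression().fit(X, y)

 # Predict for a new student from their features
 new_student = [[85, 0.95, 0.90, 80]]
 print(model.predict(new_student))        # e.g. [1]
 print(model.predict_proba(new_student))  # [P(fail), P(pass)]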

Feature Store

A feature store in AI is a system for managing and serving features to machine learning models. Features are measurable pieces of data that can be used to train and evaluate models. Feature stores provide a central repository for features, making them easier to discover, reuse, and manage. Feature stores are important because they can help to improve the quality, efficiency, and scalability of machine learning development and deployment. For example, feature stores can help to:

  • Reduce the time and effort required to develop and maintain machine learning models
  • Improve the performance and accuracy of machine learning models
  • Make machine learning models more reproducible and scalable
  • Ensure that machine learning models are using consistent and up-to-date data


Offerings

  • Continual Feature Store: Open source, designed for real-time machine learning
  • Databricks Feature Store: Fully integrated with Databricks
  • FEAST: Open source, cloud-native, scalable and performant
  • FeatureBase: Commercial, offered by Molecula, easy to use
  • Feathr: Open source, originally developed at LinkedIn, scalable and performant
  • Hopsworks Feature Store: Open source, versatile, offers open APIs
  • Jukebox Feature Store: Commercial, cloud-based, designed for real-time serving
  • Metarank: Open source, designed for machine learning ranking
  • Microsoft Azure Feature Store: Commercial, offered by Microsoft Azure, easy to use
  • Nexus Feature Store: Commercial, cloud-based, designed for large-scale machine learning
  • Salesforce Einstein Feature Store: Commercial, offered by Salesforce, easy to use
  • Vertex AI Feature Store: Commercial, offered by Google Cloud, part of the Vertex AI platform
  • Amazon SageMaker Feature Store: Commercial, offered by AWS, scalable and performant


Use Case

Returning to the student test example above, you could store those features in a feature store. This would make it easy to reuse the features across different machine learning models, and it would also make the features easier to manage over time. For example, if you wanted to add a new feature, such as the student's participation in math class, you could simply add it to the feature store.

Here are some other examples of when you might use a feature store:

  • You are building a machine learning model to predict whether a customer will churn (cancel their subscription). You could use a feature store to store features such as the customer's past purchase history, their engagement with your product, and their support tickets.
  • You are building a machine learning model to recommend products to customers. You could use a feature store to store features such as the customer's past purchase history, their browsing history, and their product ratings.
  • You are building a machine learning model to detect fraud. You could use a feature store to store features such as the customer's transaction history, their device information, and their location.

To use a feature store API, you would first need to create an account with the feature store provider. Once you have an account, you can use the API to create and manage features, as well as to query and serve features to your machine learning models.
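Not every feature store exposes a raw HTTP API; Feast, one of the open-source offerings listed above, is driven from a Python client. A minimal sketch, assuming a feature repository has already been configured and that it defines a hypothetical feature view named customer_features:

 from feast import FeatureStore

 # Assumes a directory containing feature_store.yaml; the feature view
 # and field names below are hypothetical.
 store = FeatureStore(repo_path=".")

 # Fetch the latest feature values for one customer for online inference
 response = store.get_online_features(
     features=[
         "customer_features:purchase_count_90d",
         "customer_features:support_tickets_30d",
     ],
     entity_rows=[{"customer_id": 12345}],
 )
 print(response.to_dict())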

Most feature store APIs provide the following functionality:

  • Feature management: Create, read, update, and delete features.
  • Feature transformation: Preprocess and transform features before serving them to machine learning models.
  • Feature serving: Serve features to machine learning models in real time or in batches.


To use the feature store API, you would typically send HTTP requests to the feature store server. The requests would specify the operation you want to perform (e.g., create a feature, query features, or serve features), as well as the relevant parameters.

For example, the following HTTP request would create a new feature called customer_id:

 POST /features HTTP/1.1
 Host: featurestore.example.com
 Content-Type: application/json

 {
   "name": "customer_id",
   "type": "string"
 }

The following HTTP request would query the feature store for the customer_id feature for a specific customer:

 GET /features/customer_id?customer_id=12345 HTTP/1.1
 Host: featurestore.example.com

The following HTTP request would serve the customer_id feature for a list of customers to a machine learning model:

 POST /features/customer_id/serve HTTP/1.1
 Host: featurestore.example.com
 Content-Type: application/json

 {
   "customer_ids": [12345, 67890, 24680]
 }

The feature store API would then return the appropriate response, depending on the operation you requested. For example, if you created a new feature, the API would return a confirmation message. If you queried the feature store for a feature, the API would return the value of the feature. If you served the feature to a machine learning model, the API would return a list of feature values.
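The same hypothetical endpoints can be scripted. Here is a short sketch using Python's requests library; featurestore.example.com and the paths are the placeholders from the examples above, not a real service:

 # Sketch of the hypothetical feature store API above, driven from Python.
 # The host and endpoint paths are placeholders, not a real service.
 import requests

 BASE = "https://featurestore.example.com"

 # Create a feature
 r = requests.post(f"{BASE}/features",
                   json={"name": "customer_id", "type": "string"})
 r.raise_for_status()

 # Query the feature for one customer
 r = requests.get(f"{BASE}/features/customer_id",
                  params={"customer_id": 12345})
 print(r.json())

 # Serve the feature for a batch of customers
 r = requests.post(f"{BASE}/features/customer_id/serve",
                   json={"customer_ids": [12345, 67890, 24680]})
 print(r.json())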

Feature store APIs are a powerful tool for developing and deploying machine learning models. By using a feature store API, you can improve the quality, efficiency, and scalability of your machine learning development and deployment.

Here are some additional tips for using a feature store API:

  • Use the API documentation to learn about the specific features and functionality that are available.
  • Start by using the API to perform basic operations, such as creating and reading features.
  • Once you have a good understanding of the API, you can start using it to perform more complex operations, such as transforming and serving features.
  • If you have any questions or problems using the API, contact the feature store provider for support.

Feature Exploration

AI Explained: Feature Importance
Fiddler Labs. Learn more about feature importance, the different techniques, and the pros and cons of each. #ExplainableAI

Visualize your Data with Facets
In this episode of AI Adventures, Yufeng explains how to use Facets, a project from Google Research, to visualize your dataset, find interesting relationships, and clean your data for machine learning. Learn more through our hands-on labs → https://goo.gle/38ZUlTD Associated Medium post "Visualize your data with Facets": https://goo.gl/7FDWwk Get Facets on GitHub: https://goo.gl/Xi8dTu Play with Facets in the browser: https://goo.gl/fFLCEV Watch more AI Adventures on the playlist: https://goo.gl/UC5usG Subscribe to get all the episodes as they come out: https://goo.gl/S0AS51 #AIAdventures

Stephen Elston - Data Visualization and Exploration with Python
Visualization is an essential method in any data scientist's toolbox: a key data exploration technique and a powerful tool for presenting results and understanding problems with analytics. Attendees are introduced to the Python visualization packages Matplotlib, Pandas, and Seaborn, working in the Jupyter notebook. Visualization of complex real-world datasets presents a number of challenges to data scientists. By developing skills in data visualization, data scientists can confidently explore and understand the relationships in complex data sets. Using the Python matplotlib, pandas plotting, and seaborn packages, attendees will learn to:

  • Explore complex data sets with visualization, to develop understanding of the inherent relationships.
  • Create multiple views of data to highlight different aspects of the inherent relationships, with different graph types.
  • Use plot aesthetics to project multiple dimensions.
  • Apply conditioning or faceting methods to project multiple dimensions.

www.pydata.org

The Best Way to Visualize a Dataset Easily
Siraj Raval. In this video, we'll visualize a dataset of body metrics collected by giving people a fitness tracking device. We'll go over the steps necessary to preprocess the data, then use a technique called t-SNE to reduce the dimensionality of our data so we can visualize it.
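A minimal sketch of the t-SNE step described above, with random data standing in for the body-metrics dataset:

 # Reduce high-dimensional features to 2-D with t-SNE for plotting.
 # Random data stands in for a real dataset.
 import numpy as np
 from sklearn.manifold import TSNE
 import matplotlib.pyplot as plt

 X = np.random.rand(200, 10)  # 200 samples, 10 features

 embedded = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)

 plt.scatter(embedded[:, 0], embedded[:, 1], s=10)
 plt.title("t-SNE projection")
 plt.show()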

Feature Selection


Pre-Modeling: Data Preprocessing and Feature Exploration in Python
April Chen. Data preprocessing and feature exploration are crucial steps in a modeling workflow. In this tutorial, I will demonstrate how to use Python libraries such as scikit-learn, statsmodels, and matplotlib to perform pre-modeling steps. Topics that will be covered include: missing values, variable types, outlier detection, multicollinearity, interaction terms, and visualizing variable distributions. Finally, I will show the impact of utilizing these techniques on model performance. Interactive Jupyter notebooks will be provided.
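Two of the pre-modeling steps mentioned above, missing values and multicollinearity, in a minimal sketch; the DataFrame and column names are hypothetical:

 import numpy as np
 import pandas as pd
 from sklearn.impute import SimpleImputer

 df = pd.DataFrame({
     "income": [50_000, np.nan, 72_000, 61_000],
     "age":    [34, 41, np.nan, 29],
     "age_sq": [34**2, 41**2, np.nan, 29**2],  # deliberately collinear with age
 })

 # Missing values: impute each column with its median
 df[:] = SimpleImputer(strategy="median").fit_transform(df)

 # Multicollinearity: inspect the pairwise correlation matrix
 print(df.corr().round(2))  # age and age_sq will be highly correlated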

Recent Advances in Feature Selection: A Data Perspective part 1
Authors: Huan Liu, Department of Computer Science and Engineering, Arizona State University; Jundong Li, School of Computing, Informatics and Decision Systems Engineering, Arizona State University; Jiliang Tang, Department of Computer Science and Engineering, Michigan State University. Feature selection, as a data preprocessing strategy, is imperative in preparing high-dimensional data for a myriad of data mining and machine learning tasks. By selecting a subset of features of high quality, feature selection can help build simpler and more comprehensible models, improve data mining performance, and prepare clean and understandable data. The proliferation of big data in recent years has presented substantial challenges and opportunities for feature selection research. In this tutorial, we provide a comprehensive overview of recent advances in feature selection research from a data perspective. After we introduce some basic concepts, we review state-of-the-art feature selection algorithms and recent techniques of feature selection for structured, social, heterogeneous, and streaming data. In particular, we also discuss the role of feature selection in the context of deep learning and how feature selection is related to feature engineering. To facilitate and promote research in this community, we present an open-source feature selection repository, scikit-feature, that contains most of the popular feature selection algorithms. We conclude our discussion with some open problems and pressing issues in future research.

Alexandru Agachi - Introductory tutorial on data exploration and statistical models
This tutorial will focus on analyzing a dataset and building statistical models from it. We will describe and visualize the data, then build and analyze statistical models, including linear and logistic regression, as well as chi-square tests of independence. We will then apply four machine learning techniques to the dataset: decision trees, random forests, lasso regression, and clustering. The tutorial covers exploring a dataset with the pandas/StatsModels/scikit-learn framework:

  1. Descriptive statistics. We will describe each variable depending on its type, as well as the dataset overall.
  2. Visualization for categorical and quantitative variables. We will learn effective visualization techniques for each type of variable in the dataset.
  3. Statistical modeling for quantitative and categorical, explanatory and response variables: chi-square tests of independence, linear regression, and logistic regression. We will learn to test hypotheses and to interpret our models, their strengths, and their limitations.
  4. Application of machine learning techniques, including decision trees, random forests, lasso regression, and clustering. We will explore the advantages and disadvantages of each of these techniques, as well as apply them to the dataset.

This is an applied, introductory tutorial on the statistical exploration of a dataset and the building of statistical models from it, with an accompanying IPython notebook. www.pydata.org

Feature Selection in Machine learning| Variable selection| Dimension Reduction
Feature selection is an important step in the machine learning model building process. The performance of a model depends on the following: the choice of algorithm and the selection of features.

How do I select features for Machine Learning?
Selecting the "best" features for your Machine Learning model will result in a better-performing, easier-to-understand, and faster-running model. But how do you know which features to select? In this video, I'll discuss 7 feature selection tactics used by the pros that you can apply to your own model. At the end, I'll give you my top 3 tips for effective feature selection.
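One widely used tactic, univariate selection, can be sketched in a few lines with scikit-learn; it is shown here on the built-in iris data, not on any dataset from the videos:

 # Univariate feature selection: keep the k features most associated
 # with the target according to an ANOVA F-test.
 from sklearn.datasets import load_iris
 from sklearn.feature_selection import SelectKBest, f_classif

 X, y = load_iris(return_X_y=True)

 selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
 print(selector.scores_.round(1))           # one score per original feature
 print(selector.get_support(indices=True))  # indices of the selected features
 X_reduced = selector.transform(X)          # shape (150, 2)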


Sparse Coding - Feature Extraction

Neural networks [8.1] : Sparse coding - definition
Hugo Larochelle

Neural networks [8.8] : Sparse coding - feature extraction
Hugo Larochelle
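A minimal sketch of sparse coding as feature extraction, in the spirit of the lectures above: learn an overcomplete dictionary, then use each sample's sparse code as its feature vector. scikit-learn's DictionaryLearning stands in here for the exact methods covered in the videos:

 # Sparse coding: learn an overcomplete dictionary, then represent each
 # sample by its sparse code and use those codes as features.
 import numpy as np
 from sklearn.decomposition import DictionaryLearning

 X = np.random.rand(100, 16)  # stand-in data: 100 samples, 16 dimensions

 dico = DictionaryLearning(n_components=32,  # overcomplete: 32 > 16
                           transform_algorithm="lasso_lars",
                           transform_alpha=0.1,
                           random_state=0)
 codes = dico.fit_transform(X)  # sparse codes, shape (100, 32)
 print(np.mean(codes != 0))     # fraction of non-zero coefficients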