Revision as of 08:17, 21 September 2020
Common Mistakes
YouTube search...
...Google search
Common mistakes made in Machine Learning Models
Analytics University. You will learn the common mistakes people make while building machine learning models. Machine learning models are easy to build but need attention to detail.
The common mistakes include:
* Taking the default loss function for granted
* Using one algorithm/method for all problems
* Ignoring outliers
* Not properly handling cyclical features
* Applying L1/L2 regularization without standardization
* Interpreting coefficients from linear or logistic regressions as feature importance
Analytics Study Pack: http://analyticuniversity.com/
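One of the mistakes listed above, mishandling cyclical features, has a well-known fix that is worth sketching (this example is not from the video itself, and `encode_cyclical` is a hypothetical helper name): encode the feature as sine and cosine of its angle around the cycle, so that values at the end of the cycle land next to values at the start.

```python
import numpy as np

def encode_cyclical(value, period):
    """Map a cyclical feature (e.g. hour of day) onto the unit circle
    so the end of the cycle sits next to its start."""
    radians = 2 * np.pi * value / period
    return np.sin(radians), np.cos(radians)

# Hours 23 and 0 are adjacent in time but 23 units apart as raw numbers.
sin23, cos23 = encode_cyclical(23, 24)
sin0, cos0 = encode_cyclical(0, 24)

# Distance on the unit circle is small, matching the real one-hour gap.
gap = np.hypot(sin23 - sin0, cos23 - cos0)
```

Feeding both the sine and cosine columns to the model preserves the full position on the cycle; either one alone is ambiguous (two different hours can share the same sine value).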
Cameron Davidson-Pilon: Mistakes I've Made
PyData Seattle 2015
In this humbling talk, I'll describe some mistakes I've made in working in statistics and machine learning. I'll describe my original intentions, symptoms, how I eventually discovered the mistake, and possibly even a solution. The topics include mistakes in A/B testing, Kaggle competitions, data collection, and other fields. I'll also introduce some interesting statistical and machine learning counterexamples: examples where our original intuition fails, and solutions to these examples.
AI Simplified: Top 3 Rookie Mistakes in Machine Learning
John Boersma, Director of Education at DataRobot, shares his list of the top three rookie mistakes in machine learning for our AI Simplified series. Learn more about simplified AI terms on our wiki page: http://www.datarobot.com/wiki
Top 10 Machine Learning Pitfalls – Mark Landry
Overfitting, misread data, NAs, collinear column elimination and other common issues play havoc in the day of a practicing data scientist. In this talk, Mark Landry, one of the world's leading Kagglers, reviews the top 10 common pitfalls and the steps to avoid them. View more talks from H2O Open Tour Dallas: http://open.h2o.ai/dallas.html Powered by the open source machine learning software H2O.ai. Contributors welcome at http://github.com/h2oai To access slides on H2O open source machine learning software, go to: http://www.slideshare.net/0xdata
AI Failures
"Machine learning failures - for art!" by Janelle Shane
It's tough to write a machine learning algorithm that works well. Overfitting, noisy data, a problem that's too general - these problems plague the programmers who apply these algorithms to financial modeling and image labeling. But mistakes can also be fun. At her humor blog AIweirdness.com, Janelle Shane posts examples of machine learning algorithms going terribly, hilariously wrong. Here, she talks about some common machine learning mistakes - and how to use them deliberately.
Lessons Learned from Machine Learning Gone Wrong - Janelle Shane
About Janelle: Janelle Shane trains neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, she intends the humor. The neural networks are just doing their best to understand what's going on. Currently located on the occupied land of the Arapahoe Nation.