Regularization

Revision as of 23:57, 12 July 2019


  1. Regularization
  2. Boosting
  3. Multiclassifiers; Ensembles and Hybrids; Bagging, Boosting, and Stacking

Good practices for addressing the Overfitting Challenge:

  * Data Augmentation
  * Early Stopping
  * Ridge Regression

Regularization is a technique that makes slight modifications to the learning algorithm so that the model generalizes better, which in turn improves its performance on unseen data. An Overview of Regularization Techniques in Deep Learning (with Python code) | Shubham Jain (http://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/)
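The idea above can be sketched with a minimal example of L2 (ridge) regularization for linear regression, one of the techniques listed on this page. This is an illustrative sketch using only NumPy, not the implementation from the linked article; the function name `fit_linear` and the penalty strength `lam=5.0` are chosen here for demonstration.

```python
import numpy as np

# Sketch of L2 (ridge) regularization for linear regression.
# The penalty term lam * ||w||^2 shrinks the weights, trading a little
# training-set fit for better generalization on unseen data.

def fit_linear(X, y, lam=0.0):
    """Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y.
    With lam=0.0 this reduces to ordinary least squares."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Synthetic data: only the first two features actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=20)

w_ols = fit_linear(X, y)             # unregularized fit
w_ridge = fit_linear(X, y, lam=5.0)  # regularized fit

# The penalized weights always have a smaller norm than the
# unpenalized ones when lam > 0.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
```

The same shrinkage effect is what `Ridge` in scikit-learn implements (its `alpha` parameter plays the role of `lam` here); the closed form is shown only to make the penalty term explicit.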