Regularization

 
[http://www.google.com/search?q=Regularization+deep+machine+learning+ML ...Google search]
 
Good practices for addressing the [[Overfitting Challenge]]:
  
* add more data
* use [[Data Augmentation]]
* use [[Batch Normalization]]
* use architectures that generalize well
* reduce architecture complexity
* add [[Regularization]]
** [[L1 and L2 Regularization]] - update the general cost function by adding a penalty term, known as the regularization term (see the sketch after this list)
** [[Dropout]] - at each training iteration, randomly select some nodes and temporarily remove them, along with all of their incoming and outgoing connections (see the sketch after this list)
** [[Data Augmentation]]
** [[Early Stopping]]
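
A minimal NumPy sketch of what the regularization term looks like; the data X, targets y, weights, and penalty strength lam are illustrative placeholders, not taken from the linked article:

<pre>
import numpy as np

def cost_with_l2(weights, X, y, lam=0.01):
    """Mean squared error plus an L2 regularization term on the weights."""
    predictions = X @ weights                  # linear-model predictions
    mse = np.mean((predictions - y) ** 2)      # original (unregularized) cost
    l2_penalty = lam * np.sum(weights ** 2)    # regularization term added to the cost
    return mse + l2_penalty

def cost_with_l1(weights, X, y, lam=0.01):
    """Same cost, but with an L1 penalty (absolute weights), which encourages sparsity."""
    predictions = X @ weights
    mse = np.mean((predictions - y) ** 2)
    l1_penalty = lam * np.sum(np.abs(weights))
    return mse + l1_penalty

# Illustrative data: 5 samples, 3 features
X = np.random.rand(5, 3)
y = np.random.rand(5)
w = np.random.rand(3)
print(cost_with_l2(w, X, y), cost_with_l1(w, X, y))
</pre>

The penalty only shapes the weights during training, discouraging large values (L2) or many non-zero values (L1); it is not part of the prediction itself.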
  
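A minimal NumPy sketch of (inverted) dropout applied to one layer's activations during training; the drop probability and array names are illustrative placeholders:

<pre>
import numpy as np

def dropout(activations, drop_prob=0.5, training=True):
    """Randomly zero out nodes during training (inverted dropout).

    Each node is kept with probability (1 - drop_prob); surviving
    activations are scaled up so the expected output is unchanged,
    and no scaling is needed at test time.
    """
    if not training or drop_prob == 0.0:
        return activations  # at test time all nodes are used
    keep_prob = 1.0 - drop_prob
    mask = np.random.rand(*activations.shape) < keep_prob  # nodes kept this iteration
    return activations * mask / keep_prob

# Example: roughly half of the hidden-node activations are dropped each call
hidden = np.array([0.2, 1.5, -0.7, 3.0])
print(dropout(hidden, drop_prob=0.5))
</pre>

Because a different random subset of nodes is removed at every iteration, the network cannot rely too heavily on any single node, which is what helps it generalize.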
  
 
Regularization is a technique which makes slight modifications to the learning algorithm such that the model generalizes better. This in turn improves the model's performance on unseen data as well. [http://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/ An Overview of Regularization Techniques in Deep Learning (with Python code) | Shubham Jain]
 