L1 and L2 Regularization

 
[http://www.youtube.com/results?search_query=L1+L2+Regularization+Dropout+Overfitting Youtube search...]

[http://www.google.com/search?q=L1+L2+Regularization+Dropout+deep+machine+learning+ML ...Google search]


Mathematically speaking, L1 regularization adds the sum of the absolute values of the weights to the loss function as a penalty term, discouraging the coefficients from fitting the training data so perfectly that the model overfits. L2 regularization is similar, except that its penalty term is the sum of the squares of the weights.
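As a minimal sketch of the two penalty terms (using NumPy, with a hypothetical weight vector and an assumed regularization strength for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical weight vector, for illustration only
w = np.array([0.5, -1.2, 3.0, 0.0])

lam = 0.01  # assumed regularization strength (a hyperparameter)

l1_penalty = lam * np.sum(np.abs(w))  # L1: sum of absolute values of the weights
l2_penalty = lam * np.sum(w ** 2)     # L2: sum of the squares of the weights

# Either penalty is added to the ordinary training loss, e.g.:
#   total_loss = data_loss + l1_penalty  (L1 / Lasso-style)
#   total_loss = data_loss + l2_penalty  (L2 / Ridge-style)
print(l1_penalty, l2_penalty)
</syntaxhighlight>

Because the L1 penalty grows linearly in each weight, it tends to drive small weights exactly to zero (producing sparse models), while the L2 penalty shrinks all weights smoothly toward zero without eliminating them.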


Good practices for addressing the Overfitting Challenge: