Difference between revisions of "Elastic Net Regression"

<youtube>1dKRdX9bfIo</youtube>
<youtube>jbwSCwoT51M</youtube>

Revision as of 01:23, 13 July 2019


Elastic-Net Regression combines Lasso Regression with Ridge Regression to give you the best of both worlds. It works well when there are many useless variables that need to be removed from the equation, and it works well when there are many useful variables that need to be retained. It also does better than either method alone when it comes to handling correlated variables. [StatQuest]
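The behavior described above can be sketched with scikit-learn's `ElasticNet`. This is a minimal illustration on synthetic data (the feature construction, `alpha`, and `l1_ratio` values here are assumptions for demonstration, not from the original text): two strongly correlated useful features plus several useless noise features. The `l1_ratio` parameter blends the two penalties (1.0 is pure Lasso, 0.0 is pure Ridge).

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Hypothetical synthetic data: two highly correlated useful features
# plus five useless noise features.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # nearly identical to x1
noise = rng.normal(size=(n, 5))          # useless variables
X = np.column_stack([x1, x2, noise])
y = 3 * x1 + 3 * x2 + rng.normal(scale=0.1, size=n)

# l1_ratio=0.5 mixes the Lasso (L1) and Ridge (L2) penalties equally.
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)
```

Where pure Lasso tends to keep one of a correlated pair and drop the other, the ridge component of the elastic net spreads weight across both correlated features while the lasso component still shrinks the noise coefficients toward zero.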



[Figure: Bridge penalty vs. Elastic Net regularization, two panels described below]

  • Left plot: Elastic net gradient vs. bridge regression weight along each dimension - The x-axis represents one component of a weight vector w∗ selected by bridge regression. The y-axis represents the corresponding component of the elastic net gradient, evaluated at w∗. Note that the weights are multidimensional, but we're only looking at the weights/gradient along a single dimension.
  • Right plot: Elastic net changes to bridge regression weights (2d) - Each point represents a two-dimensional weight vector w∗ selected by bridge regression. For each choice of w∗, a vector is plotted pointing in the direction opposite the elastic net gradient, with magnitude proportional to that of the gradient. That is, the plotted vectors show how the elastic net wants to change the bridge regression solution. Bridge penalty vs. Elastic Net regularization | user20160
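The penalty part of the elastic-net gradient used in these plots can be sketched directly. This is a minimal illustration (the penalty weights `lam1`, `lam2` and the example weight vector `w_star` are assumptions, and the L1 term uses the usual sign-based subgradient); it shows only the penalty's contribution, not the data-fit term of the full objective:

```python
import numpy as np

def elastic_net_penalty_gradient(w, lam1=1.0, lam2=1.0):
    """(Sub)gradient of the elastic-net penalty lam1*||w||_1 + lam2*||w||_2^2."""
    return lam1 * np.sign(w) + 2 * lam2 * w

# Hypothetical 2d weights w* (e.g. a bridge-regression solution).
# The negative gradient is the direction the elastic-net penalty
# would push each component, as in the right-hand plot.
w_star = np.array([0.5, -0.2])
step = -elastic_net_penalty_gradient(w_star)
print(step)  # → [-2.   1.4]
```

Note how both components are pushed toward zero: the L1 term contributes a constant-magnitude pull (sign of w), while the L2 term contributes a pull proportional to w itself.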