Loss

 
* [http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html Loss Functions | ML Cheatsheet]
 
----
 

YouTube search... ...Google search

There are many options for loss in TensorFlow (Keras). The actual optimized objective is the mean of the loss function's output across all datapoints. A loss function measures the distance between a model's predictions and the ground-truth labels; this is the distance (loss value) that the network aims to minimize, and the lower this value, the better the current model describes the training data set. Loss is one of the two parameters required to compile a model, alongside the optimizer. [http://keras.io/losses/ See here for a list of Keras loss functions.]
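
As a concrete illustration of that distance, here is a small sketch (not from the original page; it assumes NumPy and the usual definition of mean squared error):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])     # ground-truth labels
y_pred = np.array([1.5, 1.5, 3.0])     # model predictions
# Mean squared error: the average squared distance over all datapoints.
mse = np.mean((y_true - y_pred) ** 2)  # (0.25 + 0.25 + 0.0) / 3 = 0.1666...
print(mse)

The lower this number, the closer the predictions sit to the labels, which is exactly what training tries to drive down.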


model.compile(optimizer='sgd', loss='mean_squared_error')
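
The compile call above is the minimum; the sketch below (not taken from the original page, and assuming TensorFlow 2.x with its bundled tf.keras API) shows the loss being supplied at compile time and the reported loss agreeing with the mean over all datapoints:

import numpy as np
import tensorflow as tf

# Toy regression data: 100 samples, 3 features, 1 target each.
x = np.random.rand(100, 3).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# A single-layer model; the loss is one of the two compile parameters.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mean_squared_error')

# evaluate() reports the loss averaged over all datapoints...
reported = model.evaluate(x, y, verbose=0)

# ...which matches averaging the per-sample squared errors by hand.
per_sample = tf.keras.losses.mean_squared_error(y, model.predict(x, verbose=0))
print(reported, float(tf.reduce_mean(per_sample)))

Optimizing a different objective is a matter of swapping the loss argument, for example 'mean_absolute_error' for regression that is less sensitive to outliers.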