Loss
- Optimizer Functions
- Cross-Entropy Loss
- Objective vs. Cost vs. Loss vs. Error Function
- Common Loss functions in machine learning | Ravindra Parmar - Towards Data Science
- Loss Functions Explained | Siraj Raval
- Loss Functions | ML Cheatsheet
There are many loss options in TensorFlow (Keras). The actual optimized objective is the mean of the per-sample loss values across all datapoints. A loss function measures the distance between a model's predictions and the ground-truth labels. This is the distance (loss value) that the network aims to minimize; the lower this value, the better the current model describes our training data set. Click here for a list of Keras loss functions. Loss is one of the two parameters required to compile a model, the other being the optimizer...
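A minimal sketch of the point above, assuming a toy binary-classification model and random data (both hypothetical, not from the source): the loss is passed to compile() alongside the optimizer, Keras reports the mean loss over the datapoints each epoch, and training should drive that value down.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 4 features, binary 0/1 labels.
x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")

# Small illustrative model (assumed architecture, not from the source).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Loss and optimizer are the two parameters supplied to compile();
# the optimizer minimizes the mean of the per-sample loss values.
model.compile(optimizer="adam", loss="binary_crossentropy")

history = model.fit(x, y, epochs=5, verbose=0)
print(history.history["loss"])  # the reported loss should trend downward
```

The string "binary_crossentropy" is just one of the many built-in Keras losses; any other loss identifier or callable could be swapped in the same way.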