Loss
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
* [[Objective vs. Cost vs. Loss vs. Error Function]]
 
* [[AI Solver]] ... [[Algorithms]] ... [[Algorithm Administration|Administration]] ... [[Model Search]] ... [[Discriminative vs. Generative]] ... [[Optimizer]] ... [[Train, Validate, and Test]]
 
* [[Cross-Entropy Loss]]
 
 
* [http://towardsdatascience.com/common-loss-functions-in-machine-learning-46af0ffc4d23 Common Loss functions in machine learning | Ravindra Parmar - Towards Data Science]
 
* [http://github.com/llSourcell/loss_functions_explained Loss Functions Explained] | [[Creatives#Siraj Raval|Siraj Raval]]



There are many options for loss in TensorFlow (Keras). The actual optimized objective is the mean of the loss values across all datapoints. A loss function measures the distance between a model's predictions and the ground-truth labels. This is the distance (loss value) that the network aims to minimize: the lower this value, the better the current model describes the training data set. See the Keras documentation for a list of available loss functions. Loss is one of the two parameters required to compile a model:



model.compile(optimizer='sgd', loss='mean_squared_error')
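
Expanding that one-liner, here is a minimal sketch of the full compile-and-train flow, assuming TensorFlow 2.x with the Keras API; the toy data, layer sizes, and epoch count are illustrative assumptions, not part of the original example:

import numpy as np
import tensorflow as tf

# Illustrative toy regression data: 100 samples, 4 features, 1 target.
X = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# A small feed-forward model; the layer sizes here are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Optimizer and loss are the two parameters passed to compile().
model.compile(optimizer="sgd", loss="mean_squared_error")

# Keras reports the loss as the mean over all datapoints; gradient
# descent adjusts the weights to drive this value down each epoch.
history = model.fit(X, y, epochs=5, verbose=0)
print(history.history["loss"])  # per-epoch mean loss values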