Loss
YouTube search... ...Google search
- Backpropagation ... FFNN ... Forward-Forward ... Activation Functions ... Softmax ... Loss ... Boosting ... Gradient Descent ... Hyperparameter ... Manifold Hypothesis ... PCA
- AI Solver ... Algorithms ... Administration ... Model Search ... Discriminative vs. Generative ... Optimizer ... Train, Validate, and Test
- Cross-Entropy Loss
- Objective vs. Cost vs. Loss vs. Error Function
- Common Loss functions in machine learning | Ravindra Parmar - Towards Data Science
- Loss Functions Explained | Siraj Raval
- Loss Functions | ML Cheatsheet
There are many loss options in TensorFlow (Keras). The actual optimized objective is the mean of the per-datapoint loss values across the output array. A loss function measures the distance between a model's predictions and the ground-truth labels; this is the distance (loss value) that the network aims to minimize, and the lower this value, the better the current model describes our training data set. See the Keras documentation for a list of Keras loss functions. Loss is one of the two arguments required to compile a model, the other being the optimizer.
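A minimal sketch, assuming TensorFlow 2.x, of how a loss is supplied at compile time and then reduced to a single scalar objective; the layer sizes, the mean-squared-error choice, and the example tensors are illustrative assumptions, not from the article:

```python
# Minimal sketch, assuming TensorFlow 2.x; the architecture and the
# "mse" loss choice are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),                    # 8 input features (assumed)
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                      # single regression output
])

# Loss is one of the two arguments required by compile(); the other is
# the optimizer. Keras reduces the per-example losses to their mean, and
# that scalar is the objective the optimizer minimizes.
model.compile(optimizer="adam", loss="mse")

# Loss functions can also be called directly on tensors:
y_true = tf.constant([[1.0], [0.0]])
y_pred = tf.constant([[0.9], [0.2]])
per_example = tf.keras.losses.mse(y_true, y_pred)  # squared error per example
print(float(tf.reduce_mean(per_example)))          # mean over the batch -> 0.025
```

The string "mse" is shorthand for `tf.keras.losses.MeanSquaredError()`; passing the loss object instead allows further configuration, such as a different batch reduction.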