Evaluation - Measures



== Error Metric ==


Predictive modeling works on a constructive-feedback principle: you build a model, get feedback from metrics, make improvements, and continue until you achieve a desirable accuracy. Evaluation metrics explain the performance of a model. An important aspect of evaluation metrics is their capability to discriminate among model results. (Source: 7 Important Model Evaluation Error Metrics Everyone Should Know | Tavish Srivastava)

=== Confusion Matrix ===

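The page gives no worked example here, so the following is a minimal sketch, assuming hypothetical labels y_true and y_pred, of how the four cells of a binary confusion matrix are tallied:

<syntaxhighlight lang="python">
# Minimal sketch: tally the four cells of a binary confusion matrix.
# y_true / y_pred are hypothetical example labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

print("            predicted +  predicted -")
print(f"actual +    TP = {tp}       FN = {fn}")
print(f"actual -    FP = {fp}       TN = {tn}")
</syntaxhighlight>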

=== Precision & Recall ===


Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved out of the total number of relevant instances. Both precision and recall are therefore based on an understanding and measure of relevance. (Source: Precision and recall | Wikipedia)

[[File:Precisionrecall.svg|525px]]
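As an illustrative sketch (not from the original page), both measures fall directly out of the confusion-matrix counts; tp, fp, and fn below reuse the hypothetical counts from the Confusion Matrix sketch above:

<syntaxhighlight lang="python">
# Sketch: precision and recall from confusion-matrix counts.
# tp, fp, fn are the hypothetical counts tallied in the sketch above.
tp, fp, fn = 3, 1, 1

precision = tp / (tp + fp)  # fraction of predicted positives that are relevant
recall    = tp / (tp + fn)  # fraction of actual positives that were retrieved

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.75, 0.75
</syntaxhighlight>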

=== Accuracy ===


Accuracy in classification problems is the number of correct predictions made by the model divided by the total number of predictions made.

http://cdn-images-1.medium.com/max/800/1*5XuZ_86Rfce3qyLt7XMlhw.png

<youtube>iIjtgrjgAug</youtube>
<youtube>g3sxDtlGlAM</youtube>
<youtube>j-EB6RqqjGI</youtube>
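The definition reduces to a one-liner over the same hypothetical confusion-matrix counts used above:

<syntaxhighlight lang="python">
# Sketch: accuracy = correct predictions / all predictions.
tp, tn, fp, fn = 3, 3, 1, 1  # hypothetical confusion-matrix counts from above

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy = {accuracy:.2f}")  # 0.75
</syntaxhighlight>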

=== F1 Score (F-Measure) ===

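The F1 score is the harmonic mean of precision and recall, so it rewards models that balance the two rather than maximizing one at the expense of the other. A minimal sketch, reusing the hypothetical precision and recall values from above:

<syntaxhighlight lang="python">
# Sketch: F1 is the harmonic mean of precision and recall.
precision, recall = 0.75, 0.75  # hypothetical values from the sketch above

f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # 0.75
</syntaxhighlight>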

=== Receiver Operator Curves (ROC) and Area Under the Curve (AUC) ===

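An ROC curve plots the true positive rate against the false positive rate as the classification threshold is swept; the AUC summarizes the curve in a single number. A short sketch, assuming scikit-learn is installed and using hypothetical labels y_true and positive-class scores y_score:

<syntaxhighlight lang="python">
# Sketch (assumes scikit-learn is installed): ROC curve and AUC from
# hypothetical true labels and predicted positive-class scores.
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.5]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

print("FPR:", fpr)
print("TPR:", tpr)
print(f"AUC = {auc:.2f}")
</syntaxhighlight>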

=== Example Use: Tradeoffs ===

'Sensitivity' & 'Specificity': sensitivity (the true positive rate) is the proportion of actual positives that are correctly identified, TP / (TP + FN); specificity (the true negative rate) is the proportion of actual negatives that are correctly identified, TN / (TN + FP).

'True Positive Rate' & 'False Positive Rate': an ROC curve plots the true positive rate, TP / (TP + FN), against the false positive rate, FP / (FP + TN) = 1 - specificity, as the classification threshold varies; raising the threshold trades sensitivity for specificity.
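A small sketch of this tradeoff, using the same hypothetical y_true/y_score as in the ROC sketch above: sweeping the decision threshold shows sensitivity falling as specificity rises.

<syntaxhighlight lang="python">
# Sketch: sensitivity/specificity tradeoff as the decision threshold varies.
# y_true / y_score are hypothetical labels and positive-class scores.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.5]

for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # 1 - false positive rate
    print(f"threshold {threshold}: sensitivity {sensitivity:.2f}, "
          f"specificity {specificity:.2f}")
</syntaxhighlight>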