Precision = Positive Predictive Value
- When the test is positive, the probability you have the condition
- Or in object detection, the probability that a predicted identification is correct
Recall = Sensitivity
- When you have the condition, the probability that the test identifies it
- Or in object detection, the probability that an object is identified
Accuracy
- In machine learning, the proportion of all predictions (positive and negative) that are correct
F1 score
- In object detection, an identification can be:
- True positive (the identification is correct)
- False positive (the identification is not an object)
- False negative (the object has not been identified)
- However, there are no ‘true negatives’, as an object is either identified or it is not
Because there are no true negatives, accuracy is not a good performance measure here, and the F1 score is often used instead. The F1 score is the harmonic mean of precision and recall (for ratios, the harmonic mean is used rather than the arithmetic mean).
\( F1 = \frac{2}{Recall^{-1} + Precision^{-1}} = 2 \cdot\frac{Precision\cdot{Recall}}{Precision + Recall} = \frac{2\cdot{TP}}{2\cdot{TP} + FP + FN}\)
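As a minimal sketch (not from the source; the counts are made up for illustration), precision, recall, and F1 can be computed directly from TP, FP and FN, and both forms of the F1 formula agree:

```python
# Sketch: precision, recall and F1 from hypothetical detection counts.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)   # correct detections / all detections made
    recall = tp / (tp + fn)      # correct detections / all real objects
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Assumed counts: 80 correct detections, 10 spurious detections, 20 missed objects.
p, r, f1 = precision_recall_f1(tp=80, fp=10, fn=20)
print(p, r, f1)                        # ≈ 0.889, 0.800, 0.842
print(2 * 80 / (2 * 80 + 10 + 20))     # count form: 160/190 ≈ 0.842, same value
```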
or in general (the F1 score is the case \( \beta = 1 \)):
\( F_\beta = \frac{1 + \beta^{2}}{\beta^{2} \cdot Recall^{-1} + Precision^{-1}} = \frac{(1 + \beta^2) \cdot Precision \cdot Recall}{\beta^2 \cdot Precision + Recall} = \frac{(1 + \beta^2) \cdot TP}{(1 + \beta^2) \cdot TP + \beta^2 \cdot FN + FP} \)
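The general formula can likewise be written as a short sketch from the counts (again with hypothetical numbers; \( \beta > 1 \) weights recall more heavily, \( \beta < 1 \) weights precision more heavily):

```python
# Sketch: general F-beta score from hypothetical counts tp, fp, fn.
def f_beta(tp: int, fp: int, fn: int, beta: float) -> float:
    b2 = beta ** 2
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

print(f_beta(80, 10, 20, beta=1.0))    # ≈ 0.842, reproduces the F1 value above
```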
Sometimes the focus is more on recall and the F2 score is used, for cases where false negatives are more costly than false positives:
\( F2 = \frac{5}{4 \cdot Recall^{-1} + Precision^{-1}} = \frac{5 \cdot Precision \cdot Recall}{4 \cdot Precision + Recall} = \frac{5 \cdot TP}{5 \cdot TP + 4 \cdot FN + FP} \)
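With the f_beta sketch above and the same assumed counts (TP = 80, FP = 10, FN = 20), recall (0.80) is the weaker of the two values, so the recall-weighted F2 comes out lower than F1:

```python
print(f_beta(80, 10, 20, beta=2.0))    # F2 = 400/490 ≈ 0.816, vs F1 ≈ 0.842
```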
At other times the focus is more on precision and the F0.5 score is used, for cases where false positives are more costly than false negatives:
\( F0.5 = \frac{1.25}{0.25 \cdot Recall^{-1} + Precision^{-1}} = \frac{1.25 \cdot Precision \cdot Recall}{0.25 \cdot Precision + Recall} = \frac{1.25 \cdot TP}{1.25 \cdot TP + 0.25 \cdot FN + FP} \)
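And with the same assumed counts, precision (≈ 0.89) is the stronger value, so the precision-weighted F0.5 comes out highest of the three:

```python
print(f_beta(80, 10, 20, beta=0.5))    # F0.5 = 100/115 ≈ 0.870
```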