How to interpret metrics from a training run? #78

Answered by tbepler
Guillawme asked this question in Q&A

An important thing to note about precision and AUPR/average-precision scores: they only reach 1 for a perfect classifier if all of the ground truth particles are labeled! Therefore, we should not expect these scores to reach 1 when the labels are incomplete. Here's a rough example:

Let A be the set of ground truth positives, let B be the set of predicted positives, and let TP be the number of true positives, that is, the number of ground truth positives that are also predicted positives (the size of A ∩ B). The precision is TP/|B|. If all of the predicted positives are ground truth positives, then precision = 1.
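Restating the same definitions in set notation (|·| denotes set size):

```math
\mathrm{TP} = |A \cap B|, \qquad \text{precision} = \frac{\mathrm{TP}}{|B|} = \frac{|A \cap B|}{|B|}
```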

Now, imagine that A is incompletely labeled. What if, instead of having all of the ground truth positives A, we only have a labeled subset of them? Any correctly predicted particle that falls outside the labeled subset is counted as a false positive, so even a perfect classifier's precision (and hence AUPR) is capped below 1, roughly at the fraction of particles that are actually labeled.
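As a rough illustration (a standalone sketch, not Topaz code; the names and numbers below are made up for the example), here is what happens to measured precision when a perfect picker is scored against an incompletely labeled ground truth:

```python
import numpy as np

# Hypothetical example: a "perfect" picker evaluated against labels that
# cover only part of the true particles.
rng = np.random.default_rng(0)

n_regions = 10_000        # candidate locations in a micrograph (made up)
n_true = 1_000            # actual particles: the ground truth positives, A
labeled_fraction = 0.6    # fraction of A that was actually annotated

is_particle = np.zeros(n_regions, dtype=bool)
is_particle[:n_true] = True

# Only a subset of the true particles is labeled.
labeled = is_particle & (rng.random(n_regions) < labeled_fraction)

# A perfect picker predicts exactly the true particles (B = A).
predicted = is_particle.copy()

tp_vs_labels = np.sum(predicted & labeled)    # TP counted against the labels
precision = tp_vs_labels / np.sum(predicted)  # TP / |B|

print(f"precision against incomplete labels: {precision:.2f}")
# ~0.6: unlabeled true particles are counted as false positives,
# so precision saturates near the labeled fraction rather than at 1.
```

With 60% of the particles labeled, the measured precision of a perfect picker comes out around 0.6; the same ceiling applies to AUPR/average precision.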
