nucleus.evaluation_match

ConfusionCategory

Generic enumeration.

EvaluationMatch

EvaluationMatch is a result from a model run evaluation. It can represent a true positive, false positive, or false negative.

class nucleus.evaluation_match.ConfusionCategory

Generic enumeration.

Derive from this class to define new enumerations.

name()

The name of the Enum member.

value()

The value of the Enum member.
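The name/value semantics above are the standard Python Enum behavior. As a sketch, a confusion-category enum might look like the following; the member names and values shown here are illustrative assumptions, not the library's actual definitions:

```python
from enum import Enum

# Hypothetical mirror of ConfusionCategory; the real member names and
# values are defined by the nucleus library.
class ConfusionCategory(Enum):
    TRUE_POSITIVE = "true_positive"
    FALSE_POSITIVE = "false_positive"
    FALSE_NEGATIVE = "false_negative"

# `name` is the member's identifier; `value` is its assigned value.
category = ConfusionCategory.TRUE_POSITIVE
print(category.name)   # TRUE_POSITIVE
print(category.value)  # true_positive
```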

class nucleus.evaluation_match.EvaluationMatch

EvaluationMatch is a result from a model run evaluation. It can represent a true positive, false positive, or false negative.

The matching only matches the strongest prediction for each annotation, so if there are multiple predictions that overlap a single annotation only the one with the highest overlap metric will be matched.
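The "strongest prediction" rule above can be sketched as a greedy pass that keeps, for each annotation, only the overlapping prediction with the highest IoU. This is an illustrative implementation over plain bounding boxes, not the library's internal matcher; the function names are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def strongest_matches(annotations, predictions):
    """Map each annotation index to (prediction index, IoU), keeping only
    the best-overlapping prediction per annotation."""
    matches = {}
    for ai, ann in enumerate(annotations):
        best = max(
            ((pi, iou(ann, pred)) for pi, pred in enumerate(predictions)),
            key=lambda pair: pair[1],
            default=None,
        )
        if best is not None and best[1] > 0:
            matches[ai] = best
    return matches
```

Note that this keeps a match regardless of how small the IoU is, which mirrors the caveat below about the absence of IoU thresholding.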

The model prediction label and the ground truth annotation label can differ for true positives if an allowed_label_mapping is configured for the model run.

NOTE: No IoU thresholding is applied to these matches, so a true positive can have a low IoU score. If you manually reject matches, remember that a rejected match produces both a false positive and a false negative; otherwise you'll skew your aggregates.
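As a sketch of the aggregation caveat: rejecting a true positive means its prediction now matches nothing (a false positive) and its annotation goes unmatched (a false negative), so both counters must move. The dict-based match records and string categories here are illustrative stand-ins, not the library's API.

```python
from collections import Counter

def tally(matches, rejected_ids=()):
    """Count confusion categories, re-bucketing rejected true positives."""
    counts = Counter()
    for match in matches:
        if match["id"] in rejected_ids and match["confusion_category"] == "true_positive":
            # A rejected true positive contributes BOTH a false positive
            # (the prediction matched nothing valid) and a false negative
            # (the annotation went unmatched).
            counts["false_positive"] += 1
            counts["false_negative"] += 1
        else:
            counts[match["confusion_category"]] += 1
    return counts

matches = [
    {"id": "m1", "confusion_category": "true_positive"},
    {"id": "m2", "confusion_category": "true_positive"},
]
print(tally(matches, rejected_ids={"m2"}))
# Counter({'true_positive': 1, 'false_positive': 1, 'false_negative': 1})
```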

model_run_id

The ID of the model run that produced this match.

Type:

str

model_prediction_id

The ID of the model prediction that was matched. None if the match was a false negative.

Type:

Optional[str]

ground_truth_annotation_id

The ID of the ground truth annotation that was matched. None if the match was a false positive.

Type:

Optional[str]

iou

The intersection over union score of the match.

Type:

float

dataset_item_id

The ID of the dataset item that was matched.

Type:

str

confusion_category

The confusion category of the match.

Type:

ConfusionCategory

model_prediction_label

The label of the model prediction that was matched. None if the match was a false negative.

Type:

Optional[str]

ground_truth_annotation_label

The label of the ground truth annotation that was matched. None if the match was a false positive.

Type:

Optional[str]
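A hedged example of consuming match objects with the fields documented above: applying your own IoU threshold (since none is applied by the library) and deriving precision and recall. The SimpleNamespace stand-ins are illustrative; real matches come from a model run evaluation.

```python
from types import SimpleNamespace

# Stand-ins for EvaluationMatch objects, using the documented attributes.
matches = [
    SimpleNamespace(confusion_category="true_positive", iou=0.92),
    SimpleNamespace(confusion_category="true_positive", iou=0.18),  # weak match
    SimpleNamespace(confusion_category="false_positive", iou=0.0),
    SimpleNamespace(confusion_category="false_negative", iou=0.0),
]

def precision_recall(matches, iou_threshold=0.5):
    """Re-bucket true positives below the IoU threshold as both a false
    positive and a false negative, then compute precision and recall."""
    tp = fp = fn = 0
    for m in matches:
        if m.confusion_category == "true_positive":
            if m.iou >= iou_threshold:
                tp += 1
            else:
                fp += 1
                fn += 1
        elif m.confusion_category == "false_positive":
            fp += 1
        elif m.confusion_category == "false_negative":
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With the sample data above, the weak 0.18-IoU true positive is demoted, leaving one true positive, two false positives, and two false negatives.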