daze.measures¶

This module contains classes that allow for various confusion matrix evaluation measures to be computed.

All classes must be initialized with a confusion_matrix in the form of a square numpy.ndarray.
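Since each measure only needs the finished square matrix, any method of producing one works. As a plain-Python sketch (this helper is not part of daze), a confusion matrix can be built from true and predicted labels like so, taking rows as the true class and columns as the predicted class (an assumed convention):

```python
# Build a square confusion matrix from true/predicted labels.
# This helper is NOT part of daze; daze only requires the finished
# square matrix (as a numpy.ndarray) to initialize its measure classes.

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class (assumed convention)."""
    matrix = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        matrix[t][p] += 1
    return matrix

cm = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], n_classes=3)
# cm == [[1, 1, 0],
#        [0, 2, 0],
#        [1, 0, 1]]
```

Wrapping the result in `numpy.asarray(cm)` gives the square ndarray the measure classes expect.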

Types of measures¶

To ensure that multiple evaluation measures can be displayed alongside the confusion matrix without obstruction, they are divided into three types of measures — column, row and summary:

| Measure & reference | Label | Specifier | Column? | Row? | Summary? |
|---------------------|-------|-----------|---------|------|----------|
| Accuracy (Accuracy) | Acc | 'a' | | | ✓ |
| Count (Count) | # | 'c' | ✓ | ✓ | |
| True Positives (TP) | TP | 'tp' | ✓ | | |
| False Positives (FP) | FP | 'fp' | ✓ | | |
| True Negatives (TN) | TN | 'tn' | | ✓ | |
| False Negatives (FN) | FN | 'fn' | | ✓ | |
| True Positive Rate (TPR) | TPR | 'tpr' | | ✓ | ✓ |
| False Negative Rate (FNR) | FNR | 'fnr' | | ✓ | ✓ |
| True Negative Rate (TNR) | TNR | 'tnr' | ✓ | | ✓ |
| False Positive Rate (FPR) | FPR | 'fpr' | ✓ | | ✓ |
| Precision (Precision) | P | 'p' | ✓ | | ✓ |
| Recall (Recall) | R | 'r' | | ✓ | ✓ |
| $$F_1$$ Score (F1) | F1 | 'f1' | ✓ | ✓ | ✓ |

Note that the allocation of measures to the column and row categories is somewhat arbitrary, though not entirely without reason.

All summary measures apart from accuracy are displayed as a macro (M) or micro ($$\mu$$) averaged quantity over the per-class measures, indicated by a subscript M or $$\mu$$.


Accuracy (Accuracy)¶

class daze.measures.Accuracy(confusion_matrix)[source]

A summary measure that computes the categorical accuracy.

__call__()[source]
Returns
accuracy: 0 ≤ float ≤ 1

The categorical accuracy.
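Categorical accuracy is the fraction of correct predictions, which sit on the diagonal of the confusion matrix: accuracy = trace / total. A plain-Python sketch of this standard formula (daze's `Accuracy(cm)()` should agree, assuming it implements the usual definition):

```python
# Categorical accuracy from a confusion matrix: correct predictions
# lie on the diagonal, so accuracy = trace / total count.

def accuracy(cm):
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

cm = [[5, 1], [2, 4]]
print(accuracy(cm))  # 9 correct out of 12 -> 0.75
```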

Count (Count)¶

class daze.measures.Count(confusion_matrix)[source]

A row and column measure that computes the total count along each row or column of the confusion matrix.

__call__(axis)[source]
Parameters
axis: {0, 1}

The axis of the confusion matrix to perform the counts over.

Returns
count: numpy.ndarray (dtype=int) of shape (n_classes,)

The integer counts of each row/column.
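In plain-Python terms the counts are simply axis sums. Which axis daze maps to rows versus columns is an assumption here; this sketch follows numpy's `cm.sum(axis=...)` convention, where `axis=0` collapses rows (giving column totals) and `axis=1` collapses columns (giving row totals):

```python
# Row/column counts of a confusion matrix, following numpy's axis
# convention: axis=0 -> column totals, axis=1 -> row totals.

def count(cm, axis):
    n = len(cm)
    if axis == 0:
        return [sum(cm[i][j] for i in range(n)) for j in range(n)]
    return [sum(row) for row in cm]

cm = [[5, 1], [2, 4]]
print(count(cm, axis=1))  # row totals:    [6, 6]
print(count(cm, axis=0))  # column totals: [7, 5]
```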

True Positives (TP)¶

class daze.measures.TP(confusion_matrix)[source]

A column measure that computes the true positives of each class.

__call__()[source]
Returns
tp: numpy.ndarray (dtype=int) of shape (n_classes,)

The true positives for each class.

False Positives (FP)¶

class daze.measures.FP(confusion_matrix)[source]

A column measure that computes the false positives of each class.

__call__()[source]
Returns
fp: numpy.ndarray (dtype=int) of shape (n_classes,)

The false positives for each class.

False Negatives (FN)¶

class daze.measures.FN(confusion_matrix)[source]

A row measure that computes the false negatives of each class.

__call__()[source]
Returns
fn: numpy.ndarray (dtype=int) of shape (n_classes,)

The false negatives for each class.

True Negatives (TN)¶

class daze.measures.TN(confusion_matrix)[source]

A row measure that computes the true negatives of each class.

__call__()[source]
Returns
tn: numpy.ndarray (dtype=int) of shape (n_classes,)

The true negatives for each class.
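The four count measures above follow from the standard one-vs-rest decomposition of the matrix. Assuming rows = true class and columns = predicted class, a plain-Python sketch (daze's TP/FP/FN/TN classes should yield the same counts under that convention):

```python
# Per-class TP/FP/FN/TN, assuming rows = true class and columns =
# predicted class. TP is the diagonal; FP is the rest of the column;
# FN is the rest of the row; TN is everything else.

def per_class_counts(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    tp = [cm[i][i] for i in range(n)]
    fp = [sum(cm[r][i] for r in range(n)) - cm[i][i] for i in range(n)]
    fn = [sum(cm[i]) - cm[i][i] for i in range(n)]
    tn = [total - tp[i] - fp[i] - fn[i] for i in range(n)]
    return tp, fp, fn, tn

cm = [[5, 1], [2, 4]]
tp, fp, fn, tn = per_class_counts(cm)
# tp == [5, 4], fp == [2, 1], fn == [1, 2], tn == [4, 5]
```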

True Positive Rate (TPR)¶

class daze.measures.TPR(confusion_matrix)[source]

A row measure that computes the true positive rate of each class. Also an average/summary measure that can compute the micro and macro averaged true positive rate over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class true positive rates are returned.

Returns
tpr: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The true positive rates for each class (if measure_type=None), otherwise the micro/macro-averaged true positive rate.
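The standard definitions are TPR_i = TP_i / (TP_i + FN_i), with macro averaging taking the mean of the per-class rates and micro averaging pooling the counts before dividing. A plain-Python sketch of those definitions (not daze's implementation), again assuming rows = true class:

```python
# Per-class true positive rate with optional micro/macro averaging.
# TPR_i = TP_i / (TP_i + FN_i); macro averages the per-class rates,
# micro pools TP and FN counts across classes first.

def tpr(cm, measure_type=None):
    n = len(cm)
    tp = [cm[i][i] for i in range(n)]
    fn = [sum(cm[i]) - cm[i][i] for i in range(n)]
    rates = [tp[i] / (tp[i] + fn[i]) for i in range(n)]
    if measure_type == 'macro':
        return sum(rates) / n
    if measure_type == 'micro':
        return sum(tp) / (sum(tp) + sum(fn))
    return rates

cm = [[3, 1], [1, 3]]
print(tpr(cm))           # [0.75, 0.75]
print(tpr(cm, 'macro'))  # 0.75
print(tpr(cm, 'micro'))  # 0.75
```

With a balanced matrix like this one, micro and macro averages coincide; they diverge when class supports differ.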

False Negative Rate (FNR)¶

class daze.measures.FNR(confusion_matrix)[source]

A row measure that computes the false negative rate of each class. Also an average/summary measure that can compute the micro and macro averaged false negative rate over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class false negative rates are returned.

Returns
fnr: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The false negative rates for each class (if measure_type=None), otherwise the micro/macro-averaged false negative rate.

True Negative Rate (TNR)¶

class daze.measures.TNR(confusion_matrix)[source]

A column measure that computes the true negative rate of each class. Also an average/summary measure that can compute the micro and macro averaged true negative rate over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class true negative rates are returned.

Returns
tnr: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The true negative rates for each class (if measure_type=None), otherwise the micro/macro-averaged true negative rate.

False Positive Rate (FPR)¶

class daze.measures.FPR(confusion_matrix)[source]

A column measure that computes the false positive rate of each class. Also an average/summary measure that can compute the micro and macro averaged false positive rate over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class false positive rates are returned.

Returns
fpr: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The false positive rates for each class (if measure_type=None), otherwise the micro/macro-averaged false positive rate.

Precision (Precision)¶

class daze.measures.Precision(confusion_matrix)[source]

A column measure that computes the precision of each class. Also an average/summary measure that can compute the micro and macro averaged precision over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class precisions are returned.

Returns
precision: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The precision for each class (if measure_type=None), otherwise the micro/macro-averaged precision.
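As a column measure, precision for class i is the diagonal entry over its column total: P_i = TP_i / (TP_i + FP_i). A plain-Python sketch of that standard formula (daze's Precision class should agree under the rows = true class convention assumed here):

```python
# Per-class precision with optional micro/macro averaging, assuming
# columns = predicted class: P_i = diagonal entry / column total.

def precision(cm, measure_type=None):
    n = len(cm)
    tp = [cm[i][i] for i in range(n)]
    col = [sum(cm[r][i] for r in range(n)) for i in range(n)]
    per_class = [tp[i] / col[i] for i in range(n)]
    if measure_type == 'macro':
        return sum(per_class) / n
    if measure_type == 'micro':
        return sum(tp) / sum(col)
    return per_class

cm = [[5, 1], [2, 4]]
print(precision(cm))  # [5/7, 4/5]
```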

Recall (Recall)¶

class daze.measures.Recall(confusion_matrix)[source]

A row measure that computes the recall of each class. Also an average/summary measure that can compute the micro and macro averaged recall over all classes.

Equivalent to TPR.

__call__(measure_type=None)
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class true positive rates are returned.

Returns
tpr: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The true positive rates for each class (if measure_type=None), otherwise the micro/macro-averaged true positive rate.

$$F_1$$ Score (F1)¶

class daze.measures.F1(confusion_matrix)[source]

A row and column measure that computes the F1 score of each class. Also an average/summary measure that can compute the micro and macro averaged F1 score over all classes.

__call__(measure_type=None)[source]
Parameters
measure_type: {‘micro’, ‘macro’}, default=None

The averaging method. If None, then no averaging is done and per-class F1 scores are returned.

Returns
f1: numpy.ndarray (dtype=float) of shape (n_classes,) or 0 ≤ float ≤ 1

The F1 score for each class (if measure_type=None), otherwise the micro/macro-averaged F1 score.
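The F1 score is the harmonic mean of precision and recall, $$F_1 = 2PR / (P + R)$$. A plain-Python sketch of the per-class computation under the same rows = true class assumption as above (a sketch of the standard formula, not daze's implementation):

```python
# Per-class F1 score: harmonic mean of precision (column-based) and
# recall (row-based), assuming rows = true class, columns = predicted.

def f1_scores(cm):
    n = len(cm)
    scores = []
    for i in range(n):
        tp = cm[i][i]
        p = tp / sum(cm[r][i] for r in range(n))  # precision: column total
        r = tp / sum(cm[i])                       # recall: row total
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return scores

cm = [[5, 1], [2, 4]]
print(f1_scores(cm))  # per-class F1, roughly [0.769, 0.727]
```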