ROC AUC

Tags: Metrics

ROC curve

Lowering the classification threshold causes more items to be classified as positive, which increases both the true positive rate (TPR) and the false positive rate (FPR).
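This threshold sweep can be made concrete with scikit-learn's roc_curve, which returns one (FPR, TPR) point per threshold. The labels and probabilities below are made-up example data:

```python
from sklearn.metrics import roc_curve

# Hypothetical ground truth and predicted positive-class probabilities
y_true = [0, 1, 1, 0, 1, 0, 1, 0]
y_prob = [0.2, 0.9, 0.6, 0.4, 0.8, 0.1, 0.3, 0.7]

# roc_curve sweeps the threshold from high to low; as it drops,
# both TPR and FPR can only stay the same or increase.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  TPR={t:.2f}  FPR={f:.2f}")
```

Plotting tpr against fpr gives the ROC curve itself.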

AUC

The ROC (Receiver Operating Characteristic) curve and AUC (Area Under the Curve) are evaluation metrics commonly used for binary classification models. The ROC curve is a plot that illustrates the diagnostic ability of a binary classifier as its discrimination threshold is varied. AUC summarizes this curve as a single number measuring how well the model separates the two classes.

ROC Curve: plots the true positive rate (TPR, recall) on the y-axis against the false positive rate (FPR) on the x-axis, with one point for each possible threshold.

AUC (Area Under the ROC Curve): the area under this curve, a single number between 0 and 1 that is independent of any particular threshold.

Interpretation: an AUC of 1.0 means the classifier ranks every positive above every negative; 0.5 is no better than random guessing; below 0.5 the model ranks classes in the wrong order (its scores are systematically inverted).

Python Implementation (using scikit-learn):

from sklearn.metrics import roc_auc_score

# Example ground truth and predicted probabilities
y_true = [0, 1, 1, 0, 1]
y_prob = [0.1, 0.9, 0.8, 0.2, 0.7]  # Predicted probabilities of positive class

# Calculate ROC AUC score
roc_auc = roc_auc_score(y_true, y_prob)

print("ROC AUC Score:", roc_auc)

In this example, y_true contains the true labels of the samples (0 for the negative class, 1 for the positive class), and y_prob contains the predicted probabilities of the positive class. The roc_auc_score function from scikit-learn's metrics module computes the ROC AUC score directly from these two arrays.
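The score has a useful ranking interpretation: AUC equals the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. A minimal hand-rolled check of this, using the same toy data as above (no library needed):

```python
# Same toy data as the scikit-learn example above
y_true = [0, 1, 1, 0, 1]
y_prob = [0.1, 0.9, 0.8, 0.2, 0.7]

pos = [p for p, y in zip(y_prob, y_true) if y == 1]
neg = [p for p, y in zip(y_prob, y_true) if y == 0]

# Count positive/negative pairs ranked correctly; ties count as half.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))

print("Pairwise AUC:", auc)  # every positive outranks every negative -> 1.0
```

This matches the roc_auc_score result for the same data, since here all three positives score above both negatives.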