F1 score


The F1 score is a commonly used metric for evaluating the performance of classification models. It combines precision and recall into a single score that balances the trade-off between them. The F1 score is the harmonic mean of precision and recall, calculated with the following formula:

F_1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}

where:

- precision = TP / (TP + FP), the fraction of predicted positives that are truly positive
- recall = TP / (TP + FN), the fraction of actual positives that are correctly identified
- TP, FP, and FN are the counts of true positives, false positives, and false negatives (see the sketch below)
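
To make the formula concrete, here is a minimal sketch that computes precision, recall, and F1 directly from confusion-matrix counts (the values of tp, fp, and fn are made up purely for illustration):

# Hypothetical confusion-matrix counts, chosen for illustration
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)  # 8 / 10 = 0.8
recall = tp / (tp + fn)     # 8 / 12 ≈ 0.667
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.727

print("Precision:", precision)
print("Recall:", recall)
print("F1:", f1)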

Interpretation:

- The F1 score ranges from 0 to 1, where 1 indicates perfect precision and recall.
- As a harmonic mean, it is dominated by the smaller of the two values, so a model cannot score well by excelling at only one of precision or recall (illustrated below).
- It is especially informative on imbalanced datasets, where plain accuracy can be misleading.
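
To see how the harmonic mean penalizes imbalance, consider a hypothetical model with a precision of 1.0 but a recall of only 0.1:

F_1 = 2 \times \frac{1.0 \times 0.1}{1.0 + 0.1} \approx 0.18

The arithmetic mean of the two values would be 0.55, but the F1 score stays close to the weaker of the two.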

Python Implementation (using scikit-learn):

from sklearn.metrics import f1_score

# Example ground truth and predicted labels
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Calculate F1 score
f1 = f1_score(y_true, y_pred)

print("F1 Score:", f1)

In this example, y_true contains the true labels of the samples and y_pred contains the predicted labels. The f1_score function from scikit-learn's metrics module computes the score: here precision is 1.0 (both predicted positives are correct) and recall is 2/3 (two of the three actual positives are found), giving an F1 score of 0.8.
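
For multi-class problems, f1_score needs an averaging strategy passed through its average parameter; 'macro', 'micro', and 'weighted' are the usual options. A brief sketch with made-up labels:

from sklearn.metrics import f1_score

# Made-up multi-class labels for illustration
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

# 'macro' averages the per-class F1 scores, treating every class equally
print("Macro F1:", f1_score(y_true, y_pred, average='macro'))

# 'weighted' weights each class's F1 by its support (its number of true samples)
print("Weighted F1:", f1_score(y_true, y_pred, average='weighted'))

Macro averaging is a common choice when all classes matter equally; weighted averaging tracks overall performance more closely when class sizes differ.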