F-beta Score

Tags: Metrics

The F-beta score is a metric used to evaluate the performance of a binary classification model, taking into account both precision and recall. It's a generalization of the F1 score, allowing you to give more or less emphasis to precision or recall depending on the value of the beta parameter.

Formula:

The F-beta score is calculated using the following formula:

$$F_\beta = (1 + \beta^2) \times \frac{\text{precision} \times \text{recall}}{\beta^2 \times \text{precision} + \text{recall}}$$

where:

- precision = TP / (TP + FP), the fraction of predicted positives that are actually positive,
- recall = TP / (TP + FN), the fraction of actual positives that are correctly identified,
- β (beta) controls how much weight recall receives relative to precision.

Interpretation:

- β = 1 gives the F1 score, weighting precision and recall equally.
- β > 1 weights recall more heavily (e.g., F2).
- β < 1 weights precision more heavily (e.g., F0.5).
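As a quick illustration, the formula can be computed directly from confusion-matrix counts. This is a minimal sketch; the counts below are made up for demonstration:

```python
def fbeta(precision, recall, beta):
    """F-beta score from precision and recall, per the formula above."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical confusion-matrix counts
tp, fp, fn = 8, 2, 4
precision = tp / (tp + fp)  # 0.8
recall = tp / (tp + fn)     # 2/3

print(fbeta(precision, recall, beta=1))    # F1
print(fbeta(precision, recall, beta=2))    # weights recall more
print(fbeta(precision, recall, beta=0.5))  # weights precision more
```

Since precision (0.8) exceeds recall (2/3) here, F0.5 comes out highest and F2 lowest.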

Python Implementation (using scikit-learn):

from sklearn.metrics import fbeta_score, precision_score, recall_score

# Example predictions and ground truth labels
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1]

# Calculate F1 score (beta=1)
f1 = fbeta_score(y_true, y_pred, beta=1)
print("F1 Score:", f1)

# Calculate F2 score (favoring recall)
f2 = fbeta_score(y_true, y_pred, beta=2)
print("F2 Score (favoring recall):", f2)

# Calculate F0.5 score (favoring precision)
f05 = fbeta_score(y_true, y_pred, beta=0.5)
print("F0.5 Score (favoring precision):", f05)

# Calculate precision and recall separately
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print("Precision:", precision)
print("Recall:", recall)

Output:

F1 Score: 0.6666666666666666
F2 Score (favoring recall): 0.6666666666666666
F0.5 Score (favoring precision): 0.6666666666666666
Precision: 0.6666666666666666
Recall: 0.6666666666666666

All four scores coincide here because precision equals recall (both 2/3): when precision = recall, the F-beta formula reduces to that common value for every β.
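Because precision and recall happen to be equal in the example above, β has no visible effect there. With labels where they differ (a made-up example), the three scores diverge:

```python
from sklearn.metrics import fbeta_score, precision_score, recall_score

# Hypothetical labels where precision (2/3) and recall (1/2) differ
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]

print("Precision:", precision_score(y_true, y_pred))   # 2/3
print("Recall:", recall_score(y_true, y_pred))         # 1/2
print("F1:", fbeta_score(y_true, y_pred, beta=1))      # ~0.571
print("F2:", fbeta_score(y_true, y_pred, beta=2))      # ~0.526, pulled toward recall
print("F0.5:", fbeta_score(y_true, y_pred, beta=0.5))  # 0.625, pulled toward precision
```

Since precision exceeds recall in this data, F0.5 is the highest of the three and F2 the lowest.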

Use Cases:

- β > 1 (e.g., F2) when missing positives is costly, such as disease screening or fraud detection, where recall matters most.
- β < 1 (e.g., F0.5) when false positives are costly, such as spam filtering, where precision matters most.
- β = 1 (F1) when precision and recall are equally important.

The choice of \(\beta\) depends on the specific context and the relative importance of precision and recall in the problem domain.