Huber loss
| Created | |
| --- | --- |
| Tags | Loss |
```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    error = y_true - y_pred
    # Errors within delta get the quadratic (MSE-like) branch
    is_small_error = np.abs(error) <= delta
    squared_loss = 0.5 * error ** 2
    # Errors beyond delta get a linear (MAE-like) penalty
    absolute_loss = delta * (np.abs(error) - 0.5 * delta)
    return np.mean(np.where(is_small_error, squared_loss, absolute_loss))
```
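A quick usage sketch with made-up numbers (not from the note) showing that the outlier is penalized linearly rather than quadratically:

```python
y_true = np.array([1.0, 2.0, 3.0, 100.0])  # last target is an outlier
y_pred = np.array([1.1, 1.9, 3.2, 3.0])

print(huber_loss(y_true, y_pred))       # ~24.1: outlier enters linearly
print(np.mean((y_true - y_pred) ** 2))  # ~2352.3: MSE is dominated by it
```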
Huber loss fixes the outlier-sensitivity problem of MSE, and unlike MAE it is differentiable at 0 (MAE's gradient is discontinuous there). The idea is simple: if the error is small enough, Huber loss uses the squared (MSE-like) term; otherwise it switches to a linear (MAE-like) penalty.
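Written out as a piecewise definition (matching the code above), with error $e = y - \hat{y}$:

$$
L_\delta(e) =
\begin{cases}
\tfrac{1}{2} e^2 & \text{if } |e| \le \delta, \\
\delta \left( |e| - \tfrac{1}{2}\delta \right) & \text{otherwise.}
\end{cases}
$$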
The drawback of Huber loss is that the hyperparameter `delta` has to be tuned, e.g. by cross-validation.
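A hedged sketch (reusing the synthetic `y_true`/`y_pred` from above) of how `delta` interpolates between MAE-like and MSE-like behavior:

```python
# Larger delta -> more errors fall in the quadratic branch -> closer to MSE
for delta in [0.5, 1.0, 5.0, 50.0]:
    print(f"delta={delta:5.1f}  loss={huber_loss(y_true, y_pred, delta=delta):.2f}")
```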
