The common error metrics, such as RMSE and MSE, already have optimized implementations in sklearn.

I want to speed up other error metrics, such as relative absolute error (RAE), with numba. However, my implementation is still slow.

```python
import numpy as np

def relative_absolute_error(true, pred):
    true_mean = np.mean(true)
    abs_error_num = np.sum(np.abs(true - pred))        # total absolute error of the model
    abs_error_den = np.sum(np.abs(true - true_mean))   # total absolute error of the mean baseline
    rae_loss = abs_error_num / abs_error_den
    return rae_loss
```

(The sums are of absolute errors, not squared errors, so the variables are named `abs_error_*` rather than `squared_error_*`.)
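As a quick sanity check on the formula (a hypothetical example, not from benchmark data): RAE compares the model's total absolute error to the total absolute error of a baseline that always predicts the mean of `true`.

```python
import numpy as np

# Hypothetical toy data: the model is exact except for the last point.
true = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.0, 2.0, 3.0, 5.0])

num = np.sum(np.abs(true - pred))            # model error: 1.0
den = np.sum(np.abs(true - np.mean(true)))   # mean-baseline error: 4.0
print(num / den)  # 0.25: the model makes a quarter of the baseline's error
```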

My numba attempt:

```python
import numba as nb
import numpy as np

@nb.njit(cache=True)  # njit already implies nopython mode, so nopython=True is redundant
def relative_absolute_error(true, pred):
    true_mean = np.mean(true)
    abs_error_num = np.sum(np.abs(true - pred))
    abs_error_den = np.sum(np.abs(true - true_mean))
    rae_loss = abs_error_num / abs_error_den
    return rae_loss
```

(Note that numba ignores Python type annotations such as `nb.float64[:]`; an explicit signature would instead go in the decorator, e.g. `@nb.njit("float64(float64[:], float64[:])")`.)
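One reason the jitted version may not help: the body is already fully vectorized, so NumPy executes it in C, and jitting mostly adds compile and dispatch overhead while still allocating temporaries for `true - pred` and `np.abs(...)`. Numba tends to pay off when the work is rewritten as explicit loops that fuse the passes and avoid temporaries. A sketch of that rewrite (the try/except fallback is only so the snippet runs even where numba is not installed):

```python
import numpy as np

try:
    import numba as nb
    njit = nb.njit(cache=True)
except ImportError:  # fallback: run as plain Python if numba is unavailable
    njit = lambda f: f

@njit
def rae_loop(true, pred):
    n = true.shape[0]
    true_sum = 0.0
    for i in range(n):  # first pass: mean of the true values
        true_sum += true[i]
    true_mean = true_sum / n
    num = 0.0
    den = 0.0
    for i in range(n):  # second pass: fused numerator and denominator, no temporaries
        num += abs(true[i] - pred[i])
        den += abs(true[i] - true_mean)
    return num / den
```

Compiled, this does no intermediate array allocation, so any speedup over the NumPy version comes mainly from the skipped temporaries; for large arrays where NumPy is already memory-bound, the gain may be modest.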

For experienced numba users: given that the pure Python RAE code is already fully vectorized in NumPy, do you expect numba to speed up the calculation?

Another option is to write it in C, in which case I think I would have to use Cython to pass data back and forth with the Python code.