I have two arrays, one for predictions and another for true values.
Predictions:
array([[ 0.01365575,  0.01523909],
       [-0.00044139,  0.00269908],
       [ 0.03240077,  0.02496629],
       [ 0.03238605,  0.03045709],
       [ 0.03226778,  0.02878774],
       [ 0.03238199,  0.03221421]])
Real values:
array([[0.01212121, 0.01529052],
       [0.        , 0.0030581 ],
       [0.01818182, 0.01559633],
       [0.00757576, 0.007263  ],
       [0.00757576, 0.00382263],
       [0.00757576, 0.01070336]])
I am trying to calculate the standard deviation of the MAE and of the RMSE with the formula:

std_error = (1/n * sum((error_i - mean_error)^2))^(1/2)
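For the MAE case, what I have in mind is something like this (just a sketch in plain numpy, assuming preds and trues are numpy arrays shaped like the ones above):

    import numpy as np

    # per-sample absolute errors for the first target column
    abs_errors = np.abs(preds[:, 0] - trues[:, 0])

    # std of those errors, following the formula above:
    # sqrt(1/n * sum((error_i - mean_error)^2))
    mean_error = abs_errors.mean()
    std_error = np.sqrt(np.mean((abs_errors - mean_error) ** 2))  # same as abs_errors.std()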
So far I have been trying to build an array with the MAE and RMSE values incrementally, but with no success. This is what I am doing:
for x in range(len(preds)):
    mae_std = (preds[:,0] - trues[['t1']])/x

for x in range(len(preds)):
    rmse_std = (((trues[['t2']] - preds[:,1])**2)/x)**1/2
This approach just takes forever and never finishes, and I am not sure why.
I expected the result to be an array with the incremental error values, which I could then plug into the std_error formula.
What am I doing wrong? Is there a way to achieve what I describe faster?
np.mean((preds[:,1] - trues[:,1])**2) (just an example), or you can use the scikit-learn metrics submodule as well: scikit-learn.org/stable/modules/generated/…
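Following that suggestion, a minimal sketch with scikit-learn (assuming preds and trues are plain numpy arrays of the same shape; if trues is actually a DataFrame, trues.to_numpy() gives the equivalent array) might look like:

    import numpy as np
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    # overall metrics, averaged over both target columns
    mae = mean_absolute_error(trues, preds)
    rmse = np.sqrt(mean_squared_error(trues, preds))

    # per-sample absolute errors for each column, computed in one
    # vectorized step instead of a Python loop over the rows
    abs_errors = np.abs(preds - trues)      # shape (n_samples, 2)

    # std of those errors per column, matching the std_error formula
    mae_std = abs_errors.std(axis=0)

Because everything here is vectorized, there is no Python loop over the rows, so it runs in negligible time on arrays of this size.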