# Metrics
Pure Python forecast accuracy metrics. These do not require Julia and can be used independently.
## accuracy()
Compute all metrics at once:
```python
from durbyn import accuracy

y_actual = [100, 110, 120, 115, 130]
y_predicted = [102, 108, 125, 112, 128]
y_train = [80, 85, 90, 95, 100, 105]

result = accuracy(y_actual, y_predicted, training_data=y_train)
print(result)
# {'ME': 0.0, 'RMSE': 3.03, 'MAE': 2.8, 'MPE': -0.04, 'MAPE': 2.43, 'ACF1': -0.5, 'MASE': 0.56}
# (values rounded to two decimal places)
```
## Individual Metrics

### ME — Mean Error

### RMSE — Root Mean Squared Error

### MAE — Mean Absolute Error
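These three metrics are direct translations of the formulas in the definitions table below. A pure-Python sketch of the definitions (illustrative only; these helper names are not durbyn's API):

```python
import math

def me(actual, predicted):
    """Mean Error: mean(a - p); sign shows direction of bias."""
    errors = [a - p for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

def rmse(actual, predicted):
    """Root Mean Squared Error: sqrt(mean((a - p)^2))."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean Absolute Error: mean(|a - p|)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [100, 110, 120, 115, 130]
predicted = [102, 108, 125, 112, 128]
print(me(actual, predicted))              # 0.0  (errors cancel out)
print(mae(actual, predicted))             # 2.8
print(round(rmse(actual, predicted), 3))  # 3.033
```

Note how ME can be 0 even when every forecast is wrong, which is why it is read as a bias indicator rather than an accuracy measure.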
### MPE — Mean Percentage Error
Only non-zero actuals are used. Returns NaN if all actuals are zero.
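A minimal sketch of the zero-handling described above (an illustrative helper, not durbyn's implementation):

```python
import math

def mpe(actual, predicted):
    """Mean Percentage Error over pairs whose actual value is non-zero."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    if not pairs:  # all actuals zero -> percentage error undefined
        return math.nan
    return sum((a - p) / a for a, p in pairs) / len(pairs) * 100

print(mpe([0, 100], [5, 110]))  # -10.0  (the zero-actual pair is skipped)
print(mpe([0, 0], [1, 2]))      # nan
```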
### MAPE — Mean Absolute Percentage Error
```python
from durbyn.metrics import mape

mape(actual, predicted)  # mean(|actual - predicted| / |actual|) * 100
```
Only non-zero actuals are used.
### MASE — Mean Absolute Scaled Error
Scaled by the in-sample mean absolute error of the naive forecast on training_data. Returns NaN if the training data has constant values (zero scale).
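The scaling rule can be sketched in a few lines (illustrative only; the function name is not durbyn's API):

```python
import math

def mase(actual, predicted, training_data):
    """MAE scaled by the in-sample MAE of the one-step naive forecast."""
    diffs = [abs(b - a) for a, b in zip(training_data, training_data[1:])]
    scale = sum(diffs) / len(diffs)
    if scale == 0:  # constant training series -> scale is undefined
        return math.nan
    errors = [abs(a - p) for a, p in zip(actual, predicted)]
    return (sum(errors) / len(errors)) / scale

# Training steps are all |5|, so scale = 5 and MASE = 2.8 / 5 = 0.56
print(mase([100, 110, 120, 115, 130], [102, 108, 125, 112, 128],
           [80, 85, 90, 95, 100, 105]))
```

A MASE below 1 means the forecast beats the naive method on average; above 1, it does worse.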
### ACF1 — Lag-1 Autocorrelation of Errors
Returns NaN if fewer than 2 observations or zero variance.
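A sketch using the standard lag-1 autocorrelation formula on the residual series (durbyn's internals may differ in detail):

```python
import math

def acf1(errors):
    """Lag-1 autocorrelation of a residual series."""
    n = len(errors)
    if n < 2:
        return math.nan
    mean = sum(errors) / n
    denom = sum((e - mean) ** 2 for e in errors)
    if denom == 0:  # constant residuals -> zero variance
        return math.nan
    num = sum((errors[t] - mean) * (errors[t + 1] - mean) for t in range(n - 1))
    return num / denom

# Residuals from the accuracy() example: alternating signs -> negative ACF1
print(acf1([-2, 2, -5, 3, 2]))  # -0.5
```

An ACF1 far from 0 suggests the errors are serially correlated, i.e. the model leaves structure in the residuals that it could have captured.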
## Metric Definitions
| Metric | Formula | Notes |
|---|---|---|
| ME | mean(a - p) | Signed; positive = under-prediction |
| RMSE | sqrt(mean((a - p)^2)) | Penalises large errors |
| MAE | mean(\|a - p\|) | Robust to outliers |
| MPE | mean((a - p) / a) * 100 | Percentage; only non-zero actuals |
| MAPE | mean(\|a - p\| / \|a\|) * 100 | Percentage; only non-zero actuals |
| MASE | MAE / mean(\|diff(train)\|) | Scale-free; needs training data |
| ACF1 | lag-1 autocorrelation of errors | Residual serial correlation |