What is Forecast Bias and Why Does It Matter?

Forecast Bias measures systematic over- or under-forecasting: it is the average of (forecast minus actual) across all predictions. Positive bias means you consistently over-forecast; negative bias means you consistently under-forecast. A bias near zero is essential for healthy inventory and capacity planning, but zero bias alone can mask terrible accuracy if large positive and negative errors cancel out.

In supply chain operations, bias directly impacts working capital and service levels. A persistent 5% under-forecast on A-mover SKUs (high-volume items) can trigger stockouts worth millions in lost revenue and customer satisfaction. Amazon and similar retailers set bias alarms to fire when the seven-day moving average drops below -5% for two consecutive days on high-volume cohorts. Conversely, persistent over-forecasting ties up warehouse space and capital in slow-moving inventory.

The critical insight is that bias must always be paired with a dispersion metric such as Mean Absolute Error (MAE), RMSE, or Weighted Absolute Percentage Error (WAPE). Consider a portfolio with 100 predictions: 50 over-forecast by 100 units, 50 under-forecast by 100 units. Bias is exactly zero, yet you have 10,000 units of total error, and your service level would be catastrophic despite "unbiased" forecasts. This is why production systems report bias alongside WAPE or RMSE, never in isolation.

Bias monitoring should be segmented by cohort and horizon. High-velocity items might warrant stricter bias bounds (absolute value under 2%) than long-tail items (absolute value under 10%), and one-week forecasts should have tighter bias than 13-week forecasts. When promotions or demand shifts occur, bias often spikes temporarily as models lag reality; teams correlate bias changes with known events and feature drift to distinguish systemic model issues from transient shocks.
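To make the pairing concrete, here is a minimal sketch in Python that reproduces the cancellation example above. The metric functions follow the standard definitions; the array values are illustrative (a flat actual demand of 1,000 units is an assumption, not from the text).

```python
import numpy as np

def forecast_bias(forecast, actual):
    """Mean signed error: positive = over-forecasting, negative = under-forecasting."""
    return np.mean(forecast - actual)

def wape(forecast, actual):
    """Weighted Absolute Percentage Error: total absolute error over total actuals."""
    return np.sum(np.abs(forecast - actual)) / np.sum(np.abs(actual))

def rmse(forecast, actual):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((forecast - actual) ** 2))

# The cancellation example: 50 over-forecasts of +100 units and
# 50 under-forecasts of -100 units against illustrative flat actuals.
actual = np.full(100, 1000.0)
forecast = actual + np.concatenate([np.full(50, 100.0), np.full(50, -100.0)])

print(f"bias: {forecast_bias(forecast, actual):+.1f} units")              # +0.0 -> looks "unbiased"
print(f"total abs error: {np.sum(np.abs(forecast - actual)):,.0f} units") # 10,000
print(f"WAPE: {wape(forecast, actual):.1%}")                              # 10.0%
print(f"RMSE: {rmse(forecast, actual):.1f} units")                        # 100.0
```

Bias reads exactly zero while WAPE and RMSE expose the 10,000 units of dispersion, which is why the two are always reported together.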
💡 Key Takeaways
Bias equals the mean of (forecast minus actual): positive bias indicates over-forecasting, negative indicates under-forecasting; errors are signed, not absolute
A -5% bias on A-mover SKUs can cause stockouts worth millions; alarms fire when the seven-day moving average stays below the threshold for two consecutive days (see the alarm sketch after this list)
Zero bias does not mean good forecasts: errors can cancel out, leaving you with zero bias but catastrophic dispersion and poor service levels
Must always pair bias with dispersion metrics like MAE, RMSE, or WAPE to triangulate both systematic skew and overall accuracy
Segment bias monitoring by cohort and horizon: high-volume items need tighter bounds (absolute value under 2%) than long-tail items (under 10%)
Bias spikes during promotions and demand shifts are normal; teams correlate them with known events and feature drift to distinguish model issues from transient shocks
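A minimal sketch of the alarm logic referenced above, assuming a daily pandas DataFrame with date, cohort, forecast, and actual columns. The -5% threshold and two-consecutive-days rule come from the text; defining rolling bias as the seven-day sum of signed errors over the seven-day sum of actuals, along with all names here, is an assumption for illustration.

```python
import pandas as pd

def bias_alarms(df: pd.DataFrame,
                threshold: float = -0.05,
                persist_days: int = 2) -> pd.DataFrame:
    """Flag cohort-days where rolling relative bias breaches the threshold
    for persist_days consecutive days. Assumed schema: date, cohort, forecast, actual."""
    frames = []
    for cohort, g in df.sort_values("date").groupby("cohort"):
        # Seven-day rolling relative bias: sum of signed errors / sum of actuals.
        err7 = (g["forecast"] - g["actual"]).rolling(7).sum()
        rel_bias = err7 / g["actual"].rolling(7).sum()
        # Alarm fires when rolling bias sits below threshold persist_days in a row.
        breach = (rel_bias < threshold).astype(int)
        alarm = breach.rolling(persist_days).sum() >= persist_days
        frames.append(pd.DataFrame({"date": g["date"].values,
                                    "cohort": cohort,
                                    "rolling_bias": rel_bias.values,
                                    "alarm": alarm.values}))
    return pd.concat(frames, ignore_index=True)
```

Per-cohort grouping is what lets high-volume and long-tail segments carry different thresholds: call the function once per segment with its own bound, rather than applying one global threshold.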
📌 Examples
Amazon retail: Seven-day bias alarm triggers at -5% on A-movers, chosen because a persistent under-forecast causes multi-million-dollar stockout impact within days
Ridesharing ETA: A -20-second bias significantly increases cancellation rates; the system monitors bias by city and time of day with tailored thresholds
Cancellation example: A portfolio with 50 over-forecasts of +100 units and 50 under-forecasts of -100 units has zero bias but 10,000 units of total error and terrible service