The medium-range forecast model of the National Centers for Environmental Prediction (NCEP) is integrated operationally to produce an ensemble of 17 forecasts per day (12 integrations with 0000 UTC initial conditions, and five with 1200 UTC initial conditions). These integrations are carried out to 16 days, with prognostic variables saved at 12-hour intervals. These ensembles potentially contain a great deal of information that could be used to assess the skill of the forecasts a priori.
There are several approaches to estimating the skill of a forecast, in advance of its verification, using ensembles of integrations. The skill of antecedent forecasts by the same ensemble is examined as an indicator of later skill, on the premise that high skill at a given time implies a low error growth rate through subsequent lead times. Similarly, the spread of forecasts within an ensemble has been assumed to be correlated with its skill, based on the notion that high confidence in a forecast is associated with low variance among ensemble members. This approach is also examined. Likewise, the variation between forecasts verifying at the same time but made on successive days (stationarity of the forecast for a given date) may be considered as a predictor of the skill of the forecast for that date.
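The two quantities most central to the spread-skill approach can be sketched as follows. This is a minimal illustration, not the report's actual code: the function names and the flattened grid layout are assumptions for the example.

```python
import numpy as np

def ensemble_mean_rms_error(forecasts, analysis):
    """RMS error of the ensemble-mean forecast, averaged over a grid.

    forecasts: (n_members, n_points) array of a field such as 500 hPa height
    analysis:  (n_points,) verifying analysis for the same field
    """
    mean_fcst = forecasts.mean(axis=0)
    return float(np.sqrt(np.mean((mean_fcst - analysis) ** 2)))

def ensemble_spread(forecasts):
    """RMS deviation of the members about the ensemble mean, area-averaged.

    A small spread is taken to indicate high confidence in the forecast.
    """
    mean_fcst = forecasts.mean(axis=0)
    return float(np.sqrt(np.mean((forecasts - mean_fcst) ** 2)))
```

In the spread-skill framework, these two numbers are computed for many forecast cases at a fixed lead time and then correlated; a strong positive correlation would let the spread serve as an a priori predictor of the error.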
It is found that for 500 hPa height over North America, based on analysis of the winter of 1995-1996, ensemble spread is an excellent predictor of the root-mean-square (RMS) error of the ensemble mean forecast when averaged over large areas. Locally, the correlation is not so robust. The RMS error of the same ensemble mean forecast at different lead times does appear to be correlated beyond day 1. However, we do not see the clear stratification that was apparent in the case of ensemble mean spread, particularly beyond the first week of the forecast. Cases with large error growth also saturate more quickly, often at lower levels of RMS error than slower-saturating forecasts. This early saturation confounds the relationship between error magnitude at short and long lead times. Consistency of sequential forecasts verifying on the same date is also a good predictor of large-scale forecast skill, especially at shorter lead times. Beyond day 9, the skill of this method vanishes. Evidence is found confirming that the forecast model performs better during periods of low large-scale temporal variance in the atmospheric circulation.
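The consistency measure used as the third predictor can be sketched as an RMS difference between forecasts made on successive days but verifying at the same time. Again this is only an illustrative sketch; the function name and array shapes are assumptions, not the report's implementation.

```python
import numpy as np

def lagged_forecast_consistency(fcst_today, fcst_yesterday):
    """RMS difference between two forecasts (e.g., ensemble means) issued on
    successive days but verifying at the same time, area-averaged over a grid.

    A small value indicates a stationary forecast for that date, which the
    report finds to be a good predictor of large-scale skill at shorter
    lead times (but not beyond day 9).
    """
    return float(np.sqrt(np.mean((fcst_today - fcst_yesterday) ** 2)))
```

As with spread, this quantity would be correlated against the verified RMS error over many cases to assess its value as an a priori skill predictor.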
Complete copies of this report are available from: Center for Ocean-Land-Atmosphere Studies
last update: 16 June 1997
comments to: email@example.com