Abstract: This paper proposes a revised vertical diffusion package with a nonlocal turbulent mixing coefficient in the planetary boundary layer (PBL). Based on the study of Noh et al. and accumulated results on the behavior of the Hong and Pan algorithm, a revised vertical diffusion algorithm suitable for weather forecasting and climate prediction models is developed. The major ingredient of the revision is the inclusion of an explicit treatment of entrainment processes at the top of the PBL. The new diffusion package is called the Yonsei University PBL (YSU PBL). In a one-dimensional offline test framework, the revised scheme is found to improve several features compared with the Hong and Pan implementation. The YSU PBL increases boundary layer mixing in the thermally induced free convection regime and decreases it in the mechanically induced forced convection regime, which alleviates the well-known problems in the Medium-Range Forecast (MRF) PBL. Excessive mixing in the mixed layer in the presence of strong winds is resolved. The overly rapid growth of the PBL in the Hong and Pan scheme is also rectified. The scheme has been successfully implemented in the Weather Research and Forecasting (WRF) model, producing a more realistic structure of the PBL and its development. In a case study of a frontal tornado outbreak, some systematic biases in the large-scale features, such as an afternoon cold bias at 850 hPa in the MRF PBL, are resolved. Consequently, the new scheme better reproduces the convective inhibition. Because the convective inhibition is more accurately predicted, the widespread light precipitation that appears ahead of the front with the MRF PBL is reduced. In the frontal region, the YSU PBL scheme improves some characteristics, such as a double line of intense convection. This is because the boundary layer in the YSU PBL scheme remains less diluted by entrainment, leaving more fuel for severe convection when the front triggers it.
Abstract: This paper describes the Simple Ocean Data Assimilation (SODA) reanalysis of ocean climate variability. In the assimilation, a model forecast produced by an ocean general circulation model with an average resolution of 0.25° × 0.4° × 40 levels is continuously corrected by contemporaneous observations, with corrections estimated every 10 days. The basic reanalysis, SODA 1.4.2, spans the 44-yr period from 1958 to 2001, which complements the span of the 40-yr European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis (ERA-40). The observation set for this experiment includes the historical archive of hydrographic profiles supplemented by ship intake measurements, moored hydrographic observations, and remotely sensed SST. A parallel run, SODA 1.4.0, is forced with identical surface boundary conditions, but without data assimilation. The new reanalysis represents a significant improvement over a previously published version of the SODA algorithm. In particular, eddy kinetic energy and sea level variability are much larger than in previous versions and are more similar to estimates from independent observations. One issue addressed in this paper is the relative importance of the model forecast versus the observations for the analysis. The results show that at near-annual frequencies the forecast model has a strong influence, whereas at decadal frequencies the observations become increasingly dominant in the analysis. As a consequence, interannual variability in SODA 1.4.2 closely resembles interannual variability in SODA 1.4.0. However, decadal anomalies of the 0–700-m heat content from SODA 1.4.2 more closely resemble heat content anomalies based on observations.
Adrian E. Raftery, Tilmann Gneiting, Leonhard Held, Michael Polakowski
Abstract: Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the models' relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and this can serve as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size by simulating from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability and the second to the within-forecast variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of ensembles to exhibit a spread-error correlation yet be underdispersive. The method was applied to 48-h forecasts of surface temperature in the Pacific Northwest in January–June 2000 using the University of Washington fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5) ensemble. The predictive PDFs were much better calibrated than the raw ensemble, and the BMA forecasts were sharp in that 90% BMA prediction intervals were 66% shorter on average than those produced by sample climatology.
As a by-product, BMA yields a deterministic point forecast, and this had root-mean-square errors 7% lower than the best of the ensemble members and 8% lower than the ensemble mean. Similar results were obtained for forecasts of sea level pressure. Simulation experiments show that BMA performs reasonably well when the underlying ensemble is calibrated, or even overdispersed.
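The BMA construction described in this abstract (a weighted mixture of Gaussians centered on bias-corrected member forecasts, its two-component variance decomposition, and sampling an unweighted ensemble from it) can be sketched as follows. This is a minimal illustration, not the authors' code: the weights and the common standard deviation would in practice be estimated by EM over a training period, which is omitted here, and the member PDFs need not be Gaussian in general.

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bma_pdf(y, corrected_forecasts, weights, sigma):
    """BMA predictive PDF: a weighted average of PDFs centered on the
    individual bias-corrected member forecasts."""
    return sum(w * normal_pdf(y, f, sigma)
               for w, f in zip(weights, corrected_forecasts))

def bma_variance(corrected_forecasts, weights, sigma):
    """Predictive variance = between-forecast variability (spread of the
    mixture centers) + within-forecast variability (single-PDF variance)."""
    f = np.asarray(corrected_forecasts, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean = w @ f
    between = w @ (f - mean) ** 2
    within = sigma ** 2
    return between + within

def bma_sample(corrected_forecasts, weights, sigma, size, rng):
    """Represent the BMA PDF as an unweighted ensemble of any desired size
    by simulating from the predictive mixture."""
    idx = rng.choice(len(corrected_forecasts), size=size, p=weights)
    return rng.normal(np.asarray(corrected_forecasts, dtype=float)[idx], sigma)
```

An ensemble-spread-only interval would use the `between` term alone, which is exactly why raw ensembles can show a spread-error correlation while remaining underdispersive.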
Tilmann Gneiting, Adrian E. Raftery, Anton H. Westveld, Tom Goldman
Abstract: Ensemble prediction systems typically show positive spread-error correlation, but they are subject to forecast bias and dispersion errors, and are therefore uncalibrated. This work proposes the use of ensemble model output statistics (EMOS), an easy-to-implement postprocessing technique that addresses both forecast bias and underdispersion and takes into account the spread-skill relationship. The technique is based on multiple linear regression and is akin to the superensemble approach that has traditionally been used for deterministic-style forecasts. The EMOS technique yields probabilistic forecasts that take the form of Gaussian predictive probability density functions (PDFs) for continuous weather variables and can be applied to gridded model output. The EMOS predictive mean is a bias-corrected weighted average of the ensemble member forecasts, with coefficients that can be interpreted in terms of the relative contributions of the member models to the ensemble, and provides a highly competitive deterministic-style forecast. The EMOS predictive variance is a linear function of the ensemble variance. For fitting the EMOS coefficients, the method of minimum continuous ranked probability score (CRPS) estimation is introduced. This technique finds the coefficient values that optimize the CRPS for the training data. The EMOS technique was applied to 48-h forecasts of sea level pressure and surface temperature over the North American Pacific Northwest in spring 2000, using the University of Washington mesoscale ensemble. When compared to the bias-corrected ensemble, deterministic-style EMOS forecasts of sea level pressure had root-mean-square error 9% less and mean absolute error 7% less. The EMOS predictive PDFs were sharp, and much better calibrated than the raw ensemble or the bias-corrected ensemble.
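The Gaussian EMOS predictive distribution and the CRPS criterion used to fit it can be sketched as below. This is an illustrative sketch, not the authors' implementation: the coefficient names `a`, `b`, `c`, `d` are generic placeholders, and the actual numerical minimization over a training set is not shown. The closed-form Gaussian CRPS, however, is standard.

```python
import math
import numpy as np

def emos_forecast(a, b, c, d, member_forecasts, ensemble_variance):
    """Gaussian EMOS predictive PDF N(mu, var):
    mu = bias-corrected weighted average of the member forecasts,
    var = linear function of the ensemble variance."""
    mu = a + float(np.dot(b, member_forecasts))
    var = c + d * ensemble_variance
    return mu, var

def crps_gaussian(y, mu, var):
    """Closed-form CRPS of a Gaussian predictive PDF at verification y:
    sigma * (z * (2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi))."""
    sigma = math.sqrt(var)
    z = (y - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def mean_crps(params, training_cases):
    """Objective for minimum-CRPS estimation: the average CRPS over the
    training data. Each case is (member_forecasts, ensemble_variance, obs)."""
    a, b, c, d = params
    return float(np.mean([crps_gaussian(y, *emos_forecast(a, b, c, d, f, s2))
                          for f, s2, y in training_cases]))
```

Minimum-CRPS estimation then searches for `(a, b, c, d)` minimizing `mean_crps`, with the fitted `b` coefficients interpretable as the relative contributions of the member models.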
Abstract: The application of particle filters in geophysical systems is reviewed. Some background on Bayesian filtering is provided, and the existing methods are discussed. The emphasis is on the methodology, and not so much on the applications themselves. It is shown that direct application of the basic particle filter (i.e., importance sampling using the prior as the importance density) does not work in high-dimensional systems, but several variants are shown to have potential. Approximations to the full problem that try to keep some aspects of the particle filter beyond the Gaussian approximation are also presented and discussed.
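The basic particle filter cycle named in this abstract (importance sampling with the prior as the importance density, followed by resampling) can be sketched as follows for a scalar state. This is a schematic illustration under simplifying assumptions not stated in the abstract: a Gaussian observation error and an identity observation operator. The effective sample size makes the high-dimensional degeneracy problem visible: as the number of independent observations grows, the weights collapse onto a few particles.

```python
import numpy as np

def particle_filter_step(particles, forward_model, observation, obs_std, rng):
    """One cycle of the basic particle filter: importance sampling with the
    prior (model forecast) as the importance density, then resampling."""
    # 1) Forecast: propagate each particle with the (possibly stochastic) model.
    forecast = forward_model(particles)
    # 2) Weight: likelihood of the observation given each particle
    #    (Gaussian observation error, identity observation operator assumed).
    log_w = -0.5 * ((observation - forecast) / obs_std) ** 2
    w = np.exp(log_w - log_w.max())          # stabilized against underflow
    w /= w.sum()
    # Effective sample size: 1/sum(w^2) collapses toward 1 in high dimensions,
    # which is the degeneracy problem the review discusses.
    ess = 1.0 / float(np.sum(w ** 2))
    # 3) Systematic resampling back to an equally weighted ensemble.
    positions = (rng.random() + np.arange(w.size)) / w.size
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), w.size - 1)
    return forecast[idx], ess
```

In the linear-Gaussian special case the resampled ensemble should approximate the Kalman posterior, which gives a simple sanity check on the implementation.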