
Journal of Forecasting
Indexed in: SCOPUS (1982–2023), SSCI (ISI)
Print ISSN: 0277-6693
Online ISSN: 1099-131X
Country: United Kingdom
Publisher: WILEY, John Wiley and Sons Ltd
Featured articles
It is well known that a linear combination of forecasts can outperform individual forecasts. The common practice, however, is to obtain a weighted average of forecasts, with the weights adding up to unity. This paper considers three alternative approaches to obtaining linear combinations. It is shown that the best method is to add a constant term and not to constrain the weights to add to unity. These methods are tested with data on forecasts of quarterly hog prices, both within and out of sample. It is demonstrated that the optimum method proposed here is superior to the common practice of letting the weights add up to one.
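The contrast the abstract draws can be illustrated with a minimal sketch: combining two forecasts either as a weighted average with weights summing to one, or via an unconstrained regression with a constant term. The data and coefficients below are purely illustrative (the paper used quarterly hog prices), and the least-squares solver is a bare-bones normal-equations implementation, not the authors' code.

```python
def ols(X, y):
    """Least squares via normal equations with Gaussian elimination."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    beta = [0.0] * k                          # back substitution
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Illustrative series with two biased individual forecasters
y  = [10, 12, 11, 13, 15, 14, 16, 18]
e1 = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
e2 = [-0.3, 0.3, 0.3, -0.3, -0.3, 0.3, 0.3, -0.3]
f1 = [v + 1.0 + e for v, e in zip(y, e1)]     # upward-biased forecaster
f2 = [v - 2.0 + e for v, e in zip(y, e2)]     # downward-biased forecaster

# Common practice: weights constrained to sum to one (equal weights here)
avg = [(a + b) / 2 for a, b in zip(f1, f2)]

# Paper's preferred method: constant term, weights unconstrained
X = [[1.0, a, b] for a, b in zip(f1, f2)]
b0, b1, b2 = ols(X, y)
fitted = [b0 + b1 * a + b2 * c for (_, a, c) in X]

mse = lambda f: sum((fi - yi) ** 2 for fi, yi in zip(f, y)) / len(y)
print(mse(avg), mse(fitted))  # the intercept absorbs the forecasters' biases
```

Because the constrained average is one point in the unconstrained model's parameter space (zero intercept, weights 0.5 and 0.5), the regression with a constant can never do worse in sample, which is the intuition behind the paper's ranking of the methods.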
Recently developed structural models of the global crude oil market imply that the surge in the real price of oil between mid-2003 and mid-2008 was driven by repeated positive shocks to the demand for all industrial commodities, reflecting unexpectedly high growth mainly in emerging Asia. We evaluate this proposition using an alternative data source and a different econometric methodology. Rather than inferring demand shocks from an econometric model, we utilize a direct measure of global demand shocks based on revisions of professional real gross domestic product (GDP) growth forecasts. We show that forecast surprises during 2003–2008 were associated primarily with unexpected growth in emerging economies (in conjunction with much smaller positive GDP-weighted forecast surprises in the major industrialized economies), that markets were repeatedly surprised by the strength of this growth, that these surprises were associated with a hump-shaped response of the real price of oil that reaches its peak after 12–16 months, and that news about global growth predicts much of the surge in the real price of oil from mid-2003 until mid-2008 and much of its subsequent decline. Copyright © 2012 John Wiley & Sons, Ltd.
The definitions of forecasting vary to a certain extent, but they all have the view into the future in common. The future is unknown, but the broad, general directions can be guessed at and reasonably dealt with. Foresight goes further than forecasting, including aspects of networking and the preparation of decisions concerning the future. This is one reason why, in the 1990s, when foresight focused attention on a national scale in many countries, the wording also changed from forecasting to foresight. Foresight not only looks into the future by using all instruments of futures research, but includes utilizing implementations for the present. What does a result of a futures study mean for the present? Foresight is not planning, but foresight results provide ‘information’ about the future and are therefore one step in the planning and preparation of decisions. In this paper, some of the differences are described in a straightforward manner and demonstrated in the light of the German foresight process ‘Futur’. Copyright © 2003 John Wiley & Sons, Ltd.
A univariate structural time series model based on the traditional decomposition into trend, seasonal and irregular components is defined. A number of methods of computing maximum likelihood estimators are then considered. These include direct maximization of various time-domain likelihood functions. The asymptotic properties of the estimators are given and a comparison between the various methods in terms of computational efficiency and accuracy is made. The methods are then extended to models with explanatory variables.
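The simplest member of this model class is the local level model (a random-walk trend plus irregular, with no seasonal), whose likelihood is computed by the Kalman filter via the prediction error decomposition. The sketch below illustrates that mechanism only; the data, the grid search standing in for numerical optimization, and the large-variance approximation to a diffuse prior are all illustrative assumptions, not the paper's procedure.

```python
import math

def local_level_loglik(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Gaussian log-likelihood of the local level model
        y_t  = mu_t + eps_t,      eps_t ~ N(0, sigma2_eps)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, sigma2_eta)
    via the Kalman filter (prediction error decomposition). A diffuse
    prior on the level is approximated by a large initial variance p0."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + sigma2_eps                 # one-step prediction error variance
        v = yt - a                         # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * f) + v * v / f)
        k = p / f                          # Kalman gain
        a = a + k * v                      # filtered level
        p = p * (1 - k) + sigma2_eta       # predicted variance for next step
    return ll

y = [4.1, 4.3, 4.0, 4.4, 4.6, 4.5, 4.8, 5.0]
# Crude grid search standing in for numerical maximum likelihood
best = max((local_level_loglik(y, se, sn), se, sn)
           for se in (0.01, 0.05, 0.1, 0.5)
           for sn in (0.01, 0.05, 0.1, 0.5))
print("max log-lik %.3f at sigma2_eps=%s, sigma2_eta=%s" % best)
```

In practice the grid search would be replaced by a numerical optimizer over the variance hyperparameters, which is the kind of time-domain likelihood maximization the abstract compares.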
This paper applies the GARCH‐MIDAS (mixed data sampling) model to examine whether information contained in macroeconomic variables can help to predict short‐term and long‐term components of the return variance. A principal component analysis is used to incorporate the information contained in different variables. Our results show that including low‐frequency macroeconomic information in the GARCH‐MIDAS model improves the prediction ability of the model, particularly for the long‐term variance component. Moreover, the GARCH‐MIDAS model augmented with the first principal component outperforms all other specifications, indicating that the constructed principal component can be considered as a good proxy of the business cycle. Copyright © 2013 John Wiley & Sons, Ltd.
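The two-component structure the abstract refers to can be sketched as follows: a long-run variance component driven by MIDAS-weighted lags of a low-frequency variable, multiplied by a unit-mean short-run GARCH(1,1) component. This is a simplified illustration of one common GARCH-MIDAS parameterization (beta lag weights with the first weight parameter fixed at one); all numbers and the macro series are invented, and the principal-component step of the paper is omitted.

```python
import math

def beta_weights(K, w2):
    """One common MIDAS beta-lag weighting scheme (first parameter fixed
    at 1), normalized so the K lag weights sum to one."""
    raw = [(1 - (k + 1) / (K + 1)) ** (w2 - 1) for k in range(K)]
    s = sum(raw)
    return [r / s for r in raw]

def garch_midas_variance(returns, macro, theta, m, w2, alpha, beta, K):
    """Conditional variances from a stylized GARCH-MIDAS-type model:
      long-run  tau_t = exp(m + theta * weighted lags of the macro series)
      short-run g_t   = (1 - alpha - beta) + alpha * r_{t-1}^2 / tau + beta * g_{t-1}
    with conditional variance sigma2_t = tau_t * g_t."""
    w = beta_weights(K, w2)
    g, out = 1.0, []
    for t in range(K, len(returns)):
        tau = math.exp(m + theta * sum(w[k] * macro[t - 1 - k] for k in range(K)))
        out.append(tau * g)                                   # sigma2_t
        g = (1 - alpha - beta) + alpha * returns[t] ** 2 / tau + beta * g
    return out

# Illustrative daily returns and a slow-moving macro indicator
rets  = [0.012, -0.02, 0.015, -0.007, 0.03, -0.011,
         0.004, 0.022, -0.016, 0.009, -0.025, 0.018]
macro = [0.1, 0.2, 0.15, 0.1, 0.05, 0.0, -0.05, -0.1, 0.0, 0.1, 0.2, 0.15]
vols = garch_midas_variance(rets, macro, theta=0.5, m=-7.0,
                            w2=3.0, alpha=0.06, beta=0.91, K=4)
```

Raising the macro variable raises the long-run component when theta is positive, which is how low-frequency macroeconomic information enters the variance forecast in this model family.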
An ordered probit regression model estimated using 10 years' data is used to forecast English league football match results. As well as past match results data, the significance of the match for end‐of‐season league outcomes, the involvement of the teams in cup competition and the geographical distance between the two teams' home towns all contribute to the forecasting model's performance. The model is used to test the weak‐form efficiency of prices in the fixed‐odds betting market. A strategy of selecting end‐of‐season bets with a favourable expected return according to the model appears capable of generating a positive return. Copyright © 2004 John Wiley & Sons, Ltd.
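The mechanics of an ordered probit for the three match outcomes (away win, draw, home win) can be sketched directly: a latent index is compared against estimated cutpoints, and outcome probabilities are differences of normal CDF values. The index value and cutpoints below are hypothetical, not estimates from the paper.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cuts):
    """P(outcome = k) for an ordered probit with latent index xb and
    increasing cutpoints c_1 < ... < c_{K-1}:
        P(y = k) = Phi(c_k - xb) - Phi(c_{k-1} - xb),
    with c_0 = -inf and c_K = +inf."""
    edges = [-math.inf] + list(cuts) + [math.inf]
    return [norm_cdf(edges[k + 1] - xb) - norm_cdf(edges[k] - xb)
            for k in range(len(edges) - 1)]

# Hypothetical match: positive latent index favouring the home side,
# with illustrative cutpoints separating the three ordered outcomes
probs = ordered_probit_probs(0.4, cuts=(-0.4, 0.4))
print(probs)  # [P(away win), P(draw), P(home win)]
```

A betting strategy of the kind the abstract tests would then compare each model probability with the probability implied by the fixed odds and bet where the expected return is favourable.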
Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime‐switching behaviour with an approach relying on Markov‐switching autoregressive (MSAR) models. An appropriate parameterization of the model coefficients is introduced, along with an adaptive estimation method allowing accommodation of long‐term variations in the process characteristics. The objective criterion to be recursively optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one‐step‐ahead point forecasting of 10 min resolution time series of wind power at two large offshore wind farms. They are favourably compared against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill. Copyright © 2010 John Wiley & Sons, Ltd.
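The regime inference underlying Markov-switching models can be illustrated with the Hamilton filter for a two-regime Gaussian model: a calm, low-variance regime versus a fluctuating, high-variance one. This is a deliberately stripped-down stand-in for the paper's MSAR model (no autoregressive terms, no adaptive estimation, no forgetting factor), with invented parameter values.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def hamilton_filter(y, mus, variances, P, pi0):
    """Filtered regime probabilities P(s_t = j | y_1..y_t) for a two-regime
    Gaussian model with Markov switching; P[i][j] = P(s_t=j | s_{t-1}=i)."""
    probs, out = list(pi0), []
    for yt in y:
        # Predict the regime, then reweight by each regime's likelihood
        pred = [sum(probs[i] * P[i][j] for i in range(2)) for j in range(2)]
        lik = [pred[j] * normal_pdf(yt, mus[j], variances[j]) for j in range(2)]
        total = sum(lik)
        probs = [l / total for l in lik]
        out.append(probs)
    return out

# Small fluctuations, then a burst of large ones (illustrative numbers only)
y = [0.1, -0.05, 0.02, 1.5, -1.2, 0.9, 0.05, -0.1]
P = [[0.95, 0.05], [0.10, 0.90]]   # persistent regimes
filt = hamilton_filter(y, mus=(0.0, 0.0), variances=(0.05, 1.0),
                       P=P, pi0=(0.5, 0.5))
```

The filter assigns the large-magnitude observations to the high-variance regime with near certainty, which is the regime-switching behaviour the abstract describes capturing in wind power fluctuations.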
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors as the development of a cause–effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy is difficult. Recent research indicates that auditors only signal bankruptcy in about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the problem of apparent indiscernibility between objects in a set, with reported bankruptcy prediction accuracy ranging from 76% to 88% in two recent studies. These accuracy levels appear to be superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period.
Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which had both significant theoretical support and were present in multiple studies. These factors were expressed as variables, and data for them were then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables. The rough sets models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going concern issues were included.
In contrast with prior rough sets theory research which suggested that rough sets theory offered significant bankruptcy predictive improvements for auditors, the rough sets models developed in this research did not provide any significant comparative advantage with regard to prediction accuracy over the actual auditors' methodologies. The current research results should be fairly robust since this rough sets theory based research employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real world bankruptcy/non‐bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
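The core rough sets machinery the study applies, partitioning objects by indiscernibility on a set of attributes and forming lower and upper approximations of a target concept, can be sketched in a few lines. The firms, attributes, and discretized values below are toy examples, not the study's 11 variables or its sample.

```python
def partition(objects, attrs):
    """Group objects that are indiscernible on the given attributes,
    i.e. the equivalence classes of the indiscernibility relation."""
    groups = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, set()).add(name)
    return list(groups.values())

def approximations(objects, attrs, target):
    """Lower approximation: blocks entirely inside the target concept
    (certain members). Upper approximation: blocks intersecting it
    (possible members)."""
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

# Toy firms described by two coarsened ratios; "bankrupt" is the concept
firms = {
    "A": {"leverage": "high", "liquidity": "low"},
    "B": {"leverage": "high", "liquidity": "low"},
    "C": {"leverage": "high", "liquidity": "high"},
    "D": {"leverage": "low",  "liquidity": "high"},
}
bankrupt = {"A", "C"}
lower, upper = approximations(firms, ["leverage", "liquidity"], bankrupt)
```

Here firms A and B are indiscernible yet only A is bankrupt, so A falls in the upper but not the lower approximation; decision rules derived from such approximations are what rough sets bankruptcy models use for classification.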
A storm surge barrier was constructed in 1987 in the Oosterschelde estuary in the south-western delta of Holland to provide protection from flooding, while largely maintaining the tidal characteristics of the estuary. Despite efforts to minimize the hydraulic changes resulting from the barrage, it was expected that exchange with the North Sea, suspended sediment concentration and nutrient loads would decrease considerably. A model of the nutrients, algae and bottom organisms (mainly cockles and mussels) was developed to predict possible changes in the availability of food for these organisms. Although the model is based on standard constructs of ecology and hydraulics, many of its parameters are known only with low accuracy, being expressed as a range of possible values. Running the model with all possible values of the parameters gives rise to a fairly wide range of model output responses. The calibration procedure used herein does not seek a single optimal value for the parameters but a decrease in the parameter range and thus a reduction in model prediction uncertainty. The field data available for calibration of the model are weighted according to their relationship with the model's objective, i.e. to predict food availability for shellfish. Despite the considerable physical changes resulting from the barrier, food availability for shellfish is predicted to remain largely unchanged, due to the compensating effects of several other accompanying changes. There appears to be room for the extension of mussel culture, but at an increased risk of adverse conditions arising.
This paper investigates the time‐varying volatility patterns of some major commodities as well as the potential factors that drive their long‐term volatility component. For this purpose, we make use of a recently proposed generalized autoregressive conditional heteroskedasticity–mixed data sampling approach, which typically allows us to examine the role of economic and financial variables of different frequencies. Using commodity futures for Crude Oil (WTI and Brent), Gold, Silver and Platinum, as well as a commodity index, our results show the necessity for disentangling the short‐term and long‐term components in modeling and forecasting commodity volatility. They also indicate that the long‐term volatility of most commodity futures is significantly driven by the level of global real economic activity as well as changes in consumer sentiment, industrial production, and economic policy uncertainty. However, the forecasting results are not alike across commodity futures as no single model fits all commodities.