Journal of Forecasting
Notable scientific publications
* Data are provided for reference only
Forecasting the financial distress of corporations is a difficult task in economies undergoing transition, as data are scarce and highly imbalanced. This research tackles these difficulties by gathering reliable financial distress data in the context of a transition economy and employing the synthetic minority oversampling technique (SMOTE). The study employs seven models, namely linear discriminant analysis (LDA), logistic regression (LR), support vector machines (SVMs), neural networks (NNs), decision trees (DTs), random forests (RFs), and the Merton model, to predict financial distress among publicly traded companies in Vietnam between 2011 and 2021. The first six models use accounting-based variables, while the Merton model uses market-based variables. The findings indicate that while all models perform fairly well in predicting outcomes for nondelisted firms, they perform somewhat poorly for delisted firms on various measures, including balanced accuracy, the Matthews correlation coefficient (MCC), precision, recall, and the F1 score. The study shows that models incorporating both Altman's and Ohlson's variables consistently outperform those using only Altman's or only Ohlson's variables in terms of balanced accuracy. Additionally, the study finds that NNs are generally the most effective models in terms of both balanced accuracy and MCC. The most important variable among Altman's variables, as well as in the combined set of Altman's and Ohlson's variables, is “reat” (retained earnings to total assets), whereas “ltat” (total liabilities to total assets) and “wcapat” (working capital to total assets) are the most important of Ohlson's variables. The study also reveals that, in most cases, the models perform better for big firms than for small firms, and typically better in good years than in bad years.
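As a hedged illustration of the oversampling step described above, the sketch below applies SMOTE from the imbalanced-learn package to a synthetic, highly imbalanced two-class data set before fitting one of the accounting-based classifiers (logistic regression here). The data, class proportions and split are placeholders, not the study's actual Vietnamese sample.

```python
# Sketch: rebalancing an imbalanced distress data set with SMOTE, then fitting
# a logistic regression. All data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for accounting ratios and a rare "distressed" label (~5%).
X, y = make_classification(n_samples=5000, n_features=8, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# Oversample only the training set so the test set keeps its real imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
pred = clf.predict(X_test)
print("balanced accuracy:", balanced_accuracy_score(y_test, pred))
print("MCC:", matthews_corrcoef(y_test, pred))
```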
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long-memory property, estimation proceeds simply by quasi-maximum likelihood. This paper investigates its forecasting ability using the realized kernels of 34 different assets provided by the Oxford-Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log-HAR and HEAVY-RM in within-sample fitting and in out-of-sample (1-, 10- and 22-step) forecasts. It performs best in both pointwise and cumulative comparisons of multi-step-ahead forecasts, regardless of the loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.
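The following is a minimal sketch, under an assumed parameterization, of a multiplicative error model with heterogeneous daily, weekly and monthly components for realized volatility, estimated by quasi-maximum likelihood with an exponential quasi-likelihood. The component structure and the simulated series are illustrative, not the paper's exact specification.

```python
# Sketch: multiplicative error model with heterogeneous components,
#   mu_t = w + a*RV_{t-1} + b*mean(RV_{t-5..t-1}) + c*mean(RV_{t-22..t-1}),
# estimated by exponential quasi-maximum likelihood. Data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
rv = rng.gamma(shape=2.0, scale=0.5, size=2000)  # placeholder realized volatility

def components(rv):
    d = rv[21:-1]                                                   # daily lag
    w = np.array([rv[t - 5:t].mean() for t in range(22, len(rv))])  # weekly mean
    m = np.array([rv[t - 22:t].mean() for t in range(22, len(rv))]) # monthly mean
    return d, w, m, rv[22:]

d, w, m, y = components(rv)

def neg_qlike(theta):
    const, a, b, c = np.exp(theta)        # positivity via log-parameterization
    mu = const + a * d + b * w + c * m
    return np.sum(np.log(mu) + y / mu)    # exponential quasi-likelihood (QLIKE-type)

res = minimize(neg_qlike, x0=np.log([0.1, 0.3, 0.3, 0.3]), method="Nelder-Mead")
print("estimated (omega, alpha_d, alpha_w, alpha_m):", np.exp(res.x))
```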
This paper investigates the time-varying volatility patterns of some major commodities, as well as the potential factors that drive their long-term volatility component. For this purpose, we make use of a recently proposed generalized autoregressive conditional heteroskedasticity–mixed data sampling (GARCH-MIDAS) approach, which allows us to examine the role of economic and financial variables sampled at different frequencies. Using commodity futures for Crude Oil (WTI and Brent), Gold, Silver and Platinum, as well as a commodity index, our results show the necessity of disentangling the short-term and long-term components when modeling and forecasting commodity volatility. They also indicate that the long-term volatility of most commodity futures is significantly driven by the level of global real economic activity, as well as by changes in consumer sentiment, industrial production, and economic policy uncertainty. However, the forecasting results are not uniform across commodity futures, as no single model fits all commodities.
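As a rough sketch of the mixed-frequency mechanism (not the paper's estimated specification), the snippet below builds the long-run GARCH-MIDAS component as a beta-lag weighted sum of a low-frequency explanatory variable. The variable, lag length and parameter values are assumptions for illustration only.

```python
# Sketch: long-run component of a GARCH-MIDAS model,
#   tau_t = exp(m + theta * sum_k phi_k(w1, w2) * X_{t-k}),
# where phi_k are normalized beta-lag weights and X is a low-frequency macro
# variable (e.g., monthly industrial production growth). Illustrative only.
import numpy as np

def beta_weights(K, w1=1.0, w2=5.0):
    k = np.arange(1, K + 1) / (K + 1)
    w = k ** (w1 - 1) * (1 - k) ** (w2 - 1)
    return w / w.sum()                      # weights sum to one

def long_run_component(x, K=12, m=0.1, theta=0.3, w1=1.0, w2=5.0):
    """tau_t computed from the K most recent low-frequency observations of x."""
    phi = beta_weights(K, w1, w2)
    tau = np.array([m + theta * np.dot(phi, x[t - K:t][::-1])
                    for t in range(K, len(x))])
    return np.exp(tau)                      # exponential keeps tau_t positive

rng = np.random.default_rng(1)
macro = rng.normal(size=120)                # placeholder monthly macro series
print(long_run_component(macro)[:5])
```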
We investigate whether crude oil price volatility is predictable by conditioning on macroeconomic variables. We consider a large number of predictors, take into account the possibility that relative predictive performance varies over the out-of-sample period, and shed light on the economic drivers of crude oil price volatility. Results using monthly data from 1983:M1 to 2018:M12 document that variables related to crude oil production and economic uncertainty, as well as variables that either describe the current stance of the economy or provide information about its future state, forecast crude oil price volatility at the population level one month ahead. Evidence of finite-sample predictability, on the other hand, is very weak. A detailed examination of our out-of-sample results using the fluctuation test suggests that this is because relative predictive performance changes drastically over the out-of-sample period. The predictive power associated with the more successful macroeconomic variables is concentrated in the period from the Great Recession until 2015. These variables also generate the strongest signal of a decrease in the price of crude oil towards the end of 2008.
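To illustrate how time-varying relative forecast performance can be tracked (in the spirit of a fluctuation-type analysis, though not the exact test statistic used in the paper), the hedged sketch below computes a rolling standardized mean of the loss differential between a predictor-based volatility forecast and a benchmark. The simulated losses and window length are placeholders.

```python
# Sketch: rolling standardized loss differential between a macro-based volatility
# forecast and a benchmark, to visualize when relative performance changes.
# Losses are simulated placeholders; a positive statistic favors the macro model.
import numpy as np

rng = np.random.default_rng(2)
loss_bench = rng.chisquare(df=2, size=400)          # e.g., squared forecast errors
loss_macro = loss_bench - 0.2 + rng.normal(scale=0.5, size=400)

d = loss_bench - loss_macro                          # loss differential
window = 60

def rolling_stat(d, window):
    stats = []
    for t in range(window, len(d) + 1):
        seg = d[t - window:t]
        stats.append(np.sqrt(window) * seg.mean() / seg.std(ddof=1))
    return np.array(stats)

stat = rolling_stat(d, window)
print("fraction of windows favoring the macro model:", (stat > 0).mean())
```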
An ordered probit regression model estimated using 10 years' data is used to forecast English league football match results. As well as past match results data, the significance of the match for end‐of‐season league outcomes, the involvement of the teams in cup competition and the geographical distance between the two teams' home towns all contribute to the forecasting model's performance. The model is used to test the weak‐form efficiency of prices in the fixed‐odds betting market. A strategy of selecting end‐of‐season bets with a favourable expected return according to the model appears capable of generating a positive return. Copyright © 2004 John Wiley & Sons, Ltd.
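A minimal sketch of an ordered probit for match outcomes (away win < draw < home win) is shown below using statsmodels. The covariates (recent form difference, cup involvement, distance) and the simulated data are hypothetical stand-ins for the paper's regressors, not its estimated model.

```python
# Sketch: ordered probit for football results coded 0 = away win, 1 = draw,
# 2 = home win, with hypothetical covariates. Data are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 2000
X = pd.DataFrame({
    "form_diff": rng.normal(size=n),              # home minus away recent results
    "cup_involvement": rng.integers(0, 2, size=n),
    "distance_km": rng.uniform(10, 500, size=n),
})
latent = (0.8 * X["form_diff"] - 0.2 * X["cup_involvement"]
          + 0.001 * X["distance_km"] + rng.normal(size=n))
y = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf], labels=[0, 1, 2]).astype(int)

model = OrderedModel(y, X, distr="probit")        # no constant: thresholds absorb it
result = model.fit(method="bfgs", disp=False)
probs = result.predict(X)                          # P(away win), P(draw), P(home win)
print(result.params)
print(np.asarray(probs)[:3])
```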
Recently developed structural models of the global crude oil market imply that the surge in the real price of oil between mid-2003 and mid-2008 was driven by repeated positive shocks to the demand for all industrial commodities, reflecting unexpectedly high growth mainly in emerging Asia. We evaluate this proposition using an alternative data source and a different econometric methodology. Rather than inferring demand shocks from an econometric model, we utilize a direct measure of global demand shocks based on revisions of professional real gross domestic product (GDP) growth forecasts. We show that forecast surprises during 2003–2008 were associated primarily with unexpected growth in emerging economies (in conjunction with much smaller positive GDP-weighted forecast surprises in the major industrialized economies), that markets were repeatedly surprised by the strength of this growth, that these surprises were associated with a hump-shaped response of the real price of oil that reaches its peak after 12–16 months, and that news about global growth predicts much of the surge in the real price of oil from mid-2003 until mid-2008 and much of its subsequent decline. Copyright © 2012 John Wiley & Sons, Ltd.
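As a loose illustration of the measurement idea (the paper's exact construction is not reproduced here), the snippet below aggregates country-level revisions of real GDP growth forecasts into a GDP-weighted global forecast surprise. Country groups, weights and forecast values are invented.

```python
# Sketch: a GDP-weighted forecast surprise, defined here as the weighted average
# of revisions to professional real GDP growth forecasts. All numbers are invented.
import numpy as np

regions       = ["Emerging Asia", "United States", "Euro Area", "Japan"]
gdp_weights   = np.array([0.25, 0.35, 0.30, 0.10])   # hypothetical GDP shares
prev_forecast = np.array([7.0, 2.5, 2.0, 1.5])       # previous growth forecasts (%)
new_forecast  = np.array([8.2, 2.6, 2.1, 1.4])       # updated growth forecasts (%)

revision = new_forecast - prev_forecast               # forecast revisions
global_surprise = np.dot(gdp_weights, revision)       # GDP-weighted surprise
print(f"GDP-weighted global growth surprise: {global_surprise:+.2f} pp")
```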
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors because it is difficult to establish a cause–effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy. Recent research indicates that auditors signal bankruptcy in only about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the apparent indiscernibility between objects in a set; in two recent studies it achieved reported bankruptcy prediction accuracies ranging from 76% to 88%. These accuracy levels appear to be superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough sets prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991–1997 period.
Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors that both had significant theoretical support and were present in multiple studies. These factors were expressed as variables, and data for the 11 variables were then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables drawn from the 11 possible predictive variables. The rough sets models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going-concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going-concern issues were included.
In contrast with prior rough sets research, which suggested that rough sets theory offered significant bankruptcy prediction improvements for auditors, the rough sets models developed in this research did not provide any significant advantage in prediction accuracy over the actual auditors' methodologies. The current results should be fairly robust since this study employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real-world bankruptcy/non-bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
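To make the rough-set machinery concrete, the sketch below computes indiscernibility classes and the lower and upper approximations of the "bankrupt" concept for a toy decision table. The firms and attributes are invented and far simpler than the models in the study.

```python
# Sketch: rough-set lower/upper approximations of the "bankrupt" concept from a
# toy decision table. Attribute values and firms are invented for illustration.
from collections import defaultdict

# (firm, condition attributes, decision)
table = [
    ("A", ("low", "high"), "bankrupt"),
    ("B", ("low", "high"), "solvent"),      # indiscernible from A -> boundary case
    ("C", ("high", "low"), "bankrupt"),
    ("D", ("low", "low"), "solvent"),
    ("E", ("high", "low"), "bankrupt"),
]

# Indiscernibility classes: firms with identical condition-attribute values.
classes = defaultdict(set)
for firm, attrs, _ in table:
    classes[attrs].add(firm)

concept = {firm for firm, _, decision in table if decision == "bankrupt"}

# Lower approximation: classes entirely inside the concept (certainly bankrupt).
lower = set().union(*(c for c in classes.values() if c <= concept))
# Upper approximation: classes that intersect the concept (possibly bankrupt).
upper = set().union(*(c for c in classes.values() if c & concept))

print("lower approximation:", sorted(lower))   # ['C', 'E']
print("upper approximation:", sorted(upper))   # ['A', 'B', 'C', 'E']
```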
Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of varying dynamic nature and magnitude, which cannot (so far) be explained by the evolution of some explanatory variable. Our proposal is to capture this regime-switching behaviour with an approach relying on Markov-switching autoregressive (MSAR) models. An appropriate parameterization of the model coefficients is introduced, along with an adaptive estimation method that accommodates long-term variations in the process characteristics. The objective criterion to be recursively optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one-step-ahead point forecasting of 10-minute resolution time series of wind power at two large offshore wind farms. They compare favourably against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill. Copyright © 2010 John Wiley & Sons, Ltd.
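For orientation, here is a minimal sketch that fits a two-regime Markov-switching autoregression to a simulated series with statsmodels and produces in-sample predictions. It omits the paper's adaptive, recursively penalized estimation with exponential forgetting, and all data and settings are placeholders.

```python
# Sketch: two-regime Markov-switching AR(2) fit to a simulated series with
# regime-dependent noise, plus in-sample predictions. This is a plain MSAR
# without adaptive, forgetting-based estimation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 800
regime = (np.sin(np.arange(n) / 40.0) > 0).astype(int)    # crude regime path
noise_scale = np.where(regime == 1, 0.6, 0.1)             # fluctuating vs calm
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.7 * y[t - 1] - 0.1 * y[t - 2] + noise_scale[t] * rng.normal()

model = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=2,
                                    switching_ar=True, switching_variance=True)
result = model.fit()

predictions = result.predict()                  # in-sample predicted values
probs = result.smoothed_marginal_probabilities  # regime probabilities over time
print(result.summary())
print(predictions[:5], probs.shape)
```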
Recent multivariate extensions of the popular heterogeneous autoregressive (HAR) model for realized volatility leave substantial information unmodelled in the residuals. We propose to capture this information by modelling and forecasting the realized covariance matrix with a system of seemingly unrelated regressions. We find that the newly proposed generalized heterogeneous autoregressive (GHAR) model outperforms competing approaches in terms of economic gains, providing a better mean–variance trade-off, while in terms of statistical precision GHAR is not substantially dominated by any other model. Our results provide a comprehensive comparison of performance when realized covariance, subsampled realized covariance and multivariate realized kernel estimators are used. We study the contribution of the estimators across different sampling frequencies and show that the multivariate realized kernel and subsampled realized covariance estimators deliver further gains compared to realized covariance estimated at a 5-minute frequency. Portfolios of various sizes are used to demonstrate the economic and statistical gains. Copyright © 2016 John Wiley & Sons, Ltd.
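A simplified sketch of the idea follows: each element of the vech of the realized covariance matrix gets its own HAR-type equation in daily, weekly and monthly lags. For brevity the equations are estimated one by one with OLS rather than jointly as a SUR system, and the simulated covariance series is a placeholder.

```python
# Sketch: HAR-type forecasting equations for each element of the vech of a
# realized covariance matrix (GHAR-style regressors), estimated equation by
# equation with OLS instead of full SUR. Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(5)
T, n_assets = 500, 3
k = n_assets * (n_assets + 1) // 2           # number of vech elements

# Placeholder positive-definite realized covariances, stacked as vech rows.
vech = np.empty((T, k))
for t in range(T):
    A = rng.normal(scale=0.1, size=(n_assets, n_assets))
    S = A @ A.T + 0.05 * np.eye(n_assets)
    vech[t] = S[np.tril_indices(n_assets)]

def har_regressors(x, t):
    """Daily, weekly (5-day mean) and monthly (22-day mean) lags of x at time t."""
    return np.array([x[t - 1], x[t - 5:t].mean(), x[t - 22:t].mean()])

forecasts = np.empty(k)
for j in range(k):                           # one HAR equation per vech element
    x = vech[:, j]
    X = np.array([np.r_[1.0, har_regressors(x, t)] for t in range(22, T)])
    y = x[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    forecasts[j] = np.r_[1.0, har_regressors(x, T)] @ beta   # one-step-ahead

print("one-step-ahead forecast of vech(RC):", forecasts)
```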
A storm surge barrier was constructed in 1987 in the Oosterschelde estuary in the south-western delta of Holland to provide protection from flooding while largely maintaining the tidal characteristics of the estuary. Despite efforts to minimize the hydraulic changes resulting from the barrage, it was expected that exchange with the North Sea, suspended sediment concentration and nutrient loads would decrease considerably. A model of the nutrients, algae and bottom organisms (mainly cockles and mussels) was developed to predict possible changes in the availability of food for these organisms. Although the model is based on standard constructs of ecology and hydraulics, many of its parameters are known only with low accuracy and can be expressed merely as ranges of possible values. Running the model with all possible values of the parameters gives rise to a fairly wide range of model output responses. The calibration procedure used herein does not seek a single optimal value for each parameter but a decrease in the parameter ranges, and thus a reduction in model prediction uncertainty. The field data available for calibration of the model are weighted according to their relationship with the model's objective, i.e. predicting food availability for shellfish. Despite the considerable physical changes resulting from the barrier, food availability for shellfish is predicted to remain largely unchanged, owing to the compensating effects of several other accompanying changes. There appears to be room for the extension of mussel culture, but at an increased risk of adverse conditions arising.
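As a generic, much simplified illustration of range-reduction calibration (not the paper's weighted procedure or its ecological model), the sketch below samples parameter sets from prior ranges, runs a toy model, keeps only sets whose output stays within observation bounds, and reports the narrowed parameter ranges.

```python
# Sketch: range-reduction calibration by Monte Carlo filtering. Parameter sets are
# sampled from prior ranges, a toy model is run, and only sets whose output falls
# within observation bounds are kept; the retained sets define narrowed ranges.
# The toy model and all numbers are invented.
import numpy as np

rng = np.random.default_rng(6)

def toy_model(growth_rate, grazing_rate):
    """Stand-in for a food-availability model: returns a single summary output."""
    return 10.0 * growth_rate / (1.0 + grazing_rate)

prior = {"growth_rate": (0.1, 1.0), "grazing_rate": (0.2, 2.0)}  # prior ranges
obs_low, obs_high = 2.0, 4.0           # acceptable band implied by field data

samples = {name: rng.uniform(lo, hi, size=10_000) for name, (lo, hi) in prior.items()}
output = toy_model(samples["growth_rate"], samples["grazing_rate"])
behavioural = (output >= obs_low) & (output <= obs_high)

for name, values in samples.items():
    kept = values[behavioural]
    print(f"{name}: prior {prior[name]} -> reduced "
          f"({kept.min():.2f}, {kept.max():.2f}), kept {behavioural.mean():.0%}")
```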