Priors via imaginary training samples of sufficient statistics for objective Bayesian hypothesis testing

Springer Science and Business Media LLC - Volume 77 - Pages 179-199 - 2019
D. Fouskakis1
1Department of Mathematics, National Technical University of Athens, Athens, Greece

Abstract

The expected-posterior prior (EPP) and the power-expected-posterior (PEP) prior are based on random imaginary observations and offer several advantages in objective Bayesian hypothesis testing. The use of sufficient statistics, when they exist, as a way to redefine the EPP and PEP prior is investigated. In this way the dimensionality of the problem can be reduced, by generating samples of sufficient statistics instead of full sets of imaginary data. On the theoretical side, it is proved that the new EPP and PEP definitions based on imaginary training samples of sufficient statistics are equivalent to the standard definitions based on individual training samples. This equivalence provides a strong justification and generalization of the definition of both the EPP and the PEP prior, since the resulting criteria coincide whether they are built from individual samples or from samples of sufficient statistics; it also avoids potential inconsistencies or paradoxes when only sufficient statistics are available. The applicability of the new definitions is explored in different hypothesis testing problems, including the case of an irregular model. Calculations are simplified, and it is shown that when testing the mean of a normal distribution the EPP and PEP prior can be expressed as a beta mixture of normal priors. The paper concludes with a discussion of the interpretation and the benefits of the proposed approach.
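The beta-mixture-of-normals form mentioned above can be illustrated with a small simulation sketch. The hyperparameter values below (the beta shape parameters and the base prior scale) are illustrative assumptions chosen for a stable simulation, not the values derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: a prior for a normal mean written as a beta mixture of
# normals, i.e. mu | w ~ N(0, sigma0^2 / w) with w ~ Beta(a, b).
# The hyperparameters below are illustrative assumptions only.
a, b = 3.0, 3.0       # beta mixing parameters (assumed)
sigma0 = 1.0          # base prior scale (assumed)
n_draws = 100_000

w = rng.beta(a, b, size=n_draws)            # mixing weights in (0, 1)
mu = rng.normal(0.0, sigma0 / np.sqrt(w))   # scale mixture of normals

# The marginal prior on mu is symmetric about zero with heavier tails
# than any single normal component (its variance is sigma0^2 * E[1/w]).
print(float(np.mean(mu)), float(np.var(mu)))
```

Mixing a normal prior over a beta-distributed scale in this way yields a marginal prior that is flatter in the tails than a fixed-variance normal, which is the qualitative behaviour such mixture representations are typically used to convey.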
