Comparison and improvement of the predictability and interpretability with ensemble learning models in QSPR applications
Abstract
Ensemble learning improves machine learning results by combining several models, yielding better predictive performance than any single model. It also benefits and accelerates research in quantitative structure–activity relationship (QSAR) and quantitative structure–property relationship (QSPR) modeling. With the growing use of ensemble learning models such as random forests, however, the effectiveness of QSAR/QSPR is limited by the machine's inability to interpret its predictions to researchers. In fact, many implementations of ensemble learning models can quantify the overall contribution of each feature: feature importance allows us to assess the relative importance of features and to interpret the predictions. However, different ensemble learning methods or implementations may lead to different feature selections for interpretation. In this paper, we compared the predictability and interpretability of four well-established ensemble learning models (random forest, extremely randomized trees, adaptive boosting, and gradient boosting) on regression and binary classification modeling tasks. Blending models were then built by combining the four ensemble learning methods. Blending led to better performance and a unified interpretation by summarizing the individual predictions from the different learning models. The important features of two case studies, which provide valuable insight into compound properties, are discussed in detail in this report. QSPR modeling with interpretable machine learning techniques can move chemical design forward by working more efficiently, confirming hypotheses, and establishing knowledge for better results.
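The blending idea described above can be sketched with scikit-learn, which the study uses. This is a minimal illustration, not the paper's exact setup: the synthetic data, hyperparameters, and simple averaging scheme are assumptions made for the example. The four learners are fitted independently; their predictions are averaged to form the blended prediction, and their normalized feature importances are averaged to give one unified feature ranking across methods.

```python
# Hedged sketch of blending four ensemble learners (assumed setup,
# not the paper's exact configuration).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (AdaBoostRegressor, ExtraTreesRegressor,
                              GradientBoostingRegressor, RandomForestRegressor)
from sklearn.model_selection import train_test_split

# Synthetic regression data stands in for the QSPR descriptor matrix.
X, y = make_regression(n_samples=300, n_features=10, n_informative=4,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "extra_trees": ExtraTreesRegressor(n_estimators=100, random_state=0),
    "adaboost": AdaBoostRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(n_estimators=100,
                                                   random_state=0),
}
for m in models.values():
    m.fit(X_tr, y_tr)

# Blend: average the individual predictions of the four learners.
blend_pred = np.mean([m.predict(X_te) for m in models.values()], axis=0)

# Unified interpretation: average each model's normalized feature
# importances to obtain a single ranking across methods.
blend_imp = np.mean([m.feature_importances_ for m in models.values()], axis=0)
ranking = np.argsort(blend_imp)[::-1]  # features, most to least important
```

Because each scikit-learn `feature_importances_` vector is normalized to sum to one, the averaged vector also sums to one and can be read as a consensus importance distribution over descriptors.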