Empirically evaluating flaky test detection techniques combining test case rerunning and machine learning models

Empirical Software Engineering - Volume 28 - Pages 1-52 - 2023
Owain Parry1, Gregory M. Kapfhammer2, Michael Hilton3, Phil McMinn1
1University of Sheffield, Sheffield, UK
2Allegheny College, Meadville, USA
3Carnegie Mellon University, Pittsburgh, USA

Abstract

A flaky test is a test case whose outcome changes without modification to the code of the test case or the program under test. These tests disrupt continuous integration, cause a loss of developer productivity, and limit the efficiency of testing. Many flaky test detection techniques are either rerunning-based, requiring repeated test case executions at considerable time cost, or machine learning-based, which is fast but offers only an approximate solution with variable detection performance. These two extremes leave developers with a stark choice. This paper introduces CANNIER, an approach for reducing the time cost of rerunning-based detection techniques by combining them with machine learning models. The empirical evaluation, involving 89,668 test cases from 30 Python projects, demonstrates that CANNIER can reduce the time cost of existing rerunning-based techniques by an order of magnitude while maintaining a detection performance that is significantly better than that of machine learning models alone. Furthermore, the comprehensive study extends existing work on machine learning-based detection and reveals a number of additional findings, including (1) an assessment of machine learning models for detecting polluter test cases; (2) evidence that using the mean values of dynamic test case features from repeated measurements slightly improves the detection performance of machine learning models; and (3) correlations between various test case features and the probability of a test case being flaky.
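To make the hybrid idea concrete, the sketch below shows one plausible way to combine a learned flakiness predictor with selective rerunning: a trained classifier assigns each test a flakiness probability, and only tests whose probability falls in an ambiguous middle band are rerun. This is a minimal illustration of the abstract's description, not the authors' exact pipeline; the scikit-learn model choice, the `LOWER`/`UPPER` thresholds, the `RERUNS` budget, and the `rerun_test` callback are all illustrative assumptions.

```python
# Hedged sketch: model-guided selective rerunning for flaky test detection.
# Thresholds, rerun budget, and the rerun_test callback are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LOWER, UPPER = 0.1, 0.9  # hypothetical decision thresholds on P(flaky)
RERUNS = 10              # hypothetical rerun budget per ambiguous test


def detect_flaky(model: RandomForestClassifier, features: np.ndarray,
                 rerun_test) -> list:
    """Return a boolean flakiness verdict for each test case.

    `features` holds one row of test case feature values per test;
    `rerun_test(i, n)` executes test i up to n times and reports whether
    its outcome ever changed (a stand-in for a rerunning-based detector).
    """
    # Column 1 is P(flaky), assuming a binary classifier trained with
    # classes ordered [non-flaky, flaky].
    proba = model.predict_proba(features)[:, 1]
    verdicts = []
    for i, p in enumerate(proba):
        if p <= LOWER:
            verdicts.append(False)        # confidently non-flaky: no reruns
        elif p >= UPPER:
            verdicts.append(True)         # confidently flaky: no reruns
        else:
            # Ambiguous: spend the rerun budget only here.
            verdicts.append(rerun_test(i, RERUNS))
    return verdicts
```

Under this scheme the expensive reruns are spent only where the model is uncertain, which is one way a technique of this kind can cut rerunning time by a large factor while keeping detection performance well above that of the model alone.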
