Optimal test case generation for boundary value analysis
Software Quality Journal - Pages 1-24 - 2024
Abstract
Boundary value analysis (BVA) is a common software testing technique that selects input values lying at the boundaries where significant changes in behavior are expected. This approach is widely recognized as a natural and effective testing strategy. Test coverage is a criterion that measures how thoroughly a set of test cases exercises the software's execution paths. This paper evaluates test coverage with respect to BVA by defining a metric called the boundary coverage distance (BCD), which measures the extent to which a test set covers the boundaries. Based on BCD, we then consider optimal test input generation that minimizes BCD under a random testing scheme. We propose three algorithms, each representing a different test input generation strategy, and evaluate their fault detection capabilities through experimental validation. The results indicate that the BCD-based approach can generate boundary values and improve the effectiveness of software testing.
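The abstract does not give the formal definition of BCD, so the following is only an illustrative sketch under an assumption: that BCD is the average, over the boundary points of the input domain, of the distance from each boundary point to the nearest test input (so a lower BCD means the test set lies closer to the boundaries). The function name `boundary_coverage_distance` and the one-dimensional setting are hypothetical, not taken from the paper.

```python
def boundary_coverage_distance(test_inputs, boundary_points):
    """Hypothetical BCD-style metric (assumed form, not the paper's
    definition): average over boundary points of the distance to the
    nearest test input. Lower values indicate better boundary coverage."""
    total = 0.0
    for b in boundary_points:
        # Distance from this boundary point to the closest test input.
        total += min(abs(t - b) for t in test_inputs)
    return total / len(boundary_points)

# Example: a condition such as `0 <= x <= 10` has boundaries at 0 and 10.
tests = [1, 5, 9]
print(boundary_coverage_distance(tests, boundary_points=[0, 10]))
# Test inputs that hit the boundaries exactly would yield BCD = 0.
print(boundary_coverage_distance([0, 10], boundary_points=[0, 10]))
```

Under this assumed definition, the optimization the paper describes amounts to choosing (or biasing random generation of) test inputs so that this distance is driven toward zero.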