AFID: an automated approach to collecting software faults

Automated Software Engineering - Volume 17 - Pages 347-372 - 2010
Alex Edwards1, Sean Tucker2, Brian Demsky3
1Computer Science Department, University of California, Los Angeles, USA
2Western Digital, Irvine, USA
3Department of Electrical Engineering and Computer Science, University of California, Irvine, USA

Abstract

We present a new approach for creating repositories of real software faults. We have developed a tool, the Automatic Fault IDentification Tool (AFID), that implements this approach. For any crashing fault that the developer discovers, AFID records both a fault-revealing test case and the faulty version of the source code; for any crashing fault that the developer corrects, it records the fault-correcting source code change. The test cases are a significant contribution, because they enable new research that explores the dynamic behaviors of software faults. AFID uses an operating-system-level monitoring mechanism to observe both the compilation and the execution of the application. This technique makes it straightforward for AFID to support a wide range of programming languages and compilers. We report our experience using AFID both in a controlled case study and in a real development environment, collecting software faults during the internal development of our group's compiler. The case study collected several real software faults and validated the basic approach. The longer-term internal study revealed weaknesses in using the original version of AFID for real development, and this experience led to a number of refinements to the tool. We have collected over 20 real software faults in large programs and continue to collect software faults.
