Replicability of experimental tool evaluations in model-based software and systems engineering with MATLAB/Simulink

Alexander Boll1, Nicole Vieregg2, Timo Kehrer1
1Software Engineering Group, University of Bern, Bern, Switzerland
2Special Education in the Field of Learning, Europa-Universität, Flensburg, Germany

Abstract

Research on novel tools for model-based development differs from a pure engineering task in that it must also provide evidence that a tool is effective. Such evidence is typically obtained through empirical evaluations. Following the principles of good scientific practice, both the tools and the models used as experimental subjects should be published along with the paper, so that experimental results can be replicated. We investigate the degree to which this replicability requirement is met by recent studies reporting on novel methods, techniques, or algorithms that support model-based development with MATLAB/Simulink. Our results from studying 65 research papers, obtained through a systematic literature search, are rather unsatisfactory. In summary, we found that only 31% of the tools and 22% of the models used as experimental subjects are accessible. Since both components are required for a replication study, only 9% of the tool evaluations presented in the examined papers can be classified as replicable in principle. We found none of the experimental results presented in these papers to be fully replicable, and only 6% to be partially replicable. Given that tooling is still considered one of the major obstacles to a wider adoption of model-based principles in practice, we regard this as an alarming signal. While we believe that the situation can only be improved through a community effort, this paper is meant to serve as a starting point for discussion, based on the lessons learned from our study.

Keywords

#replicability #model-based development #MATLAB/Simulink #tool evaluation #empirical experimentation
