Dual Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense Research and Development
Abstract
Artificial intelligence (AI) appears to be affecting every industrial sector while also acting as a driver of innovation. The diffusion of AI from the civilian to the defense sector, as well as AI's dual-use potential, has attracted the attention of security and ethics scholars. With the publication of the European Union's (EU) ethics guidelines for Trustworthy AI, normative questions about the application of AI have received further scrutiny. To draw conclusions about Trustworthy AI as a point of reference for responsible research and development (R&D), we examine the diffusion of AI across both the civilian and military sectors in the EU. We capture the extent of technological diffusion and infer European and German patent citation networks. Both networks indicate a low level of AI diffusion between the civilian and defense sectors. A qualitative study of the project descriptions of a research institute active in both the civilian and military sectors shows that military AI applications emphasize accuracy or robustness, whereas civilian AI reflects a focus on human-centered values. Our work represents an initial approach that links processes of technological diffusion with normative assessments of R&D.
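To make the quantitative part of the approach concrete, the following is a minimal sketch of how a patent citation network can be built and a simple cross-sector diffusion indicator computed. The patent IDs, sector labels, and citation pairs are purely illustrative assumptions, not data from the study, and the sketch does not reproduce the authors' actual pipeline.

```python
# Hedged sketch: building a patent citation network and measuring the share
# of citations that cross the civilian/defense boundary. All patents, labels,
# and edges below are hypothetical; real analyses would derive sector labels
# from patent classifications (e.g., CPC classes) and citation data.
import networkx as nx

# Hypothetical patents labeled by sector.
patents = {
    "EP001": "civilian",
    "EP002": "civilian",
    "EP003": "defense",
    "EP004": "defense",
}

# Hypothetical citation pairs: (citing patent, cited patent).
citations = [
    ("EP002", "EP001"),
    ("EP003", "EP001"),  # a defense patent citing a civilian one
    ("EP004", "EP003"),
]

G = nx.DiGraph()
for pid, sector in patents.items():
    G.add_node(pid, sector=sector)
G.add_edges_from(citations)

# Diffusion proxy: fraction of citation edges linking different sectors.
cross = sum(
    1 for u, v in G.edges
    if G.nodes[u]["sector"] != G.nodes[v]["sector"]
)
print(f"cross-sector citation share: {cross / G.number_of_edges():.2f}")
```

Under these toy assumptions, one of three citations crosses sectors, so the indicator prints 0.33; a low value of such a share is what the study's networks suggest for AI between civilian and defense patents.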
Keywords
#artificial intelligence #dual use #research and development #civilian #defense #ethics