Acquisition and usage of robotic surgical data for machine learning analysis

Surgical Endoscopy and Other Interventional Techniques, Volume 37, pages 6588–6601, 2023
Nasseh Hashemi1,2,3,4, Morten Bo Søndergaard Svendsen5,6, Flemming Bjerrum5,7, Sten Rasmussen1, Martin G. Tolsgaard2,5, Mikkel Lønborg Friis1,2
1Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
2Nordsim—Centre for Skills Training and Simulation, Aalborg, Denmark
3ROCnord—Robot Centre, Aalborg University Hospital, Aalborg, Denmark
4Department of Urology, Aalborg University Hospital, Aalborg, Denmark
5Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
6Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
7Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark

Abstract

The increasing use of robot-assisted surgery (RAS) has created a need for new methods of assessing whether surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons conduct the assessment. Computer-based automation and artificial intelligence (AI) are promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available to clinicians, which may be among the reasons AI has yet to see wider use in the clinical setting. We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons and prepared the data for use in AI, following a structured guide with four steps: capturing image data from the surgical robot, extracting event data, capturing movement data of the surgeon, and annotating the image data. Fifteen participants (11 novices and 4 experienced surgeons) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.
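The four-step pipeline above pairs each robot-view recording with the corresponding movement recording of the surgeon and with the annotated events extracted for that procedure. A minimal sketch of such a pairing is shown below; the record fields, file names, and event labels are hypothetical illustrations, not the authors' actual data format or tooling:

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class ProcedureRecord:
    """One training example: robot video + surgeon movement video + event labels.

    All field names here are illustrative assumptions, not the study's schema.
    """
    procedure_id: str
    robot_video: str     # image data captured from the surgical robot's video feed
    movement_video: str  # 3D movement recording of the surgeon's arms and hands
    events: list         # (timestamp_seconds, label) pairs for supervised learning

def load_events(event_csv: str) -> list:
    """Parse an exported event log of 'timestamp,label' rows into (float, str) pairs."""
    reader = csv.reader(io.StringIO(event_csv))
    return [(float(t), label) for t, label in reader]

# Toy event log for a single (hypothetical) suturing task.
event_log = "0.0,needle_pickup\n12.4,suture_start\n87.9,suture_end\n"
record = ProcedureRecord(
    procedure_id="pig03_task07",
    robot_video="pig03_task07_robot.mp4",
    movement_video="pig03_task07_hands.mp4",
    events=load_events(event_log),
)
print(len(record.events))   # 3 annotated events
print(record.events[1][1])  # suture_start
```

Keeping the event timestamps alongside both video paths in one record lets a training script later cut synchronized clips from the robot view and the movement view around each labeled event.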
