Multi-task recurrent convolutional network with correlation loss for surgical video analysis
References
Ahmidi, 2017, A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery, IEEE Trans. Biomed. Eng., 64, 2025, 10.1109/TBME.2016.2647680
Al Hajj, 2017, Surgical tool detection in cataract surgery videos through multi-image fusion inside a convolutional neural network, 2002
Augenstein, I., Ruder, S., Søgaard, A., 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. arXiv preprint arXiv:1802.09913.
Bachman, 2014, Learning with pseudo-ensembles, 3365
Bhatia, 2007, Real-time identification of operating room state from video, 2, 1761
Blum, 2010, Modeling and segmentation of surgical workflow from laparoscopic video, 400
Bouget, 2017, Vision-based and marker-less surgical tool detection and tracking: a review of the literature, Med. Image Anal., 35, 633, 10.1016/j.media.2016.09.003
Bouget, 2015, Detecting surgical tools by modelling local appearance and global shape, IEEE Trans. Med. Imaging, 34, 2603, 10.1109/TMI.2015.2450831
Bragman, 2018, Uncertainty in multitask learning: Joint representations for probabilistic MR-only radiotherapy planning, 3
Bricon-Souf, 2007, Context awareness in health care: a review, Int. J. Med. Inf., 76, 2, 10.1016/j.ijmedinf.2006.01.003
Cadene, R., Robert, T., Thome, N., Cord, M., 2016. M2CAI workflow challenge: convolutional neural networks with time smoothing and hidden Markov model for video frames classification. arXiv preprint arXiv:1610.05541.
Choi, 2017, Surgical-tools detection based on convolutional neural network in laparoscopic robot-assisted surgery, 1756
Cleary, 2005, OR 2020: The operating room of the future, J. Laparoendosc. Adv. Surg. Tech. Part A, 15, 495, 10.1089/lap.2005.15.495
Dergachyova, 2016, Automatic data-driven real-time segmentation and recognition of surgical workflow, Int. J. Comput. Ass. Radiol. Surg., 1
DiPietro, 2016, Recognizing surgical activities with recurrent neural networks, 551
Donahue, 2015, Long-term recurrent convolutional networks for visual recognition and description, 2625
Dou, 2017, Automated pulmonary nodule detection via 3D ConvNets with online sample filtering and hybrid-loss residual learning, 630
Forestier, 2013, Multi-site study of surgical practice in neurosurgery based on surgical process models, J. Biomed. Inf., 46, 822, 10.1016/j.jbi.2013.06.006
Forestier, 2015, Automatic phase prediction from low-level surgical activities, Int. J. Comput. Ass. Radiol. Surg., 10, 833, 10.1007/s11548-015-1195-0
Gebru, 2017, Fine-grained recognition in the wild: A multi-task domain adaptation approach, 1358
He, 2016, Deep residual learning for image recognition, 770
Hinami, 2017, Joint detection and recounting of abnormal events by learning deep generic knowledge, 3619
James, 2007, Eye-gaze driven surgical workflow segmentation, 110
Jin, Y., Cheng, K., Dou, Q., Heng, P.-A., 2019. Incorporating temporal prior from motion flow for instrument segmentation in minimally invasive surgery video. arXiv preprint arXiv:1907.07899.
Jin, 2018, SV-RCNet: Workflow recognition from surgical videos using recurrent convolutional network, IEEE Trans. Med. Imaging, 37, 1114, 10.1109/TMI.2017.2787657
Klank, 2008, Automatic feature generation in endoscopic images, Int. J. Comput. Ass. Radiol. Surg., 3, 331, 10.1007/s11548-008-0223-8
Laina, 2017, Concurrent segmentation and localization for tracking of surgical instruments, 664
Lalys, 2013, Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures, Int. J. Comput. Ass. Radiol. Surg., 8, 39, 10.1007/s11548-012-0685-6
Lalys, 2014, Surgical process modelling: a review, Int. J. Comput. Ass. Radiol. Surg., 9, 495, 10.1007/s11548-013-0940-5
Lalys, 2012, A framework for the recognition of high-level surgical tasks from video images for cataract surgeries, IEEE Trans. Biomed. Eng., 59, 966, 10.1109/TBME.2011.2181168
Lea, 2016, Surgical phase recognition: from instrumented ORs to hospitals around the world, 45
Liu, 2017, Hierarchical clustering multi-task learning for joint human action grouping and recognition, IEEE Trans. Pattern Anal. Mach. Intell., 39, 102, 10.1109/TPAMI.2016.2537337
Liu, 2018, Deep reinforcement learning for surgical gesture segmentation and classification, 247
Luo, H., Hu, Q., Jia, F., 2016. Surgical tool detection via multiple convolutional neural networks. http://camma.u-strasbg.fr/m2cai2016/reports/Luo-Tool.pdf.
Mahmud, 2017, Joint prediction of activity labels and starting times in untrimmed videos, 5773
Nakawala, 2019, “Deep-Onto” network for surgical workflow and context recognition, Int. J. Comput. Ass. Radiol. Surg., 14, 685, 10.1007/s11548-018-1882-8
Padoy, 2012, Statistical modeling and recognition of surgical workflow, Med. Image Anal., 16, 632, 10.1016/j.media.2010.10.001
Padoy, 2008, On-line recognition of surgical activity for monitoring in the operating room, 1718
Quellec, 2014, Real-time recognition of surgical tasks in eye surgery videos, Med. Image Anal., 18, 579, 10.1016/j.media.2014.02.007
Quellec, 2015, Real-time task recognition in cataract surgery videos using adaptive spatiotemporal polynomials, IEEE Trans. Med. Imaging, 34, 877, 10.1109/TMI.2014.2366726
Roth, 2018, Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation, Med. Image Anal., 45, 94, 10.1016/j.media.2018.01.006
Sahu, 2017, Addressing multi-label imbalance problem of surgical tool detection using CNN, Int. J. Comput. Ass. Radiol. Surg., 1
Sarikaya, 2017, Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection, IEEE Trans. Med. Imaging, 36, 1542, 10.1109/TMI.2017.2665671
Speidel, S., Bodenstedt, S., Kenngott, H., Wagner, M., Müller-Stich, B., Maier-Hein, L., 2018. 2018 MICCAI Surgical Workflow Challenge. https://endovissub2017-workflow.grand-challenge.org/.
Twinanda, 2017
Twinanda, A. P., Shehata, S., Mutter, D., Marescaux, J., de Mathelin, M., Padoy, N., 2016. Cholec80 dataset. http://camma.u-strasbg.fr/datasets.
Twinanda, 2017, EndoNet: a deep architecture for recognition tasks on laparoscopic videos, IEEE Trans. Med. Imaging, 36, 86, 10.1109/TMI.2016.2593957
Wang, 2017, Deep learning based multi-label classification for surgical tool presence detection in laparoscopic videos, 620
Wang, 2019, Graph convolutional nets for tool presence detection in surgical videos, 467
Wesierski, 2018, Instrument detection and pose estimation with rigid part mixtures model in video-assisted surgeries, Med. Image Anal., 46, 244, 10.1016/j.media.2018.03.012
Xue, 2017, Full quantification of left ventricle via deep multitask learning network respecting intra- and inter-task relatedness, 276
Yengera, G., Mutter, D., Marescaux, J., Padoy, N., 2018. Less is more: surgical phase recognition with less annotations through self-supervised pre-training of CNN-LSTM networks. arXiv preprint arXiv:1805.08569.
Yi, 2019, Hard frame detection and online mapping for surgical phase recognition
Yu, 2019, Assessment of automated identification of phases in videos of cataract surgery using machine learning and deep learning techniques, JAMA Netw. Open, 2, 10.1001/jamanetworkopen.2019.1860
Zappella, 2013, Surgical gesture classification from video and kinematic data, Med. Image Anal., 17, 732, 10.1016/j.media.2013.04.007
Zhou, 2018, SFCN-OPI: Detection and fine-grained classification of nuclei using sibling FCN with objectness prior interaction
Zisimopoulos, 2018, DeepPhase: Surgical phase recognition in CATARACTS videos, 265