Improving the Generalizability of Infantile Cataracts Detection via Deep Learning-Based Lens Partition Strategy and Multicenter Datasets

Jiewei Jiang1, Shutao Lei2, Mingmin Zhu3, Ruiyang Li4, Jiayun Yue1, Jingjing Chen4, Zhongwen Li4, Jiamin Gong1, Duoru Lin4, Xiaohang Wu4, Zhuoling Lin4, Haotian Lin4
1School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
2School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
3School of Mathematics and Statistics, Xidian University, Xi’an, China
4State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China

Abstract

Infantile cataract is the main cause of infant blindness worldwide. Although previous studies have developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts at a single center, their generalizability is limited by the complicated noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs), based on the deep learning Faster R-CNN and the Hough transform, to improve the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For partitioning normal and abnormal lenses, Faster R-CNN achieved average intersection-over-union values of 0.9419 and 0.9107, respectively, with average precisions above 95% in both cases. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading improved by 5.31%, 8.09%, and 3.29%, respectively, and similar improvements were observed for grading opacity density and location. The minimal training sample size required by Faster R-CNN was also determined on the multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition, taking only 0.25 s per image, whereas the Hough transform required 34.46 s. Finally, using the Grad-CAM and t-SNE techniques, the most relevant lesion regions were highlighted in heatmaps and the learned high-level features were shown to be discriminative. This study provides an effective LPS for improving the generalizability of infantile cataract detection, and the resulting system has the potential to be applied to multicenter slit-lamp images.
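For illustration only, the following Python sketch (not the authors' released code) outlines the general idea of a deep learning-based lens partition step: a Faster R-CNN detector proposes candidate boxes for the lens in a slit-lamp image, the highest-scoring box is cropped as the lens region for downstream grading, and its intersection over union (IoU) with a manual annotation is computed, i.e., the metric behind the reported 0.9419 and 0.9107 values. The pretrained weights, image file name, and annotated coordinates are hypothetical placeholders; in practice the detector would be fine-tuned on annotated slit-lamp images.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image


def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    inter_w = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    inter_h = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = inter_w * inter_h
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0


# Generic COCO-pretrained detector; in practice the model would be fine-tuned
# on annotated slit-lamp images so that one output class is the lens region.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("slit_lamp_example.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

if len(prediction["boxes"]) == 0:
    raise RuntimeError("No lens candidate was detected in this image.")

# Detections are returned sorted by score; take the top box as the lens region.
best_box = prediction["boxes"][0].tolist()
lens_crop = image.crop(tuple(best_box))  # crop passed on to the grading classifier

annotated_box = (130.0, 90.0, 370.0, 330.0)  # hypothetical manual annotation
print(f"IoU with annotation: {iou(best_box, annotated_box):.4f}")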
