Generating synthetic medical images with limited data using auxiliary classifier generative adversarial network: a study on thyroid ultrasound images
Journal of Ultrasound - pp 1-17 - 2023
Abstract
The availability of labeled data is crucial for training deep neural networks, yet in many cases the available data are limited or unlabeled, which is a significant obstacle to developing accurate models. Several approaches exist to address this issue, such as image augmentation, transfer learning, and generative adversarial networks (GANs), but they often require a substantial amount of training data or fail to produce the desired results. In this article, we present a novel method for generating synthetic images from very limited data using an auxiliary classifier generative adversarial network (ACGAN). We conducted experiments on a real dataset of 198 ultrasound images of calcified and cystic thyroid nodules, exploring and improving different ACGAN architectures and training techniques to generate high-quality synthetic images. The generated images were evaluated with the Fréchet Inception Distance (FID) test and by human observation. Additionally, we developed an image blending method to generate larger images that simulate the output of an ultrasound device, and a specialist doctor reviewed the merged images to validate their accuracy. The modified ACGAN architecture successfully generated new synthetic images from the limited data; the output images were assessed with the FID test and human observation as training progressed. The image blending method likewise produced larger output images that mimic the appearance of native ultrasound device output, and the specialist doctor confirmed the accuracy of the final merged images. Our method has significant implications for medical imaging, as it enables the generation of synthetic labeled data for training deep learning models, which can lead to better diagnostic accuracy and improved patient outcomes. This study provides a proof of concept for generating synthetic medical images from limited labeled data and can inspire future research in this area.
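As an illustration of the evaluation step described above, the snippet below is a minimal sketch of how the Fréchet Inception Distance is commonly computed from Inception-v3 feature vectors of real and generated images (following Heusel et al., 2017). The function name and the assumption that 2048-dimensional pooled features have already been extracted are ours for illustration only; this is not the authors' implementation.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feat_real, feat_fake):
    """Compute the FID between two sets of Inception feature vectors.

    feat_real, feat_fake: arrays of shape (n_samples, n_features),
    e.g. 2048-dimensional Inception-v3 pooled features (assumed to be
    extracted beforehand for the real and generated images).
    """
    # Mean and covariance of each feature distribution
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)

    # Matrix square root of the covariance product; keep only the real part,
    # since numerical error can introduce a small imaginary component.
    cov_sqrt, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(cov_sqrt):
        cov_sqrt = cov_sqrt.real

    # FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 * (C_r C_f)^{1/2})
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_sqrt))
```

A lower FID indicates that the distribution of generated images is closer to the distribution of real images, which is presumably how image quality was tracked while the ACGAN architecture was being refined.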