FDGNet: A pair feature difference guided network for multimodal medical image fusion

Biomedical Signal Processing and Control - Volume 81 - Page 104545 - 2023
Gucheng Zhang1, Rencan Nie1,2, Jinde Cao3,4, Luping Chen1, Ya Zhu1
1School of Information Science and Engineering, Yunnan University, Kunming 650500, China
2Yunnan Key Laboratory of Intelligent Systems and Computing, China
3School of Mathematics, Southeast University, Nanjing, 210096, China
4Yonsei Frontier Laboratory, Yonsei University, Seoul 03722, South Korea

References

Nie, 2020, Multi-source information exchange encoding with PCNN for medical image fusion, IEEE Trans. Circuits Syst. Video Technol., 31, 986, 10.1109/TCSVT.2020.2998696
Zhu, 2022, CEFusion: Multi-modal medical image fusion via cross encoder, IET Image Process.
Bhatnagar, 2013, Human visual system inspired multi-modal medical image fusion framework, Expert Syst. Appl., 40, 1708, 10.1016/j.eswa.2012.09.011
Yang, 2020, Multimodal medical image fusion based on weighted local energy matching measurement and improved spatial frequency, IEEE Trans. Instrum. Meas., 70, 1, 10.1109/TIM.2020.2986875
Zhu, 2021, Multimodal medical image fusion using adaptive co-occurrence filter-based decomposition optimization model, Bioinformatics
Fu, 2021, A multiscale residual pyramid attention network for medical image fusion, Biomed. Signal Process. Control, 66, 10.1016/j.bspc.2021.102488
Kong, 2018, Multimodal sensor medical image fusion based on local difference in non-subsampled domain, IEEE Trans. Instrum. Meas., 68, 938, 10.1109/TIM.2018.2865046
Wang, 2022, AMFNet: An attention-guided generative adversarial network for multi-model image fusion, Biomed. Signal Process. Control, 78, 10.1016/j.bspc.2022.103990
Li, 1995, Multisensor image fusion using the wavelet transform, Graph. Models Image Process., 57, 235, 10.1006/gmip.1995.1022
Yu, 2016, Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion, Neurocomputing, 182, 1, 10.1016/j.neucom.2015.10.084
Yang, 2016, Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain, IEEE Sens. J., 16, 3735, 10.1109/JSEN.2016.2533864
Liu, 2018, Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform, Biomed. Signal Process. Control, 40, 343, 10.1016/j.bspc.2017.10.001
Sahu, 2014, Medical image fusion with Laplacian pyramids, 448
Li, 2022, MSENet: A multi-scale enhanced network based on unique features guidance for medical image fusion, Biomed. Signal Process. Control, 74, 10.1016/j.bspc.2022.103534
Xu, 2021, EMFusion: An unsupervised enhanced medical image fusion network, Inf. Fusion, 10.1016/j.inffus.2021.06.001
Wang, 2022, IGNFusion: An unsupervised information gate network for multimodal medical image fusion, IEEE J. Sel. Top. Sign. Proces., 16, 854, 10.1109/JSTSP.2022.3181717
Zhang, 2022, SWTRU: Star-shaped window transformer reinforced U-net for medical image segmentation, Comput. Biol. Med.
Liang, 2022, FCF: Feature complement fusion network for detecting COVID-19 through CT scan images, Appl. Soft Comput., 10.1016/j.asoc.2022.109111
Tian, 2022
Guo, 2019, FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network, IEEE Trans. Multimed., 21, 1982, 10.1109/TMM.2019.2895292
Liu, 2017, Multi-focus image fusion with a deep convolutional neural network, Inf. Fusion, 36, 191, 10.1016/j.inffus.2016.12.001
Chen, 2022, Multi-level difference information replenishment for medical image fusion, Appl. Intell., 1
Song, 2019, MSDNet for medical image fusion, 278
Lahoud, 2019
Li, 2012, Group-sparse representation with dictionary learning for medical image denoising and fusion, IEEE Trans. Biomed. Eng., 59, 3450, 10.1109/TBME.2012.2217493
Li, 2021, Joint image fusion and denoising via three-layer decomposition and sparse representation, Knowl.-Based Syst., 224, 10.1016/j.knosys.2021.107087
Li, 2020, Laplacian redecomposition for multimodal medical image fusion, IEEE Trans. Instrum. Meas., 69, 6880, 10.1109/TIM.2020.2975405
Yin, 2018, Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain, IEEE Trans. Instrum. Meas., 68, 49, 10.1109/TIM.2018.2838778
Tan, 2020, Multimodal medical image fusion algorithm in the era of big data, Neural Comput. Appl., 1
G. Huang, Z. Liu, L. Van Der Maaten, K.Q. Weinberger, Densely connected convolutional networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
Liang, 2019, MCFNet: Multi-layer concatenation fusion network for medical images fusion, IEEE Sens. J., 19, 7107, 10.1109/JSEN.2019.2913281
Xu, 2020, U2Fusion: A unified unsupervised image fusion network, IEEE Trans. Pattern Anal. Mach. Intell.
Zhang, 2020, IFCNN: A general image fusion framework based on convolutional neural network, Inf. Fusion, 54, 99, 10.1016/j.inffus.2019.07.011
Zhang, 2021, SDNet: A versatile squeeze-and-decomposition network for real-time image fusion, Int. J. Comput. Vis., 129, 2761, 10.1007/s11263-021-01501-8
Kirkpatrick, 2017, Overcoming catastrophic forgetting in neural networks, Proc. Natl. Acad. Sci., 114, 3521, 10.1073/pnas.1611835114
Ma, 2020, DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion, IEEE Trans. Image Process., 29, 4980, 10.1109/TIP.2020.2977573
Huang, 2020, MGMDcGAN: Medical image fusion using multi-generator multi-discriminator conditional generative adversarial network, IEEE Access, 8, 55145, 10.1109/ACCESS.2020.2982016
Fu, 2021, DSAGAN: A generative adversarial network based on dual-stream attention mechanism for anatomical and functional image fusion, Inform. Sci., 576, 484, 10.1016/j.ins.2021.06.083
Nie, 2021, A total variation with joint norms for infrared and visible image fusion, IEEE Trans. Multimed.
Simonyan, 2014
K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
Le, 2022, UIFGAN: An unsupervised continual-learning generative adversarial network for unified image fusion, Inf. Fusion, 10.1016/j.inffus.2022.07.013
H. Zhang, H. Xu, Y. Xiao, X. Guo, J. Ma, Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 12797–12804.
J. Zhang, S. Sclaroff, Saliency detection: A boolean map approach, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 153–160.
Kingma, 2014
Hossny, 2008, Comments on 'Information measure for performance of image fusion', Electron. Lett., 44, 1066, 10.1049/el:20081754
Yang, 2008, A novel similarity based quality metric for image fusion, Inf. Fusion, 9, 156, 10.1016/j.inffus.2006.09.001
Piella, 2003, A new quality metric for image fusion, III
Han, 2013, A new image fusion performance metric based on visual information fidelity, Inf. Fusion, 14, 127, 10.1016/j.inffus.2011.08.002
Shibu, 2021, Multi scale decomposition based medical image fusion using convolutional neural network and sparse representation, Biomed. Signal Process. Control, 69, 10.1016/j.bspc.2021.102789
Li, 2013, Image fusion with guided filtering, IEEE Trans. Image Process., 22, 2864, 10.1109/TIP.2013.2244222