Rethinking multi-exposure image fusion with extreme and diverse exposure levels: A robust framework based on Fourier transform and contrastive learning
References
K. Ram Prabhakar, V. Sai Srikar, R. Venkatesh Babu, DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4714–4722.
Zhang, 2021, Benchmarking and comparing multi-exposure image fusion algorithms, Inf. Fusion, 74, 111, 10.1016/j.inffus.2021.02.005
Deng, 2021, Deep coupled feedback network for joint exposure fusion and image super-resolution, IEEE Trans. Image Process., 30, 3098, 10.1109/TIP.2021.3058764
Deng, 2021, Deep convolutional neural network for multi-modal image restoration and fusion, IEEE Trans. Pattern Anal. Mach. Intell., 43, 3333, 10.1109/TPAMI.2020.2984244
Wang, 2018, End-to-end exposure fusion using convolutional neural network, IEICE Trans. Inf. Syst., 101, 560, 10.1587/transinf.2017EDL8173
Zhang, 2020, IFCNN: A general image fusion framework based on convolutional neural network, Inf. Fusion, 54, 99, 10.1016/j.inffus.2019.07.011
Ma, 2019, Deep guided learning for fast multi-exposure image fusion, IEEE Trans. Image Process., 29, 2808, 10.1109/TIP.2019.2952716
H. Zhang, H. Xu, Y. Xiao, X. Guo, J. Ma, Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 34, (07) 2020, pp. 12797–12804.
Xu, 2020, U2Fusion: A unified unsupervised image fusion network, IEEE Trans. Pattern Anal. Mach. Intell., 1
Cai, 2018, Learning a deep single image contrast enhancer from multi-exposure images, IEEE Trans. Image Process., 27, 2049, 10.1109/TIP.2018.2794218
Yin, 2020, Deep prior guided network for high-quality image fusion, 1
H. Xu, J. Ma, Z. Le, J. Jiang, X. Guo, FusionDN: A unified densely connected network for image fusion, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 34, (07) 2020, pp. 12484–12491.
Han, 2022, Multi-exposure image fusion via deep perceptual enhancement, Inf. Fusion, 79, 248, 10.1016/j.inffus.2021.10.006
L. Qu, S. Liu, M. Wang, Z. Song, TransMEF: A transformer-based multi-exposure image fusion framework using self-supervised multi-task learning, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 36, (2) 2022, pp. 2126–2134.
Ma, 2022, SwinFusion: Cross-domain long-range learning for general image fusion via Swin Transformer, IEEE/CAA J. Autom. Sin., 9, 1200, 10.1109/JAS.2022.105686
Zhang, 2021, SDNet: A versatile squeeze-and-decomposition network for real-time image fusion, Int. J. Comput. Vis., 129, 2761, 10.1007/s11263-021-01501-8
Zeng, 2014, Perceptual evaluation of multi-exposure image fusion algorithms, 7
Deng, 2009, ImageNet: A large-scale hierarchical image database, 248
Lin, 2014, Microsoft COCO: Common objects in context, 740
Liu, 2015, Dense SIFT for ghost-free multi-exposure fusion, J. Vis. Commun. Image Represent., 31, 208, 10.1016/j.jvcir.2015.06.021
Ma, 2017, Robust multi-exposure image fusion: a structural patch decomposition approach, IEEE Trans. Image Process., 26, 2519, 10.1109/TIP.2017.2671921
Ma, 2017, Multi-exposure image fusion by optimizing a structural similarity index, IEEE Trans. Comput. Imaging, 4, 60, 10.1109/TCI.2017.2786138
Burt, 1993, Enhanced image capture through fusion, 173
Mertens, 2007, Exposure fusion, 382
Wang, 2019, Detail-enhanced multi-scale exposure fusion in YUV color space, IEEE Trans. Circuits Syst. Video Technol., 30, 2418, 10.1109/TCSVT.2019.2919310
Ma, 2015, Perceptual quality assessment for multi-exposure image fusion, IEEE Trans. Image Process., 24, 3345, 10.1109/TIP.2015.2442920
Simonyan, 2014
Li, 2018, DenseFuse: A fusion approach to infrared and visible images, IEEE Trans. Image Process., 28, 2614, 10.1109/TIP.2018.2887342
Ma, 2021, SESF-Fuse: An unsupervised deep model for multi-focus image fusion, Neural Comput. Appl., 33, 5793, 10.1007/s00521-020-05358-9
Henaff, 2020, Data-efficient image recognition with contrastive predictive coding, 4182
Tian, 2020, Contrastive multiview coding, 776
K. He, H. Fan, Y. Wu, S. Xie, R. Girshick, Momentum contrast for unsupervised visual representation learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9729–9738.
Chen, 2020, A simple framework for contrastive learning of visual representations, 1597
Gutmann, 2010, Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 297
Hermans, 2017
Sohn, 2016, Improved deep metric learning with multi-class n-pair loss objective, 1857
Park, 2020, Contrastive learning for unpaired image-to-image translation, 319
H. Wu, Y. Qu, S. Lin, J. Zhou, R. Qiao, Z. Zhang, Y. Xie, L. Ma, Contrastive Learning for Compact Single Image Dehazing, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 10551–10560.
X. Qin, Z. Wang, Y. Bai, X. Xie, H. Jia, FFA-Net: Feature fusion attention network for single image dehazing, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 34, (07) 2020, pp. 11908–11915.
Y. Yang, S. Soatto, FDA: Fourier domain adaptation for semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 4085–4095.
Q. Xu, R. Zhang, Y. Zhang, Y. Wang, Q. Tian, A Fourier-based Framework for Domain Generalization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14383–14392.
Gonzalez, 2002
Wang, 2004, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process., 13, 600, 10.1109/TIP.2003.819861
Hou, 2020, VIF-Net: an unsupervised framework for infrared and visible image fusion, IEEE Trans. Comput. Imaging, 6, 640, 10.1109/TCI.2020.2965304
Li, 2020, Fast multi-scale structural patch decomposition for multi-exposure image fusion, IEEE Trans. Image Process., 29, 5805, 10.1109/TIP.2020.2987133
Li, 2013, Image fusion with guided filtering, IEEE Trans. Image Process., 22, 2864, 10.1109/TIP.2013.2244222
Lee, 2018, A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient, 1737
Xu, 2020, MEF-GAN: Multi-exposure image fusion via generative adversarial networks, IEEE Trans. Image Process., 29, 7203, 10.1109/TIP.2020.2999855
Hossny, 2008, Comments on 'Information measure for performance of image fusion', Electron. Lett., 44, 1066, 10.1049/el:20081754
Wang, 2008, Performance evaluation of image fusion techniques, Imag. Fusion: Algorithms Appl., 19, 469, 10.1016/B978-0-12-372529-5.00017-2
Haghighat, 2014, Fast-FMI: Non-reference image fusion metric, 1
Jagalingam, 2015, A review of quality metrics for fused image, Aquat. Procedia, 4, 133, 10.1016/j.aqpro.2015.02.019
Ma, 2019, FusionGAN: A generative adversarial network for infrared and visible image fusion, Inf. Fusion, 48, 11, 10.1016/j.inffus.2018.09.004
Rao, 1997, In-fibre Bragg grating sensors, Meas. Sci. Technol., 8, 355, 10.1088/0957-0233/8/4/002
Chen, 2007, A human perception inspired quality metric for image fusion based on regional information, Inf. Fusion, 8, 193, 10.1016/j.inffus.2005.10.001
Chen, 2009, A new automated quality assessment algorithm for image fusion, Image Vis. Comput., 27, 1421, 10.1016/j.imavis.2007.12.002
Han, 2013, A new image fusion performance metric based on visual information fidelity, Inf. Fusion, 14, 127, 10.1016/j.inffus.2011.08.002
Song, 2021