FusionGAN: A generative adversarial network for infrared and visible image fusion

Information Fusion, Volume 48, Pages 11-26, 2019
Jiayi Ma1, Wei Yu1, Pengwei Liang1, Chang Li2, Junjun Jiang3
1Electronic Information School, Wuhan University, Wuhan 430072, China
2Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
3School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China

Abstract

Keywords


References

Dogra, 2017, From multi-scale decomposition to non-multi-scale decomposition methods: a comprehensive survey of image fusion techniques and its applications, IEEE Access, 5, 16040, 10.1109/ACCESS.2017.2735865

Ma, 2016, Infrared and visible image fusion using total variation model, Neurocomputing, 202, 12, 10.1016/j.neucom.2016.03.009

Toet, 1989, Image fusion by a ratio of low-pass pyramid, Pattern Recognit. Lett., 9, 245, 10.1016/0167-8655(89)90003-2

Jin, 2017, A survey of infrared and visual image fusion methods, Infrared Phys. Technol., 85, 478, 10.1016/j.infrared.2017.07.010

Li, 2011, Performance comparison of different multi-resolution transforms for image fusion, Inf. Fusion, 12, 74, 10.1016/j.inffus.2010.03.002

Pajares, 2004, A wavelet-based image fusion tutorial, Pattern Recognit., 37, 1855, 10.1016/j.patcog.2004.03.010

Zhang, 1999, A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proc. IEEE, 87, 1315, 10.1109/5.775414

Wang, 2014, Fusion method for infrared and visible images by using non-negative sparse representation, Infrared Phys. Technol., 67, 477, 10.1016/j.infrared.2014.09.019

Li, 2012, Group-sparse representation with dictionary learning for medical image denoising and fusion, IEEE Trans. Biomed. Eng., 59, 3450, 10.1109/TBME.2012.2217493

Xiang, 2015, A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain, Infrared Phys. Technol., 69, 53, 10.1016/j.infrared.2015.01.002

Kong, 2014, Novel fusion method for visible light and infrared images based on NSST-SF-PCNN, Infrared Phys. Technol., 65, 103, 10.1016/j.infrared.2014.04.003

Bavirisetti, 2017, Multi-sensor image fusion based on fourth order partial differential equations, 1

Kong, 2014, Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization, Infrared Phys. Technol., 67, 161, 10.1016/j.infrared.2014.07.019

Zhang, 2017, Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition, J. Opt. Soc. Am. A, 34, 1400, 10.1364/JOSAA.34.001400

Zhao, 2014, Infrared image enhancement through saliency feature analysis based on multi-scale decomposition, Infrared Phys. Technol., 62, 86, 10.1016/j.infrared.2013.11.008

Liu, 2015, A general framework for image fusion based on multi-scale transform and sparse representation, Inf. Fusion, 24, 147, 10.1016/j.inffus.2014.09.004

Ma, 2017, Infrared and visible image fusion based on visual saliency map and weighted least square optimization, Infrared Phys. Technol., 82, 8, 10.1016/j.infrared.2017.02.005

Ma, 2016, Infrared and visible image fusion via gradient transfer and total variation minimization, Inf. Fusion, 31, 100, 10.1016/j.inffus.2016.02.001

Zhao, 2017, Fusion of visible and infrared images using global entropy and gradient constrained regularization, Infrared Phys. Technol., 81, 201, 10.1016/j.infrared.2017.01.012

Li, 2017, Pixel-level image fusion: a survey of the state of the art, Inf. Fusion, 33, 100, 10.1016/j.inffus.2016.05.004

Liu, 2018, Deep learning for pixel-level image fusion: recent advances and future prospects, Inf. Fusion, 42, 158, 10.1016/j.inffus.2017.10.007

Li, 2013, Image fusion with guided filtering, IEEE Trans. Image Process., 22, 2864, 10.1109/TIP.2013.2244222

Piella, 2003, A general framework for multiresolution image fusion: from pixels to regions, Inf. Fusion, 4, 259, 10.1016/S1566-2535(03)00046-0

Zhang, 2018, Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review, Inf. Fusion, 40, 57, 10.1016/j.inffus.2017.05.006

Rajkumar, 2014, Infrared and visible image fusion using entropy and neuro-fuzzy concepts, 93

Liu, 2017, Multi-focus image fusion with a deep convolutional neural network, Inf. Fusion, 36, 191, 10.1016/j.inffus.2016.12.001

Liu, 2018, Infrared and visible image fusion with convolutional neural networks, Int. J. Wavelets Multiresolution Inf. Process., 16, 1850018, 10.1142/S0219691318500182

Zhong, 2016, Image fusion and super-resolution with convolutional neural network, 78

Liu, 2016, Image fusion with convolutional sparse representation, IEEE Signal Process. Lett., 23, 1882, 10.1109/LSP.2016.2618776

Masi, 2016, Pansharpening by convolutional neural networks, Remote Sens., 8, 594, 10.3390/rs8070594

Goodfellow, 2014, Generative adversarial nets, 2672

A. Radford, L. Metz, S. Chintala, Unsupervised representation learning with deep convolutional generative adversarial networks, arXiv:1511.06434v1 (2015).

M. Arjovsky, S. Chintala, L. Bottou, Wasserstein GAN, arXiv:1701.07875v1 (2017).

Mao, 2017, Least squares generative adversarial networks, 2813

F. Yu, V. Koltun, Multi-scale context aggregation by dilated convolutions, arXiv:1511.07122v1 (2015).

D.P. Kingma, J. Ba, Adam: a method for stochastic optimization, arXiv:1412.6980v1 (2014).

Yang, 2014, Visual attention guided image fusion with sparse representation, Optik-Int. J. Light Electron Opt., 125, 4881, 10.1016/j.ijleo.2014.04.036

Nencini, 2007, Remote sensing image fusion using the curvelet transform, Inf. Fusion, 8, 143, 10.1016/j.inffus.2006.02.001

Lewis, 2007, Pixel- and region-based image fusion with complex wavelets, Inf. Fusion, 8, 119, 10.1016/j.inffus.2005.09.006

Bavirisetti, 2016, Two-scale image fusion of visible and infrared images using saliency detection, Infrared Phys. Technol., 76, 52, 10.1016/j.infrared.2016.01.009

Ma, 2019, Infrared and visible image fusion methods and applications: a survey, Inf. Fusion, 45, 153, 10.1016/j.inffus.2018.02.004

Roberts, 2008, Assessment of image fusion procedures using entropy, image quality, and multispectral classification, J. Appl. Remote Sens., 2, 023522, 10.1117/1.2945910

Rao, 1997, In-fibre Bragg grating sensors, Meas. Sci. Technol., 8, 355, 10.1088/0957-0233/8/4/002

Wang, 2002, A universal image quality index, IEEE Signal Process. Lett., 9, 81, 10.1109/97.995823

Deshmukh, 2010, Image fusion and image quality assessment of fused images, Int. J. Image Process., 4, 484

Eskicioglu, 1995, Image quality measures and their performance, IEEE Trans. Commun., 43, 2959, 10.1109/26.477498

Han, 2013, A new image fusion performance metric based on visual information fidelity, Inf. Fusion, 14, 127, 10.1016/j.inffus.2011.08.002

A. Holzinger, C. Biemann, C.S. Pattichis, D.B. Kell, What do we need to build explainable AI systems for the medical domain?, arXiv:1712.09923v1 (2017).

Chen, 2015, SIRF: simultaneous satellite image registration and fusion in a unified framework, IEEE Trans. Image Process., 24, 4213, 10.1109/TIP.2015.2456415