Rolling Guidance Filtering-Orientated Saliency Region Extraction Method for Visible and Infrared Images Fusion

Sensing and Imaging - Volume 21 - Pages 1-18 - 2020
Jiangjiang Li1, Lijuan Feng1
1College of Electrical Engineering, Zhengzhou University of Science and Technology, Zhengzhou, China

Abstract

Different sensors produce different image types, but no single image captures all of the useful information. Infrared images record the heat-source information of scene targets in low light or severe weather conditions, while visible images provide more detailed information about the scene. To obtain richer image information, we propose a visible and infrared image fusion method based on rolling guidance filtering and saliency region extraction. A multi-scale image decomposition framework is built using an edge-preserving smoothing algorithm: the image is decomposed into one base layer and several detail layers at different scales. Saliency region extraction is then performed on each decomposition layer in combination with rolling guidance filtering, and weighted reconstruction produces the final fusion result. The results show that the proposed algorithm achieves good subjective and objective evaluation scores, with better fusion performance and robustness than other state-of-the-art fusion methods.
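The decomposition step described above can be illustrated with a minimal sketch: a rolling guidance filter (Gaussian smoothing followed by iterative joint bilateral filtering, per Zhang et al.'s formulation cited below) is applied repeatedly to split an image into one base layer and several detail layers. The function names, the brute-force joint bilateral filter, and the parameter values (`sigma_s`, `sigma_r`, the scale schedule) are illustrative assumptions, not the authors' implementation; saliency weighting and fusion are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral(I, G, sigma_s, sigma_r, radius=None):
    """Brute-force joint bilateral filter: spatial weights from pixel
    offsets, range weights from the guidance image G. Borders wrap
    (np.roll) for simplicity -- fine for a sketch, not for production."""
    if radius is None:
        radius = int(3 * sigma_s)
    num = np.zeros_like(I, dtype=float)
    den = np.zeros_like(I, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
            I_sh = np.roll(np.roll(I, dy, axis=0), dx, axis=1)
            G_sh = np.roll(np.roll(G, dy, axis=0), dx, axis=1)
            w = w_s * np.exp(-(G - G_sh) ** 2 / (2.0 * sigma_r ** 2))
            num += w * I_sh
            den += w
    return num / den

def rolling_guidance_filter(I, sigma_s=3.0, sigma_r=0.1, iters=4):
    # Step 1: small-structure removal via Gaussian blur.
    J = gaussian_filter(I.astype(float), sigma_s)
    # Step 2: iterative edge recovery -- the smoothed result guides
    # a joint bilateral filter of the original image.
    for _ in range(iters):
        J = joint_bilateral(I.astype(float), J, sigma_s, sigma_r)
    return J

def decompose(I, scales=(1.0, 2.0, 4.0)):
    """Multi-scale decomposition: each pass peels off a detail layer
    (current image minus its smoothed version); the residue after the
    last scale is the base layer."""
    details, prev = [], I.astype(float)
    for s in scales:
        base = rolling_guidance_filter(prev, sigma_s=s)
        details.append(prev - base)  # detail layer at this scale
        prev = base
    return prev, details             # base layer + detail layers
```

By construction the base layer plus the sum of the detail layers reconstructs the input exactly, which is what makes weighted recombination of per-layer fusion results straightforward.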

References

Ding, I.-J., Tsai, C.-Y., & Yen, C.-Y. (2019). A design on recommendations of sensor development platforms with different sensor modalities for making gesture biometrics-based service applications of the specific group. Microsystem Technologies. https://doi.org/10.1007/s00542-019-04503-2.
Peng, L., Chen, Z., Yang, L. T., et al. (2018). Deep convolutional computation model for feature learning on big data in internet of things. IEEE Transactions on Industrial Informatics, 14(2), 790–798. https://doi.org/10.1109/tii.2017.2739340.
Aymaz, S., & Köse, C. (2018). A novel image decomposition-based hybrid technique with super-resolution method for multi-focus image fusion. Information Fusion. https://doi.org/10.1016/j.inffus.2018.01.015.
Yang, X., Wang, J., & Zhu, R. (2018). Random walks for synthetic aperture radar image fusion in framelet domain. IEEE Transactions on Image Processing, 27(2), 851. https://doi.org/10.1109/TIP.2017.2747093.
Yan, Z., Yan, X., Xie, L., et al. (2011). The research of weighted-average fusion method in inland traffic flow detection. In International conference on information computing and applications. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25255-6_12.
Zhu, Z., Yin, H., Chai, Y., et al. (2018). A novel multi-modality image fusion method based on image decomposition and sparse representation. Information Sciences. https://doi.org/10.1016/j.ins.2017.09.010.
Guo, Q., Wang, Y., & Li, H. (2018). Anti-halation method of visible and infrared image fusion based on improved IHS-Curvelet transform. Infrared and Laser Engineering. https://doi.org/10.3788/irla201847.1126002.
Sharma, K. K., & Sharma, M. (2014). Image fusion based on image decomposition using self-fractional Fourier functions. Signal, Image and Video Processing, 8(7), 1335–1344. https://doi.org/10.1007/s11760-012-0363-8.
Teng, L., Li, H., & Yin, S. (2018). Modified pyramid dual tree direction filter-based image de-noising via curvature scale and non-local mean multi-grade remnant filter. International Journal of Communication Systems. https://doi.org/10.1002/dac.3486.
Yin, S., Zhang, Y., & Karim, S. (2018). Large scale remote sensing image segmentation based on fuzzy region competition and gaussian mixture model. IEEE Access, 6, 26069–26080. https://doi.org/10.1109/ACCESS.2018.2834960.
Siddique, A., Xiao, B., Li, W., et al. (2018). Multi-focus image fusion using block-wise color-principal component analysis. In 2018 IEEE 3rd international conference on image, vision and computing (ICIVC). IEEE. https://doi.org/10.1109/icivc.2018.8492725.
Zhiliang, W., Huang, Y., & Zhang, K. (2018). Remote sensing image fusion method based on PCA and curvelet transform. Journal of the Indian Society of Remote Sensing, 46(5), 687–695. https://doi.org/10.1007/s12524-017-0736-0.
Kong, W., Zhang, L., & Lei, Y. (2014). Novel fusion method for visible light and infrared images based on NSST–SF–PCNN. Infrared Physics & Technology, 65, 103–112. https://doi.org/10.1016/j.infrared.2014.04.003.
Yin, M., Duan, P., Liu, W., et al. (2017). A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation. Neurocomputing, 226, 182–191. https://doi.org/10.1016/j.neucom.2016.11.051.
Zhou, Y., Geng, A., Wang, Y., et al. (2014). Contrast enhanced fusion of infrared and visible images. Chinese Journal of Lasers, 41(9), 223–229. https://doi.org/10.3788/CJL201441.0909001.
Chatterjee, A., Biswas, M., Maji, D., et al. (2017). Discrete wavelet transform based V–I image fusion with artificial bee colony optimization. In 2017 IEEE 7th annual computing and communication workshop and conference (CCWC). IEEE. https://doi.org/10.1109/CCWC.2017.7868491.
Yan, Y., Ren, J., Zhao, H., et al. (2017). Cognitive fusion of thermal and visible imagery for effective detection and tracking of pedestrians in videos. Cognitive Computation, 9, 1–11. https://doi.org/10.1007/s12559-017-9529-6.
Yumei, W., Daimei, C., & Genbao, Z. (2017). Image fusion algorithm of infrared and visible images based on target extraction and Laplace transformation. Laser and Optoelectronics Progress, 54(1), 011002. https://doi.org/10.3788/LOP54.011002.
Kalistratov, D. (2019). Wireless video monitoring of the megacities transport infrastructure. Civil Engineering Journal, 5(5), 1033–1040. https://doi.org/10.28991/cej-2019-03091309.
Razian, S. A., & MahvashMohammadi, H. (2017). Optimizing raytracing algorithm using CUDA. Emerging Science Journal, 1(3), 167–178. https://doi.org/10.28991/ijse-01119.
Yin, S., Zhang, Y., & Karim, S. (2019). Region search based on hybrid convolutional neural network in optical remote sensing images. International Journal of Distributed Sensor Networks. https://doi.org/10.1177/1550147719852036.
Dai, W., Jiang, J., Ding, G., et al. (2019). Development and application of fire video image detection technology in China’s road tunnels. Civil Engineering Journal, 5(1), 1–17. https://doi.org/10.28991/cej-2019-03091221.
Espejel-García, D., Ortíz-Anchondo, L. R., Alvarez-Herrera, C., et al. (2017). An alternative vehicle counting tool using the Kalman filter within MATLAB. Civil Engineering Journal, 3(11), 1029–1035. https://doi.org/10.28991/cej-030935.
Shen, C.-T., Chang, F.-J., Hung, Y.-P., et al. (2012). Edge-preserving image decomposition using L1 fidelity with L0 gradient. In SIGGRAPH Asia 2012 technical briefs. https://doi.org/10.1145/2407746.2407752.
Wang, Z., Xing, C., Ouyang, Q., et al. (2018). A method based on bitonic filtering decomposition and sparse representation for fusion of infrared and visible images. IET Image Processing. https://doi.org/10.1049/iet-ipr.2018.5554.
Zhang, Q., Shen, X., Xu, L., et al. (2014). Rolling guidance filter. In European conference on computer vision. Springer, Cham. https://doi.org/10.1007/978-3-319-10578-9_53.
Zhao, C., & Huang, Y. (2019). Infrared and visible image fusion method based on rolling guidance filter and NSST. International Journal of Wavelets, Multiresolution and Information Processing. https://doi.org/10.1142/s0219691319500450.
Yin, S., & Zhang, Y. (2018). Singular value decomposition-based anisotropic diffusion for fusion of infrared and visible images. International Journal of Image and Data Fusion. https://doi.org/10.1080/19479832.2018.1487886.
Jian, L., Yang, X., Zhou, Z., et al. (2018). Multi-scale image fusion through rolling guidance filter. Future Generation Computer Systems, 83, 310–325. https://doi.org/10.1016/j.future.2018.01.039.
Zhang, P., Yuan, Y., Fei, C., et al. (2018). Infrared and visible image fusion using co-occurrence filter. Infrared Physics & Technology, 93, 223–231. https://doi.org/10.1016/j.infrared.2018.08.004.