Superpixel-based Structural Similarity Metric for Image Fusion Quality Evaluation

Sensing and Imaging - Volume 22 - Pages 1-25 - 2021
Eryan Wang1, Bin Yang1, Lihui Pang1
1College of Electric Engineering, University of South China, Hengyang, China

Abstract

Image fusion refers to integrating multiple images of the same scene into a single high-quality fused image. Universal quality evaluation of fused images is one of the pressing problems in the field of image fusion. Typically, local features extracted from rectangular blocks of the fused images are used for objective evaluation. However, fixed-shape image blocks suit neither the natural structure of an image nor the perceptual characteristics of the human visual system. To address this problem, a superpixel-based structural similarity metric for image fusion quality evaluation is proposed in this paper. Image features extracted from adaptive superpixels are used to calculate the structural similarity between corresponding superpixels. All local structural similarity indicators are then weighted and averaged according to their significance to obtain the final evaluation score. Several classical image fusion quality evaluation metrics are used for comparative experimental analysis. A series of experiments shows that the stability of the proposed quality evaluation index is on the order of 10^{-6}, and that its accuracy and performance compare favorably with the latest evaluation indices. Moreover, the evaluation results obtained by the proposed metric are closer to human visual assessments.
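
The abstract describes a three-step pipeline: segment the fused image into adaptive superpixels, score each superpixel with a structural similarity measure against the source images, and pool the local scores with significance weights. The Python sketch below is one plausible, minimal reading of that pipeline, not the authors' implementation: SLIC from scikit-image stands in for the adaptive segmentation, a Wang-Bovik style index stands in for the per-superpixel structural similarity, and local variance is a hypothetical proxy for superpixel significance.

```python
# Minimal sketch of a superpixel-based fusion quality index.
# Assumptions (not from the paper): SLIC segmentation, a Wang-Bovik
# similarity per superpixel, and variance as the significance weight.
# Inputs are registered grayscale float images in [0, 1].
import numpy as np
from skimage.segmentation import slic

def local_ssim(x, y, c=1e-4):
    """Wang-Bovik style similarity between two pixel sets
    (a single stabilizing constant c is used for brevity)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c) * (2 * cov + c)) / \
           ((mx ** 2 + my ** 2 + c) * (vx + vy + c))

def superpixel_fusion_quality(src_a, src_b, fused, n_segments=300):
    # Segment the fused image into adaptive superpixels.
    labels = slic(fused, n_segments=n_segments, compactness=10,
                  channel_axis=None)
    scores, weights = [], []
    for lbl in np.unique(labels):
        mask = labels == lbl
        a, b, f = src_a[mask], src_b[mask], fused[mask]
        # Mix the two source similarities by local variance,
        # a simple saliency proxy (hypothetical choice).
        sa, sb = a.var(), b.var()
        lam = sa / (sa + sb) if sa + sb > 0 else 0.5
        q = lam * local_ssim(a, f) + (1 - lam) * local_ssim(b, f)
        scores.append(q)
        # Significance of this superpixel: the more salient
        # source's variance (again a hypothetical choice).
        weights.append(max(sa, sb))
    weights = np.asarray(weights)
    if weights.sum() == 0:
        return float(np.mean(scores))
    return float(np.average(scores, weights=weights))
```

For two registered source images and their fusion result, `superpixel_fusion_quality(src_a, src_b, fused)` returns a scalar in roughly [0, 1], with higher values indicating that the fused image preserves more local structure from the more salient source within each superpixel.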
