Synthesis of True Color Images from the Fengyun Advanced Geostationary Radiation Imager
Abstract
Keywords