A Review on Deep Learning in Medical Image Reconstruction
Abstract
Keywords
References
Pavlovic, G., Tekalp, A.M.: Maximum likelihood parametric blur identification based on a continuous spatial domain model. IEEE Trans. Image Process. 1(4), 496–504 (1992)
Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 417–424. ACM Press/Addison-Wesley Publishing Co. (2000)
Brown, R.W., Haacke, E.M., Cheng, Y.C.N., Thompson, M.R., Venkatesan, R.: Magnetic Resonance Imaging: Physical Principles and Sequence Design. Wiley, Hoboken (2014)
Buzug, T.M.: Computed Tomography: From Photon Statistics to Modern Cone-Beam CT. Springer, Berlin (2008)
Choi, J.K., Park, H.S., Wang, S., Wang, Y., Seo, J.K.: Inverse problem in quantitative susceptibility mapping. SIAM J. Imaging Sci. 7(3), 1669–1689 (2014)
Natterer, F.: Image reconstruction in quantitative susceptibility mapping. SIAM J. Imaging Sci. 9(3), 1127–1131 (2016)
de Rochefort, L., Liu, T., Kressler, B., Liu, J., Spincemaille, P., Lebon, V., Wu, J., Wang, Y.: Quantitative susceptibility map reconstruction from MR phase data using Bayesian regularization: validation and application to brain imaging. Magn. Reson. Med. 63(1), 194–206 (2010)
Wang, Y., Liu, T.: Quantitative susceptibility mapping (QSM): decoding MRI data for a tissue magnetic biomarker. Magn. Reson. Med. 73(1), 82–101 (2015)
Rudin, L., Lions, P.L., Osher, S.: Multiplicative denoising and deblurring: theory and algorithms. In: Osher, S., Paragios, N. (eds.) Geometric Level Set Methods in Imaging, Vision, and Graphics, pp. 103–119. Springer, Berlin (2003)
Aubert, G., Kornprobst, P.: Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations. Springer, Berlin (2006)
Chan, T.F., Shen, J.: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. SIAM, Philadelphia (2005)
Dong, B., Shen, Z.: Image restoration: a data-driven perspective. In: Proceedings of the International Congress of Industrial and Applied Mathematics (ICIAM), pp. 65–108 (2015)
Shen, Z.: Wavelet frames and image restorations. In: Proceedings of the International Congress of Mathematicians, vol. 4, pp. 2834–2863. World Scientific (2010)
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11(12), 3371–3408 (2010)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241 (2015)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision, pp. 630–645 (2016)
Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60(1), 259–268 (1992)
Perona, P., Shiota, T., Malik, J.: Anisotropic diffusion. In: Romeny, B.M.H. (ed.) Geometry-Driven Diffusion in Computer Vision, pp. 73–92. Springer, Berlin (1994)
Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990)
Osher, S., Rudin, L.I.: Feature-oriented image enhancement using shock filters. SIAM J. Numer. Anal. 27(4), 919–940 (1990)
Alvarez, L., Mazorra, L.: Signal and image restoration using shock filters and anisotropic diffusion. SIAM J. Numer. Anal. 31(2), 590–605 (1994)
Buades, A., Coll, B., Morel, J.M.: A non-local algorithm for image denoising. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 60–65 (2005)
Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)
Buades, A., Coll, B., Morel, J.M.: Image denoising methods. A new nonlocal principle. SIAM Rev. 52(1), 113–147 (2010)
Lou, Y., Zhang, X., Osher, S., Bertozzi, A.: Image recovery via nonlocal operators. J. Sci. Comput. 42(2), 185–197 (2010)
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
Mallat, S.: A Wavelet Tour of Signal Processing, The Sparse Way, 3rd edn. Academic Press, Burlington, MA (2009)
Ron, A., Shen, Z.: Affine systems in $$l_{2}({\mathbb{R}}^{d})$$: the analysis of the analysis operator. J. Funct. Anal. 148(2), 408–447 (1997)
Dong, B., Shen, Z.: MRA-based wavelet frames and applications. In: Zhao, H.-K. (ed.) Mathematics in Image Processing. IAS Lecture Notes Series, vol. 19. American Mathematical Society, Providence (2013)
Gu, S., Zhang, L., Zuo, W., Feng, X.: Weighted nuclear norm minimization with application to image denoising. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2862–2869 (2014)
Engan, K., Aase, S.O., Husoy, J.H.: Method of optimal directions for frame design. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, pp. 2443–2446. IEEE (1999)
Aharon, M., Elad, M., Bruckstein, A., et al.: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006)
Liu, R., Lin, Z., Zhang, W., Su, Z.: Learning PDEs for image restoration via optimal control. In: European Conference on Computer Vision, pp. 115–128. Springer (2010)
Cai, J.F., Ji, H., Shen, Z., Ye, G.B.: Data-driven tight frame construction and image denoising. Appl. Comput. Harmon. Anal. 37(1), 89–105 (2014)
Bao, C., Ji, H., Shen, Z.: Convergence analysis for iterative data-driven tight frame construction scheme. Appl. Comput. Harmon. Anal. 38(3), 510–523 (2015)
Tai, C., Weinan, E.: Multiscale adaptive representation of signals: I. The basic framework. J. Mach. Learn. Res. 17(1), 4875–4912 (2016)
Wright, J., Ganesh, A., Rao, S., Peng, Y., Ma, Y.: Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization. In: Neural Information Processing Systems, pp. 2080–2088 (2009)
Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., Ma, Y.: Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 171–184 (2013)
Cai, J.F., Jia, X., Gao, H., Jiang, S.B., Shen, Z., Zhao, H.: Cine cone beam CT reconstruction using low-rank matrix factorization: algorithm and a proof-of-principle study. IEEE Trans. Med. Imaging 33(8), 1581–1591 (2014)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009)
Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
Mumford, D., Shah, J.: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42(5), 577–685 (1989)
Cai, J.F., Dong, B., Shen, Z.: Image restoration: a wavelet frame based model for piecewise smooth functions and beyond. Appl. Comput. Harmon. Anal. 41(1), 94–138 (2016)
Heimann, T., Meinzer, H.P.: Statistical shape models for 3D medical image segmentation: a review. Med. Image Anal. 13(4), 543–563 (2009)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Neural Information Processing Systems, pp. 1097–1105 (2012)
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Neural Information Processing Systems, pp. 2672–2680 (2014)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976)
Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. Revue française d’automatique, informatique, recherche opérationnelle. Analyse numérique 9(R2), 41–76 (1975)
Zhu, M., Chan, T.: An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report, vol. 34 (2008)
Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3(4), 1015–1046 (2010)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011)
Cai, J.F., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8(2), 337–369 (2009)
Goldstein, T., Osher, S.: The split Bregman method for $$l_1$$-regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)
Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for $$\ell _1$$-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)
Osher, S., Mao, Y., Dong, B., Yin, W.: Fast linearized Bregman iteration for compressive sensing and sparse denoising. Commun. Math. Sci. 8(1), 93–111 (2010)
Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bruck Jr., R.E.: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 61(1), 159–164 (1977)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
Shen, Z., Toh, K.C., Yun, S.: An accelerated proximal gradient algorithm for frame-based image restoration via the balanced approach. SIAM J. Imaging Sci. 4(2), 573–596 (2011)
Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate $$O(1/k^2)$$. Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, Berlin (2006)
Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Proceedings of COMPSTAT, pp. 177–186. Springer (2010)
Bottou, L.: Stochastic gradient descent tricks. In: Orr, G.B., Müller, K.R. (eds.) Neural Networks: Tricks of the Trade, pp. 421–436. Springer, Berlin (2012)
Zhang, T.: Solving large scale linear prediction problems using stochastic gradient descent algorithms. In: International Conference on Machine Learning, pp. 116–123. ACM (2004)
Nitanda, A.: Stochastic proximal gradient descent with acceleration techniques. In: Neural Information Processing Systems, pp. 1574–1582 (2014)
Zhang, Y., Xiao, L.: Stochastic primal-dual coordinate method for regularized empirical risk minimization. J. Mach. Learn. Res. 18(1), 2939–2980 (2017)
Konečnỳ, J., Liu, J., Richtárik, P., Takáč, M.: Mini-batch semi-stochastic gradient descent in the proximal setting. IEEE J. Sel. Top. Signal Process. 10(2), 242–255 (2016)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015)
Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12(Jul), 2121–2159 (2011)
Hinton, G.: Neural networks for machine learning. Coursera, video lectures (2012)
Bottou, L., Curtis, F.E., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223–311 (2018)
Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: International Conference on Machine Learning, pp. 399–406 (2010)
Chen, Y., Yu, W., Pock, T.: On learning optimized reaction diffusion processes for effective image restoration. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 5261–5269 (2015)
Yang, Y., Sun, J., Li, H., Xu, Z.: Deep ADMM-Net for compressive sensing MRI. In: Neural Information Processing Systems, pp. 10–18 (2016)
Adler, J., Öktem, O.: Learned primal-dual reconstruction. IEEE Trans. Med. Imaging 37(6), 1322–1332 (2018)
Solomon, O., Cohen, R., Zhang, Y., Yang, Y., Qiong, H., Luo, J., van Sloun, R.J., Eldar, Y.C.: Deep unfolded robust PCA with application to clutter suppression in ultrasound. arXiv preprint arXiv:1811.08252 (2018)
Chen, X., Liu, J., Wang, Z., Yin, W.: Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. In: Neural Information Processing Systems, pp. 9079–9089 (2018)
Liu, R., Cheng, S., He, Y., Fan, X., Lin, Z., Luo, Z.: On the convergence of learning-based iterative methods for nonconvex inverse problems. IEEE Trans. Pattern Anal. Mach. Intell. (2019). https://doi.org/10.1109/TPAMI.2019.2920591
Li, H., Yang, Y., Chen, D., Lin, Z.: Optimization algorithm inspired deep neural network structure design. In: Asian Conference on Machine Learning, pp. 614–629 (2018)
Zhang, H., Dong, B., Liu, B.: JSR-Net: a deep network for joint spatial-Radon domain CT reconstruction from incomplete data. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3657–3661 (2019). https://doi.org/10.1109/ICASSP.2019.8682178
Weinan, E.: A proposal on machine learning via dynamical systems. Commun. Math. Stat. 5(1), 1–11 (2017)
Chang, B., Meng, L., Haber, E., Tung, F., Begert, D.: Multi-level residual networks from dynamical systems view. In: International Conference on Learning Representations Poster (2018)
Li, Z., Shi, Z.: Deep residual learning and PDEs on manifold. arXiv:1708.05115 (2017)
Chang, B., Meng, L., Haber, E., Ruthotto, L., Begert, D., Holtham, E.: Reversible architectures for arbitrarily deep residual neural networks. In: AAAI Conference on Artificial Intelligence (2018)
Lu, Y., Zhong, A., Li, Q., Dong, B.: Beyond finite layer neural networks: bridging deep architectures and numerical differential equations. In: International Conference on Machine Learning, pp. 3276–3285 (2018)
Wang, B., Yuan, B., Shi, Z., Osher, S.J.: EnResNet: ResNet ensemble via the Feynman–Kac formalism. arXiv:1811.10745 (2018)
Ruthotto, L., Haber, E.: Deep neural networks motivated by partial differential equations. arXiv:1804.04272 (2018)
Tao, Y., Sun, Q., Du, Q., Liu, W.: Nonlocal neural networks, nonlocal diffusion and nonlocal modeling. In: Neural Information Processing Systems, pp. 494–504. Curran Associates, Inc. (2018)
Zhang, D., Zhang, T., Lu, Y., Zhu, Z., Dong, B.: You only propagate once: accelerating adversarial training via maximal principle. In: Neural Information Processing Systems (2019)
Zhang, X., Lu, Y., Liu, J., Dong, B.: Dynamically unfolding recurrent restorer: a moving endpoint control method for image restoration. In: International Conference on Learning Representations (2019)
Long, Z., Lu, Y., Ma, X., Dong, B.: PDE-Net: learning PDEs from data. In: International Conference on Machine Learning, pp. 3214–3222 (2018)
Long, Z., Lu, Y., Dong, B.: PDE-Net 2.0: learning PDEs from data with a numeric-symbolic hybrid deep network. J. Comput. Phys. 399, 108925 (2019)
Lu, Y., Li, Z., He, D., Sun, Z., Dong, B., Qin, T., Wang, L., Liu, T.Y.: Understanding and improving transformer from a multi-particle dynamic system point of view. arXiv:1906.02762 (2019)
He, J., Xu, J.: MgNet: a unified framework of multigrid and convolutional neural network. Sci. China Math. 62, 1331–1354 (2019)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 4700–4708 (2017)
Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Neural Information Processing Systems, pp. 153–160 (2007)
Poultney, C., Chopra, S., Cun, Y.L., et al.: Efficient learning of sparse representations with an energy-based model. In: Neural Information Processing Systems, pp. 1137–1144 (2007)
Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: a deep convolutional encoder–decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495 (2017)
Mao, X., Shen, C., Yang, Y.B.: Image restoration using very deep convolutional encoder–decoder networks with symmetric skip connections. In: Neural Information Processing Systems, pp. 2802–2810 (2016)
Chen, H., Zhang, Y., Kalra, M.K., Lin, F., Chen, Y., Liao, P., Zhou, J., Wang, G.: Low-dose CT with a residual encoder–decoder convolutional neural network. IEEE Trans. Med. Imaging 36(12), 2524–2535 (2017)
Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: International Conference on 3D Vision (3DV), pp. 565–571. IEEE (2016)
Yin, R., Gao, T., Lu, Y.M., Daubechies, I.: A tale of two bases: local-nonlocal regularization on image patches with convolution framelets. SIAM J. Imaging Sci. 10(2), 711–750 (2017)
Ye, J.C., Han, Y., Cha, E.: Deep convolutional framelets: a general deep learning framework for inverse problems. SIAM J. Imaging Sci. 11(2), 991–1048 (2018)
Falk, T., Mai, D., Bensch, R., Çiçek, Ö., Abdulkadir, A., Marrakchi, Y., Böhm, A., Deubner, J., Jäckel, Z., Seiwald, K., et al.: U-Net: deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67–70 (2019)
Hornik, K.: Approximation capabilities of multilayer feedforward networks. Neural Netw. 4(2), 251–257 (1991)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Netw. 2(5), 359–366 (1989)
Cybenko, G.: Approximation by superpositions of a sigmoidal function. Math. Control Signal Syst. 2(4), 303–314 (1989)
Funahashi, K.I.: On the approximate realization of continuous mappings by neural networks. Neural Netw. 2(3), 183–192 (1989)
Barron, A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 39(3), 930–945 (1993)
Liang, S., Srikant, R.: Why deep neural networks for function approximation? In: International Conference on Learning Representations (2017)
Mhaskar, H., Liao, Q., Poggio, T.: Learning functions: when is deep better than shallow. arXiv:1603.00988 (2016)
Eldan, R., Shamir, O.: The power of depth for feedforward neural networks. In: Conference on Learning Theory, pp. 907–940 (2016)
Cohen, N., Sharir, O., Shashua, A.: On the expressive power of deep learning: a tensor analysis. In: Conference on Learning Theory, pp. 698–728 (2016)
Delalleau, O., Bengio, Y.: Shallow vs. deep sum-product networks. In: Neural Information Processing Systems, pp. 666–674 (2011)
Telgarsky, M.: Representation benefits of deep feedforward networks. arXiv:1509.08101 (2015)
Telgarsky, M.: Benefits of depth in neural networks. In: Conference on Learning Theory, vol. 49, pp. 1–23 (2016)
Lu, Z., Pu, H., Wang, F., Hu, Z., Wang, L.: The expressive power of neural networks: a view from the width. In: Neural Information Processing Systems, pp. 6231–6239 (2017)
Hanin, B., Sellke, M.: Approximating continuous functions by ReLU nets of minimal width. arXiv:1710.11278 (2017)
Hanin, B.: Universal function approximation by deep neural nets with bounded width and ReLU activations. Mathematics 7(10), 992 (2019)
Yarotsky, D.: Optimal approximation of continuous functions by very deep ReLU networks. In: Conference on Learning Theory (2018)
Rolnick, D., Tegmark, M.: The power of deeper networks for expressing natural functions. In: International Conference on Learning Representations (2018)
Shen, Z., Yang, H., Zhang, S.: Nonlinear approximation via compositions. Neural Netw. 119, 74–84 (2019)
Veit, A., Wilber, M.J., Belongie, S.: Residual networks behave like ensembles of relatively shallow networks. In: Neural Information Processing Systems, pp. 550–558 (2016)
Lin, H., Jegelka, S.: ResNet with one-neuron hidden layers is a universal approximator. In: Neural Information Processing Systems, pp. 6172–6181 (2018)
He, J., Li, L., Xu, J., Zheng, C.: ReLU deep neural networks and linear finite elements. arXiv:1807.03973 (2018)
Nochetto, R.H., Veeser, A.: Primer of adaptive finite element methods. In: Naldi, G., Russo, G. (eds.) Multiscale and Adaptivity: Modeling, Numerics and Applications, pp. 125–225. Springer, Berlin (2011)
Cessac, B.: A view of neural networks as dynamical systems. Int. J. Bifurc. Chaos 20(06), 1585–1629 (2010)
Sonoda, S., Murata, N.: Double continuum limit of deep neural networks. In: ICML Workshop (2017)
Thorpe, M., van Gennip, Y.: Deep limits of residual neural networks. arXiv:1810.11741 (2018)
Weinan, E., Han, J., Li, Q.: A mean-field optimal control formulation of deep learning. Res. Math. Sci. 6(10), 1–41 (2019). https://doi.org/10.1007/s40687-018-0172-y
Li, Q., Chen, L., Tai, C., Weinan, E.: Maximum principle based algorithms for deep learning. J. Mach. Learn. Res. 18(1), 5998–6026 (2017)
Chen, T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Neural Information Processing Systems, pp. 6572–6583 (2018)
Zhang, X., Li, Z., Loy, C.C., Lin, D.: PolyNet: a pursuit of structural diversity in very deep networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 3900–3908 (2017)
Larsson, G., Maire, M., Shakhnarovich, G.: FractalNet: ultra-deep neural networks without residuals. In: International Conference on Learning Representations (2016)
Gomez, A.N., Ren, M., Urtasun, R., Grosse, R.B.: The reversible residual network: backpropagation without storing activations. In: Neural Information Processing Systems, pp. 2214–2224 (2017)
Zhang, J., Han, B., Wynter, L., Low, K.H., Kankanhalli, M.: Towards robust ResNet: a small step but a giant leap. In: International Joint Conference on Artificial Intelligence, pp. 4285–4291 (2019)
Ascher, U.M., Petzold, L.R.: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, vol. 61. SIAM, Philadelphia (1998)
Zhu, M., Chang, B., Fu, C.: Convolutional neural networks combined with Runge–Kutta methods. arXiv:1802.08831 (2018)
Warming, R., Hyett, B.: The modified equation approach to the stability and accuracy analysis of finite-difference methods. J. Comput. Phys. 14(2), 159–179 (1974)
Su, W., Boyd, S., Candès, E.: A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. In: Neural Information Processing Systems, pp. 2510–2518 (2014)
Wilson, A.C., Recht, B., Jordan, M.I.: A Lyapunov analysis of momentum methods in optimization. arXiv:1611.02635 (2016)
Dong, B., Jiang, Q., Shen, Z.: Image restoration: wavelet frame shrinkage, nonlinear evolution PDEs, and beyond. Multiscale Model. Simul. 15(1), 606–660 (2017)
Gastaldi, X.: Shake-shake regularization. In: International Conference on Learning Representations Workshop (2017)
Huang, G., Sun, Y., Liu, Z., Sedra, D., Weinberger, K.Q.: Deep networks with stochastic depth. In: European Conference on Computer Vision, pp. 646–661 (2016)
Sun, Q., Tao, Y., Du, Q.: Stochastic training of residual networks: a differential equation viewpoint. arXiv preprint arXiv:1812.00174 (2018)
Scherzer, O. (ed.): Handbook of Mathematical Methods in Imaging, 2nd edn. Springer, New York (2015)
Herman, G.T.: Fundamentals of Computerized Tomography: Image Reconstruction from Projections. Springer, Berlin (2009)
Zhu, B., Liu, J.Z., Cauley, S.F., Rosen, B.R., Rosen, M.S.: Image reconstruction by domain-transform manifold learning. Nature 555(7697), 487–492 (2018)
Kalra, M., Wang, G., Orton, C.G.: Radiomics in lung cancer: its time is here. Med. Phys. 45(3), 997–1000 (2018)
Wu, D., Kim, K., Dong, B., El Fakhri, G., Li, Q.: End-to-end lung nodule detection in computed tomography. In: International Workshop on Machine Learning in Medical Imaging, pp. 37–45. Springer (2018)
Liu, D., Wen, B., Liu, X., Wang, Z., Huang, T.S.: When image denoising meets high-level vision tasks: a deep learning approach. In: International Joint Conference on Artificial Intelligence, pp. 842–848 (2018)
Liu, D., Wen, B., Jiao, J., Liu, X., Wang, Z., Huang, T.S.: Connecting image denoising and high-level vision tasks via deep learning. arXiv preprint arXiv:1809.01826 (2018)
Zhang, Z., Liang, X., Dong, X., Xie, Y., Cao, G.: A sparse-view CT reconstruction method based on combination of densenet and deconvolution. IEEE Trans. Med. Imaging 37(6), 1407–1417 (2018)
Yang, Q., Yan, P., Zhang, Y., Yu, H., Shi, Y., Mou, X., Kalra, M.K., Zhang, Y., Sun, L., Wang, G.: Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans. Med. Imaging 37(6), 1348–1357 (2018)
Jin, K.H., McCann, M.T., Froustey, E., Unser, M.: Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 26(9), 4509–4522 (2017)
Han, Y.S., Yoo, J., Ye, J.C.: Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis. arXiv preprint arXiv:1611.06391 (2016)
Liu, J., Chen, X., Wang, Z., Yin, W.: ALISTA: analytic weights are as good as learned weights in LISTA. In: International Conference on Learning Representations (2019)
Xie, X., Wu, J., Zhong, Z., Liu, G., Lin, Z.: Differentiable linearized ADMM. In: International Conference on Machine Learning (2019)
Yang, Y., Sun, J., Li, H., Xu, Z.: ADMM-Net: a deep learning approach for compressive sensing MRI. arXiv preprint arXiv:1705.06869 (2017)
Adler, J., Öktem, O.: Solving ill-posed inverse problems using iterative deep neural networks. Inverse Probl. 33, 124007 (2017)
Dong, B., Li, J., Shen, Z.: X-ray CT image reconstruction via wavelet frame based regularization and Radon domain inpainting. J. Sci. Comput. 54(2), 333–349 (2013)
Burger, M., Müller, J., Papoutsellis, E., Schönlieb, C.B.: Total variation regularization in measurement and image space for PET reconstruction. Inverse Probl. 30(10), 105003 (2014)
Zhan, R., Dong, B.: CT image reconstruction by spatial-Radon domain data-driven tight frame regularization. SIAM J. Imaging Sci. 9(3), 1063–1083 (2016)