A survey on deep matrix factorizations
References
Udell, 2016, Generalized low rank models, Found. Trends Mach. Learn., 9, 1, 10.1561/2200000055
Udell, 2019, Why are big data matrices approximately low rank?, SIAM J. Math. Data Sci., 1, 144, 10.1137/18M1183480
Wold, 1987, Principal component analysis, Chemometr. Intell. Lab. Syst., 2, 37, 10.1016/0169-7439(87)80084-9
Golub, 1971, Singular value decomposition and least squares solutions, 134
Papyan, 2018, Theoretical foundations of deep learning via sparse representations: A multilayer sparse model and its connection to convolutional neural networks, IEEE Signal Process. Mag., 35, 72, 10.1109/MSP.2018.2820224
Georgiev, 2005, Sparse component analysis and blind source separation of underdetermined mixtures, IEEE Trans. Neural Netw., 16, 992, 10.1109/TNN.2005.849840
Lee, 1999, Learning the parts of objects by non-negative matrix factorization, Nature, 401, 788, 10.1038/44565
LeCun, 2015, Deep learning, Nature, 521, 436, 10.1038/nature14539
Marcus, 2018
Goodfellow, 2014, Generative adversarial nets, 2672
Trigeorgis, 2016, A deep matrix factorization method for learning attribute representations, IEEE Trans. Pattern Anal. Mach. Intell., 39, 417, 10.1109/TPAMI.2016.2554555
Gillis, 2013, Fast and robust recursive algorithms for separable nonnegative matrix factorization, IEEE Trans. Pattern Anal. Mach. Intell., 36, 698, 10.1109/TPAMI.2013.226
Ang, 2019, Accelerating nonnegative matrix factorization algorithms using extrapolation, Neural Comput., 31, 417, 10.1162/neco_a_01157
Févotte, 2011, Algorithms for nonnegative matrix factorization with the β-divergence, Neural Comput., 23, 2421, 10.1162/NECO_a_00168
Wang, 2012, Nonnegative matrix factorization: A comprehensive review, IEEE Trans. Knowl. Data Eng., 25, 1336, 10.1109/TKDE.2012.51
Kim, 2014, Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework, J. Global Optim., 58, 285, 10.1007/s10898-013-0035-4
Fu, 2019, Nonnegative matrix factorization for signal and data analytics: Identifiability, algorithms, and applications, IEEE Signal Process. Mag., 36, 59, 10.1109/MSP.2018.2877582
Gillis, 2014, The why and how of nonnegative matrix factorization, Regul. Optim. Kernels Support Vector Mach., 12
Gillis, 2017, Introduction to nonnegative matrix factorization, SIAG/OPT View. News, 25, 7
Cichocki, 2009
Abdolali, 2020
Miao, 2007, Endmember extraction from highly mixed data using minimum volume constrained nonnegative matrix factorization, IEEE Trans. Geosci. Remote Sens., 45, 765, 10.1109/TGRS.2006.888466
Ang, 2018, Volume regularized non-negative matrix factorizations, 1
Hoyer, 2002, Non-negative sparse coding, 557
Mørup, 2012, Archetypal analysis for machine learning and data mining, Neurocomputing, 80, 54, 10.1016/j.neucom.2011.06.033
De Handschutter, 2019, Near-convex archetypal analysis, IEEE Signal Process. Lett., 27, 81, 10.1109/LSP.2019.2957604
Javadi, 2019, Nonnegative matrix factorization via archetypal analysis, J. Amer. Statist. Assoc., 1
Vavasis, 2009, On the complexity of nonnegative matrix factorization, SIAM J. Opt., 20, 1364, 10.1137/070709967
Cichocki, 2006, Multilayer nonnegative matrix factorisation, Electron. Lett., 42, 947, 10.1049/el:20060983
Cichocki, 2007, Multilayer nonnegative matrix factorization using projected gradient approaches, Int. J. Neural Syst., 17, 431, 10.1142/S0129065707001275
Trigeorgis, 2014, A deep semi-NMF model for learning hidden representations, 1692
Ding, 2010, Convex and semi-nonnegative matrix factorizations, IEEE Trans. Pattern Anal. Mach. Intell., 32, 45, 10.1109/TPAMI.2008.277
Yu, 2018
Dikmen, 2014, Learning the information divergence, IEEE Trans. Pattern Anal. Mach. Intell., 37, 1442, 10.1109/TPAMI.2014.2366144
Févotte, 2009, Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis, Neural Comput., 21, 793, 10.1162/neco.2008.04-08-771
Leplat, 2020, Blind audio source separation with minimum-volume beta-divergence NMF, IEEE Trans. Signal Process., 3400, 10.1109/TSP.2020.2991801
C.H. Ding, T. Li, W. Peng, H. Park, Orthogonal nonnegative matrix t-factorizations for clustering, in: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006, pp. 126–135.
Pompili, 2014, Two algorithms for orthogonal nonnegative matrix factorization with application to clustering, Neurocomputing, 141, 15, 10.1016/j.neucom.2014.02.018
Li, 2014, Two efficient algorithms for approximately orthogonal nonnegative matrix factorization, IEEE Signal Process. Lett., 22, 843
Lyu, 2017, A deep orthogonal non-negative matrix factorization method for learning attribute representations, 443
Qiu, 2017
Eggert, 2004, Sparse coding and NMF, 4, 2529
Kim, 2008
Gribonval, 2015, Sparse and spurious: dictionary learning with noise and outliers, IEEE Trans. Inform. Theory, 61, 6298, 10.1109/TIT.2015.2472522
Cohen, 2019, Nonnegative low-rank sparse component analysis, 8226
Guo, 2019, Sparse deep nonnegative matrix factorization, Big Data Min. Anal., 3, 13, 10.26599/BDMA.2019.9020020
Beck, 2009, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2, 183, 10.1137/080716542
Gillis, 2012, Sparse and unique nonnegative matrix factorization through data preprocessing, J. Mach. Learn. Res., 13, 3349
Lyu, 2013, On algorithms for sparse multi-factor NMF, 602
Peharz, 2012, Sparse nonnegative matrix factorization with l0-constraints, Neurocomputing, 80, 38, 10.1016/j.neucom.2011.09.024
Qian, 2011, Hyperspectral unmixing via l1/2 sparsity-constrained nonnegative matrix factorization, IEEE Trans. Geosci. Remote Sens., 49, 4282, 10.1109/TGRS.2011.2144605
Srivastava, 2014, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res., 15, 1929
He, 2019, Dropout non-negative matrix factorization, Knowl. Inf. Syst., 60, 781, 10.1007/s10115-018-1259-x
J. Cavazza, P. Morerio, B. Haeffele, C. Lane, V. Murino, R. Vidal, Dropout as a low-rank regularizer for matrix factorization, in: International Conference on Artificial Intelligence and Statistics, 2018, pp. 435–444.
Pascual-Montano, 2006, Nonsmooth nonnegative matrix factorization (nsNMF), IEEE Trans. Pattern Anal. Mach. Intell., 28, 403, 10.1109/TPAMI.2006.60
Song, 2013, Hierarchical representation using NMF, 466
Sharma, 2018, ASE: Acoustic scene embedding using deep archetypal analysis and GMM, 3299
Keller, 2019, Deep archetypal analysis, 171
Alemi, 2017, Deep variational information bottleneck
Li, 2015, Multilayer concept factorization for data representation, 486
Zhang, 2020, Deep self-representative concept factorization network for representation learning, 361
Zhang, 2021, A survey on concept factorization: From shallow to deep representation learning, Inf. Process. Manage., 58, 10.1016/j.ipm.2021.102534
Meng, 2019, Semi-supervised graph regularized deep NMF with bi-orthogonal constraints for data representation, IEEE Trans. Neural Netw. Learn. Syst.
Sidiropoulos, 2017, Tensor decomposition for signal processing and machine learning, IEEE Trans. Signal Process., 65, 3551, 10.1109/TSP.2017.2690524
Bi, 2018, Multilayer tensor factorization with applications to recommender systems, Ann. Statist., 46, 3308, 10.1214/17-AOS1659
Casebeer, 2019, Deep tensor factorization for spatially-aware scene decomposition, 180
Smaragdis, 2017, A neural network alternative to non-negative audio models, 86
Jia, 2016, Sparse canonical temporal alignment with deep tensor decomposition for action recognition, IEEE Trans. Image Process., 26, 738, 10.1109/TIP.2016.2621664
Oymak, 2018
Domanov, 2015, Generic uniqueness conditions for the canonical polyadic decomposition and INDSCAL, SIAM J. Matrix Anal. Appl., 36, 1567, 10.1137/140970276
Ravishankar, 2012, Learning sparsifying transforms, IEEE Trans. Signal Process., 61, 1072, 10.1109/TSP.2012.2226449
Maggu, 2018, Unsupervised deep transform learning, 6782
Gillis, 2014, Successive nonnegative projection algorithm for robust nonnegative blind source separation, SIAM J. Imaging Sci., 7, 1420, 10.1137/130946782
Lee, 2001, Algorithms for non-negative matrix factorization, 556
Ahn, 2004, A multiplicative up-propagation algorithm, 3
Lin, 2007, Projected gradient methods for nonnegative matrix factorization, Neural Comput., 19, 2756, 10.1162/neco.2007.19.10.2756
Nesterov, 1983, A method for solving the convex programming problem with convergence rate O(1/k^2), 269, 543
Huang, 2016, A flexible and efficient algorithmic framework for constrained matrix and tensor factorization, IEEE Trans. Signal Process., 64, 5052, 10.1109/TSP.2016.2576427
Zhou, 2018, A deep structure-enforced nonnegative matrix factorization for data representation, 340
Arora, 2019, Implicit regularization in deep matrix factorization, 7411
Fan, 2018, Matrix completion by deep matrix factorization, Neural Netw., 98, 34, 10.1016/j.neunet.2017.10.007
Q. Wang, M. Sun, L. Zhan, P. Thompson, S. Ji, J. Zhou, Multi-modality disease modeling via collective deep matrix factorization, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1155–1164.
Le Roux, 2015, Deep NMF for speech separation, 66
Koren, 2009, Matrix factorization techniques for recommender systems, Computer, 42, 30, 10.1109/MC.2009.263
Bioucas-Dias, 2012, Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 5, 354, 10.1109/JSTARS.2012.2194696
Ma, 2013, A signal processing perspective on hyperspectral unmixing: Insights from remote sensing, IEEE Signal Process. Mag., 31, 67, 10.1109/MSP.2013.2279731
Zhu, 2017
Data, RSLab, https://rslab.ut.ac.ir/data (Accessed on 09/09/2020).
Mongia, 2020, Deep latent factor model for collaborative filtering, Signal Process., 169, 10.1016/j.sigpro.2019.107366
Xue, 2017, Deep matrix factorization models for recommender systems, 3203
Yi, 2019, Deep matrix factorization with implicit feedback embedding for recommendation system, IEEE Trans. Ind. Inf., 10.1109/TII.2019.2893714
Yang, 2018, Multi-view clustering: A survey, Big Data Min. Anal., 1, 83, 10.26599/BDMA.2018.9020003
H. Zhao, Z. Ding, Y. Fu, Multi-view clustering via deep matrix factorization, in: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 2921–2927.
B. Cui, H. Yu, T. Zhang, S. Li, Self-weighted multi-view clustering with deep matrix factorization, in: Asian Conference on Machine Learning, 2019, pp. 567–582.
Wei, 2020, Multi-view multiple clusterings using deep matrix factorization, 6348
Xu, 2018, Deep multi-view concept learning, 2898
Huang, 2020, Auto-weighted multi-view clustering via deep matrix decomposition, Pattern Recognit., 97, 10.1016/j.patcog.2019.107015
Xiong, 2020, Cross-view hashing via supervised deep discrete matrix factorization, Pattern Recognit., 103, 10.1016/j.patcog.2020.107270
J. Yang, J. Leskovec, Overlapping community detection at scale: a nonnegative matrix factorization approach, in: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, 2013, pp. 587–596.
Ye, 2018, Deep autoencoder-like nonnegative matrix factorization for community detection, 1393
Rajabi, 2014, Spectral unmixing of hyperspectral imagery using multilayer NMF, IEEE Geosci. Remote Sens. Lett., 12, 38, 10.1109/LGRS.2014.2325874
Tong, 2017, Hyperspectral unmixing via deep matrix factorization, Int. J. Wavelets Multiresolut. Inf. Process., 15, 10.1142/S0219691317500588
Feng, 2018, Hyperspectral unmixing using sparsity-constrained deep nonnegative matrix factorization with total variation, IEEE Trans. Geosci. Remote Sens., 56, 6245, 10.1109/TGRS.2018.2834567
Rudin, 1992, Nonlinear total variation based noise removal algorithms, Physica D, 60, 259, 10.1016/0167-2789(92)90242-F
Zhao, 2016, Multilayer unmixing for hyperspectral imagery with fast kernel archetypal analysis, IEEE Geosci. Remote Sens. Lett., 13, 1532, 10.1109/LGRS.2016.2595102
Gao, 2017, Change detection in SAR images based on deep semi-NMF and SVD networks, Remote Sens., 9, 435, 10.3390/rs9050435
Li, 2020, Deep nonsmooth nonnegative matrix factorization network with semi-supervised learning for SAR image change detection, ISPRS J. Photogramm. Remote Sens., 160, 167, 10.1016/j.isprsjprs.2019.12.002
Sharma, 2017, Deep sparse representation based features for speech recognition, IEEE/ACM Trans. Audio Speech Lang. Process., 25, 2162, 10.1109/TASLP.2017.2748240
Davis, 1980, Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences, IEEE Trans. Acoust. Speech Signal Process., 28, 357, 10.1109/TASSP.1980.1163420
C. Hsu, J. Chien, T. Chi, Layered nonnegative matrix factorization for speech separation, in: 16th Annual Conference of the International Speech Communication Association (Interspeech 2015), Vols 1-5, 2015, pp. 628–632.
Thakur, 2018, Deep convex representations: Feature representations for bioacoustics classification, 2127
Thakur, 2019, Deep archetypal analysis based intermediate matching kernel for bioacoustic classification, IEEE J. Sel. Top. Sign. Proces., 13, 298, 10.1109/JSTSP.2019.2906465
Ding, 2008, On the equivalence between non-negative matrix factorization and probabilistic latent semantic indexing, Comput. Statist. Data Anal., 52, 3913, 10.1016/j.csda.2008.01.011
S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, M. Zhu, A practical algorithm for topic modeling with provable guarantees, in: International Conference on Machine Learning, 2013, pp. 280–288.
Dobigeon, 2013, Nonlinear unmixing of hyperspectral images: Models and algorithms, IEEE Signal Process. Mag., 31, 82, 10.1109/MSP.2013.2279274
Sainath, 2013, Low-rank matrix factorization for deep neural network training with high-dimensional output targets, 6655
Zhang, 2014, Extracting deep neural network bottleneck features using low-rank matrix factorization, 185
Kang, 2014, NMF-based target source separation using deep neural network, IEEE Signal Process. Lett., 22, 229, 10.1109/LSP.2014.2354456
Ozkan, 2018, EndNet: Sparse autoencoder network for endmember extraction and hyperspectral unmixing, IEEE Trans. Geosci. Remote Sens., 1
Ng, 2011, Sparse autoencoder, CS294A Lecture Notes, 72, 1
Lemme, 2012, Online learning and generalization of parts-based image representations by non-negative sparse autoencoders, Neural Netw., 33, 194, 10.1016/j.neunet.2012.05.003
Hosseini-Asl, 2016, Deep learning of part-based representation of data using sparse autoencoders with nonnegativity constraints, IEEE Trans. Neural Netw. Learn. Syst., 27, 2486, 10.1109/TNNLS.2015.2479223
Flenner, 2017, A deep non-negative matrix factorization neural network, Semant. Sch.
Tariyal, 2016, Deep dictionary learning, IEEE Access, 4, 10096, 10.1109/ACCESS.2016.2611583
van Dijk, 2019, Finding archetypal spaces using neural networks, 2634
C. Bauckhage, K. Kersting, F. Hoppe, C. Thurau, Archetypal analysis as an autoencoder, in: Workshop New Challenges in Neural Computation, 2015, p. 8.
Razaviyayn, 2013, A unified convergence analysis of block successive minimization methods for nonsmooth optimization, SIAM J. Optim., 23, 1126, 10.1137/120891009
Sun, 2020, The global landscape of neural networks: An overview, IEEE Signal Process. Mag., 37, 95, 10.1109/MSP.2020.3004124
Laurent, 2018, Deep linear networks with arbitrary loss: All local minima are global, 2902
S. Arora, N. Golowich, N. Cohen, W. Hu, A convergence analysis of gradient descent for deep linear neural networks, in: 7th International Conference on Learning Representations, ICLR 2019, 2019.
Bartlett, 2019, Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks, Neural Comput., 31, 477, 10.1162/neco_a_01164
S. Arora, N. Cohen, E. Hazan, On the optimization of deep networks: Implicit acceleration by overparameterization, in: International Conference on Machine Learning, 2018, pp. 244–253.
S. Du, W. Hu, Width provably matters in optimization for deep linear neural networks, in: International Conference on Machine Learning, 2019, pp. 1655–1664.
O. Shamir, Exponential convergence time of gradient descent for one-dimensional deep linear neural networks, in: Conference on Learning Theory, 2019, pp. 2691–2713.
Gunasekar, 2017, Implicit regularization in matrix factorization, 6151
Huang, 2013, Non-negative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition, IEEE Trans. Signal Process., 62, 211, 10.1109/TSP.2013.2285514
Malgouyres, 2016, On the identifiability and stable recovery of deep/multi-layer structured matrix factorization, 315
Malgouyres, 2019, Multilinear compressive sensing and an application to convolutional linear networks, SIAM J. Math. Data Sci., 1, 446, 10.1137/18M119834X
Stewart, 1990
O. Seddati, S. Dupont, S. Mahmoudi, M. Parian, Towards good practices for image retrieval based on CNN features, in: Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 1246–1255.
Smaragdis, 2004, Non-negative matrix factor deconvolution; extraction of multiple sound sources from monophonic inputs, 494
Wang, 2021