Linear local tangent space alignment with autoencoder
Abstract
Linear local tangent space alignment (LLTSA) is a classical manifold-based dimensionality reduction method. However, LLTSA and all of its variants consider only the one-way mapping from the high-dimensional space to the low-dimensional space, so the projected low-dimensional data may not represent the original samples accurately and effectively. This paper proposes a novel LLTSA method based on a linear autoencoder, called LLTSA-AE (LLTSA with Autoencoder). The proposed LLTSA-AE is divided into two stages: the conventional process of LLTSA is viewed as the encoding stage, and an additional decoding stage reconstructs the original data. In this way, LLTSA-AE makes the low-dimensional embedding represent the original data more accurately and effectively. LLTSA-AE achieves recognition rates of 85.10%, 67.45%, 75.40% and 86.67% on the Handwritten Alphadigits, FERET, Georgia Tech and Yale datasets, which are 9.4, 14.03, 7.35 and 12.39% higher than those of the original LLTSA, respectively. Compared with several improved variants of LLTSA, it also obtains better performance; for example, on the Handwritten Alphadigits dataset, the recognition rates of LLTSA-AE are 4.77, 3.96, 7.8 and 8.6% higher than those of ALLTSA, OLLTSA, PLLTSA and WLLTSA, respectively. These results show that LLTSA-AE is an effective dimensionality reduction method.
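To make the two-stage idea concrete, the following is a minimal illustrative sketch of a linear encoder-decoder pair trained to reconstruct the data, written in Python/NumPy. It is not the paper's implementation: the abstract does not give the exact objective, and in LLTSA-AE the encoding stage is the conventional LLTSA projection rather than a trainable linear map, which stands in for it here. The function name linear_autoencoder and all hyperparameters are assumptions made purely for illustration.

import numpy as np

# Illustrative linear autoencoder in the spirit of LLTSA-AE's two stages.
# Assumption: a trainable linear encoder W stands in for the LLTSA projection,
# since the abstract does not give the exact formulation. The decoder U
# reconstructs the (centered) input, which is the role of the decoding stage
# described in the abstract.

def linear_autoencoder(X, m, lr=1e-2, epochs=1000, seed=0):
    """X: (n, d) data matrix; m: target dimensionality (m < d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)                   # center the data
    W = rng.normal(scale=0.01, size=(d, m))   # encoder: high-dim -> low-dim
    U = rng.normal(scale=0.01, size=(m, d))   # decoder: low-dim -> high-dim
    for _ in range(epochs):
        Y = Xc @ W                            # encoding stage: low-dim embedding
        R = Y @ U - Xc                        # decoding stage: reconstruction residual
        gU = (Y.T @ R) / n                    # gradient of mean squared error w.r.t. U
        gW = (Xc.T @ (R @ U.T)) / n           # gradient of mean squared error w.r.t. W
        U -= lr * gU
        W -= lr * gW
    return W, U

# Usage (illustrative): Y = (X_new - X.mean(axis=0)) @ W gives the low-dimensional
# embedding, and Y @ U + X.mean(axis=0) approximately reconstructs the original samples.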