
A unified framework for semi-supervised dimensionality reduction. (English) Zbl 1154.68501

Summary: Many practical applications require a dimensionality reduction method that can handle partially labeled data. In this paper, we propose a semi-supervised dimensionality reduction framework that efficiently exploits unlabeled data. Under this framework, several classical methods, such as principal component analysis, linear discriminant analysis, maximum margin criterion, locality preserving projections, and their corresponding kernel versions, arise as special cases. For high-dimensional data, the framework yields a low-dimensional embedding that both discriminates between multi-class sub-manifolds and preserves local manifold structure. Experiments show that our algorithms significantly improve the accuracy rates of the corresponding supervised and unsupervised approaches.
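The general recipe the summary describes, supervised scatter from the labeled points combined with a graph regularizer built over all points, can be sketched as below. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the kNN-graph construction, and the particular regularized generalized eigenproblem are all assumptions chosen to mirror the stated idea.

```python
import numpy as np

def semi_supervised_embedding(X, y, n_components=2, alpha=0.1, k=5):
    """Illustrative sketch (not the paper's method): maximize labeled
    between-class scatter while penalizing projections that break the
    kNN-graph structure of ALL samples, labeled and unlabeled.
    Unlabeled points are marked with label -1."""
    n, d = X.shape
    labeled = y >= 0

    # Between- and within-class scatter from the labeled subset only.
    Xl, yl = X[labeled], y[labeled]
    mu = Xl.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(yl):
        Xc = Xl[yl == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)

    # Symmetric kNN adjacency and graph Laplacian over all n samples;
    # this is where the unlabeled data enters.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W

    # Generalized eigenproblem  Sb a = lambda (Sw + alpha X^T L X) a,
    # with a small ridge term for numerical stability.
    B = Sw + alpha * (X.T @ L @ X) + 1e-6 * np.eye(d)
    vals, vecs = np.linalg.eig(np.linalg.solve(B, Sb))
    order = np.argsort(-vals.real)
    P = vecs[:, order[:n_components]].real
    return X @ P
```

Setting `alpha = 0` recovers a plain Fisher-style criterion on the labeled points, which illustrates how supervised methods can fall out as special cases of such a framework.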

MSC:

68T10 Pattern recognition, speech recognition
68T05 Learning and adaptive systems in artificial intelligence

References:

[1] He, X.; Yan, S.; Hu, Y.; Niyogi, P.; Zhang, H., Face recognition using Laplacianfaces, IEEE Trans. Pattern Anal. Mach. Intell., 27, 3, 328-340 (2005)
[2] Belhumeur, P. N.; Hespanha, J. P.; Kriegman, D. J., Fisherfaces: recognition using class specific linear projection, IEEE Trans. Pattern Anal. Mach. Intell., 19, 7, 711-720 (1997)
[3] Li, H.; Jiang, T.; Zhang, K., Efficient and robust feature extraction by maximum margin criterion, IEEE Trans. Neural Networks, 17, 1, 157-165 (2006)
[4] M. Turk, A. Pentland, Face recognition using eigenfaces, in: Proceedings of the Computer Vision and Pattern Recognition, 1991.
[5] Yang, J.; Frangi, A. F.; Yang, J. Y.; Zhang, D.; Jin, Z., KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition, IEEE Trans. Pattern Anal. Mach. Intell., 27, 2, 230-244 (2005)
[6] F. Chung, Spectral graph theory, in: CBMS Regional Conference Series in Mathematics, vol. 92, American Mathematical Society, 1997. · Zbl 0867.05046
[7] B. Schölkopf, R. Herbrich, A.J. Smola, A generalized representer theorem, in: Proceedings of Computational Learning Theory, 2001, pp. 416-426. · Zbl 0992.68088
[8] Fukunaga, K., Introduction to Statistical Pattern Recognition (1990), Academic Press: Academic Press Boston, MA · Zbl 0711.62052
[9] J. Ye, Least squares linear discriminant analysis, in: International Conference on Machine Learning, 2007, pp. 1087-1093.
[10] P. Zhang, J. Peng, N. Riedel, Discriminant analysis: a least squares approximation view, in: Proceedings of the Computer Vision and Pattern Recognition Workshop on Learning, 2005, p. 46.
[11] Tikhonov, A. N.; Arsenin, V. Y., Solutions of Ill-posed Problems (1977), Wiley: Wiley Washington, DC
[12] Billings, S. A.; Lee, K. L., Nonlinear Fisher discriminant analysis using a minimum squared error cost function and the orthogonal least squares algorithm, Neural Networks, 15, 2, 263-270 (2002)
[13] Hastie, T. J.; Buja, A.; Tibshirani, R., Penalized discriminant analysis, Ann. Statist., 23, 1, 73-102 (1995) · Zbl 0821.62031
[14] Belkin, M.; Niyogi, P.; Sindhwani, V., Manifold regularization: a geometric framework for learning from labeled and unlabeled examples, J. Mach. Learning Res., 1, 1, 1-48 (2006) · Zbl 1222.68144
[15] Baudat, G.; Anouar, F., Generalized discriminant analysis using a kernel approach, Neural Comput., 12, 10, 2385-2404 (2000)
[16] S. Mika, G. Rätsch, B. Schölkopf, A. Smola, J. Weston, K.-R. Müller, Invariant feature extraction and classification in kernel spaces, in: Proceedings of the Neural Information Processing Systems, 1999, pp. 526-532.
[17] W. Du, K. Inoue, K. Urahama, Dimensionality reduction for semi-supervised face recognition, in: Proceedings of the Fuzzy Systems and Knowledge Discovery, 2005, pp. 1-10.
[18] H. Liu, X. Yuan, Q. Tang, R. Kustra, An efficient method to estimate labelled sample size for transductive LDA (QDA/MDA) based on Bayes risk, in: Proceedings of the European Conference on Machine Learning, 2004, pp. 274-285. · Zbl 1132.68574
[19] V. Roth, V. Steinhage, Nonlinear discriminant analysis using kernel functions, in: Neural Information Processing Systems, 1999, pp. 568-574.
[20] D. Zhang, Z.-H. Zhou, S. Chen, Semi-supervised dimensionality reduction, in: SIAM Conference on Data Mining (SDM), 2007.
[21] X. Yang, H. Fu, H. Zha, J. Barlow, Semi-supervised nonlinear dimensionality reduction, in: Proceedings of the International Conference on Machine Learning, 2006, pp. 1065-1072.
[22] D. Cai, X. He, J. Han, Semi-supervised discriminant analysis, in: Proceedings of the International Conference on Computer Vision, 2007.
[23] M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in: Neural Information Processing Systems, 2001, pp. 585-591.
[24] F.S. Samaria, A.C. Harter, Parameterisation of a stochastic model for human face identification, in: IEEE Workshop on Applications of Computer Vision, 1994, pp. 138-142.
[25] Georghiades, A.; Belhumeur, P.; Kriegman, D., From few to many: Illumination cone models for face recognition under variable lighting and pose, IEEE Trans. Pattern Anal. Mach. Intell., 23, 6, 643-660 (2001)
[26] Graham, D.; Allinson, N., Characterizing virtual eigensignatures for general purpose face recognition, Face Recognition: From Theory to Appl., 163, 446-456 (1998)
[27] C.L. Blake, C.J. Merz, UCI repository of machine learning databases, ⟨http://www.ics.uci.edu/~⟩