
Class label versus sample label-based CCA. (English) Zbl 1109.62053

Summary: By correlating samples with their corresponding class labels, canonical correlation analysis (CCA) can be used for supervised feature extraction and subsequent classification. Intuitively, different encodings of the class labels can result in different classification performances. We show that when the samples in each class share a common class label, as is usually the case, a unified formulation of CCA arises naturally; more importantly, this formulation gives insight into a shortcoming of existing CCA-based feature extraction for subsequent classification: the existing class-label encodings fail to reflect differences among the samples, such as between those in the central region of a class and those in the overlapping regions between classes, and consequently CCA becomes equivalent to traditional linear discriminant analysis (LDA) for some commonly used encodings. To reflect such differences between samples, we elaborately design an independent soft label for each sample of each class, rather than a common label for all samples of the same class, with the aim of improving the classification performance of CCA. Experiments show that this soft-label-based CCA is better than, or comparable to, the original CCA/LDA in terms of recognition performance.
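As a rough illustration of the setup described above (not the paper's exact algorithm), the following sketch runs regularized CCA between a data matrix X and a one-hot class-label matrix Y, then classifies by nearest class mean in the canonical subspace. The function name, the regularization term, and the nearest-mean classifier are our own assumptions; the paper's contribution is to replace the shared one-hot rows of Y with an independent soft label per sample.

```python
import numpy as np

def cca_directions(X, Y, n_comp, reg=1e-6):
    """Canonical directions for X against a label matrix Y (illustrative sketch).

    Solves the standard CCA generalized eigenproblem for the X-side
    directions, with a small ridge term for numerical stability.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    # Sxx^{-1} Sxy Syy^{-1} Syx w = rho^2 w
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_comp]].real

# Two well-separated Gaussian classes as toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (50, 2)),
               rng.normal([3.0, 3.0], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]            # shared one-hot ("hard") class labels

W = cca_directions(X, Y, n_comp=1)
Z = X @ W                   # project onto the canonical direction
means = np.array([Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)])
pred = np.argmin(np.abs(Z - means.T), axis=1)   # nearest class mean in 1-D
acc = (pred == y).mean()
```

With hard labels as above, each row of Y depends only on the class; a soft-label variant in the spirit of the paper would instead fill each row of Y with a per-sample vector (e.g. reflecting how close the sample lies to each class's center), so that samples in overlapping regions receive different labels than samples near a class center.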

MSC:

62H20 Measures of association (correlation, canonical correlation, etc.)
62H30 Classification and discrimination; cluster analysis (statistical aspects)

References:

[1] Duda, R. O.; Hart, P. E.; Stork, D. G., Pattern Classification (2001), John Wiley and Sons: John Wiley and Sons New York
[2] Hotelling, H., Relations between two sets of variates, Biometrika, 28, 321-377 (1936) · Zbl 0015.40705
[3] Hardoon, D. R.; Szedmak, S.; Shawe-Taylor, J., Canonical correlation analysis: An overview with application to learning methods, Neural Computation, 16, 2639-2664 (2004) · Zbl 1062.68134
[4] Marco Loog, Bram van Ginneken, R.P.W. Duin, Dimensionality reduction by canonical contextual correlation projections, in: Proceedings of the Eighth European Conference on Computer Vision, May 2004, pp. 562-573. · Zbl 1098.68810
[5] Y. Hel-Or, The canonical correlations of color images and their use for demosaicing, HP Labs Technical Report, HPL-2003-164 (R.1), 2004.
[6] Melzer, T.; Reiter, M.; Bischof, H., Appearance models based on kernel canonical correlation analysis, Pattern Recognition, 36, 1961-1971 (2003) · Zbl 1035.68105
[7] T.V. Gestel, J.A.K. Suykens, J. De Brabanter, B. De Moor, J. Vandewalle, Kernel canonical correlation analysis and least squares support vector machines, in: Proceedings of the International Conference on Artificial Neural Networks (ICANN 2001), 2001, pp. 384-389. · Zbl 1001.68717
[8] Horikawa, Y., Use of autocorrelation kernels in kernel canonical correlation analysis for texture classification, (ICONIP 2004, LNCS 3316 (2004), Springer-Verlag: Springer-Verlag Berlin), 1235-1240
[9] Baek, J.; Kim, M., Face recognition using partial least squares components, Pattern Recognition, 37, 1303-1306 (2004) · Zbl 1070.68579
[10] Barker, M.; Rayens, W., Partial least squares for discrimination, Journal of Chemometrics, 17, 166-173 (2003)
[11] B. Johansson, On classification: simultaneously reducing dimensionality and finding automatic representation using canonical correlation, Technical report LiTH-ISY-R-2375, ISSN 1400-3902, Linköping University, 2001.
[12] M. Borga, Canonical correlation: A tutorial. Available online from: <http://people.imt.liu.se/~magnus/cca/tutorial/>, 1999.
[13] Bishop, C. M., Neural Networks for Pattern Recognition (1995), Clarendon Press: Clarendon Press Oxford
[14] Cover, T. M.; Hart, P. E., Nearest neighbor pattern classification, IEEE Transactions on Information Theory, 13, 21-27 (1967) · Zbl 0154.44505
[15] Keller, J. M.; Gray, M. R.; Givens, J. A., A fuzzy \(k\)-nearest neighbor algorithm, IEEE Transactions on Systems, Man, and Cybernetics, 15, 580-585 (1985)
[16] Chen, S. C.; Liu, J.; Zhou, Z. H., Making FLDA applicable to face recognition with one sample per person, Pattern Recognition, 37, 1553-1555 (2004)
[17] Shuicheng Yan, Dong Xu, Benyu Zhang, Hong-Jiang Zhang, Graph embedding: A general framework for dimensionality reduction, in: Proceedings of IEEE CVPR'05, 2005.
[18] Cristianini, N.; Shawe-Taylor, J., An Introduction to Support Vector Machines and other Kernel-based Learning Methods (2000), Cambridge University Press
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.