Steinwart, Ingo. How to compare different loss functions and their risks. (English) Zbl 1127.68089
Constructive Approximation 26, No. 2, 225-287 (2007).

Summary: Many learning problems are described by a risk functional, which in turn is defined by a loss function, and a straightforward and widely known approach to learning such problems is to minimize a (modified) empirical version of this risk functional. In many cases, however, this approach suffers from substantial problems, such as computational requirements in classification or robustness concerns in regression. To resolve these issues, many successful learning algorithms instead try to minimize a (modified) empirical risk of a surrogate loss function. Of course, such a surrogate loss must be "reasonably related" to the original loss function, since otherwise this approach cannot work well. For classification, good surrogate loss functions have recently been identified, and the relationship between the excess classification risk and the excess risk of these surrogate loss functions has been described exactly. Beyond the classification problem, however, little is known so far about good surrogate loss functions. In this work we establish a general theory that provides powerful tools for comparing excess risks of different loss functions. We then apply this theory to several learning problems, including (cost-sensitive) classification, regression, density estimation, and density level detection.

Cited in 17 Documents

MSC:
68T05 Learning and adaptive systems in artificial intelligence
68T10 Pattern recognition, speech recognition
62G08 Nonparametric regression and quantile regression
62G07 Density estimation
62G20 Asymptotic properties of nonparametric inference
68Q32 Computational learning theory

Keywords: learning algorithms

Cite: I. Steinwart, Constr. Approx. 26, No. 2, 225--287 (2007; Zbl 1127.68089)
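As a minimal illustration of the surrogate-loss idea discussed in the summary (not taken from the paper itself): in binary classification the hinge loss is a standard convex surrogate for the 0-1 loss, and since it upper-bounds the 0-1 loss pointwise, the empirical surrogate risk always dominates the empirical classification risk. The toy data and loss implementations below are hypothetical, for illustration only.

```python
# Hedged sketch: hinge loss as a surrogate for the 0-1 classification loss.
# For a label y in {-1, +1} and a real-valued decision value f,
# max(0, 1 - y*f) >= 1 whenever y*f <= 0, so the hinge loss dominates
# the 0-1 loss pointwise, and hence so do the empirical risks.

def zero_one_loss(y, f):
    # The target classification loss: 1 on a misclassification, else 0.
    return 1.0 if y * f <= 0 else 0.0

def hinge_loss(y, f):
    # The convex surrogate loss used, e.g., by support vector machines.
    return max(0.0, 1.0 - y * f)

def empirical_risk(loss, data):
    # Average loss over a sample of (label, decision value) pairs.
    return sum(loss(y, f) for y, f in data) / len(data)

# Hypothetical sample of (label, decision value) pairs for some classifier.
sample = [(1, 0.8), (-1, -1.2), (1, -0.3), (-1, 0.5), (1, 2.0)]

r01 = empirical_risk(zero_one_loss, sample)      # 2 mistakes out of 5 -> 0.4
rhinge = empirical_risk(hinge_loss, sample)      # (0.2 + 0 + 1.3 + 1.5 + 0)/5 -> 0.6
assert rhinge >= r01  # the surrogate risk dominates the target risk
```

The paper's contribution is the converse direction: bounding the *excess* risk of the original loss by (a function of) the excess risk of the surrogate, which is what justifies minimizing the surrogate in the first place.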