Zbl 1026.15004
Generalized inverses. Theory and applications. 2nd ed.
(English)
[B] CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. 15. New York, NY: Springer. xv, 420 p. EUR 74.95/net; sFr. 124.50; £ 57.50; \$ 69.95 (2003). ISBN 0-387-00293-6/hbk

In recent years the need has been felt in numerous areas of applied mathematics for some kind of partial inverse of a matrix that is singular or even rectangular. Generalized inverses of matrices were first noted by E. H. Moore (1920), who defined a unique inverse for every constant matrix, although generalized inverses of integral and differential operators had been mentioned in print earlier, by Fredholm (1903), Hilbert (1904) and others. A summary of Moore's work is given in the Appendix of the book. In 1955 Penrose showed that Moore's inverse, for a given matrix $A$, is the unique matrix $X$ satisfying the four equations
$$\align AXA &= A, \tag 1\\ XAX &= X, \tag 2\\ (AX)^{\ast} &= AX, \tag 3\\ (XA)^{\ast} &= XA, \tag 4 \endalign$$
where $\ast$ denotes the conjugate transpose. Because of this later rediscovery and its importance, this unique inverse is now called the Moore-Penrose inverse.

In the Introduction, the authors describe the transition from the familiar inverse of square nonsingular matrices to the generalized inverse of rectangular matrices. A historical note on the discovery of the generalized inverse, first for integral and differential operators (1903-1931) and then for constant matrices (1920-1955), is also given there.

Chapter 0 contains preliminary results from linear algebra that are used in later chapters, such as scalars, vectors, linear transformations and matrices, elementary operations and permutations, the Hermite normal form, the Jordan and Smith normal forms, etc. This chapter can be skipped on a first reading.

Chapter 1 introduces the $\{i,j,\dots,k\}$-inverse as the inverse which satisfies equations $(i), (j), \dots, (k)$ among equations (1)-(4).
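A minimal numerical check of equations (1)-(4), using NumPy's `pinv` (the example matrix is illustrative and not taken from the book):

```python
import numpy as np

# A rank-deficient rectangular matrix (second row = 2 * first row).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

X = np.linalg.pinv(A)  # Moore-Penrose inverse A^+

# The four Penrose equations (1)-(4):
eq1 = np.allclose(A @ X @ A, A)              # (1)  AXA  = A
eq2 = np.allclose(X @ A @ X, X)              # (2)  XAX  = X
eq3 = np.allclose((A @ X).conj().T, A @ X)   # (3)  (AX)* = AX
eq4 = np.allclose((X @ A).conj().T, X @ A)   # (4)  (XA)* = XA
```

Any matrix, square or rectangular, singular or not, passes this check, which reflects the existence and uniqueness asserted by Penrose's theorem.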
It then studies the existence and construction of various inverses: $\{1\}$-inverses (known as pseudoinverses or generalized inverses), $\{1,2\}$-inverses (semi-inverses or reciprocal inverses), $\{1,2,3\}$-inverses, $\{1,2,4\}$-inverses, and the $\{1,2,3,4\}$-inverse (the Moore-Penrose inverse, also called the general reciprocal or generalized inverse).

In Chapter 2, a characterization of various generalized inverses is given in terms of solutions of specific linear systems. Further results presented in this chapter are the following: a) generalized inverses with prescribed range are constructed, b) restricted generalized inverses are defined and used in the solution of ``constrained'' linear equations, c) the Bott-Duffin inverse is defined and used in the solution of electrical network problems, and d) applications of $\{1\}$- and $\{1,2\}$-inverses to interval linear programming and to integral solutions of linear equations, respectively, are given.

In Chapter 3, various generalized inverses are characterized and studied in terms of their minimization properties with respect to the class of ellipsoidal (or weighted Euclidean) norms and the more general class of essentially strictly convex norms. An extremal property of the Bott-Duffin inverse with application to electrical networks is also given.

Chapter 4 studies generalized inverses having some of the spectral properties, i.e. properties related to eigenvalues and eigenvectors, of the inverse of a nonsingular matrix. Only square matrices are considered, since only they have eigenvalues and eigenvectors. More specifically, the chapter deals with the inverse $X$ that satisfies $A^{k}XA = A^{k}$, $XAX = X$, $AX = XA$, where $k$ is the index of $A$. This inverse is called the Drazin inverse. The spectral properties of the Drazin inverse are established, and an important particular case, the group inverse, is also studied.
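In the simplest case $k = 1$ the Drazin inverse is the group inverse; for a normal (e.g. real symmetric) matrix the index is 1 and the group inverse coincides with the Moore-Penrose inverse, so the three defining conditions can be checked numerically (an illustration, not the book's own example):

```python
import numpy as np

# A real symmetric (hence normal) singular matrix: rank 2, index 1.
# For such a matrix the Drazin (= group) inverse equals pinv(A).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])

X = np.linalg.pinv(A)

# Drazin conditions with k = 1 (the group-inverse conditions):
c1 = np.allclose(A @ X @ A, A)   # A X A = A
c2 = np.allclose(X @ A @ X, X)   # X A X = X
c3 = np.allclose(A @ X, X @ A)   # A X = X A
```

For a non-normal square matrix of index $k > 1$ the Drazin inverse differs from $A^{\dag}$, and condition c3 would fail for `pinv`; that distinction is exactly what Chapter 4 develops.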
Finally, the quasi-commuting inverse and the strong spectral inverse are also defined.

In computing a generalized or ordinary inverse of a matrix, the difficulty of the problem may be reduced by partitioning the matrix into submatrices. Chapter 5 studies generalized inverses of partitioned matrices and their application to the solution of linear equations. Intersections of linear manifolds are also studied in order to obtain common solutions of pairs of linear equations and to invert matrices partitioned by rows or columns.

Chapter 6 studies the spectral theory of rectangular matrices. The authors approach the singular value decomposition (SVD) of rectangular matrices following {\it C. Eckart} and {\it G. Young} [Bull. Am. Math. Soc. 45, 118-121 (1939; Zbl 0020.19802)]. Several applications of the SVD are given, concerning: a) the Schmidt approximation theorem, which approximates the original matrix by matrices of lower rank, provided the error of approximation is acceptable, b) the polar decomposition theorem, c) the study of the principal angles between subspaces, d) the study of the behavior of the Moore-Penrose inverse of a perturbed matrix $A+E$ and its dependence on $A^{\dag}$ and on the ``error'' $E$, and e) Penrose's generalization of the classical spectral theorem for normal matrices. Finally, a generalization of the SVD due to {\it C. F. Van Loan} [SIAM J. Numer. Analysis 13, 76-83 (1976; Zbl 0338.65022)] is described, concerning the simultaneous diagonalization of two matrices with $n$ columns.

Chapter 7 presents computational methods for the unrestricted $\{1\}$- and $\{1,2\}$-inverses, $\{2\}$-inverses and the Moore-Penrose inverse.
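The SVD also yields the standard construction $A^{\dag} = V\Sigma^{\dag}U^{\ast}$, obtained by inverting only the nonzero singular values; a sketch (the tolerance rule below is a common convention, chosen here for illustration):

```python
import numpy as np

# Rank-deficient matrix: the second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Invert only singular values above a small tolerance,
# so that "numerically zero" singular values are discarded.
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.array([1.0 / sv if sv > tol else 0.0 for sv in s])

A_pinv = Vt.T @ np.diag(s_inv) @ U.T   # A^+ = V Sigma^+ U^*
```

The thresholding step is what makes the construction robust for singular or nearly singular matrices, and it is the same idea behind the perturbation analysis of $A^{\dag}$ mentioned in item d) above.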
Two iterative methods are given for the computation of the Moore-Penrose inverse: a) Greville's method, which is a finite iterative method, and b) an iterative method that produces sequences of matrices $\{X_{k}, k = 1, 2, \dots\}$ converging to the Moore-Penrose inverse $A^{\dag}$ as $k \to \infty$, under certain initial approximations.

Chapter 8 presents a selection of applications that illustrate the richness and potential of generalized inverses. The list of applications includes: a) the important operation of parallel sum, with application to electrical networks, b) the linear statistical model, c) Newton's method for the solution of nonlinear equations, without requiring nonsingularity of the Jacobian matrix, d) the solution of continuous-time autoregressive (AR) representations, e) the properties of the transition matrix of a finite Markov chain, and f) the solution of singular linear difference equations. Finally, the last two sections deal with the matrix volume and its applications to surface integrals and probability distributions.

Chapter 9 is a brief and biased introduction to generalized inverses of linear operators between Hilbert spaces, with special emphasis on the similarities to the finite-dimensional case. The results are applied to integral and differential operators. Integral and series representations of generalized inverses, as well as iterative methods for their computation, are given. Minimal properties of generalized inverses of operators between Hilbert spaces, analogous to the matrix case, are also studied.

The material new to this second edition (the first edition appeared in 1974; Zbl 0305.15001) comprises the preliminary chapter (Chapter 0), the chapter of applications (Chapter 8), an Appendix on the work of E. H. Moore, and new exercises and applications. Each chapter is accompanied by suggestions for further reading, and the bibliography contains 901 references.
This bibliography has also been posted by the authors on the Web page of the International Linear Algebra Society, http://www.math.technion.ac.il//iic/research.html, where it is updated from time to time. The book contains more than 450 exercises at various levels of difficulty, many of which are solved in detail. This feature makes it suitable either for reference and self-study or for use as a classroom text. It can be used profitably by graduate students or advanced undergraduate students; only elementary knowledge of linear algebra is assumed.
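As a concrete illustration of the second kind of iteration described for Chapter 7, one classical choice (a sketch; whether this is exactly the book's scheme is not asserted here) is the Newton-Schulz iteration $X_{k+1} = X_{k}(2I - AX_{k})$, started from $X_{0} = \alpha A^{\ast}$ with $0 < \alpha < 2/\sigma_{1}^{2}$, where $\sigma_{1}$ is the largest singular value of $A$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Initial approximation X_0 = alpha * A^T, 0 < alpha < 2 / sigma_1^2;
# this condition guarantees convergence of the iteration to A^+.
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm = sigma_1
X = alpha * A.T

# Newton-Schulz: X_{k+1} = X_k (2I - A X_k), quadratically convergent.
for _ in range(40):
    X = X @ (2.0 * np.eye(A.shape[0]) - A @ X)
```

The quadratic convergence means only a modest number of iterations is needed in practice; the 40 used here is a generous, safe bound for this small example.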
[Nicholas Karampetakis (Thessaloniki)]
MSC 2000:
*15A09 Matrix inversion
15-02 Research monographs (linear algebra)
15-03 Historical (linear algebra)
65F20 Overdetermined systems (numerical linear algebra)
15A06 Linear equations (linear algebra)
90C05 Linear programming
15-00 Reference works (linear algebra)
47A05 General theory of linear operators
62J05 Linear regression
65H10 Systems of nonlinear equations (numerical methods)
39A10 Difference equations
60J10 Markov chains with discrete parameter
65F10 Iterative methods for linear systems

Keywords: generalized inverse; pseudoinverse; Moore-Penrose inverse; reciprocal inverse; matrix functions; linear systems; constrained linear systems; Drazin inverse; spectral theory; singular value decomposition; factorization; textbook; historical note; Bott-Duffin inverse; interval linear programming; parallel sum; linear statistical model; Newton method; autoregressive representation; finite Markov chain; singular linear difference equations; linear operators; Hilbert spaces; iterative methods; bibliography; exercises

Citations: Zbl 0020.19802; Zbl 0305.15001; Zbl 0338.65022
