Matrix decompositions are useful in numerical problems, in particular
for solving systems of linear equations. All XploRe examples for this section
can be found in matrix08.xpl.
Let A be a square matrix of dimension n x n. A scalar lambda
is an eigenvalue and a nonzero vector v is an
eigenvector of A if

  A v = lambda v.
The eigenvalues are the roots of the characteristic polynomial
of order n defined as

  det(A - lambda I_n) = 0,

where I_n denotes
the n-dimensional identity matrix. The determinant of the matrix A
is equal to the product of its n eigenvalues:

  det(A) = lambda_1 lambda_2 ... lambda_n.
The function
eigsm
calculates the eigenvectors and eigenvalues
of a given symmetric matrix.
We evaluate the eigenvalues and eigenvectors of nonsymmetric matrices with
the function
eiggn.
The function
eigsm
takes the matrix as its only argument and
returns the eigenvalues and eigenvectors. The returned
eigenvalues are not sorted.
Consider the following example:
  x = #(1, 2)~#(2, 3)
  y = eigsm(x)
  y

in which we define a matrix x and calculate its eigenvalues and eigenvectors. XploRe stores them in a variable of list type, y: the variable y.values contains the eigenvalues, while the variable y.vectors contains the corresponding eigenvectors:
  Contents of y.values
  [1,]  -0.23607
  [2,]   4.2361
  Contents of y.vectors
  [1,]   0.85065   0.52573
  [2,]  -0.52573   0.85065

We verify that the determinant of the matrix x is equal to the product of the eigenvalues of x:
  det(x) - y.values[1] * y.values[2]

gives

  Contents of _tmp
  [1,]   4.4409e-16

i.e. something numerically close to zero.
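For readers without XploRe, the same computation can be reproduced with NumPy (not part of the original tutorial; this is an illustrative sketch). Note that np.linalg.eigh, the routine specialized for symmetric matrices, returns its eigenvalues in ascending order, unlike eigsm:

```python
import numpy as np

# the symmetric matrix from the XploRe example
x = np.array([[1.0, 2.0],
              [2.0, 3.0]])

# eigh is for symmetric (Hermitian) matrices; eigenvalues come back sorted
values, vectors = np.linalg.eigh(x)
print(values)   # approximately [-0.23607, 4.23607]

# det(x) equals the product of the eigenvalues
print(np.linalg.det(x) - values.prod())   # numerically close to zero
```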
If the n eigenvalues of the matrix A are distinct, this matrix
can be decomposed as

  A = P Lambda P^(-1),

where Lambda is the
diagonal matrix whose diagonal elements are the eigenvalues,
and P is the matrix obtained by the
concatenation of the eigenvectors. For a symmetric matrix A, the transformation matrix P is
orthonormal, i.e.

  P^T P = P P^T = I,

so that P^(-1) = P^T and A = P Lambda P^T. This decomposition of the
matrix A is called the spectral decomposition.
We check that the matrix of concatenated eigenvectors is orthonormal:
  y.vectors'*y.vectors

yields a matrix numerically close to the identity matrix:

  Contents of _tmp
  [1,]            1  -1.0219e-17
  [2,]  -1.0219e-17            1
We verify the spectral decomposition of our example:
  z = y.vectors *diag(y.values) *y.vectors'
  z

which gives the original matrix x:

  Contents of z
  [1,]        1        2
  [2,]        2        3
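The same two checks, orthonormality of P and reconstruction of A, can be sketched in NumPy (an illustration outside the original tutorial):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
values, P = np.linalg.eigh(x)   # columns of P are the eigenvectors

# P is orthonormal: P^T P is numerically the identity
ortho_err = np.abs(P.T @ P - np.eye(2)).max()

# spectral decomposition: x = P diag(values) P^T
z = P @ np.diag(values) @ P.T
recon_err = np.abs(z - x).max()

print(ortho_err, recon_err)   # both practically zero
```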
If the matrix A can be decomposed as

  A = P Lambda P^T,

then

  A^k = P Lambda^k P^T.

In particular,

  A^(-1) = P Lambda^(-1) P^T.
Therefore, the inverse of x can be calculated as

  xinv = y.vectors*inv(diag(y.values))*y.vectors'
  xinv

which gives

  Contents of xinv
  [1,]       -3        2
  [2,]        2       -1

which is equal to the inverse of x:

  cinv = inv(x)
  cinv

  Contents of cinv
  [1,]       -3        2
  [2,]        2       -1
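The inverse via the spectral decomposition can likewise be sketched in NumPy (illustrative, not part of the XploRe tutorial); it is valid whenever no eigenvalue is zero:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
values, P = np.linalg.eigh(x)

# A^(-1) = P diag(1/lambda_i) P^T, assuming all eigenvalues are nonzero
xinv = P @ np.diag(1.0 / values) @ P.T
print(xinv)   # approximately [[-3, 2], [2, -1]], the inverse of x
```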
Let B be an n x p matrix, with n >= p,
and rank r, with r <= p.
The singular value decomposition of the matrix B
decomposes this matrix as B = U L V^T, where U is the n x r
orthonormal matrix of eigenvectors of B B^T, V is the p x r
orthonormal matrix of eigenvectors of B^T B associated with the
nonzero eigenvalues, and L is an r x r diagonal matrix containing
the singular values, i.e. the square roots of the nonzero
eigenvalues of B^T B.
The function
svd
computes the singular value decomposition of an
n x p matrix x. This function returns the matrices u and
v and the vector of singular values l
in the form of a list.
  x = #(1, 2, 3)~#(2, 3, 4)
  y = svd(x)
  y

XploRe returns the matrix U in the variable y.u, the diagonal elements of L in the variable y.l, and the matrix V in the variable y.v:
  Contents of y.u
  [1,]   0.84795    0.3381
  [2,]   0.17355   0.55065
  [3,]  -0.50086    0.7632
  Contents of y.l
  [1,]   0.37415
  [2,]    6.5468
  Contents of y.v
  [1,]  -0.82193   0.56959
  [2,]   0.56959   0.82193
We test that y.u *diag(y.l) *y.v' equals x with the commands

  xx = y.u *diag(y.l) *y.v'
  xx

This displays the matrix x:

  Contents of xx
  [1,]        1        2
  [2,]        2        3
  [3,]        3        4
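For comparison, the thin singular value decomposition can be computed with NumPy (an illustration outside the original tutorial). NumPy returns the singular values in descending order, whereas the XploRe output above lists them ascending:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [3.0, 4.0]])

# full_matrices=False gives the thin SVD, matching the n x r and p x r
# shapes in the text; vt is V^T, not V
u, l, vt = np.linalg.svd(x, full_matrices=False)
print(l)   # approximately [6.5468, 0.37415]

# reconstruct x from the factors
xx = u @ np.diag(l) @ vt
```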
|
The LU decomposition factors an n-dimensional square matrix A
as

  A = L U,

where L is a lower triangular matrix with unit diagonal and U is
an upper triangular matrix. In practice the rows of A may be
permuted for numerical stability (pivoting), so the function
ludecomp returns the triangular factors of a row-permuted version
of A together with the permutation index:

  x = #(1, 2)~#(2, 3)
  lu = ludecomp(x)
  lu

gives

  Contents of lu.l
  [1,]        1        0
  [2,]      0.5        1
  Contents of lu.u
  [1,]        2        3
  [2,]        0      0.5
  Contents of lu.index
  [1,]        2
  [2,]        1
We re-obtain the original matrix x by using the function
index, which takes as its argument the
row permutations in the LU decomposition. The instruction

  index(lu.l*lu.u,lu.index)

returns the matrix x:

  Contents of index
  [1,]        1        2
  [2,]        2        3
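The elimination behind an LU decomposition can be sketched in NumPy with a Doolittle scheme (an illustration, not XploRe's algorithm). This toy version skips the pivoting that ludecomp performs, which happens to be safe for this example matrix:

```python
import numpy as np

def lu_nopivot(a):
    """Doolittle LU without pivoting: a = l @ u, l unit lower triangular.
    A teaching sketch only; production code should pivot rows."""
    n = a.shape[0]
    l = np.eye(n)
    u = a.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            l[i, k] = u[i, k] / u[k, k]   # elimination multiplier
            u[i, :] -= l[i, k] * u[k, :]  # zero out column k below the pivot
    return l, u

x = np.array([[1.0, 2.0],
              [2.0, 3.0]])
l, u = lu_nopivot(x)
# l @ u reproduces x exactly; without pivoting the factors differ
# from lu.l and lu.u above, which describe the row-swapped matrix
```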
Let A be an n-dimensional square matrix, symmetric, i.e.
A_ij = A_ji, and positive definite, i.e.
v^T A v > 0 for all nonzero
n-dimensional vectors v. The Cholesky decomposition
decomposes the matrix A as A = L L^T, where L is a lower
triangular matrix.
The function
chold
computes the Cholesky decomposition. The following code extracts
the triangular factor L from its output:

  library("xplore") ; unit is in library xplore!
  x = #(2,2)~#(2,3)
  tmp = chold(x, rows(x))
  d = diag(xdiag(tmp))       ; diagonal matrix
  b = tmp-d+unit(rows(tmp))  ; lower triangular
  L = b'*sqrt(d)             ; Cholesky triangular
  LT = L'
  LT
  x~L*L'

gives:

  Contents of LT
  [1,]   1.4142   1.4142
  [2,]        0        1
  Contents of _tmp
  [1,]        2        2        2        2
  [2,]        2        3        2        3

We can check that the decomposition works by comparing x and L*L'. The instruction
  x - L*L'

displays

  Contents of _tmp
  [1,]  -4.4409e-16  -4.4409e-16
  [2,]  -4.4409e-16  -4.4409e-16

which means that the difference between both matrices is practically zero.
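In NumPy the lower triangular factor is returned directly (again an illustration outside the XploRe tutorial):

```python
import numpy as np

x = np.array([[2.0, 2.0],
              [2.0, 3.0]])

# numpy returns the lower triangular Cholesky factor L with x = L L^T
L = np.linalg.cholesky(x)
print(L)   # approximately [[1.4142, 0], [1.4142, 1]]

# the reconstruction error is practically zero
diff = np.abs(x - L @ L.T).max()
```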
The Cholesky decomposition is used in computational statistics for inverting Hessian matrices and the matrices X^T X in regression analysis.
MD*TECH Method and Data Technologies
http://www.mdtech.de  mdtech@mdtech.de