A nonsingular matrix \(\pmb{A}\) has a unique inverse \(\pmb{A}^{-1}\) such that
\[\pmb{A}\pmb{A}^{-1} = \pmb{A}^{-1}\pmb{A} = \pmb{I}\]
We can use the R function solve to find an inverse:
\[ \pmb{A} = \begin{pmatrix} 1 & 5 & -3 \\ -3 & 2 & 7 \\ 2 & 5 & 9 \\ \end{pmatrix} \]
A=rbind(c(1, 5, -3), c(-3, 2, 7), c(2, 5, 9))
A
## [,1] [,2] [,3]
## [1,] 1 5 -3
## [2,] -3 2 7
## [3,] 2 5 9
Ainv=solve(A)
Ainv
## [,1] [,2] [,3]
## [1,] -0.06938776 -0.24489796 0.167346939
## [2,] 0.16734694 0.06122449 0.008163265
## [3,] -0.07755102 0.02040816 0.069387755
round(A%*%Ainv, 4)
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 1 0
## [3,] 0 0 1
Let
\[ \pmb{A} = \begin{pmatrix} a & b \\ c & d \\ \end{pmatrix} \] then
\[ \pmb{A}^{-1} = \frac1{ad-bc} \begin{pmatrix} d & -b \\ -c & a \\ \end{pmatrix} \]
proof: easy, just multiply!
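A quick R check of the formula on an arbitrary \(2\times 2\) example matrix (the matrix and code below are mine, not from the text):

A=matrix(c(1, 3, 2, 4), 2, 2) # a=1, b=2, c=3, d=4
matrix(c(A[2,2], -A[2,1], -A[1,2], A[1,1]), 2, 2)/(A[1,1]*A[2,2]-A[1,2]*A[2,1]) # the formula
## [,1] [,2]
## [1,] -2.0 1.0
## [2,] 1.5 -0.5
solve(A)
## [,1] [,2]
## [1,] -2.0 1.0
## [2,] 1.5 -0.5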
This is actually more general than it appears:
Say \(\pmb{A}\) is a symmetric and nonsingular matrix partitioned as
\[ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{A_{12}} \\ \pmb{A_{21}} & \pmb{A_{22}} \\ \end{pmatrix} \]
then if \(\pmb{B}=\pmb{A_{22}}-\pmb{A_{21}}\pmb{A^{-1}_{11}}\pmb{A_{12}}\) and provided all inverses exist we have
\[ \pmb{A^{-1}} = \begin{pmatrix} \pmb{A^{-1}_{11}}+\pmb{A^{-1}_{11}}\pmb{A_{12}}\pmb{B^{-1}}\pmb{A_{21}}\pmb{A^{-1}_{11}} & -\pmb{A^{-1}_{11}}\pmb{A_{12}}\pmb{B^{-1}} \\ -\pmb{B^{-1}}\pmb{A_{21}}\pmb{A^{-1}_{11}} & \pmb{B^{-1}} \\ \end{pmatrix} \]
proof: straightforward multiplication
In particular, say \(\pmb{A}\) is symmetric and partitioned as
\[ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{a}_{12} \\ \pmb{a}_{12}' & a_{22} \\ \end{pmatrix} \]
where \(a_{22}\) is a scalar, then \(b=a_{22}-\pmb{a}_{12}'\pmb{A}^{-1}_{11}\pmb{a}_{12}\) is a scalar and
\[ \pmb{A^{-1}} = \frac1{b}\begin{pmatrix} b\pmb{A^{-1}_{11}}+\pmb{A^{-1}_{11}}a_{12}a_{12}'\pmb{A^{-1}_{11}} & -\pmb{A^{-1}_{11}}a_{12} \\ -a_{12}'\pmb{A^{-1}_{11}} & 1 \\ \end{pmatrix} \] Say
\[ \pmb{A} = \begin{pmatrix} 1 & -2 & 4\\ -2 & 3 & 0 \\ 4 & 0 & 1\\ \end{pmatrix} \] and we use the partition
\[ \pmb{A}_{11}= \begin{pmatrix} 1 & -2 \\ -2 & 3 \end{pmatrix}\\ \pmb{a}_{12} = \begin{pmatrix} 4 \\ 0 \end{pmatrix}\\ \pmb{a}_{21} = \begin{pmatrix} 4 & 0 \end{pmatrix}\\ a_{22}=1 \] then
\[ \pmb{A}_{11}^{-1} = \begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix}\\ b=a_{22}-\pmb{a}_{12}'\pmb{A}^{-1}_{11}\pmb{a}_{12} = \\ 1-\begin{pmatrix} 4 & 0 \end{pmatrix} \begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix} \begin{pmatrix} 4 \\ 0 \end{pmatrix} = 1-(-48)=49\\ \text{ }\\ b\pmb{A^{-1}_{11}}+\pmb{A^{-1}_{11}}\pmb{a}_{12}\pmb{a}_{12}'\pmb{A^{-1}_{11}} = \\ 49\begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix}+\begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix} \begin{pmatrix} 4 \\ 0 \end{pmatrix} \begin{pmatrix} 4 & 0 \end{pmatrix} \begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix}=\\ 49\begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix}+\begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix} \begin{pmatrix} 16 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} -3 & -2 \\ -2 & -1 \\ \end{pmatrix}=\\ \text{ }\\ \begin{pmatrix} -3 & -2 \\ -2 & 15 \\ \end{pmatrix} \]
also
\[ -\pmb{A}^{-1}_{11}a_{12} = \begin{pmatrix} 12 \\ 8 \end{pmatrix}\\ -a_{12}'\pmb{A}^{-1}_{11} = \begin{pmatrix} 12 & 8 \end{pmatrix}\\ \]
and finally
\[ \pmb{A}^{-1} = \frac1{49}\begin{pmatrix} -3 & -2 & 12\\ -2 & 15 & 8\\ 12 & 8 & 1 \\ \end{pmatrix} \]
R check:
solve(matrix(c(1,-2,4,-2,3,0,4,0,1), 3, 3))*49
## [,1] [,2] [,3]
## [1,] -3 -2 12
## [2,] -2 15 8
## [3,] 12 8 1
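We can also verify the general partitioned-inverse formula numerically on this matrix; here is a minimal sketch (the block extraction and the name B follow the theorem above, but the code itself is mine):

A=rbind(c(1, -2, 4), c(-2, 3, 0), c(4, 0, 1))
A11=A[1:2, 1:2]; A12=A[1:2, 3, drop=FALSE]
A21=A[3, 1:2, drop=FALSE]; A22=A[3, 3, drop=FALSE]
B=A22-A21%*%solve(A11)%*%A12
Ainv=rbind(
  cbind(solve(A11)+solve(A11)%*%A12%*%solve(B)%*%A21%*%solve(A11), -solve(A11)%*%A12%*%solve(B)),
  cbind(-solve(B)%*%A21%*%solve(A11), solve(B)))
all.equal(Ainv, solve(A))
## [1] TRUE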
Say \(\pmb{A}\) is block-diagonal, that is
\[ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{O} \\ \pmb{O} & \pmb{A_{22}} \\ \end{pmatrix} \]
then
\[ \pmb{A^{-1}} = \begin{pmatrix} \pmb{A^{-1}_{11}} & \pmb{O} \\ \pmb{O} & \pmb{A^{-1}_{22}} \\ \end{pmatrix} \]
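A quick numerical check with two arbitrary \(2\times 2\) blocks (the example blocks are mine; O is the \(2\times 2\) zero matrix):

A11=matrix(c(1, 5, 5, 2), 2, 2)
A22=matrix(c(3, 1, 1, 4), 2, 2)
O=matrix(0, 2, 2)
A=rbind(cbind(A11, O), cbind(O, A22))
all.equal(solve(A), rbind(cbind(solve(A11), O), cbind(O, solve(A22))))
## [1] TRUE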
Let \(\pmb{A}\) be a symmetric matrix and \(\pmb{y}\) a vector. If
\(\pmb{y'Ay}>0\) for all \(\pmb{y}\ne 0\), \(\pmb{A}\) is called positive definite.
If \(\pmb{y'Ay}\ge0\) for all \(\pmb{y}\ne 0\), \(\pmb{A}\) is called positive semi-definite.
\[ \begin{aligned} &\pmb{A} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \\ \end{pmatrix}\\ &\text{ }\\ &\pmb{y'Ay} = (y_1\text{ }y_2) \begin{pmatrix} y_1 - y_2 \\ -y_1 +y_2 \\ \end{pmatrix} =\\ &\text{ }\\ &y_1(y_1 - y_2)+y_2(-y_1 +y_2) = \\ &y_1^2-y_1y_2-y_1y_2+y_2^2 = \\ &y_1^2-2y_1y_2+y_2^2 = \\ &(y_1-y_2)^2\ge 0 \end{aligned} \]
and so \(\pmb{A}\) is positive semi-definite.
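We can see this numerically as well: with the (arbitrarily chosen) nonzero vector \(\pmb{y}=(1,1)'\), which has \(y_1=y_2\), the quadratic form is 0, so \(\pmb{A}\) is not positive definite:

A=matrix(c(1, -1, -1, 1), 2, 2)
y=c(1, 1)
t(y)%*%A%*%y
## [,1]
## [1,] 0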
If \(\pmb{A}\) is positive definite, all its diagonal elements are positive: \(a_{ii}>0\).
proof
Let \(\pmb{y} = \begin{pmatrix} 0& ... & 0 & 1 & 0 &... & 0\end{pmatrix}'\) with the 1 in the ith position, then \(\pmb{y'Ay}=a_{ii}>0\)
Let \(\pmb{P}\) be a nonsingular matrix, then if \(\pmb{A}\) is positive (semi)-definite, so is \(\pmb{P'AP}\)
proof
\[\pmb{y'P'APy} = \pmb{(Py)'A(Py)}\]
and if \(\pmb{P}\) is nonsingular, \(\pmb{Py}=\pmb{0}\) iff \(\pmb{y}=\pmb{0}\).
If \(\pmb{A}\) is positive-definite, then \(\pmb{A}^{-1}\) is positive-definite.
proof omitted
Let \(\pmb{A}\) be an \(n\times p\) matrix with \(p<n\). If \(\pmb{A}\) has rank \(p\), then \(\pmb{A'A}\) is positive definite; if \(\pmb{A}\) has rank less than \(p\), then \(\pmb{A'A}\) is positive semi-definite.
proof omitted
It can be quite difficult to determine whether a matrix is positive definite or not. One way is as follows: the Cholesky decomposition of a matrix \(\pmb{A}\) is given by \(\pmb{A=LL'}\), where \(\pmb{L}\) is a lower triangular matrix. Now it turns out that a symmetric matrix has a Cholesky decomposition with strictly positive diagonal elements in \(\pmb{L}\) if and only if it is positive-definite. The R function chol returns the upper triangular factor \(\pmb{L}'\).
A=matrix(c(1,-1,-1,2),2,2)
A
## [,1] [,2]
## [1,] 1 -1
## [2,] -1 2
chol(A)
## [,1] [,2]
## [1,] 1 -1
## [2,] 0 1
t(chol(A))%*%chol(A)
## [,1] [,2]
## [1,] 1 -1
## [2,] -1 2
and so \(\pmb{A}\) is positive-definite.
A=matrix(c(1,-2,-2,1),2,2)
A
## [,1] [,2]
## [1,] 1 -2
## [2,] -2 1
chol(A)
## Error in chol.default(A): the leading minor of order 2 is not positive definite
and so \(\pmb{A}\) is not positive-definite.
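Another way to check, not used above, is via the eigenvalues: a symmetric matrix is positive definite if and only if all its eigenvalues are positive. For this matrix one eigenvalue is negative:

round(eigen(matrix(c(1, -2, -2, 1), 2, 2))$values, 4)
## [1] 3 -1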
We want to solve the system of linear equations
\[ \begin{aligned} &2y_1+3y_2+y_3 = 1\\ &y_1+2y_2+2y_3 = 2\\ &3y_1+3y_2+y_3 = 3\\ \end{aligned} \]
this can be done by solving the matrix equation \(\pmb{Ay}=\pmb{c}\) where
\[ \pmb{A} = \begin{pmatrix} 2 & 3 & 1\\ 1 & 2 & 2\\ 3 & 3 & 1 \\ \end{pmatrix} \qquad \pmb{c} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ \end{pmatrix} \qquad \pmb{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \end{pmatrix} \]
and a solution is given by \(\pmb{y}=\pmb{A^{-1}c}\). So
A=rbind(c(2, 3, 1), c(1, 2, 2), c(3, 3, 1) )
cc=cbind(c(1, 2, 3))
Ainv=round(solve(A), 5)
Ainv
## [,1] [,2] [,3]
## [1,] -1.00 0.00 1.00
## [2,] 1.25 -0.25 -0.75
## [3,] -0.75 0.75 0.25
c(Ainv %*% cc)
## [1] 2.0 -1.5 1.5
or directly
c(solve(A, cc))
## [1] 2.0 -1.5 1.5
A generalized inverse of an \(n\times p\) matrix \(\pmb{A}\) is any matrix \(\pmb{A^{-}}\) such that
\[\pmb{A}\pmb{A^{-}}\pmb{A}=\pmb{A}\]
Generalized inverses are not unique, except when \(\pmb{A}\) is nonsingular, in which case \(\pmb{A^{-}}=\pmb{A^{-1}}\). Every matrix has a generalized inverse.
Say
\[ \pmb{A} = \begin{pmatrix} 1 \\ 2 \\ 3 \\ \end{pmatrix} \]
then \(\pmb{A^{-}}= (1, 0, 0)\) is a generalized inverse because
\[ \begin{aligned} &\pmb{A}\pmb{A}^{-}\pmb{A} = \\ &\begin{pmatrix} 1 \\ 2 \\ 3 \\ \end{pmatrix} (1, 0, 0) \begin{pmatrix} 1 \\ 2 \\ 3 \\ \end{pmatrix}=\\ &\begin{pmatrix} 1 \\ 2 \\ 3 \\ \end{pmatrix} 1=\pmb{A} \end{aligned} \]
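Let's check in R (Am is just my name for the generalized inverse):

A=cbind(c(1, 2, 3))
Am=rbind(c(1, 0, 0))
A%*%Am%*%A
## [,1]
## [1,] 1
## [2,] 2
## [3,] 3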
Say \(\pmb{A}\) is an \(n\times p\) matrix of rank \(r\) that can be partitioned as
\[ \pmb{A} = \begin{pmatrix} \pmb{A}_{11} & \pmb{A}_{12} \\ \pmb{A}_{21} & \pmb{A}_{22} \end{pmatrix} \] where \(\pmb{A}_{11}\) is an \(r\times r\) matrix of rank r. Then a generalized inverse is given by
\[ \pmb{A}^{-} = \begin{pmatrix} \pmb{A}_{11}^{-1} & \pmb{0} \\ \pmb{0} & \pmb{0} \end{pmatrix} \] proof omitted
If a system of equations \(\pmb{Ax}=\pmb{c}\) is consistent (that is, has a solution), all possible solutions can be found as follows: find \(\pmb{A^{-}}\); then every solution is of the form
\[\pmb{A}^{-}\pmb{c}+(\pmb{I}-\pmb{A}^{-}\pmb{A})\pmb{h}\]
for any arbitrary vector \(\pmb{h}\).
proof omitted
We want to solve the system
\[ \begin{aligned} &2y_1+3y_2-y_3 = 1\\ &y_1+2y_2+2y_3 = 2\\ \end{aligned} \]
to find a generalized inverse we find the inverse of the matrix
\[ \begin{pmatrix} 2 & 3\\ 1 & 2\\ \end{pmatrix} \]
which is
\[ \begin{pmatrix} 2 & -3\\ -1 & 2\\ \end{pmatrix} \]
and so a generalized inverse is given by
\[ \pmb{A^{-}} = \begin{pmatrix} 2 & -3\\ -1 & 2\\ 0 & 0 \\ \end{pmatrix} \]
Let’s check:
A=rbind(c(2, 3, -1), c(1, 2, 2))
A
## [,1] [,2] [,3]
## [1,] 2 3 -1
## [2,] 1 2 2
y= cbind(c(2, -1, 0), c(-3, 2, 0))
y
## [,1] [,2]
## [1,] 2 -3
## [2,] -1 2
## [3,] 0 0
A %*% y %*% A
## [,1] [,2] [,3]
## [1,] 2 3 -1
## [2,] 1 2 2
Now all the solutions are given by
\[ \begin{aligned} &\pmb{A^{-}}\pmb{c}+(\pmb{I}-\pmb{A^{-}}\pmb{A})\pmb{h} = \\ &\begin{pmatrix} 2 & -3\\ -1 & 2\\ 0 & 0 \\ \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ \end{pmatrix} +\left( \begin{pmatrix} 1 & 0 &0\\ 0 & 1 &0\\ 0 & 0 & 1 \end{pmatrix}- \begin{pmatrix} 2 & -3\\ -1 & 2\\ 0 & 0 \\ \end{pmatrix} \begin{pmatrix} 2 & 3 & -1\\ 1 & 2 & 2\\ \end{pmatrix} \right) \begin{pmatrix} h_1\\ h_2 \\ h_3\\ \end{pmatrix}\\ &\begin{pmatrix} -4\\ 3\\ 0\\ \end{pmatrix} +\left( \begin{pmatrix} 1 & 0 &0\\ 0 & 1 &0\\ 0 & 0 & 1 \end{pmatrix}- \begin{pmatrix} 1 & 0 &-8\\ 0 & 1 & 5\\ 0 & 0 & 0\\ \end{pmatrix} \right) \begin{pmatrix} h_1\\ h_2\\ h_3 \end{pmatrix} = \\ &\begin{pmatrix} -4\\ 3\\ 0\\ \end{pmatrix}+ \begin{pmatrix} 0 & 0 &8\\ 0 & 0 & -5\\ 0 & 0 & 1\\ \end{pmatrix} \begin{pmatrix} h_1\\ h_2\\ h_3\\ \end{pmatrix} = \\ &\begin{pmatrix} -4+8h_3\\ 3-5h_3\\ h_3 \end{pmatrix} \end{aligned} \]
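As a quick check, take for example \(h_3=2\) (an arbitrary choice); with A and y as in the R check above, the resulting vector \((12, -7, 2)'\) does solve the system:

h=c(0, 0, 2)
sol=c(-4, 3, 0)+(diag(3)-y%*%A)%*%h
c(sol)
## [1] 12 -7 2
A%*%sol
## [,1]
## [1,] 1
## [2,] 2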
Here is a solution using R
library(MASS)
A
## [,1] [,2] [,3]
## [1,] 2 3 -1
## [2,] 1 2 2
gA=ginv(A)
gA
## [,1] [,2]
## [1,] 0.1333333 0.02222222
## [2,] 0.1666667 0.11111111
## [3,] -0.2333333 0.37777778
A%*%gA%*%A
## [,1] [,2] [,3]
## [1,] 2 3 -1
## [2,] 1 2 2
y=gA%*%cbind(c(1, 2))
A%*%y
## [,1]
## [1,] 1
## [2,] 2
but of course this yields only one solution.
The determinant of an \(n\times n\) matrix \(\pmb{A}\) is a scalar function of \(\pmb{A}\), denoted by either det(\(\pmb{A}\)) or \(\vert \pmb{A} \vert\), defined as the sum of all \(n!\) possible products of \(n\) elements such that each product contains exactly one element from every row and every column, each product carrying a plus or minus sign according to whether the permutation of its indices is even or odd.
Note this definition is famous for being hard to understand and impossible to apply!
The minor \(\pmb{A}_{ij}\) is the matrix \(\pmb{A}\) with the ith row and jth column removed; the corresponding cofactor is \((-1)^{i+j}\vert \pmb{A}_{ij}\vert\).
\[\vert \pmb{A}\vert = \sum_{i=1}^n (-1)^{i+k}a_{ik}\vert \pmb{A}_{ik}\vert = \sum_{j=1}^n (-1)^{k+j}a_{kj}\vert \pmb{A}_{kj}\vert\] for any k.
proof omitted
\[ \begin{aligned} &\begin{vmatrix} 4 & 3 & 2 \\ 0 & 2 & 3 \\ 2 & 1 & 1 \\ \end{vmatrix} =\\ &(-1)^{1+1}4 \begin{vmatrix} 2 & 3 \\ 1 & 1 \\ \end{vmatrix} + (-1)^{2+1}0 \begin{vmatrix} 3 & 2 \\ 1 & 1 \\ \end{vmatrix}+ (-1)^{3+1}2 \begin{vmatrix} 3 & 2 \\ 2 & 3 \\ \end{vmatrix} = \\ &(-1)^24(2-3)+(-1)^30(3-2)+(-1)^42(9-4) = -4+10 =6 \end{aligned} \]
or
A=rbind(c(4, 3, 2), c(0, 2, 3), c(2, 1, 1))
A
## [,1] [,2] [,3]
## [1,] 4 3 2
## [2,] 0 2 3
## [3,] 2 1 1
det(A)
## [1] 6
Some useful properties of the determinant:
i. \(\vert diag(a_1,..,a_n) \vert = \prod_{i=1}^n a_i\)
ii. the determinant of a triangular matrix is the product of the diagonal elements.
iii. \(\pmb{A}\) is a singular matrix iff det(\(\pmb{A}\))=0
iv. if \(\pmb{A}\) is positive definite, \(\vert \pmb{A} \vert>0\)
v. \(\vert \pmb{A'} \vert=\vert \pmb{A} \vert\)
vi. \(\vert \pmb{A^{-1}} \vert = 1/\vert \pmb{A} \vert\)
proof of ii: say \(\pmb{A}\) is upper triangular, then
\[\det(\pmb{A})=(-1)^{1+1}a_{11}\det(\pmb{A}_{11})=a_{11}\det(\pmb{A}_{11})=\dots=\prod a_{ii}\]
proofs of other parts omitted
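A quick R illustration of ii, using an arbitrary upper triangular matrix of my choosing:

A=rbind(c(2, 1, 5), c(0, 3, -1), c(0, 0, 4))
c(det(A), prod(diag(A)))
## [1] 24 24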
Say \(\pmb{A}\) is a square matrix partitioned as
\[ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{A_{12}} \\ \pmb{A_{21}} & \pmb{A_{22}} \\ \end{pmatrix} \]
and \(\pmb{A}_{11}\) and \(\pmb{A}_{22}\) are square and nonsingular, then
\[\vert \pmb{A} \vert= \vert \pmb{A}_{11}\vert\vert \pmb{A}_{22}-\pmb{A}_{21}\pmb{A}^{-1}_{11}\pmb{A}_{12} \vert=\\ \vert \pmb{A}_{22}\vert\vert \pmb{A}_{11}-\pmb{A}_{12}\pmb{A}^{-1}_{22}\pmb{A}_{21} \vert\] proof omitted
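We can check this numerically with the symmetric matrix from the partitioned-inverse example above (the block extraction code is mine):

A=rbind(c(1, -2, 4), c(-2, 3, 0), c(4, 0, 1))
A11=A[1:2, 1:2]; A12=A[1:2, 3, drop=FALSE]
A21=A[3, 1:2, drop=FALSE]; A22=A[3, 3, drop=FALSE]
c(det(A), det(A11)*det(A22-A21%*%solve(A11)%*%A12))
## [1] -49 -49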
Say \(\pmb{A}\) is a square matrix partitioned as
\[ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{A_{12}} \\ \pmb{O} & \pmb{A_{22}} \\ \end{pmatrix}\\ \text{or}\\ \pmb{A} = \begin{pmatrix} \pmb{A_{11}} & \pmb{O} \\ \pmb{A_{21}} & \pmb{A_{22}} \\ \end{pmatrix}\\ \]
and \(\pmb{A}_{11}\) and \(\pmb{A}_{22}\) are square and nonsingular, then
\[\vert \pmb{A} \vert= \vert \pmb{A}_{11}\vert\vert \pmb{A}_{22} \vert\] proof omitted
\[\vert \pmb{A}^n \vert = \vert \pmb{A} \vert^n\]
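For example, with the matrix from the cofactor expansion example above and \(n=2\):

A=rbind(c(4, 3, 2), c(0, 2, 3), c(2, 1, 1))
c(det(A%*%A), det(A)^2)
## [1] 36 36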
Two vectors \(\pmb{a}\) and \(\pmb{b}\) are said to be orthogonal if
\[\pmb{a}'\pmb{b} = \sum_{i=1}^n a_i b_i=0\]
Orthogonal vectors that also have length 1 are called orthonormal.
Geometrically two vectors are orthogonal if they are at right angles (perpendicular) to each other. Let \(\theta\) be the angle between the two vectors \(\pmb{a}\) and \(\pmb{b}\), as illustrated here:
then by the law of cosines we have
\[||\pmb{a}-\pmb{b}||^2=||\pmb{a}||^2+||\pmb{b}||^2-2||\pmb{a}||\cdot||\pmb{b}||\cos(\theta)\] and so
\[ \begin{aligned} &\cos \theta =\frac{||\pmb{a}||^2+||\pmb{b}||^2-||\pmb{a}-\pmb{b}||^2}{2||\pmb{a}||\cdot||\pmb{b}||}=\\ &\frac{\pmb{a}'\pmb{a}+\pmb{b}'\pmb{b}-(\pmb{a-b})'(\pmb{a-b})}{2\sqrt{(\pmb{a}'\pmb{a})(\pmb{b}'\pmb{b})}} =\\ &\frac{\pmb{a}'\pmb{a}+\pmb{b}'\pmb{b}-\left[\pmb{a'a-a'b-b'a+b'b}\right]}{2\sqrt{(\pmb{a}'\pmb{a})(\pmb{b}'\pmb{b})}} =\\ &\frac{\pmb{a}'\pmb{b}}{\sqrt{(\pmb{a}'\pmb{a})(\pmb{b}'\pmb{b})}} \\ \end{aligned} \]
so if \(\theta=90^{\circ}\), then \(\cos\theta=0\) and therefore \(\pmb{a}'\pmb{b}=0\).
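A small R illustration with two arbitrary vectors of my choosing: their inner product is 0, and the angle formula above returns \(90^{\circ}\):

a=c(1, 2)
b=c(2, -1)
sum(a*b)
## [1] 0
acos(sum(a*b)/sqrt(sum(a^2)*sum(b^2)))*180/pi
## [1] 90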
A set of vectors where all vectors are mutually orthogonal and normalized is called an orthonormal set. A matrix \(\pmb{C}\) where all columns form an orthonormal set is called an orthogonal matrix. We have \(\pmb{C}'\pmb{C}=\pmb{I}\).
Let \(\pmb{C}\) be an orthogonal matrix, then
i. \(\vert\pmb{C}\vert=\pm 1\)
ii. \(\vert\pmb{C'AC}\vert = \vert\pmb{A}\vert\) for any square matrix \(\pmb{A}\)
iii. \(-1\le c_{ij}\le 1\) for every element \(c_{ij}\) of \(\pmb{C}\)
proof
i. \(1=\vert\pmb{I}\vert = \vert\pmb{C'C}\vert = \vert\pmb{C}'\vert\vert\pmb{C}\vert= \vert\pmb{C}\vert\vert\pmb{C}\vert=\vert\pmb{C}\vert^2\)
ii. \(\vert\pmb{C'AC}\vert = \vert\pmb{C}'\vert\vert\pmb{A}\vert\vert\pmb{C}\vert= \vert\pmb{A}\vert\vert\pmb{C}\vert^2=\vert\pmb{A}\vert\)
iii. follows because the columns of \(\pmb{C}\) are normalized.
The trace of a matrix \(\pmb{A}\) is the sum of the diagonal elements of \(\pmb{A}\).
\[ \pmb{A} = \begin{pmatrix} 1 & 5 & -3 \\ -3 & 2 & 7 \\ 2 & 5 & 9 \\ \end{pmatrix}\\ \text{tr}(\pmb{A}) = 1+2+9=12 \]
Some useful properties of the trace:
i. \(\text{tr}(\pmb{A\pm B})=\text{tr}(\pmb{A})\pm\text{tr}(\pmb{B})\)
ii. \(\text{tr}(\pmb{AB})=\text{tr}(\pmb{BA})\)
iii. \(\text{tr}(\pmb{A'A})=\sum_{i=1}^n \pmb{a}_i'\pmb{a}_i\), where \(\pmb{a}_i\) is the ith column of \(\pmb{A}\)
iv. if \(\pmb{P}\) is any nonsingular matrix, then \(\text{tr}(\pmb{P^{-1}AP})=\text{tr}(\pmb{A})\)
v. if \(\pmb{C}\) is an orthogonal matrix, then \(\text{tr}(\pmb{C^{'}AC})=\text{tr}(\pmb{A})\)
proof omitted
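A quick numerical check of ii, with two arbitrary \(2\times 2\) matrices of my choosing:

A=matrix(c(1, -1, 2, 3), 2, 2)
B=matrix(c(0, 2, 1, 1), 2, 2)
c(sum(diag(A%*%B)), sum(diag(B%*%A)))
## [1] 6 6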
For any x the matrix
\[ \pmb{C} = \begin{pmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{pmatrix} \]
is orthogonal because the columns are orthogonal:
\[ \begin{pmatrix} \cos x \\ -\sin x \end{pmatrix}' \begin{pmatrix} \sin x \\ \cos x \end{pmatrix} = \cos x \sin x-\sin x \cos x =0 \]
and each column has length \(\sqrt{\cos^2 x+\sin^2 x}=1\).
Now
\[ \vert \pmb{C} \vert = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix}=\cos^2 x+\sin^2 x=1 \]
To illustrate the theorems let \(\pmb{A} =\begin{pmatrix} 1 & 2 \\ -1 & 3\end{pmatrix}\), then
Cx=function(x) matrix(c(cos(x), -sin(x), sin(x), cos(x)), 2, 2)
A=matrix(c(1,-1,2,3), 2, 2)
A
## [,1] [,2]
## [1,] 1 2
## [2,] -1 3
Cx(1)
## [,1] [,2]
## [1,] 0.5403023 0.8414710
## [2,] -0.8414710 0.5403023
Cx(-0.5)
## [,1] [,2]
## [1,] 0.8775826 -0.4794255
## [2,] 0.4794255 0.8775826
Theorem 4.2.28 ii:
det(A)
## [1] 5
det(t(Cx(1))%*%A%*%Cx(1))
## [1] 5
det(t(Cx(-0.5))%*%A%*%Cx(-0.5))
## [1] 5
Theorem 4.2.31 v:
sum(diag(A))
## [1] 4
sum(diag(t(Cx(1))%*%A%*%Cx(1)))
## [1] 4
sum(diag(t(Cx(-0.5))%*%A%*%Cx(-0.5)))
## [1] 4