Geometria Lingotto.
LeLing12: More on determinants.
Contents:
• Laplace formulas.
• Cross product and generalisations.
• Rank and determinant: minors.
• The characteristic polynomial.
Recommended exercises: Geoling 14.
Laplace formulas
The Laplace expansion reduces the computation of an n × n determinant to
that of n determinants of size (n − 1) × (n − 1). The formula, expanded with respect to the
i-th row (where A = (a_{ij})), is:
det(A) = (−1)^{i+1} a_{i1} det(A_{i1}) + · · · + (−1)^{i+n} a_{in} det(A_{in})
where A_{ij} is the (n − 1) × (n − 1) matrix obtained by erasing row i and
column j from A. With respect to the j-th column the formula reads:
det(A) = (−1)^{j+1} a_{1j} det(A_{1j}) + · · · + (−1)^{j+n} a_{nj} det(A_{nj})
Example 0.1. We expand with respect to the first row:

\begin{vmatrix} 1 & 2 & 1 \\ 3 & 4 & 1 \\ 5 & 6 & 1 \end{vmatrix}
= 1 \begin{vmatrix} 4 & 1 \\ 6 & 1 \end{vmatrix}
- 2 \begin{vmatrix} 3 & 1 \\ 5 & 1 \end{vmatrix}
+ 1 \begin{vmatrix} 3 & 4 \\ 5 & 6 \end{vmatrix}
= (4 − 6) − 2(3 − 5) + (3 · 6 − 5 · 4) = 0
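For readers who want to experiment, here is a minimal Python sketch of the Laplace expansion along the first row (the helper names minor and det are ours, not part of the notes); it reproduces the value 0 of Example 0.1.

    def minor(A, i, j):
        """Matrix obtained from A (a list of rows) by deleting row i and column j (0-based)."""
        return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

    def det(A):
        """Determinant computed recursively via the Laplace expansion along the first row."""
        n = len(A)
        if n == 1:
            return A[0][0]
        return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

    print(det([[1, 2, 1], [3, 4, 1], [5, 6, 1]]))  # prints 0, as in Example 0.1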
The proof of the expansion along the first row is as follows. The determinant’s
linearity, proved in the previous set of notes, implies
 
det(A) = \sum_{j=1}^{n} a_{1j} \det\begin{pmatrix} E_j \\ A_2 \\ \vdots \\ A_n \end{pmatrix}
where E_j is the j-th vector of the canonical basis of row vectors, i.e. E_j is zero except at position j,
where there is a 1, and A_2, . . . , A_n are the remaining rows of A. Thus we have to calculate the determinants
\begin{vmatrix}
0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & a_{2(j-1)} & a_{2j} & a_{2(j+1)} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{n(j-1)} & a_{nj} & a_{n(j+1)} & \cdots & a_{nn}
\end{vmatrix}

The whole column j does not intervene in the computation (subtracting a_{ij} times the first row from each row i ≥ 2 leaves the determinant unchanged and clears those entries):

\begin{vmatrix}
0 & 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & a_{2(j-1)} & 0 & a_{2(j+1)} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{n(j-1)} & 0 & a_{n(j+1)} & \cdots & a_{nn}
\end{vmatrix}
Since swapping two columns changes the sign of the determinant (moving column j to the front takes j − 1 adjacent swaps),
 
\det\begin{pmatrix} E_j \\ A_2 \\ \vdots \\ A_n \end{pmatrix}
= (-1)^{1+j} \begin{vmatrix}
1 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & a_{21} & a_{22} & \cdots & a_{2(j-1)} & a_{2(j+1)} & \cdots & a_{2n} \\
\vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & a_{n1} & a_{n2} & \cdots & a_{n(j-1)} & a_{n(j+1)} & \cdots & a_{nn}
\end{vmatrix}
= (-1)^{1+j} \det(A_{1j})
Altogether, we have the expansion along the first row:
det(A) = a_{11} det(A_{11}) − a_{12} det(A_{12}) + · · · + (−1)^{1+n} a_{1n} det(A_{1n}).
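As a numerical sanity check of the key step above, the following Python sketch (using NumPy, an assumption of ours and not part of the notes) verifies on a random matrix that the determinant whose first row is E_j equals (−1)^{1+j} det(A_{1j}).

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    n = A.shape[0]

    for j in range(n):                 # here j is 0-based; the text's j is j + 1
        B = A.copy()
        B[0] = 0.0
        B[0, j] = 1.0                  # first row replaced by E_{j+1}
        A1j = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # erase row 1 and column j+1 of A
        assert np.isclose(np.linalg.det(B), (-1) ** j * np.linalg.det(A1j))
    print("identity verified for every column")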
Note that the determinant \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad − bc is a special case of Laplace's formula.

Cross product
Given a plane vector v = \begin{pmatrix} a \\ b \end{pmatrix} ∈ R^2, we wish to find an orthogonal vector w = \begin{pmatrix} x \\ y \end{pmatrix}.
The determinant \begin{vmatrix} x & y \\ a & b \end{vmatrix} vanishes when x = a and y = b, as confirmed by the formula

\begin{vmatrix} x & y \\ a & b \end{vmatrix} = xb − ya = \begin{pmatrix} x \\ y \end{pmatrix} \cdot \begin{pmatrix} b \\ -a \end{pmatrix}.

Hence w = \begin{pmatrix} b \\ -a \end{pmatrix} is orthogonal to v = \begin{pmatrix} a \\ b \end{pmatrix}, i.e. \begin{pmatrix} a \\ b \end{pmatrix} \cdot \begin{pmatrix} b \\ -a \end{pmatrix} = 0.
This argument generalises to space, and puts us in the position of solving the problem
of finding a vector w orthogonal to two given vectors v1, v2. In fact, let

v_1 = \begin{pmatrix} a_1 \\ b_1 \\ c_1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} a_2 \\ b_2 \\ c_2 \end{pmatrix}, \quad w = \begin{pmatrix} x \\ y \\ z \end{pmatrix}.

Consider the determinant:
\begin{vmatrix} x & y & z \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix}
= x \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix}
- y \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}
+ z \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}

where the equality is a consequence of the Laplace expansion along the first row. We may
interpret the latter equality as a dot product:
\begin{vmatrix} x & y & z \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix}
= \begin{pmatrix} x \\ y \\ z \end{pmatrix} \cdot
\begin{pmatrix} \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix} \\ -\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix} \\ \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix} \end{pmatrix},

so

w = \begin{pmatrix} \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix} \\ -\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix} \\ \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix} \end{pmatrix}

is orthogonal to both v1 and v2.

The vector w is called the cross product¹ of v1 and v2, and denoted v1 × v2.
Similarly, using an n × n determinant and Laplace we can find a vector w ∈ R^n perpendicular to n − 1 given vectors.
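The following Python sketch (NumPy assumed; the function names are ours) computes the cross product from the 2 × 2 cofactors above and, in the same spirit, a vector of R^n perpendicular to n − 1 given vectors from signed (n − 1) × (n − 1) minors.

    import numpy as np

    def cross_via_cofactors(v1, v2):
        """Cross product read off from the 2x2 minors of the rows (v1, v2)."""
        a1, b1, c1 = v1
        a2, b2, c2 = v2
        return np.array([b1 * c2 - c1 * b2,
                         -(a1 * c2 - c1 * a2),
                         a1 * b2 - b1 * a2])

    def perpendicular(vectors):
        """Vector of R^n perpendicular to the n-1 given vectors, via signed (n-1)x(n-1) minors."""
        M = np.array(vectors, dtype=float)                # shape (n-1, n)
        n = M.shape[1]
        return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                         for j in range(n)])

    v1, v2 = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 4.0])
    w = cross_via_cofactors(v1, v2)
    print(w, np.cross(v1, v2))                 # same vector: [ 5. -4.  1.]
    print(np.dot(w, v1), np.dot(w, v2))        # both zero
    print(perpendicular([v1, v2]))             # for n = 3 this is again the cross product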
The cross product seen geometrically
The cross product v1 × v2 is orthogonal to v1 and v2. But as the vectors orthogonal to
v1 and v2 form a straight line, to determine v1 × v2 it suffices to know its length and
orientation.
¹ Also known as vector product, or wedge product because of the symbol ∧.
Proposition 0.2. The length of v1 × v2 is |v1||v2| sin(θ), where 0 ≤ θ < π is the angle formed by v1 and v2.
→

−
−
v1 × →
v2
−
v1 ) is the volume 2 of the parallelepiped spanned by
Proof. Recall that det( →
→
−
v2
−
−
−
−
the three vectors →
v1 × →
v2 , →
v1 , →
v2 :
→

−
−
v1 × →
v2
−
−
−
−
−
−
−
v1 ) = |→
V olume(→
v1 × →
v2 , →
v1 , →
v2 ) = det( →
v1 × →
v2 |2 .
→
−
v2
Notice:

|v_1| |v_2| \sin(\theta) = Area(v_1, v_2) = \det\begin{pmatrix} \frac{v_1 \times v_2}{|v_1 \times v_2|} \\ v_1 \\ v_2 \end{pmatrix} = |v_1 \times v_2|.   □

Since \det\begin{pmatrix} \frac{v_1 \times v_2}{|v_1 \times v_2|} \\ v_1 \\ v_2 \end{pmatrix} is positive, we conclude that the
right-hand rule is suited to tell the orientation of v1 × v2.
Hence v1 × v2 is the unique vector orthogonal to v1, v2,
with length equal to the area Area(v1, v2) of the parallelogram spanned by v1 and v2, and whose orientation
is found with the right-hand rule.
The cross product v × w can also be interpreted via a 90° rotation. Namely, assuming that v ⊥ w,
then v × w is the vector obtained by rotating w by 90° counter-clockwise in the plane orthogonal to v
(if v is not a unit vector, the rotated vector is additionally scaled by the length |v|).
² Because the determinant stems from the need to compute areas and volumes formed by vectors.
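A quick numerical confirmation of the two facts used above, as a Python sketch (NumPy assumed): the length identity |v1 × v2| = |v1||v2| sin(θ) and the volume identity det with rows (v1 × v2, v1, v2) = |v1 × v2|².

    import numpy as np

    rng = np.random.default_rng(0)
    v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

    # Length: |v1 x v2| = |v1| |v2| sin(theta), theta being the angle between v1 and v2.
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.arccos(cos_theta)
    print(np.isclose(np.linalg.norm(np.cross(v1, v2)),
                     np.linalg.norm(v1) * np.linalg.norm(v2) * np.sin(theta)))  # True

    # Volume identity from the proof: det with rows (v1 x v2, v1, v2) equals |v1 x v2|^2.
    w = np.cross(v1, v2)
    print(np.isclose(np.linalg.det(np.array([w, v1, v2])), np.dot(w, w)))        # True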
Rank and determinant: minors
Recall that the determinant makes sense for square matrices exclusively. For a matrix that is not square
we can still compute certain quantities called minors, or mini-determinants. A minor
of order k of the matrix A is the determinant of a k × k matrix obtained from A by
selecting k rows and k columns. For instance, any entry aij is a minor of order 1.


Example 0.3. The determinant \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = −2 is a minor of order 2 in
A = \begin{pmatrix} 1 & 8 & 2 \\ 0 & 6 & -3 \\ 3 & -1 & 4 \end{pmatrix},
corresponding to choosing the first and third rows and columns. Also the determinant
\begin{vmatrix} 6 & -3 \\ -1 & 4 \end{vmatrix} = 21 is a mini-determinant for A. Here is a 3rd-order minor:
\begin{vmatrix} 1 & 8 & 2 \\ 0 & 6 & -3 \\ 3 & -1 & 4 \end{vmatrix} = −87.
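The minors of Example 0.3 can be extracted mechanically; here is a minimal Python sketch (NumPy assumed; minor_det is our name) reproducing the values −2, 21 and −87.

    import numpy as np

    A = np.array([[1, 8, 2], [0, 6, -3], [3, -1, 4]])

    def minor_det(A, rows, cols):
        """Determinant of the submatrix of A given by the chosen rows and columns (0-based)."""
        return np.linalg.det(A[np.ix_(rows, cols)])

    print(round(minor_det(A, [0, 2], [0, 2])))        # -2: first and third rows/columns
    print(round(minor_det(A, [1, 2], [1, 2])))        # 21
    print(round(minor_det(A, [0, 1, 2], [0, 1, 2])))  # -87, i.e. det(A)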
 
A column C = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} is non-zero iff at least one minor is non-zero: in fact, its minors
are precisely the numbers c1 , · · · , cn . This is a general fact and holds for arbitrary n × m matrices:
A is non-zero iff at least one mini-determinant is non-zero. Equivalently, a matrix has
rank zero iff all its mini-determinants are zero. This generalises, too, and is expressed by
Kronecker's theorem³.
Theorem 0.4. An n × m matrix A has rank ρ(A) = k if and only if there exists a
non-zero minor of order k and all minors of order > k are zero.
Proof. Let M_k be a mini-determinant of order k. If the k rows (columns) of A involved in
building M_k were linearly dependent, the corresponding k rows of the k × k submatrix would
be linearly dependent as well. That would imply M_k = 0; hence M_k = 0 whenever k > ρ(A).
To prove that if k = ρ(A) then there is a non-zero minor of order k, we may assume A has
exactly k columns, i.e. we restrict attention to k linearly independent (LI) columns. Since the rank
equals both the dimension of the column space and of the row space, there are k LI rows as well.
Hence the k × k minor built from them cannot be zero, for its k rows are LI. □
³ Leopold Kronecker (1823 - 1891), German mathematician.
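Theorem 0.4 can be tested directly, if very inefficiently, by searching for the largest non-vanishing minor. The sketch below (NumPy assumed; rank_via_minors is our name and the tolerance is an arbitrary choice) compares this with NumPy's rank.

    import numpy as np
    from itertools import combinations

    def rank_via_minors(A, tol=1e-9):
        """Rank as the largest order of a non-vanishing minor (Kronecker's criterion).
        Exponential cost; meant only to illustrate Theorem 0.4."""
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        for k in range(min(m, n), 0, -1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                        return k
        return 0

    A = np.array([[1, 8, 2], [0, 6, -3], [3, -1, 4]])
    print(rank_via_minors(A), np.linalg.matrix_rank(A))   # 3 3  (det(A) = -87 is non-zero)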
The choice of free parameters
There are linear systems where some variables are free. In the Gauß-Jordan elimination
method explained at the beginning of the course the free variables were the last ones:
if x1 , x2 , · · · , xn were the unknowns, then · · · , xn−1 , xn were the ones we tried to leave free,
so to speak. What if we want to leave the first ones x1 , x2 free? More generally, what if we
want a certain subset xi1 , xi2 , · · · , xil to be free? In practice the easiest way is simply to
change the order: we write the columns of xi1 , xi2 , · · · , xil at the end of the matrix
and proceed as before.
Example 0.5. To solve

x1 + 2x2 + 7x3 + 5x4 = 1
3x1 + 4x2 + 8x3 + 3x4 = 0

using x1 , x2 as parameters, we use the matrix

\left(\begin{array}{cccc|c} 7 & 5 & 1 & 2 & 1 \\ 8 & 3 & 3 & 4 & 0 \end{array}\right)

The first column corresponds to x3 , the second to x4 , the third to x1 and the last one
to x2 . Now, Gauß-Jordan:
\left(\begin{array}{cccc|c} 7 & 5 & 1 & 2 & 1 \\ 8 & 3 & 3 & 4 & 0 \end{array}\right) \to
\left(\begin{array}{cccc|c} 1 & 5/7 & 1/7 & 2/7 & 1/7 \\ 8 & 3 & 3 & 4 & 0 \end{array}\right) \to
\left(\begin{array}{cccc|c} 1 & 5/7 & 1/7 & 2/7 & 1/7 \\ 0 & -19/7 & 13/7 & 12/7 & -8/7 \end{array}\right) \to
\left(\begin{array}{cccc|c} 1 & 5/7 & 1/7 & 2/7 & 1/7 \\ 0 & 1 & -13/19 & -12/19 & 8/19 \end{array}\right) \to
\left(\begin{array}{cccc|c} 1 & 0 & 12/19 & 14/19 & -3/19 \\ 0 & 1 & -13/19 & -12/19 & 8/19 \end{array}\right)
The system is equivalent to

x3 + (12/19) x1 + (14/19) x2 = −3/19
x4 − (13/19) x1 − (12/19) x2 = 8/19

The solution, with free parameters x1 , x2 , is:

\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ -3/19 \\ 8/19 \end{pmatrix}
+ x_1 \begin{pmatrix} 1 \\ 0 \\ -12/19 \\ 13/19 \end{pmatrix}
+ x_2 \begin{pmatrix} 0 \\ 1 \\ -14/19 \\ 12/19 \end{pmatrix}
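The reduction above can be reproduced with exact arithmetic; the following Python sketch (using only the standard fractions module; no external libraries) performs the same Gauß-Jordan steps on the column-reordered augmented matrix.

    from fractions import Fraction

    # Augmented matrix of Example 0.5 with the columns reordered as x3, x4, x1, x2 | b.
    M = [[Fraction(7), Fraction(5), Fraction(1), Fraction(2), Fraction(1)],
         [Fraction(8), Fraction(3), Fraction(3), Fraction(4), Fraction(0)]]

    # Gauss-Jordan on the first two columns (the pivots of x3 and x4).
    for i in range(2):
        M[i] = [x / M[i][i] for x in M[i]]                            # scale the pivot row
        for r in range(2):
            if r != i:
                M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]  # eliminate the column

    print(M[0])   # values 1, 0, 12/19, 14/19, -3/19:  x3 = -3/19 - (12/19)x1 - (14/19)x2
    print(M[1])   # values 0, 1, -13/19, -12/19, 8/19: x4 =  8/19 + (13/19)x1 + (12/19)x2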
In the next example, instead, x1 , x2 cannot be taken as free parameters.
Example 0.6.

x1 = 1
3x1 + 4x2 + 8x3 + 3x4 = 0

Here x1 cannot be left free, since the first equation forces x1 = 1.
The question, then, is: when can we choose a set of unknowns xi1 , xi2 , · · · , xil
as free parameters? Here is the answer.
Theorem 0.7. Let (A|B) be a consistent system in the n unknowns x1 , x2 , · · · , xn .
The unknowns xi1 , xi2 , · · · , xil can be chosen as free parameters iff the rank of A equals
the rank of the matrix obtained by erasing the columns of the variables xi1 , xi2 , · · · , xil .
Moreover, this is possible iff there exists a non-zero mini-determinant of order l in the
matrix obtained erasing the columns relative to xi1 , xi2 , · · · , xil .
The proof is an easy corollary of the theory developed so far.
The previous theorem is a linear version of the Implicit function theorem of Calculus.
The latter allows one to define a (vector-valued) function by means of a system of equations that are not
necessarily linear.
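The criterion of Theorem 0.7 is easy to check by computer; the following Python sketch (NumPy assumed; can_be_free is our name, and columns are indexed from 0, so {0, 1} stands for x1, x2) applies it to Examples 0.5 and 0.6.

    import numpy as np

    def can_be_free(A, free_cols):
        """Theorem 0.7: the chosen unknowns can be left free iff erasing their
        columns does not lower the rank of the coefficient matrix."""
        A = np.asarray(A, dtype=float)
        kept = [j for j in range(A.shape[1]) if j not in free_cols]
        return np.linalg.matrix_rank(A[:, kept]) == np.linalg.matrix_rank(A)

    A5 = [[1, 2, 7, 5], [3, 4, 8, 3]]    # coefficients of Example 0.5
    A6 = [[1, 0, 0, 0], [3, 4, 8, 3]]    # coefficients of Example 0.6
    print(can_be_free(A5, {0, 1}))       # True:  x1, x2 can be taken as free parameters
    print(can_be_free(A6, {0, 1}))       # False: the first equation forces x1 = 1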
The characteristic polynomial
To a square matrix A one associates a very important polynomial, computed using determinants and called the characteristic polynomial:

χ_A(x) = det(A − x Id) = \begin{vmatrix}
a_{11} - x & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - x & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - x
\end{vmatrix}
· · · ann − x In other words, one subtracts x on the diagonal of A and calculates the determinant.
Example 0.8. Here is the characteristic polynomial of A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}:

χ_A(x) = \begin{vmatrix} 1 - x & 2 \\ 3 & 4 - x \end{vmatrix} = (1 − x)(4 − x) − 6 = x^2 − 5x − 2.

Substituting 0 for x we obtain det(A), so the constant term of the characteristic polynomial is the determinant of A.
Proposition 0.9. The characteristic polynomial of an n × n matrix has degree n.
Proof. Easy using the formula of Laplace. □
The characteristic polynomial of A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} is

χ_A(x) = \begin{vmatrix} a - x & b \\ c & d - x \end{vmatrix} = x^2 − (a + d)x + (ad − bc)
Note that the characteristic polynomial of the n × n zero matrix is
(−1)^n x^n, whereas for the n × n identity matrix we have (1 − x)^n.
An important result.
Theorem 0.10. Let A be a matrix and P an invertible matrix. The characteristic
polynomial of A coincides with that of P AP^{-1}.
Proof. By Binet's formula,

det(P(A − x Id)P^{-1}) = det(P) det(A − x Id) det(P^{-1}) = det(A − x Id) = χ_A(x),

but det(P(A − x Id)P^{-1}) = det(P AP^{-1} − x Id) = χ_{P AP^{-1}}(x). □
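A quick numerical check of Theorem 0.10 (Python with NumPy, our assumption): a random matrix and a random conjugate of it have the same characteristic polynomial, up to round-off.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    P = rng.standard_normal((4, 4))          # generically invertible
    B = P @ A @ np.linalg.inv(P)

    # Coefficients of det(x*Id - A) and det(x*Id - B) coincide up to round-off.
    print(np.allclose(np.poly(A), np.poly(B)))   # True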