Professor John Beachy, Watson 355, 753-6753, email: beachy@math.niu.edu
Office Hours: 1:00-1:50 M T F (in Watson 355), or by appointment
My faculty homepage | My personal homepage | Math 240 Homepage and Homework problems (all sections)
Assignments | Homework problems | Class notes | Syllabus | Lecture Schedule
Chapter Summaries | Handouts | Solutions
The departmental final exam
(listed under mass exams in the schedule of classes)
will be given on
Wednesday, December 11,
from 8:00 to 9:50 PM
The exam will be given in DU 276.
Previous final exams (in Acrobat Reader format): Fall 99 with Solutions | Fall 00 | Spring 95
There will be a review session on Tuesday, December 10,
from 2:00 to 4:00 p.m., in DU 310.
During exam week I do not have regularly scheduled office hours,
but I will be in my office, and you are welcome to stop in any time.
DATE              ASSIGNMENT
Friday, 12/6      Hmwk 8: 6.1 #8 a-d; 6.2 #10 b,d; 6.4 #15, 20.  Hmwk 9: optional review problems
Friday, 11/22     EXAM III (covering 4.1-4.3, 4.5, 5.1-5.5)
Friday, 11/15     QUIZ 9: Sections 5.1-5.4
Wednesday, 11/13  HMWK 7
Friday, 11/8      QUIZ 8: Sections 4.5, 5.1, 5.2
Wednesday, 11/6   HMWK 6
Friday, 11/1      QUIZ 7: Sections 4.1-4.3
Tuesday, 10/22    EXAM II (covering 2.3-2.8, 3.3-3.4)
Friday, 10/18     NO QUIZ.  HMWK 5: p 211 #14; p 224 #10
Friday, 10/11     QUIZ 6: Sections 2.7, 2.8
Friday, 10/4      QUIZ 5: Sections 2.5, 2.6
Wednesday, 10/2   RETEST (Exam I, same coverage)
Friday, 9/27      QUIZ 4: Section 2.4
Monday, 9/23      EXAM I (covering 1.1-1.6, 2.1-2.3)
Friday, 9/20      NO QUIZ.  HMWK 4; HMWK 3
Friday, 9/13      QUIZ 3: Sections 1.5, 1.6, 2.1.  HMWK 2: p 68 #10b; p 95 #19
Friday, 9/6       QUIZ 2: Section 1.4
Friday, 8/30      QUIZ 1: Sections 1.1, 1.2, 1.3.  HMWK 1: p. 7 #6, 20; p. 18 #7, 15, 31
SYLLABUS
COURSE:
LINEAR ALGEBRA AND APPLICATIONS (4)
Matrix algebra and solutions of systems of linear equations,
matrix inversion, determinants.
Vector spaces, linear dependence, basis and dimension, subspaces.
Inner products, Gram-Schmidt process.
Linear transformations, matrices of a linear transformation.
Eigenvalues and eigenvectors.
Quadratic forms.
Applications.
TEXT:
Elementary Linear Algebra,
7th Edition (2000),
by Kolman and Hill
GRADING: Semester grades will be based on 600 points:
The last day for undergraduates to withdraw from a full-session course is Friday, October 18.
HOUR TESTS: If you cannot take a test at the scheduled time, you must contact me before the time of the test.
HOMEWORK: You should work all of the recommended homework problems, since they will be important in class discussions. I will collect and grade selected problems from specific homework assignments.
QUIZZES: You should be prepared for a quiz each Friday. The quizzes will be designed to test that you are doing all of the recommended homework problems. I also reserve the right to give unannounced quizzes in any class period.
FINAL EXAMINATION: The departmental final exam (listed under mass exams in the schedule of classes) will be given on Wednesday, December 11, 2002, from 8:00 to 9:50 P.M.
GENERAL ADVICE: The WEB site Understanding Mathematics: a study guide has a good discussion about learning mathematics. There are additional WEB resources listed here.
SCHEDULE OF LECTURES
TENTATIVE SCHEDULE:
Week of         Monday       Tuesday      Wednesday    Friday
AUG 26-30       1.1          1.2          1.3          1.4
SEP 2-6         Holiday      1.4          1.5          1.5
SEP 9-13        1.6          1.6          2.1          2.2
SEP 16-20       2.2          2.3          2.3          Review
SEP 23-27       EXAM I       2.4          2.4          2.5
SEP 30-OCT 4    2.5          2.6          Retest       2.6
OCT 7-11        2.7          2.7          2.8          3.1
OCT 14-18       3.3          3.3          3.4          3.4
OCT 21-25       3.5          EXAM II      4.1          4.2
OCT 28-NOV 1    4.2          4.3          4.3          4.4
NOV 4-8         4.5          5.1          5.2          5.2
NOV 11-15       5.3          5.4          5.4          5.5
NOV 18-22       5.5          Review       Review       EXAM III
NOV 25-29       6.1          6.1          Holiday      Holiday
DEC 2-6         6.2          6.2          6.4          6.4
DEC 9-13        FINAL EXAM: Wednesday, December 11, 8:00-9:50 PM
Week of   Section   Page   Problems
8/26      1.1       7      1 3 4 6 7 8 9 15 20
          1.2       18     3 4 7 15 17 19 26 27 29 31
          1.3       27     8 9 10 11 23 30 32 34 36
          1.4       38     9 10 11 13 15 16 24
9/4       1.4              25 27 28 30 36 38 40
          1.5       57     1 5 6 7 9 10 11
          1.5              12 14 15 19 24 25 26
9/9       1.6       67     3 7 8 10 13 14 15
          1.6              19 20 21 25 29 31
          2.1              7 9 13 16 17 19 21
          2.2              3 4 5 7 8 9 13
9/16      2.2              15 16 17 18 19 20
          2.3              1 3 7 9 10 11 14
          2.3              15 19 21 22 23 25 26
9/23      Exam I
          2.4              3 4 5 6 7 8 11 12
          2.4              13 14 17 20 21 22 26
          2.5              1 2 4 11 13 16
9/30      2.5              18 24 28 29 35 37
Homework 3 | Homework 4 | Homework 5 | Homework 6 | Homework 7 | Homework 9 (optional review)
HANDOUTS
Handouts (in html format):
Course information (my section) | Syllabus (all sections) | List of suggested homework problems
Properties of the Real Numbers | Definition of a Vector Space | Another Proof of the Cauchy-Schwarz Inequality
Handouts (in pdf format):
Lecture on Similar Matrices | The matrix describing a reflection in a plane | Chapter Summaries (for review)
Homework 2 | Homework 3 | Homework 4 | Homework 5 | Homework 6 | Homework 7 | Homework 9 (optional review)
Review Sheets: Exam II | Chapter Summaries
Previous exams:
Fall, 1997: Exam I | Solutions | Exam II | Exam III
SOLUTIONS
These are in pdf format:
Quiz 7 | Homework 6 | Quiz 8 | Homework 7 | Quiz 9 | Exam 3
Quiz 4 | Quiz 5 | Quiz 6 | Homework 5 | Exam 2
Quiz 1 | Quiz 2 | Quiz 3 | Exam 1 | Exam 1 (Repeat)
This is a list of some of the properties of the set of real numbers that we need in order to work with vectors and matrices. Actually, we can work with matrices whose entries come from any set that satisfies these properties, such as the set of all rational numbers or the set of all complex numbers.
a + (b + c) = (a + b) + c and a · (b · c) = (a · b) · c.
a + b = b + a and a · b = b · a.
a · (b + c) = a · b + a · c and (a + b) · c = a · c + b · c.
a + 0 = a and 0 + a = a, and a · 1 = a and 1 · a = a.
For each real number a, the equations a + x = 0 and x + a = 0 have a solution x in the set of real numbers, called the additive inverse of a and denoted by -a.
For each nonzero real number a, the equations a · x = 1 and x · a = 1 have a solution x in the set of real numbers, called the multiplicative inverse of a and denoted by a^(-1).
Here are some additional properties of real numbers a, b, c, which can be proved from the properties listed above.
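As a quick illustration of the opening remark that other number systems, such as the rational numbers, satisfy the same properties, here is a small Python check (purely illustrative, and not part of the handout):

    from fractions import Fraction

    # The rationals satisfy the same arithmetic laws, so exact Fraction entries
    # can replace real entries when working with matrices.
    a, b, c = Fraction(2, 3), Fraction(-1, 4), Fraction(5, 6)
    print(a * (b + c) == a * b + a * c)   # True: the distributive law holds exactly
    print(a * a**-1 == 1)                 # True: every nonzero a has a multiplicative inverse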
Consider the system of linear equations:

  3x + 2y - 5z      =  3
 -2x -  y + 3z +  w =  0
  -x +  y      + 6w = 11
   x +  y - 2z +  w =  3

We can perform any of these operations on the system: (1) interchange two equations, (2) multiply an equation by a nonzero constant, and (3) add a multiple of one equation to another equation.
To use the Gauss-Jordan technique, sometimes called Gaussian elimination, choose an equation with a coefficient of 1 in the first column. (It may be necessary to first create one, by dividing each term of one of the equations by its coefficient of x, or by adding a multiple of one of the equations to another to get the 1.) This equation is called the pivot, and it should be moved to the top position. Use it to eliminate the x term in the other equations.
Repeat this procedure for each of the columns. The solution given below illustrates Gauss-Jordan elimination.
  3x + 2y - 5z      =  3
  -x +  y      + 6w = 11
 -2x -  y + 3z +  w =  0
   x +  y - 2z +  w =  3
                         ~>

   x +  y - 2z +  w =  3
  -x +  y      + 6w = 11
 -2x -  y + 3z +  w =  0
  3x + 2y - 5z      =  3
                         ~>

   x +  y - 2z +  w =  3
       2y - 2z + 7w = 14
        y -  z + 3w =  6
       -y +  z - 3w = -6
                         ~>

   x +  y - 2z +  w =  3
        y -  z + 3w =  6
       2y - 2z + 7w = 14
       -y +  z - 3w = -6
                         ~>

   x      -  z - 2w = -3
        y -  z + 3w =  6
                  w =  2
                         ~>

   x      -  z      =  1
        y -  z      =  0
                  w =  2
This gives us the final solution: x = z + 1, y = z, w = 2.
We do not have to write down the variables each time, provided we keep careful track of their positions. The solution using matrices to represent the system looks like this.
 [  3   2  -5   0    3 ]
 [ -1   1   0   6   11 ]
 [ -2  -1   3   1    0 ]
 [  1   1  -2   1    3 ]
                          ~>

 [  1   1  -2   1    3 ]
 [ -1   1   0   6   11 ]
 [ -2  -1   3   1    0 ]
 [  3   2  -5   0    3 ]
                          ~>

 [  1   1  -2   1    3 ]
 [  0   2  -2   7   14 ]
 [  0   1  -1   3    6 ]
 [  0  -1   1  -3   -6 ]
                          ~>

 [  1   1  -2   1    3 ]
 [  0   1  -1   3    6 ]
 [  0   2  -2   7   14 ]
 [  0  -1   1  -3   -6 ]
                          ~>

 [  1   0  -1  -2   -3 ]
 [  0   1  -1   3    6 ]
 [  0   0   0   1    2 ]
 [  0   0   0   0    0 ]
                          ~>

 [  1   0  -1   0    1 ]
 [  0   1  -1   0    0 ]
 [  0   0   0   1    2 ]
 [  0   0   0   0    0 ]
Finally, we put the variables back in, to get the solution: x -z = 1, y -z = 0, w = 2. This can be rewritten in the form x = z + 1, y = z, w = 2.
The answer shows that there are infinitely many solutions. Any value can be chosen for z, and then using the corresponding values for x, y, and w gives a solution.
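For readers who like to experiment, here is a short Python sketch of the same Gauss-Jordan process applied to an augmented matrix; the function name and layout are my own choices for illustration and are not part of the course materials. It uses exact rational arithmetic so the output matches the hand computation.

    from fractions import Fraction

    # Gauss-Jordan elimination on an augmented matrix, stored as a list of rows.
    def gauss_jordan(rows):
        m = [[Fraction(x) for x in row] for row in rows]   # exact copy of the input
        nrows, ncols = len(m), len(m[0])
        pivot_row = 0
        for col in range(ncols - 1):                       # last column holds the constants
            # find a row at or below pivot_row with a nonzero entry in this column
            pivot = next((r for r in range(pivot_row, nrows) if m[r][col] != 0), None)
            if pivot is None:
                continue                                   # no pivot in this column
            m[pivot_row], m[pivot] = m[pivot], m[pivot_row]       # move the pivot up
            p = m[pivot_row][col]
            m[pivot_row] = [x / p for x in m[pivot_row]]          # scale the pivot to 1
            for r in range(nrows):                         # clear the rest of the column
                if r != pivot_row:
                    factor = m[r][col]
                    m[r] = [x - factor * y for x, y in zip(m[r], m[pivot_row])]
            pivot_row += 1
        return m

    A = [[ 3,  2, -5, 0,  3],
         [-1,  1,  0, 6, 11],
         [-2, -1,  3, 1,  0],
         [ 1,  1, -2, 1,  3]]
    for row in gauss_jordan(A):
        print(*row)        # reproduces the last matrix above: x - z = 1, y - z = 0, w = 2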
The operation + (vector addition) must satisfy the following conditions:
(a) u + v is in V for all u and v in V.
(b) u + v = v + u for all u and v in V.
(c) u + (v + w) = (u + v) + w for all u, v, and w in V.
(d) There is a vector 0 in V such that u + 0 = u for every u in V.
(e) For each u in V there is a vector -u in V such that u + (-u) = 0.
Theorem 2.3 (p 103): Let V be a vector space, with operations + and ·, and let W be a nonempty subset of V. Then W is a subspace of V if and only if the following conditions hold.
(a) If u and v are in W, then u + v is in W.
(b) If c is a real number and u is in W, then c · u is in W.
Definition 2.6 (page 105). Let v1, v2, ..., vk be vectors in a vector space V. A vector v in V is called a linear combination of v1, v2, ..., vk if there are real numbers a1, a2, ..., ak with
v = a1 v1 + a2 v2 + ... + ak vk
Definition 2.7 (page 106). span { v1, v2, ..., vk } is the set of all linear combinations of v1, v2, ..., vk.
Theorem 2.4 (page 107). span { v1, v2, ..., vk } is a subspace.
Definition 2.8 (page 114). Vectors v1, v2, ..., vk span V if span { v1, v2, ..., vk } = V.
Definition 2.9 (page 116). The vectors v1, v2, ..., vk are linearly independent if the equation
x1 v1 + x2 v2 + ... + xk vk = 0
has only the trivial solution (all zeros).
I believe that the next theorem is the best way to think about linear dependence. I would probably use it as the definition. The definition that the author uses is the usual one, and it is the best way to check whether or not a given set of vectors is linearly independent.
Theorem 2.6 (page 121). A set of vectors is linearly dependent if and only if one of them is a linear combination of the others.
Definition 2.10 (page 125). A set of vectors is a basis for V if it is a linearly independent spanning set.
Theorem 2.7 (page 124). A set of vectors is a basis for V if and only if every vector in V can be expressed uniquely as a linear combination of the vectors in the set.
Theorem 2.8 (page 125). Any spanning set contains a basis.
Theorem 2.9 (page 129). If a vector space has a basis with n elements, then it cannot contain more than n linearly independent vectors.
Corollary 2.1 (page 129). Any two bases have the same number of elements.
Definition 2.11 (page 129). If a vector space V has a finite basis, then the number of vectors in the basis is called the dimension of V.
Corollaries 2.2 - 2.5 (page 130). dim (V) is the maximum number of linearly independent vectors in V, and it is also the minimum number of vectors in any spanning set.
Theorem 2.10 (page 131). Any linearly independent set can be expanded to a basis.
Theorem 2.11 (page 132). If dim (V) = n, and you have a set of n vectors, then to check that it forms a basis you only need to check one of the two conditions (spanning and linear independence).
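Theorem 2.11 gives a quick computational test. As an illustration (the vectors are my own example, not from the text), the following Python sketch checks that three vectors form a basis of R^3 by computing the rank of the matrix having them as columns:

    import numpy as np

    # Illustrative example: do these three vectors form a basis of R^3?
    v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 0]
    M = np.array([v1, v2, v3], dtype=float).T     # vectors as the columns of a 3x3 matrix
    print(np.linalg.matrix_rank(M))               # 3, so the vectors are linearly independent
    # Since dim(R^3) = 3, Theorem 2.11 says checking independence alone is enough:
    # the three vectors automatically span R^3 and therefore form a basis.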
Theorem 2.16 (page 160). Row equivalent matrices have the same row space.
Definition 2.15 (page 163). The row rank of a matrix is the dimension of its row space. The column rank of a matrix is the dimension of its column space.
Theorem 2.17 (page 165). For any matrix (of any size), the row rank and column rank are equal.
Theorem 2.18 (page 166). If A is any m by n matrix, then rank(A) + nullity(A) = n .
Theorem 2.20 (page 168). The equation Ax = b has a solution if and only if the augmented matrix has the same rank as A.
Summary of results on rank and nonsingularity (page 169)
The following conditions are equivalent for any n by n matrix A:
A is nonsingular; A is row equivalent to the identity matrix; the system Ax = 0 has only the trivial solution; the system Ax = b has a unique solution for every b; det(A) is nonzero; rank(A) = n; nullity(A) = 0; the rows (and the columns) of A are linearly independent.
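The rank statements above are easy to check numerically. Here is a small sketch (the matrix and right-hand sides are my own example) that computes rank(A) for a singular 3 by 3 matrix, reads off the nullity from Theorem 2.18, and uses Theorem 2.20 to decide whether Ax = b is consistent for two choices of b:

    import numpy as np

    # Illustrative example for Theorems 2.18 and 2.20.
    A = np.array([[1., 2., 3.],
                  [2., 4., 6.],
                  [1., 1., 1.]])
    print(np.linalg.matrix_rank(A))      # 2, so nullity(A) = 3 - 2 = 1 by Theorem 2.18

    b1 = np.array([1., 2., 0.])          # augmented matrix also has rank 2
    b2 = np.array([1., 3., 0.])          # augmented matrix has rank 3
    for b in (b1, b2):
        aug = np.column_stack([A, b])
        consistent = np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A)
        print(consistent)                # True for b1 (solvable), False for b2 (no solution)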
Procedure 1. (page 112). To test whether the vectors v1, v2, ..., vk are linearly independent or linearly dependent: write the equation x1 v1 + x2 v2 + ... + xk vk = 0 as a homogeneous linear system and solve it. The vectors are linearly independent if the only solution is the trivial one, and linearly dependent otherwise.
Procedure 2. (page 110). To check that the vectors v1, v2, ..., vk span the subspace W: take an arbitrary vector w in W, write the equation a1 v1 + a2 v2 + ... + ak vk = w as a linear system in the unknowns a1, ..., ak, and check that it has a solution for every such w.
Procedure 3. (page 128). To find a basis for the subspace span { v1, v2, ..., vk } by deleting vectors: form the matrix whose columns are v1, ..., vk and row reduce it; the original vectors standing over the columns that contain leading 1's form a basis.
Procedure 4. (page 145). To find the transition matrix PS<-T: express each vector of the basis T in terms of the basis S; the resulting coordinate vectors are the columns of PS<-T.
Procedure 5. (page 153). To find a basis for the solution space of the system A x = 0 : solve the system by Gauss-Jordan elimination, write the general solution in terms of the free variables, and take the vector multiplying each free variable; these vectors form a basis for the solution space.
Procedure 6. To find a simplified basis for the subspace span { v1, v2, ..., vk } : form the matrix whose rows are v1, ..., vk and reduce it to reduced row echelon form; the nonzero rows of the reduced matrix form the simplified basis (Procedures 3 and 6 are illustrated in the sketch following this list).
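Here is a concrete illustration of Procedures 3 and 6. The example vectors are my own, and the sketch uses SymPy's exact row reduction rather than the hand method in the text:

    from sympy import Matrix

    # Illustrative vectors in R^4; v3 = v1 + v2 and v4 = v2 - 2*v1, so the span has dimension 2.
    v1, v2 = Matrix([1, 2, 0, 1]), Matrix([2, 4, 1, 3])
    v3, v4 = v1 + v2, v2 - 2 * v1

    # Procedure 6: put the vectors in the rows and row reduce;
    # the nonzero rows of the reduced matrix form a simplified basis for the span.
    rows = Matrix([list(v) for v in (v1, v2, v3, v4)])
    print(rows.rref()[0])          # nonzero rows: (1, 2, 0, 1) and (0, 0, 1, 1)

    # Procedure 3: put the vectors in the columns and row reduce;
    # the original vectors over the pivot columns form a basis chosen from v1, ..., v4.
    cols = Matrix.hstack(v1, v2, v3, v4)
    print(cols.rref()[1])          # pivot columns (0, 1), i.e. v1 and v2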
The Cauchy-Schwarz inequality: (u,v)^2 ≤ ||u||^2 ||v||^2
Discussion of the ideas behind the proof: This is an alternate proof that I hope you will think is better motivated than the one in the text. The only inequality that shows up in the definition of an inner product space appears in the condition that
0 ≤ (x,x) for any vector x.
Since this seems to be the only possible tool, we need to rewrite the Cauchy-Schwarz inequality until it looks like the inner product of a vector with itself. The first thing to do is to rewrite the lengths in terms of the inner product, using the fact that (u,u) = ||u||^2.
(u,v)^2 ≤ ||u||^2 ||v||^2
(u,v)^2 ≤ (u,u) (v,v)
Next, we can subtract (u,v)^2 from both sides of the inequality.
0 ≤ (u,u) (v,v) - (u,v) (u,v)
Now divide through by (u,u), and to simplify things let c be the quotient (u,v) / (u,u). This gives us the next inequality.
0 ≤ (v,v) - c (u,v)
Now we can factor out v, using the properties of the inner product, to get an inequality that almost looks like the inner product of a vector with itself.
0 ≤ (v-c u , v)
We can complete the proof if we show that (v-c u , v) = (v-c u , v-c u). To show this, we only need to check that
(v-c u , u) = 0,
since (v-c u , v-c u) = (v-c u , v) + (v-c u , -c u) = (v-c u , v) - c (v-c u , u).
Expanding (v-c u , u) gives (v,u) -c (u,u), and this is equal to zero because c = (u,v)/(u,u). In summary, we finally have the inequality
0 ≤ (v-c u , v-c u) = (v-c u , v) = (v,v) - c (u,v).
As we have already shown, this inequality is the same as the Cauchy-Schwarz inequality
(u,v)^2 ≤ ||u||^2 ||v||^2.
End of discussion
Formal proof of the Cauchy-Schwarz inequality:
If u = 0, then the inequality certainly holds, so we can assume that u is nonzero. Then (u,u) is nonzero, and so we can define c = (u,v) / (u,u). It follows from the definition of an inner product that for the vector v-c u we have
0 ≤ (v-c u , v-c u).
Computing this inner product gives
(v-c u , v-c u) = (v , v-c u) - c (u , v-c u) = (v , v-c u)
because c = (u,v) / (u,u) and therefore (u , v-c u) = (u,v) - c (u,u) = 0.
Thus we have 0 ≤ (v-c u , v) = (v,v) - c (u,v),
and adding c (u,v) to both sides gives c (u,v) ≤ (v,v).
Finally, multiplying both sides of the inequality by (u,u) gives
(u,v)^2 ≤ (u,u) (v,v), which is the same as the Cauchy-Schwarz inequality.
End of proof
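A quick numerical sanity check of the key step is easy to run. The vectors below are my own example, and the ordinary dot product on R^3 plays the role of the inner product:

    import numpy as np

    # Illustrative check: v - c u is orthogonal to u, and the inequality holds.
    u = np.array([1.0, 2.0, 2.0])
    v = np.array([3.0, 0.0, 4.0])

    c = np.dot(u, v) / np.dot(u, u)          # the scalar c = (u,v)/(u,u) from the proof
    print(np.dot(v - c * u, u))              # 0 (up to roundoff): (v - c u, u) = 0
    print(np.dot(u, v) ** 2 <= np.dot(u, u) * np.dot(v, v))   # True: (u,v)^2 <= ||u||^2 ||v||^2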
Let L: V -> W be a linear transformation. Suppose that S and S' are bases for V, while T and T' are bases for W. We will use the notation MT<-S(L) for the matrix for L, relative to the basis S for V and T for W. If we use PS<-S' for the transition matrix which converts coordinates relative to S' into coordinates relative to S, then we have the following relationship:
MT'<-S'(L) = PT'<-T MT<-S(L) PS<-S'
Note that the above equation must be read from right to left because it involves composition of operations.
In case W=V, T=S, and T'=S', we have
MS'<-S' (L) = PS'<-S MS<-S(L) PS<-S'.
Since the transition from S to S' is the inverse of the transition from S' to S, the transition matrices are inverses of each other. With MS'<-S'(L) = B, MS<-S(L) = A, and PS<-S' = P, the equation is the one which defines similarity of matrices: B = P^(-1) A P.
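For a concrete check (my own example), take L(x, y) = (2x + y, x + 2y) on R^2, let S be the standard basis, and let S' = {(1,1), (1,-1)}. The transition matrix P has the S' vectors as its columns, and B = P^(-1) A P turns out to be diagonal:

    import numpy as np

    # Illustrative example: L(x, y) = (2x + y, x + 2y) on R^2.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # matrix of L relative to the standard basis S
    P = np.array([[1.0,  1.0],
                  [1.0, -1.0]])         # columns: S-coordinates of the basis S' = {(1,1), (1,-1)}
    B = np.linalg.inv(P) @ A @ P        # matrix of L relative to S'
    print(B)                            # [[3. 0.] [0. 1.]], so A and B are similar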
Top of the page | Math 240 Homepage | Department homepage | John Beachy's homepage