SUMMARY OF MATERIAL FOR EXAM 2 / PRACTICE PROBLEMS

Subspaces of R^n: non-empty subsets of R^n closed under scalar multiplication and vector addition. A subspace must contain 0. The only subspaces of R^1 are {0} and R^1. The only subspaces of R^2 are {0}, lines through the origin, and all of R^2. The only subspaces of R^3 are {0}, lines through the origin, planes through the origin, and all of R^3.

The span of a set of vectors:
The span of the empty set is {0}.
The span of any other set of vectors consists of all linear combinations of those vectors.

Linear dependence and independence of vectors. Vectors are dependent if and only if they satisfy a nontrivial linear relation, i.e., some linear combination of them with not all coefficients zero equals 0.
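
For a quick numerical check (a sketch assuming numpy; the vectors are made up for illustration), vectors are dependent exactly when the matrix having them as columns has rank less than the number of vectors:

import numpy as np

# Three example vectors in R^3; the third is the sum of the first two, so they are dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))   # prints 2, which is less than 3, so the vectors are dependent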

The notion of basis. A basis for a subspace V of R^n is any largest possible set of independent vectors in the subspace, or, equivalently, any smallest possible set of vectors that spans the subspace. Every element of V can be written uniquely as a linear combination of elements of a basis for V.

Any two bases for V have the same number of elements. The dimension of the subspace V is the number of vectors in a basis for V, which does not depend on which basis one chooses.

If V is a subspace spanned by n vectors, then the dimension of V is at most n, and any n+1 or more vectors in V are dependent. If V and W are subspaces of R^n with V contained in W, then dim V <= dim W; if dim V = dim W, then V = W, and if V is a proper subset of W, then dim V < dim W.

The kernel of a linear transformation T with matrix A is the set of all vectors X in the domain of T such that T(X) = 0. It is a subspace of the domain of T. The kernel of T (or A) is also the set of solutions of the homogeneous linear equations AX = 0, and it is also the set of all linear relations on the columns of A. The columns of A are independent if and only if the kernel of A is the vector space {0}. The columns are independent if and only if the rank of A is equal to the number of columns.
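
For example (a numpy sketch with assumed example matrices), one can test "columns independent" as "rank equals number of columns":

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])        # 3 by 2 example, columns independent
print(np.linalg.matrix_rank(A) == A.shape[1])   # True: the kernel of A is {0}

B = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])   # third column = first column + second column
print(np.linalg.matrix_rank(B) == B.shape[1])   # False: B has a nonzero kernel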

The image of T (or A) is the set of all values taken on by T. It is the same as the set of vectors C such that AX = C has a solution. It is a subspace of the codomain (= target) of T. It is also the same as the set of all linear combinations of columns of A, which is also called the column space of A, i.e., column space A = im A.

The rank of a matrix A =
the dimension of the column space =
dimension of the image of A =
the dimension of the row space =
the number of leading 1's in the RREF =
the number of nonzero rows in the RREF =
the number of leading variables when one solves AX = 0.
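
A small sketch (assuming numpy and sympy; the matrix is an example) checking that several of these descriptions of rank agree:

import numpy as np
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [2, 4, 0],
               [3, 6, 1]])        # example: second column = 2 * first column
R, pivots = A.rref()              # RREF of A and the indices of its pivot columns
print(len(pivots))                                        # number of leading 1's in the RREF: 2
print(sum(1 for i in range(R.rows) if any(R.row(i))))     # number of nonzero rows of the RREF: 2
print(np.linalg.matrix_rank(np.array(A, dtype=float)))    # rank of A: 2
print(np.linalg.matrix_rank(np.array(A.T, dtype=float)))  # rank of A^T (dimension of the row space): 2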

For an n by n matrix A,
A is invertible if and only if
RREF is the identity if and only if
the rank of A is n if and only if
the columns of A span R^n if and only if
the columns of A are independent if and only if
the columns of A are a basis for R^n if and only if
A^T (the transpose) is invertible
(and then we get the same conditions on rows).
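
A short sketch (numpy and sympy assumed, with an example 3 by 3 matrix) checking a few of these equivalent conditions at once:

import numpy as np
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])        # an invertible 3 by 3 example
R, pivots = A.rref()
print(R == sp.eye(3))             # True: the RREF is the identity
print(len(pivots) == 3)           # True: the rank of A is n = 3
An = np.array(A, dtype=float)
print(np.allclose(An @ np.linalg.inv(An), np.eye(3)))   # True: A is invertible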

To get a basis for the column space of A, one can find the RREF of A: call that B. Take those columns of A that correspond to leading 1's in B (the PIVOT columns of A). These will give a basis for the column space of A. DO NOT TAKE COLUMNS FROM THE RREF. (The RREF often has a different column space: but the relations on the columns of the RREF are the same as the relations on the columns of A.)
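
A sympy sketch of this recipe (example matrix assumed): rref() returns both the RREF and the indices of the pivot columns, and one then takes those columns of the original A:

import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 1],
               [3, 6, 1, 2]])     # example: column 2 = 2*(column 1), column 4 = column 1 - column 3
B, pivots = A.rref()
print(pivots)                         # (0, 2): the pivot columns (0-indexed)
basis = [A.col(j) for j in pivots]    # take these columns of A itself, not of the RREF B
print(basis)                          # a basis for the column space of A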

To get a basis for the kernel of A, find the general solution of the system AX = 0, written as a set, described in terms of the free variables. The general solution can be written as a linear combination of constant vectors in which the coefficients are the free variables. These constant vectors are a basis for the kernel. The dimension of ker T = ker A is the same as the number of free variables when one solves AX = 0.
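
In sympy (a sketch, same example matrix as above), nullspace() produces exactly one basis vector of the kernel per free variable:

import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 1],
               [3, 6, 1, 2]])     # same example matrix as above
null_basis = A.nullspace()        # one basis vector of ker A per free variable
print(len(null_basis))            # 2: there are two free variables, so dim ker A = 2
for v in null_basis:
    print(A * v)                  # each product is the zero vector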

Number of free variables + number of leading variables
= number of variables

and this implies

dim ker A + dim im A = number of columns of A or

dim ker T + dim im T = dim (domain of T)
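
A one-matrix check of this count (sympy sketch, example matrix assumed):

import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 1],
               [3, 6, 1, 2]])     # example matrix
dim_ker = len(A.nullspace())      # number of free variables
dim_im = len(A.rref()[1])         # number of pivot columns = rank = dim im A
print(dim_ker + dim_im == A.cols) # True: 2 + 2 = 4 columns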

Two vectors are orthogonal if their dot product is zero. More generally, the cosine of the angle between two nonzero vectors is their dot product divided by the product of their lengths.
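
A minimal numpy sketch of these two facts (vectors made up for illustration):

import numpy as np

v = np.array([1.0, 1.0, 0.0])
w = np.array([1.0, 0.0, 1.0])
cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(cos_theta)                                      # approximately 0.5, so the angle is 60 degrees
print(np.array([1.0, 0.0]) @ np.array([0.0, 3.0]))    # 0.0: these two vectors are orthogonal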

A finite set of vectors in R^n is called orthonormal if each is perpendicular to all of the others and they all have length one. An orthonormal set is always independent.

If V has orthonormal basis w_1, ..., w_k then the (orthogonal) projection of v on V, proj_V v, is

(v . w_1)w_1 + (v . w_2)w_2 + ... + (v . w_k)w_k.

This (orthogonal) projection z of v on V is the unique vector in V such that v-z is orthogonal to V. Every vector is uniquely the sum of a vector in V (its projection) and a vector perpendicular to V.
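
A numpy sketch of the projection formula (the orthonormal basis and the vector are assumed for illustration), checking that v - proj_V v is orthogonal to V:

import numpy as np

w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
w2 = np.array([0.0, 0.0, 1.0])            # w1, w2: an orthonormal basis of an example plane V
v = np.array([3.0, 1.0, 2.0])
z = (v @ w1) * w1 + (v @ w2) * w2         # proj_V v
print(z)                                   # [2. 2. 2.]
print((v - z) @ w1, (v - z) @ w2)          # both (essentially) 0: v - z is orthogonal to V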

Every basis v_1, ..., v_m for V can be converted to an orthonormal basis w_1, ..., w_m for V such that for all k, v_1, ..., v_k and w_1, ..., w_k have the same span, using the Gram-Schmidt method:

Let w_1 = v_1/||v_1|| and let r_{11} = ||v_1||.
If w_1, ..., w_{k-1} have been found, to find
w_k, first find

v_k - the projection of v_k on the
space spanned by w_1, ..., w_{k-1}.

The projection is

(v_k . w_1)w_1 + ... + (v_k . w_{k-1})w_{k-1}.

Let r_{ik} = v_k . w_i for i < k, and r_{kk} be the length of

v_k - the projection of v_k on the
space spanned by w_1, ..., w_{k-1}

Then, for all k, v_k = r_{1k} w_1 + ... + r_{kk} w_k.

This means that if M is the n by m matrix formed with columns v_1, ..., v_m, Q is the orthonormal matrix with columns w_1, ..., w_m, and R is the upper triangular matrix whose (i,k) entry is r_{ik} for i <= k (and 0 below the diagonal), then

M = QR.

Thus, every n by m matrix with independent columns is the product of an orthonormal matrix and an upper triangular matrix, and the upper triangular matrix has positive entries on the diagonal. This factorization is unique.
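
A short Gram-Schmidt sketch (numpy assumed; the columns of M are made up) that produces Q with orthonormal columns and upper triangular R with M = QR. (numpy's built-in np.linalg.qr computes such a factorization too, possibly with different signs on the columns of Q and rows of R.)

import numpy as np

def gram_schmidt_qr(M):
    """Classical Gram-Schmidt on the (independent) columns of M; returns Q, R with M = QR."""
    n, m = M.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for k in range(m):
        u = M[:, k].copy()
        for i in range(k):
            R[i, k] = M[:, k] @ Q[:, i]      # r_{ik} = v_k . w_i
            u -= R[i, k] * Q[:, i]           # subtract the projection on span(w_1, ..., w_{k-1})
        R[k, k] = np.linalg.norm(u)          # r_{kk}
        Q[:, k] = u / R[k, k]                # w_k
    return Q, R

M = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])                   # example matrix with independent columns
Q, R = gram_schmidt_qr(M)
print(np.allclose(Q @ R, M))                 # True: M = QR
print(np.allclose(Q.T @ Q, np.eye(2)))       # True: the columns of Q are orthonormal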

The transpose A^T of the m by n matrix A = [a_{ij}] is the n by m matrix [a_{ji}]. The rows (resp., columns) of A are the columns (resp. rows) of A^T. (A^T)^T = A. If AB is defined, so is B^T A^T, and (AB)^T = B^T A^T.

An n by m matrix A has orthonormal columns if and only if A^T A = 1_m. In this case, A A^T is the matrix of projection on im(A).
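
A numpy sketch of this (the matrix Q with orthonormal columns is an assumed example):

import numpy as np

Q = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [0.0,  0.0]]) / np.sqrt(2)   # example 3 by 2 matrix with orthonormal columns
print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^T Q = 1_2
P = Q @ Q.T                                 # matrix of projection on im(Q) (here, the xy-plane)
v = np.array([3.0, 1.0, 5.0])
print(P @ v)                                # [3. 1. 0.]: the projection of v on im(Q)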

An n by n matrix is symmetric if A = A^T. A^T A is always symmetric.

A matrix is called orthogonal if it is a square matrix, say n by n, and its columns are an orthonormal set. Let T be the linear transformation corresponding to A.

A is orthogonal
if and only if
A is square and its columns are an orthonormal set
if and only if
T preserves distances
if and only if
the columns of A are an orthonormal basis for R^n
if and only if
T preserves all dot products of vectors
if and only if
A^T A = 1_n
if and only if
A is invertible with A^{-1} = A^T
if and only if
A A^T = 1_n
if and only if
A^T is orthogonal
(and so we get all the corresponding conditions
on the rows)
Because an orthogonal transformation preserves dot products, it also preserves all angles. In particular, it preserves orthogonality.
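
A numpy sketch with a rotation matrix (an assumed example of an orthogonal matrix) illustrating several of these conditions:

import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix, hence orthogonal
print(np.allclose(A.T @ A, np.eye(2)))             # True: A^T A = 1_n
print(np.allclose(np.linalg.inv(A), A.T))          # True: A^{-1} = A^T
v = np.array([3.0, 4.0])
w = np.array([-1.0, 2.0])
print(np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v)))   # True: lengths are preserved
print(np.isclose((A @ v) @ (A @ w), v @ w))                    # True: dot products are preserved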

The orthogonal complement of a subspace V of R^n is the set of all vectors orthogonal to every vector in V (equivalently, orthogonal to a spanning set for V). The orthogonal complement of V is a subspace W, and every vector in R^n is uniquely the sum of a vector in V (its projection on V) and a vector in W (its projection on W). When W is the orthogonal complement of V, then V is the orthogonal complement of W. The sum of their dimensions is n.

The orthogonal complement of the image of a matrix A is the kernel of A^T.
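
A small sympy check of this last fact (example matrix assumed): every basis vector of ker(A^T) is orthogonal to every column of A, and the dimensions of ker(A^T) and im(A) add up to n:

import sympy as sp

A = sp.Matrix([[1, 0],
               [2, 1],
               [3, 1]])                  # example 3 by 2 matrix
for u in A.T.nullspace():                # a basis for ker(A^T)
    for j in range(A.cols):
        print(u.dot(A.col(j)))           # every dot product is 0: ker(A^T) is orthogonal to im(A)
print(len(A.T.nullspace()) + len(A.rref()[1]))   # 1 + 2 = 3 = n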


Practice Problems:

3.1     5, 7, 9
3.2     1, 7, 8, 24, 47
3.3     10, 17, 36, 37, 52
4.1     9, 10, 17, 26
4.2     5 + 19, 7 + 21
4.3     8, 9, 27, 28