Suppose we wanted to check whether a set of vectors $\{v_1, v_2, \ldots, v_n\}$ forms a basis for a subspace $V$.
For this, we need to check that both requirements of a basis are met:
- $\operatorname{span}\{v_1, v_2, \ldots, v_n\} = V$: We know that we can check whether $\{v_1, v_2, \ldots, v_n\}$ spans $V$ by checking if there is a pivot in every row of the matrix $[\,v_1\ v_2\ \cdots\ v_n\,]$.
- $\{v_1, v_2, \ldots, v_n\}$ is linearly independent: We know that we can check for linear independence by checking if there is a pivot in every column of the matrix $[\,v_1\ v_2\ \cdots\ v_n\,]$.

Therefore, all we need to do is check whether $[\,v_1\ v_2\ \cdots\ v_n\,]$ has a pivot in every row and every column.
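Here is a minimal sketch of this check in Python/SymPy. The vectors `v1`, `v2`, `v3` are made-up examples (not taken from the text), and the pivots are read off from the row-reduced form:

```python
# A sketch of the pivot-in-every-row-and-column check, using sympy.
import sympy as sp

# Hypothetical candidate basis vectors for R^3
v1 = sp.Matrix([1, 0, 1])
v2 = sp.Matrix([0, 1, 1])
v3 = sp.Matrix([1, 1, 0])

A = sp.Matrix.hstack(v1, v2, v3)   # columns are the candidate basis vectors
_, pivot_cols = A.rref()           # row-reduce and read off the pivot columns

# A pivot in every column  <=>  the vectors are linearly independent;
# a pivot in every row     <=>  the vectors span R^3.
is_basis = len(pivot_cols) == A.cols and len(pivot_cols) == A.rows
print(is_basis)                    # True for this choice of vectors
```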
Change of basis
Quick overview of change of basis:
Sometimes a problem becomes easier if we solve it in another basis, e.g. an orthonormal basis, where the basis vectors are orthogonal to each other ("ortho") and every basis vector has length 1 ("normal"). In this case, we have to change the basis.
Note: Change of basis only works if both bases B and C span the same subspace. This makes sense intuitively: a change of basis should not add or remove elements from the subspace.
It turns out, though, that a change of basis is actually just a linear transformation, so we can use a matrix to express it. (In fact, a change of basis is the identity transformation written with respect to two different bases: every vector is mapped to itself; we are just changing how the vector is expressed.)
Assume that we wanted to change the basis from $B = \{b_1, b_2, \ldots, b_n\}$ to $C = \{c_1, c_2, \ldots, c_n\}$.
First, let's explore what

$$[v]_B = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

actually means. As we saw above with the standard basis example, this equation merely means $v = v_1 b_1 + v_2 b_2 + \cdots + v_n b_n$.
We define $P_B = [\,b_1\ b_2\ \cdots\ b_n\,]$, so $P_B$ is the matrix whose columns are the basis vectors of $B$. With this, the equation above becomes

$$P_B [v]_B = v$$
Similarly,

$$P_C [v]_C = v$$

Therefore,

$$P_B [v]_B = P_C [v]_C$$

and multiplying both sides on the left by $P_C^{-1}$ gives

$$[v]_C = P_C^{-1} P_B [v]_B$$
We define $P_C^{-1} P_B$ to be the change of basis matrix from $B$ to $C$, denoted by $P_{C \leftarrow B}$.
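To make this concrete, here is a small numeric sketch in SymPy with two made-up bases of $\mathbb{R}^2$ (they are not from the text); it computes $P_{C \leftarrow B}$ and checks that both coordinate vectors describe the same $v$:

```python
# A sketch of P_{C<-B} = P_C^{-1} P_B with hypothetical 2D bases, using sympy.
import sympy as sp

# Bases B and C of R^2, stored as columns
P_B = sp.Matrix([[1, 1],
                 [1, -1]])    # B = {b1, b2}
P_C = sp.Matrix([[2, 0],
                 [0, 1]])     # C = {c1, c2}

P_C_B = P_C**-1 * P_B         # change of basis matrix P_{C<-B}

# Sanity check: [v]_B and [v]_C describe the same vector v.
v_B = sp.Matrix([3, 2])       # [v]_B
v_C = P_C_B * v_B             # [v]_C
print(P_B * v_B == P_C * v_C) # True: P_B [v]_B = P_C [v]_C = v
```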
Another way of finding the change of basis matrix is by looking at the basis vectors. Since $v$ is just a linear combination of the basis vectors of $B$ with weights $v_1, v_2, \ldots, v_n$, if we can find how to express each basis vector of $B$ in terms of $C$, then $[v]_C$ is just the linear combination of those expressions with the same weights:

$$[v]_C = v_1 [b_1]_C + v_2 [b_2]_C + \cdots + v_n [b_n]_C$$
To express each basis vector of $B$ in terms of $C$, we have to solve the following augmented matrices, one for each basis vector in $B$:

$$[\,c_1\ c_2\ \cdots\ c_n \mid b_1\,]$$
$$[\,c_1\ c_2\ \cdots\ c_n \mid b_2\,]$$
$$\vdots$$
$$[\,c_1\ c_2\ \cdots\ c_n \mid b_n\,]$$
To make our lives easier, and because we are performing the same row operations every time, we can instead row-reduce the single augmented matrix $[\,c_1\ c_2\ \cdots\ c_n \mid b_1\ b_2\ \cdots\ b_n\,]$: its reduced form is $[\,I \mid P_{C \leftarrow B}\,]$, so the change of basis matrix appears to the right of the identity.
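As a sketch of this shortcut (reusing the same made-up 2D bases as above), the block to the right of the identity in the reduced matrix should equal $P_C^{-1} P_B$:

```python
# A sketch of the row-reduction shortcut [ C | B ] -> [ I | P_{C<-B} ], using sympy.
import sympy as sp

P_B = sp.Matrix([[1, 1],
                 [1, -1]])
P_C = sp.Matrix([[2, 0],
                 [0, 1]])

augmented = sp.Matrix.hstack(P_C, P_B)   # [ c1 c2 | b1 b2 ]
rref, _ = augmented.rref()

P_C_B = rref[:, P_C.cols:]               # block to the right of the identity
print(P_C_B == P_C**-1 * P_B)            # True
```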
Bonus: Eigenvectors that correspond to distinct eigenvalues are linearly independent, so if we have $n$ linearly independent eigenvectors of an $n \times n$ matrix, we can use the eigenvectors as a basis for the subspace $\mathbb{R}^n$. In this case, we call the basis an eigenbasis. Try expressing the matrix $\begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & -1 \\ 2 & 2 & 3 \end{bmatrix}$ (taken from the chapter Eigen-everything) in terms of its eigenbasis, i.e. express each column in terms of the eigenbasis. What do you notice?
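If you want to check your work, here is a sketch in SymPy that builds an eigenbasis and expresses each column of the matrix in it (the observation itself is left to you); it assumes the row-by-row reading of the matrix above:

```python
# A sketch for the bonus exercise: build an eigenbasis of A and express
# each column of A in that basis, using sympy.
import sympy as sp

A = sp.Matrix([[1, 0, 0],
               [-1, 0, -1],
               [2, 2, 3]])

# Columns of P are (one choice of) eigenvectors of A, i.e. an eigenbasis of R^3.
P, D = A.diagonalize()

# Coordinates of each column of A with respect to the eigenbasis:
for j in range(A.cols):
    print(P.solve(A[:, j]))   # solves P x = (j-th column of A)
```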