Example Problems

Example 1

Show whether, for all $x, y \in \mathbb{R}$, $\begin{bmatrix}x \\ y \\ 0\end{bmatrix}$ forms a subspace of $\mathbb{R}^3$.

Property 1:

Let $\vec{u} = \begin{bmatrix}x_1 \\ y_1 \\ 0\end{bmatrix}$ and $\vec{v} = \begin{bmatrix}x_2 \\ y_2 \\ 0\end{bmatrix}$ for any $x_1, x_2, y_1, y_2 \in \mathbb{R}$. We observe that $\vec{u}$ and $\vec{v}$ are in the subspace.

$$\vec{u} + \vec{v} = \begin{bmatrix}x_1 \\ y_1 \\ 0\end{bmatrix} + \begin{bmatrix}x_2 \\ y_2 \\ 0\end{bmatrix} = \begin{bmatrix}x_1 + x_2 \\ y_1 + y_2 \\ 0\end{bmatrix}$$

$\begin{bmatrix}x_1 + x_2 \\ y_1 + y_2 \\ 0\end{bmatrix}$ is clearly in the subspace, so property 1 is satisfied.

Property 2:

Let $\vec{u} = \begin{bmatrix}x \\ y \\ 0\end{bmatrix}$ and $c \in \mathbb{R}$.

$$c\vec{u} = c\begin{bmatrix}x \\ y \\ 0\end{bmatrix} = \begin{bmatrix}cx \\ cy \\ 0\end{bmatrix}$$

$\begin{bmatrix}cx \\ cy \\ 0\end{bmatrix}$ is also in the subspace, so property 2 is satisfied as well.

Therefore, $\begin{bmatrix}x \\ y \\ 0\end{bmatrix}$ forms a subspace of $\mathbb{R}^3$.
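If you want to sanity-check this kind of closure argument numerically, here is a small NumPy sketch (the library choice and the random test vectors are our own, not part of the proof) that tests both properties on vectors of the form $\begin{bmatrix}x & y & 0\end{bmatrix}^T$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary vectors of the form [x, y, 0] and an arbitrary scalar c
u = np.array([rng.standard_normal(), rng.standard_normal(), 0.0])
v = np.array([rng.standard_normal(), rng.standard_normal(), 0.0])
c = rng.standard_normal()

# Property 1: the sum still has a zero third component
print((u + v)[2] == 0.0)   # True

# Property 2: any scalar multiple still has a zero third component
print((c * u)[2] == 0.0)   # True
```

Of course, a numerical check on a few vectors only illustrates the result; the algebraic argument above is what proves it for all $x, y \in \mathbb{R}$.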

Example 2

Show whether $\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix}t + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix}$ forms a subspace of $\mathbb{R}^3$ for $t \in \mathbb{R}$.

Property 1:

Let $\vec{u} = \begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix} \cdot 1 + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix} = \begin{bmatrix}3 \\ 1 \\ -4\end{bmatrix}$ and $\vec{v} = \begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix} \cdot 2 + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix} = \begin{bmatrix}4 \\ 1 \\ -5\end{bmatrix}$.

$$\vec{u} + \vec{v} = \begin{bmatrix}3 \\ 1 \\ -4\end{bmatrix} + \begin{bmatrix}4 \\ 1 \\ -5\end{bmatrix} = \begin{bmatrix}7 \\ 2 \\ -9\end{bmatrix}$$


Let us try to find the corresponding $t$ value for $\begin{bmatrix}7 \\ 2 \\ -9\end{bmatrix}$.

$$\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix} \cdot t + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix} = \begin{bmatrix}7 \\ 2 \\ -9\end{bmatrix} \qquad \begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix} \cdot t = \begin{bmatrix}5 \\ 1 \\ -6\end{bmatrix}$$

There is no $t \in \mathbb{R}$ that fulfills the equation. Therefore, $\vec{u} + \vec{v}$ is not in the subspace, so $\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix}t + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix}$ does not form a subspace.

Graphically, $\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix}t + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix}$ is a line in $\mathbb{R}^3$ that does not pass through the origin. However, according to the scaling property, $\vec{0}$ should always be part of a subspace, since we can multiply any $\vec{u}$ in the subspace by $c = 0$ to get the zero vector. Therefore, $\begin{bmatrix}1 \\ 0 \\ -1\end{bmatrix}t + \begin{bmatrix}2 \\ 1 \\ -3\end{bmatrix}$ is not a valid subspace.

If $W$ is a subset of vectors but $\vec{0} \not\in W$, then $W$ is not a valid subspace.
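To see this failure concretely, here is a small NumPy sketch (our own illustration, with the library choice assumed) that uses least squares to confirm there is no exact $t$: the residual of $\begin{bmatrix}1 & 0 & -1\end{bmatrix}^T t = \begin{bmatrix}5 & 1 & -6\end{bmatrix}^T$ is nonzero, and the same happens when we ask whether the line passes through the origin.

```python
import numpy as np

d = np.array([1.0, 0.0, -1.0])    # direction vector of the line
b = np.array([2.0, 1.0, -3.0])    # offset vector of the line
target = np.array([7.0, 2.0, -9.0])

# Best-fit t for d*t = target - b; a nonzero residual means no exact solution
t, res, *_ = np.linalg.lstsq(d.reshape(-1, 1), target - b, rcond=None)
print(t, res)      # residual is nonzero, so u + v is not on the line

# The zero vector is not on the line either: d*t + b = 0 has no solution
t0, res0, *_ = np.linalg.lstsq(d.reshape(-1, 1), -b, rcond=None)
print(res0)        # nonzero again, so the line misses the origin
```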

Example 3

Show that the null space of an $m \times n$ matrix $\textbf{A}$ forms a subspace of $\mathbb{R}^n$.

Property 1:

Let $\vec{u}, \vec{v} \in \text{Null}(\textbf{A})$. Then $\textbf{A}\vec{u} = \vec{0}$ and $\textbf{A}\vec{v} = \vec{0}$.

Let us check whether $\vec{u} + \vec{v}$ is in the null space of $\textbf{A}$.

$$\textbf{A}(\vec{u} + \vec{v}) = \textbf{A}\vec{u} + \textbf{A}\vec{v} = \vec{0} + \vec{0} = \vec{0}$$

Therefore, property 1 is satisfied.

Property 2:

Let us check whether $c\vec{u}$ is in the null space of $\textbf{A}$ for any $c \in \mathbb{R}$.

$$\textbf{A}(c\vec{u}) = c\textbf{A}\vec{u} = c\vec{0} = \vec{0}$$

Therefore, property 2 is satisfied as well, so the null space of a matrix forms a subspace.
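For a concrete matrix, we can also verify these two properties numerically. The sketch below assumes NumPy and SciPy are available and uses an example matrix of our own choosing; `scipy.linalg.null_space` returns an orthonormal basis for $\text{Null}(\textbf{A})$.

```python
import numpy as np
from scipy.linalg import null_space

# A sample 2x4 matrix whose null space is 2-dimensional
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, 1.0, 1.0, 1.0]])

N = null_space(A)          # columns form a basis of Null(A)
u, v = N[:, 0], N[:, 1]
c = 3.7

print(np.allclose(A @ (u + v), 0.0))   # True: sums stay in the null space
print(np.allclose(A @ (c * u), 0.0))   # True: scalar multiples stay in the null space
```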

Fundamental Subspaces

What and why

For each matrix, there are four fundamental subspaces: the column space, the null space, the row space, and the left null space. In this section, we will explore the properties of each of these subspaces and see how they relate to one another.

Column space

The column space of an $m \times n$ matrix $\textbf{A}$ refers to the range of $\textbf{A}$, or the span of its column vectors; its dimension is called the rank of $\textbf{A}$.

$$\text{Col}(\textbf{A}) = \{\textbf{A}\vec{v} \in \mathbb{R}^m \mid \vec{v} \in \mathbb{R}^n\} = \text{span}\left\{\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n\right\}$$

We can visualize the column space as the set of all vectors that are the "output" of the linear transformation given by $\textbf{A}$.

We note that the column space of $\textbf{A}$ is a subspace of $\mathbb{R}^m$. The dimension of the column space of $\textbf{A}$ is the number of pivots, or the number of linearly independent column vectors, in $\textbf{A}$.

Null space

The null space of an $m \times n$ matrix $\textbf{A}$ refers to all vectors that map to the zero vector $\vec{0}$ when the linear transformation given by $\textbf{A}$ is applied to them.

$$\text{Null}(\textbf{A}) = \{\vec{v} \in \mathbb{R}^n \mid \textbf{A}\vec{v} = \vec{0}\}$$

We note that the null space of $\textbf{A}$ is a subspace of $\mathbb{R}^n$. The dimension of the null space of $\textbf{A}$ is the number of free variables in the row echelon form of $\textbf{A}$.
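As a quick illustration (a sketch with an example matrix of our own choosing), this dimension can be checked numerically: it equals $n$ minus the number of pivots, which matches the number of basis vectors returned by `scipy.linalg.null_space`.

```python
import numpy as np
from scipy.linalg import null_space

# A 2x3 matrix with one pivot, so 3 - 1 = 2 free variables
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

n = A.shape[1]
r = np.linalg.matrix_rank(A)            # number of pivots
print(n - r)                            # 2
print(null_space(A).shape[1])           # 2: dimension of Null(A) matches
```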

Row space

The row space of an $m \times n$ matrix $\textbf{A}$ refers to the range of $\textbf{A}^T$, or the span of its row vectors.

$$\text{Row}(\textbf{A}) = \{\textbf{A}^T\vec{v} \in \mathbb{R}^n \mid \vec{v} \in \mathbb{R}^m\} = \text{span}\left\{\vec{r}_1^T, \vec{r}_2^T, \ldots, \vec{r}_m^T\right\}$$

We note that the row space of $\textbf{A}$ is a subspace of $\mathbb{R}^n$. The dimension of the row space of $\textbf{A}$ is the number of pivots in $\textbf{A}$, since the number of pivots in $\textbf{A}$ equals the number of pivots in $\textbf{A}^T$ after row reduction.

Left null space

The left null space of an $m \times n$ matrix $\textbf{A}$ refers to all vectors that map to the zero vector $\vec{0}$ when the linear transformation given by $\textbf{A}^T$ is applied to them.

$$\text{Null}(\textbf{A}^T) = \{\vec{v} \in \mathbb{R}^m \mid \textbf{A}^T\vec{v} = \vec{0}\}$$

We note that the left null space of $\textbf{A}$ is a subspace of $\mathbb{R}^m$. The dimension of the left null space of $\textbf{A}$ is the number of free variables in the row echelon form of $\textbf{A}^T$.

This subspace is called the left null space (while the "usual" null space is sometimes called the right null space) because $\textbf{A}^T\vec{v} = \vec{0} \implies \vec{v}^T\textbf{A} = \vec{0}^T$. In the latter equation, we are multiplying the vector $\vec{v}^T$ on the left of the matrix $\textbf{A}$.

Dimensions

Let $r$ denote the number of pivots in an $m \times n$ matrix $\textbf{A}$. Then we quickly observe the following:

$$\dim(\text{Col}(\textbf{A})) = r; \quad \dim(\text{Null}(\textbf{A})) = n - r$$
$$\dim(\text{Col}(\textbf{A}^T)) = r; \quad \dim(\text{Null}(\textbf{A}^T)) = m - r$$

Note that $\dim(\text{Col}(\textbf{A})) + \dim(\text{Null}(\textbf{A})) = n$. This is called the Rank Theorem (also known as the Rank–Nullity Theorem).

You should also note the two following equations:

Terms in $\mathbb{R}^m$: $\dim(\text{Col}(\textbf{A})) + \dim(\text{Null}(\textbf{A}^T)) = m$

Terms in $\mathbb{R}^n$: $\dim(\text{Null}(\textbf{A})) + \dim(\text{Col}(\textbf{A}^T)) = n$
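As a numerical check (a sketch using NumPy and SciPy with an example matrix of our own choosing), we can compute all four dimensions for a small matrix and confirm both equations:

```python
import numpy as np
from scipy.linalg import null_space

# A 3x4 matrix with rank 2 (the third row is the sum of the first two)
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 3.0, 2.0]])
m, n = A.shape

dim_col   = np.linalg.matrix_rank(A)      # dim(Col(A)) = r
dim_null  = null_space(A).shape[1]        # dim(Null(A)) = n - r
dim_row   = np.linalg.matrix_rank(A.T)    # dim(Col(A^T)) = r
dim_lnull = null_space(A.T).shape[1]      # dim(Null(A^T)) = m - r

print(dim_col + dim_lnull == m)   # True: terms in R^m
print(dim_null + dim_row == n)    # True: terms in R^n
```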

Using orthogonal complements, we can show why these two equations are guaranteed to be true.

Orthogonal complements

Let us examine the null space of an $m \times n$ matrix $\textbf{A}$.

We know that for every vector $\vec{v} \in \text{Null}(\textbf{A})$,

$$\textbf{A}\vec{v} = \vec{0}$$

Using the row vectors of $\textbf{A}$, we get

$$\textbf{A}\vec{v} = \begin{bmatrix}\vec{r}_1^T \\ \vec{r}_2^T \\ \vdots \\ \vec{r}_m^T\end{bmatrix}\vec{v} = \vec{0}$$

Writing out each component, we get $\vec{r}_1^T\vec{v} = \vec{r}_2^T\vec{v} = \cdots = \vec{r}_m^T\vec{v} = 0$. We observe that these are just inner products, so $\langle \vec{r}_1, \vec{v} \rangle = \langle \vec{r}_2, \vec{v} \rangle = \cdots = \langle \vec{r}_m, \vec{v} \rangle = 0$. This means that $\vec{v}$ is orthogonal to all of the row vectors of $\textbf{A}$.

Since this holds for every $\vec{v} \in \text{Null}(\textbf{A})$, we note that the null space of $\textbf{A}$ is orthogonal to the row space of $\textbf{A}$.

Since these two subspaces are orthogonal to each other, the equation $\dim(\text{Null}(\textbf{A})) + \dim(\text{Col}(\textbf{A}^T)) = n$ tells us that $\text{Null}(\textbf{A}) + \text{Col}(\textbf{A}^T) = \mathbb{R}^n$.

Therefore, we call $\text{Null}(\textbf{A})$ and $\text{Col}(\textbf{A}^T)$ orthogonal complements because

  1. they are orthogonal to each other, and
  2. they span $\mathbb{R}^n$.

Similarly, let us examine the left null space of $\textbf{A}$.

We know that for every vector $\vec{v} \in \text{Null}(\textbf{A}^T)$,

$$\textbf{A}^T\vec{v} = \vec{0} \implies \vec{v}^T\textbf{A} = \vec{0}^T$$

Using the column vectors of $\textbf{A}$, we get

$$\vec{v}^T\textbf{A} = \vec{v}^T\begin{bmatrix}\vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n\end{bmatrix} = \vec{0}^T$$

Writing out each equation, we get, just like above, $\langle \vec{v}, \vec{a}_1 \rangle = \langle \vec{v}, \vec{a}_2 \rangle = \cdots = \langle \vec{v}, \vec{a}_n \rangle = 0$.

Therefore, $\vec{v}$ is orthogonal to each column vector of $\textbf{A}$. Using similar reasoning as above, we conclude that the column space of $\textbf{A}$ and the left null space of $\textbf{A}$ are orthogonal complements because

  1. they are orthogonal to each other, and
  2. they span $\mathbb{R}^m$.
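The same conclusions can be checked numerically (a sketch with an assumed example matrix): every null space basis vector is orthogonal to every row of $\textbf{A}$, and a spanning set for the row space stacked next to a null space basis has rank $n$, so together they span $\mathbb{R}^n$.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 3.0, 2.0]])
n = A.shape[1]

N = null_space(A)                     # basis of Null(A) as columns, shape (n, n - r)
print(np.allclose(A @ N, 0.0))        # True: every row of A is orthogonal to Null(A)

# Columns of A.T span Row(A); stacking them with the null space basis gives rank n,
# so Null(A) and Col(A^T) together span all of R^n
combined = np.hstack([A.T, N])
print(np.linalg.matrix_rank(combined) == n)   # True
```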