
Vector spaces

Definition

We will consider complex vector spaces, but let us start with a bit of generality so that we can compare them to the more familiar real vector spaces. Consider $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$. (Actually, the following works for many fields, including the rational numbers, the integers modulo $p$, and so forth.) A vector space over $\mathbb{F}$, also called a “real” or “complex” vector space for $\mathbb{F} = \mathbb{R}, \mathbb{C}$ respectively, is a set $V$ of elements $\ket{v}$ with the following properties:

  1. Vector addition. For all $\ket{v},\ket{w}\in V$ there is a notion of addition $\ket{v} + \ket{w} \in V$ with the following properties: addition is associative,

$$(\ket{v} + \ket{w}) + \ket{y} = \ket{v} + (\ket{w} + \ket{y})$$

and there is a zero vector $\ket{0} \in V$ such that $\ket{v} + \ket{0} = \ket{v}$ for every $\ket{v} \in V$.

  2. Scalar multiplication. For all $a \in \mathbb{F}$, $\ket{v} \in V$, there is a notion of scalar multiplication such that $a\ket{v} \in V$, with the distributive properties

$$a\left(\ket{v} + \ket{w}\right) = a \ket{v} + a \ket{w}$$

$$(a + b)\ket{v} = a\ket{v} + b \ket{v}$$

From these rules we can also deduce the existence of an additive inverse: for every $\ket{v} \in V$, there exists a vector $\ket{-v} \in V$ such that $\ket{v} + \ket{-v} = \ket{0}$. This can be seen by construction: set $\ket{-v} = (-1) \ket{v}$. Then

$$\ket{v} + \ket{-v} = \ket{v} + (-1) \ket{v} = (1 + (-1)) \ket{v} = 0 \ket{v} = \ket{0}$$

where the last step, $0\ket{v} = \ket{0}$, is itself a consequence of the distributive property.

Note that I have not yet introduced any notion of the length of a vector, whether two vectors are orthogonal, and so on. As we will see, this requires some additional structure.

Examples

There are a number of more and less familiar examples.

  1. $\mathbb{C}^n$, the space of $n$-component column vectors

$$\ket{v} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}$$

with $c_k \in \mathbb{C}$. We define vector addition and scalar multiplication in the usual way:

$$\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} + \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix} = \begin{pmatrix} c_1+ d_1 \\ c_2+d_2 \\ \vdots \\ c_n+d_n \end{pmatrix}$$

$$a \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} a c_1 \\ a c_2 \\ \vdots \\ a c_n \end{pmatrix}$$

for any $a \in \mathbb{C}$.

Note that we can do the same with $c_k, d_k \in \mathbb{R}$: then we have a real vector space. Here the zero vector is the vector with $c_k = 0$ for all $k$.
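The componentwise operations above can be spot-checked numerically. The following sketch (using numpy, with arbitrary illustrative vectors and scalars) verifies the vector-space axioms for a few elements of $\mathbb{C}^3$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary vectors in C^3 and arbitrary complex scalars
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a, b = 2.0 - 1.0j, 0.5 + 3.0j

# Associativity of addition
assert np.allclose((v + w) + y, v + (w + y))
# Distributive properties
assert np.allclose(a * (v + w), a * v + a * w)
assert np.allclose((a + b) * v, a * v + b * v)
# Additive inverse constructed as (-1) * v
assert np.allclose(v + (-1) * v, np.zeros(3))
```

Floating-point arithmetic only satisfies these identities up to rounding error, hence `np.allclose` rather than exact equality.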

  2. The space of $n\times n$ complex-valued matrices $M_n(\mathbb{C})$. Addition and scalar multiplication are just matrix addition and scalar multiplication (for $M \in M_n$, $aM$ is elementwise multiplication by $a$).
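As a quick illustration (a sketch with arbitrary matrices), matrix addition and elementwise scalar multiplication indeed stay inside $M_2(\mathbb{C})$:

```python
import numpy as np

# Arbitrary illustrative 2x2 complex matrices
M = np.array([[1.0 + 1j, 0.0], [2.0, -1j]])
N = np.array([[0.0, 3.0], [1j, 1.0]])
a = 2.0 - 1j

# Matrix addition and scalar multiplication act elementwise,
# so the results are again 2x2 complex matrices.
S = M + N
T = a * M
assert S.shape == T.shape == (2, 2)
assert np.allclose(T, np.array([[a * (1 + 1j), 0.0], [a * 2.0, a * (-1j)]]))
```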

  3. Degree-$n$ polynomials over $\mathbb{C}$:

$$\ket{a_0,\ldots, a_n} = a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n$$

with addition and scalar multiplication working in the standard way. Note that this is clearly equivalent to $\mathbb{C}^{n+1}$, since such a polynomial is specified by its $n+1$ coefficients. Note also that there is no reason for $n$ to be finite -- we could work with the space of all polynomials.
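The correspondence with coefficient vectors can be seen with numpy's `Polynomial` class (a sketch; the particular polynomials are arbitrary): adding polynomials adds their coefficient vectors, and scaling a polynomial scales its coefficients.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Two arbitrary degree-2 polynomials, stored by coefficients (a_0, a_1, a_2)
p = Polynomial([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
q = Polynomial([0.0, -1.0, 4.0])  # -x + 4x^2

s = p + q                                    # polynomial addition...
assert np.allclose(s.coef, p.coef + q.coef)  # ...is coefficient addition

t = 2.5 * p                                  # scalar multiplication...
assert np.allclose(t.coef, 2.5 * p.coef)     # ...scales each coefficient
```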

  4. Complex functions on an interval: let $x \in [0,1]$. The set of all functions $\psi(x)$ forms a vector space under the standard addition and scalar multiplication of functions. If we impose boundary conditions, they must be preserved by these operations: conditions such as $\psi(0) = \psi(1) = 0$, or the periodic condition $\psi(0) = \psi(1)$, are preserved, so the functions satisfying them again form vector spaces.

  5. Complex square-integrable functions on $\mathbb{R}$: that is, functions $\psi(x)$ for $x\in \mathbb{R}$ such that

$$\int_{-\infty}^{\infty} dx\, |\psi(x)|^2 < \infty$$
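For instance, the Gaussian $\psi(x) = e^{-x^2/2}$ is square-integrable, with $\int |\psi|^2\, dx = \sqrt{\pi}$. A quick numerical check (a sketch using a simple Riemann sum on a truncated grid):

```python
import numpy as np

# psi(x) = exp(-x^2 / 2) is square-integrable: |psi|^2 = exp(-x^2)
# integrates to sqrt(pi).  Approximate the integral by a Riemann sum
# on a grid wide enough that the neglected tails are negligible.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)

norm_sq = np.sum(np.abs(psi) ** 2) * dx
assert abs(norm_sq - np.sqrt(np.pi)) < 1e-6
```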

Subspaces

A set $M \subset V$ is a vector subspace if it is itself a vector space under the same laws for addition and scalar multiplication. A standard example is any plane through the origin; for example, in $V = \mathbb{C}^3$,

$$M = \left\{ \begin{pmatrix} c_1 \\ c_2 \\ 0 \end{pmatrix} : c_i \in \mathbb{C} \right\}$$

Similarly, any complex line through the origin, defined as the set of vectors

$$a \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}$$

for fixed $c_k \in \mathbb{C}$ and all $a \in \mathbb{C}$, is a subspace.

A counterexample is any complex line that does not run through the origin, defined as the set of all vectors

$$a \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} + \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}$$

with $c_k, d_k$ fixed and the same for all vectors in this set, and $a$ any complex number. If there is at least one $d_k \neq 0$, this set is not closed under addition: writing the columns as $\ket{c}$ and $\ket{d}$, the sum of the vectors with parameters $a_1$ and $a_2$ is $(a_1 + a_2)\ket{c} + 2\ket{d}$, which carries twice the offset and so does not lie on the line. (Equivalently, such a line does not contain the zero vector.)
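The failure of closure can be demonstrated numerically. In this sketch (with arbitrary choices of direction $\ket{c}$ and offset $\ket{d}$ not proportional to it), the sum of two vectors on the offset line leaves the line, while the plane through the origin stays closed:

```python
import numpy as np

# Offset line {a*c + d} in C^3, with d not proportional to c
c = np.array([1.0, 2.0, 1j])   # direction vector of the line
d = np.array([0.0, 0.0, 5.0])  # nonzero offset

u1 = 2.0 * c + d               # two vectors on the offset line
u2 = (1.0 - 1j) * c + d
s = u1 + u2                    # their sum: (a1 + a2) c + 2 d

# If s were on the line, s - d would be a multiple of c.  Project s - d
# onto c and check the residual: it is nonzero, so s has left the line.
a_best = np.vdot(c, s - d) / np.vdot(c, c)
residual = np.linalg.norm((s - d) - a_best * c)
assert residual > 1e-9

# By contrast, the plane {(c1, c2, 0)} is closed: third components stay 0.
p1 = np.array([1.0 + 2j, 3.0, 0.0])
p2 = np.array([-1j, 0.5, 0.0])
assert (p1 + p2)[2] == 0 and ((2 - 1j) * p1)[2] == 0
```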

Linear independence

Definition. A set of vectors $\ket{v_1},\ldots,\ket{v_m} \in V$ is linearly independent if

$$\sum_{k = 1}^m a_k \ket{v_k} = \ket{0} \Leftrightarrow a_k = 0\ \forall\ k = 1,\ldots,m$$

Let us give some examples.

  1. $\mathbb{C}^3$. These vectors are linearly independent:

$$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\ ; \ \ \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\ ; \ \ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

Similarly, these are linearly independent:

$$\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\ ; \ \ \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}\ ; \ \ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

However, these three are not:

$$\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\ ; \ \ \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}\ ; \ \ \begin{pmatrix} 1/2 \\ 1/2 \\ 1 \end{pmatrix}$$

as we can see because

$$\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} - 2 \begin{pmatrix} 1/2 \\ 1/2 \\ 1 \end{pmatrix} = 0$$
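This dependence can also be checked mechanically (a sketch using numpy): stack the vectors as columns of a matrix and compute its rank, which drops below 3 exactly when the columns are dependent.

```python
import numpy as np

# Columns are the three vectors (1,0,1), (0,1,1), (1/2,1/2,1)
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 1.0]])

# Rank 2 < 3: the columns are linearly dependent...
assert np.linalg.matrix_rank(A) == 2

# ...with the dependence coefficients (1, 1, -2) from the text
coeffs = np.array([1.0, 1.0, -2.0])
assert np.allclose(A @ coeffs, 0)

# The standard basis triple is independent: full rank
B = np.eye(3)
assert np.linalg.matrix_rank(B) == 3
```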
  2. In the space of functions on the interval $[0,1]$ that vanish at the endpoints, the vectors

$$\ket{n} = \sin\left(n\pi x\right)$$

for $n = 1, 2, \ldots$ are linearly independent. Similarly, for $n$th order polynomials, the monomials $\ket{k} = x^k$, $k = 0,\ldots,n$, are a linearly independent set.
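The independence of these function families can be probed numerically. In this sketch, each function is sampled on a grid and the rank of the resulting matrix is computed; finite sampling makes this a heuristic check rather than a proof:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)

# Rows are sampled sin(k*pi*x), k = 1..5: if the functions were linearly
# dependent, the sampled rows would be too, and the rank would drop below 5.
S = np.array([np.sin(k * np.pi * x) for k in range(1, 6)])
assert np.linalg.matrix_rank(S) == 5

# The monomials x^k, k = 0..5, pass the same check.
P = np.array([x**k for k in range(6)])
assert np.linalg.matrix_rank(P) == 6
```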

Dimension of a vector space

Definition: the dimension of a vector space $V$ is the maximum number of linearly independent vectors in $V$. Any such maximal collection is called a basis.

Theorem: Given a basis $\ket{k}$, $k = 1,\ldots,n$, then for any vector $\ket{v}$ there is a unique set of complex numbers $a_k$, $k = 1,\ldots,n$, such that

$$\ket{v} = \sum_{k = 1}^n a_k \ket{k}$$

Proof. Assume the contrary: that

$$\ket{v} = \sum_{k = 1}^n a_k \ket{k} = \sum_{k = 1}^n b_k \ket{k}$$

for $a_k, b_k \in \mathbb{C}$ not all equal. If this were true,

$$\ket{0} = \ket{v} - \ket{v} = \sum_{k = 1}^n a_k \ket{k} - \sum_{k = 1}^n b_k \ket{k} = \sum_{k = 1}^n (a_k - b_k)\ket{k}$$

but since the $\ket{k}$ are linearly independent, this sum can equal $\ket{0}$ only if $a_k - b_k = 0$ for every $k$, contradicting our supposition. Thus $a_k = b_k$.
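The unique expansion coefficients are exactly what a linear solve computes in $\mathbb{C}^n$. This sketch uses the independent triple from the examples above as a basis of $\mathbb{C}^3$ and expands an arbitrary vector in it two different ways, obtaining the same coefficients:

```python
import numpy as np

# Basis of C^3: columns are (1,0,1), (0,1,1), (0,0,1), the linearly
# independent triple from the examples.
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

v = np.array([2.0, 3.0, 1.0 + 1.0j])  # an arbitrary vector to expand

a = np.linalg.solve(B, v)             # expansion coefficients a_k
assert np.allclose(B @ a, v)          # they reproduce v ...

a2 = np.linalg.inv(B) @ v             # ... and another method agrees,
assert np.allclose(a, a2)             # reflecting uniqueness
```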