Vector spaces#
Definition#
We will consider complex vector spaces, but let us start with a bit of generality so that we can compare them to the more familiar real vector spaces. Consider \(\mathbb{F} = \mathbb{R}\) or \(\mathbb{C}\). (Actually, the following works for many fields, including the rational numbers, the integers modulo \(p\), and so forth.) A vector space over \(\mathbb{F}\), also called a “real” or “complex” vector space for \(\mathbb{F} = \mathbb{R},\mathbb{C}\) respectively, is a set \(V\) of elements \(\ket{v}\) with the following properties:
Vector addition. For all \(\ket{v},\ket{w}\in V\) there is a notion of addition \(\ket{v} + \ket{w} \in V\) with the following properties:
Addition is commutative: \(\ket{v} + \ket{w} = \ket{w} + \ket{v}\).
Addition is associative: for a third vector \(\ket{y} \in V\), \((\ket{v} + \ket{w}) + \ket{y} = \ket{v} + (\ket{w} + \ket{y})\).
A zero vector \(\ket{0}\in V\) exists such that \(\ket{v} + \ket{0} = \ket{v}\).
Scalar multiplication. For all \(a \in \mathbb{F}\), \(\ket{v} \in V\), there is a notion of scalar multiplication such that \(a\ket{v} \in V\) with the following properties:
Multiplication is associative: for all \(b \in \mathbb{F}\), \(a(b\ket{v}) = (ab)\ket{v}\), where \(ab\) is the standard multiplication of numbers in \(\mathbb{F}\).
\(1\ket{v} = \ket{v}\).
\(0 \ket{v} = \ket{0}\).
Distributive properties
For all \(a\in \mathbb{F}\) and \(\ket{v},\ket{w} \in V\), \(a(\ket{v} + \ket{w}) = a\ket{v} + a\ket{w}\);
for all \(a,b \in \mathbb{F}\) and \(\ket{v} \in V\), \((a + b)\ket{v} = a\ket{v} + b\ket{v}\).
From these rules we can also deduce the existence of an additive inverse: for every \(\ket{v} \in V\), there exists a vector \(\ket{-v} \in V\) such that \(\ket{v} + \ket{-v} = \ket{0}\). This can be seen by construction: set \(\ket{-v} = (-1) \ket{v}\). Then \(\ket{v} + \ket{-v} = 1\ket{v} + (-1)\ket{v} = (1 - 1)\ket{v} = 0\ket{v} = \ket{0}\).
Note that I have not yet introduced any notion of the length of a vector, whether two vectors are orthogonal, and so on. As we will see, this requires some additional structure.
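The axioms above are easy to check numerically for a concrete case such as \(\mathbb{C}^3\). The following sketch (using NumPy, with randomly chosen vectors; illustrative only, since floating-point arithmetic satisfies the axioms only up to rounding) verifies each property in turn:

```python
import numpy as np

# Numerical sanity check of the vector-space axioms in C^3.
# Vectors are random; equalities hold up to floating-point rounding.
rng = np.random.default_rng(0)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2 - 1j, 0.5 + 3j

assert np.allclose(v + w, w + v)                # addition is commutative
assert np.allclose((v + w) + y, v + (w + y))    # addition is associative
assert np.allclose(v + np.zeros(3), v)          # zero vector
assert np.allclose(a * (b * v), (a * b) * v)    # scalar multiplication is associative
assert np.allclose(1 * v, v) and np.allclose(0 * v, np.zeros(3))
assert np.allclose(a * (v + w), a * v + a * w)  # distributivity over vectors
assert np.allclose((a + b) * v, a * v + b * v)  # distributivity over scalars
assert np.allclose(v + (-1) * v, np.zeros(3))   # additive inverse
```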
Examples#
There are a number of more and less familiar examples.
\(\mathbb{C}^n\), the space of \(n\)-component column vectors \(\ket{v}\) with components \(c_1,\ldots,c_n\), with \(c_k \in \mathbb{C}\). We define vector addition and scalar multiplication in the usual way: componentwise, \((\ket{v} + \ket{w})_k = c_k + d_k\), where \(\ket{w}\) has components \(d_k\), and \((a\ket{v})_k = a c_k\) for any \(a \in \mathbb{C}\).
Note we can do the same with \(c_k, d_k \in \mathbb{R}\): then we have a real vector space. Here the zero vector is defined by \(c_k = 0\) for all \(k\).
The space of \(n\times n\) complex-valued matrices \(M_n(\mathbb{C})\). Addition and scalar multiplication are just matrix addition and scalar multiplication (for \(M \in M_n\), \(aM\) is elementwise multiplication by \(a\)).
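As a quick illustration (with arbitrary \(2\times 2\) matrices chosen for the example), the entrywise operations on matrices obey the same axioms as the column-vector case:

```python
import numpy as np

# n x n complex matrices form a vector space: addition and scalar
# multiplication act entry by entry, just as for column vectors.
M = np.array([[1 + 1j, 0], [2j, 3]], dtype=complex)
N = np.array([[0, 1], [1j, -1]], dtype=complex)
a = 2 - 1j

S = M + N   # matrix addition
P = a * M   # elementwise scalar multiplication
assert np.allclose(a * (M + N), a * M + a * N)  # the axioms hold here too
assert np.allclose(M + np.zeros((2, 2)), M)     # the zero matrix is the zero vector
```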
Degree-\(n\) polynomials over \(\mathbb{C}\): \(p(x) = \sum_{k=0}^{n} c_k x^k\) with \(c_k \in \mathbb{C}\), with addition and scalar multiplication working in the standard way. Note that this is clearly equivalent to \(\mathbb{C}^{n+1}\), the space of coefficient vectors \((c_0,\ldots,c_n)\). Note also that there is no reason for \(n\) to be finite – we could work with the space of all polynomials.
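The equivalence with coefficient vectors can be seen directly: polynomial addition and scaling act on the coefficients exactly like vector addition and scaling. A small sketch with NumPy's `Polynomial` class (the two polynomials are arbitrary illustrative choices):

```python
import numpy as np
from numpy.polynomial import Polynomial

# A polynomial is determined by its coefficient vector, and polynomial
# addition / scaling act on coefficients like vector addition / scaling.
p = Polynomial([1, 2j, 3])   # 1 + 2i*x + 3*x^2
q = Polynomial([0, 1, -1j])  # x - i*x^2

s = p + q
assert np.allclose(s.coef, p.coef + q.coef)      # addition is componentwise
assert np.allclose((2j * p).coef, 2j * p.coef)   # so is scalar multiplication
```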
Complex functions on an interval: let \(x \in [0,1]\). The set of all functions \(\psi(x)\) forms a vector space under the standard addition and scalar multiplication of functions if we choose the right boundary conditions. These boundary conditions yield vector spaces:
Dirichlet \(\psi(0) = \psi(1) = 0\).
Neumann \(\psi'(0) = \psi'(1) = 0\).
Periodic \(\psi(0) = \psi(1)\) (so \(\psi\) is a function on a circle). However, the boundary condition \(\psi(0) = a\), \(\psi(1) = b\) for nonzero \(a,b \in \mathbb{C}\) is not a vector space under standard addition of functions: the sum of two such functions does not satisfy the required boundary conditions and so is not in \(V\).
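We can make the closure argument concrete by discretizing the interval so that functions become arrays of samples (a crude but illustrative model): sums of Dirichlet functions stay Dirichlet, while fixed nonzero boundary values are destroyed by addition.

```python
import numpy as np

# Discretize the interval [0, 1]; functions become arrays of samples.
x = np.linspace(0.0, 1.0, 101)

# Dirichlet functions, psi(0) = psi(1) = 0: sums stay in the space.
psi1 = np.sin(np.pi * x)
psi2 = np.sin(2 * np.pi * x)
total = psi1 + psi2
assert np.isclose(total[0], 0.0) and np.isclose(total[-1], 0.0)

# Fixed nonzero boundary values are NOT preserved: two functions with
# phi(0) = phi(1) = 1 sum to one with boundary value 2, leaving the set.
phi1 = np.ones_like(x)
phi2 = np.ones_like(x)
assert (phi1 + phi2)[0] == 2.0
```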
Complex square-integrable functions on \(\mathbb{R}\): that is, functions \(\psi(x)\) for \(x\in \mathbb{R}\) such that \(\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx < \infty\).
Subspaces#
A set \(M \subset V\) is a vector subspace if it is itself a vector space under the same laws for addition and scalar multiplication. A standard example, for \(V = \mathbb{C}^3\), is any plane through the origin.
Similarly, any complex line through the origin, defined as the set of vectors with components \(v_k = a c_k\), for fixed \(c_k\in \mathbb{C}\) and all \(a \in \mathbb{C}\).
A counterexample is any complex line that does not run through the origin, defined as the set of all vectors with components \(v_k = a c_k + d_k\), with \(c_k,d_k\) fixed and the same for all vectors in the set, and \(a\) any complex number. The sum of two such vectors has components \((a_1 + a_2) c_k + 2 d_k\), which carries offset \(2 d_k\) rather than \(d_k\) and so is not in the set whenever at least one \(d_k \neq 0\) (with \(d\) not proportional to \(c\)).
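Here is a minimal numerical sketch of this counterexample, with a hypothetical choice of direction `c` and offset `d` (real vectors for simplicity): two points on the line sum to a point off it.

```python
import numpy as np

# The line {a*c + d} with d not proportional to c misses the origin.
c = np.array([1.0, 2.0, 0.0])  # direction of the line (hypothetical choice)
d = np.array([0.0, 0.0, 1.0])  # offset away from the origin

def on_line(v):
    # v lies on the line iff v - d is proportional to c
    return np.linalg.matrix_rank(np.stack([v - d, c])) <= 1

v1 = 2.0 * c + d
v2 = -1.0 * c + d
assert on_line(v1) and on_line(v2)
assert not on_line(v1 + v2)  # the sum carries offset 2d, so it leaves the line
```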
Linear independence#
Definition. A set of vectors \(\ket{v_1},\ldots,\ket{v_m} \in V\) is linearly independent if the only solution of \(\sum_{k=1}^{m} a_k \ket{v_k} = \ket{0}\) with \(a_k \in \mathbb{F}\) is \(a_1 = \cdots = a_m = 0\).
Let us give some examples.
\(\mathbb{C}^3\). These vectors are linearly independent:
Similarly, these are linearly independent:
However, these three are not:
as we can see because
In the space of functions on the interval \([0,1]\) satisfying periodic boundary conditions, the vectors
are linearly independent. Similarly, for \(n\)th order polynomials, the monomials \(\ket{k} = x^k\) are a linearly independent set of \(n\)th order polynomials.
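In \(\mathbb{C}^n\), linear independence can be tested mechanically: arrange the vectors as the columns of a matrix and check whether it has full column rank. A sketch with hypothetical vectors in \(\mathbb{C}^3\) (not the specific examples above):

```python
import numpy as np

# Vectors (as columns of a matrix) are linearly independent iff the
# matrix has full column rank.  Hypothetical vectors in C^3:
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=complex)  # three independent columns
B = np.array([[1, 1, 2],
              [0, 1, 1],
              [0, 0, 0]], dtype=complex)  # third column = first + second

assert np.linalg.matrix_rank(A) == 3  # independent
assert np.linalg.matrix_rank(B) == 2  # dependent: rank < number of vectors
```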
Dimension of a vector space#
Definition: the dimension of a vector space \(V\) is the maximum number of linearly independent vectors in \(V\). Any such maximal collection is called a basis.
Theorem: Given a basis \(\ket{k}\), \(k = 1,\ldots,n\), then for any vector \(\ket{v}\) there is a unique set of complex numbers \(a_k\), \(k = 1,\ldots,n\), such that \(\ket{v} = \sum_{k=1}^{n} a_k \ket{k}\).
Proof. Assume the contrary: that \(\ket{v} = \sum_{k=1}^{n} a_k \ket{k} = \sum_{k=1}^{n} b_k \ket{k}\) for two different sets of numbers \(a_k,b_k \in \mathbb{C}\). The point is that if this is true, then subtracting the two expansions gives \(\sum_{k=1}^{n} (a_k - b_k) \ket{k} = \ket{0}\), but by the linear independence of the basis this can hold only if \(a_k = b_k\) for every \(k\), contradicting our supposition. Thus \(a_k = b_k\).
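Concretely, finding the expansion coefficients means solving a linear system: put the basis vectors as the columns of a matrix and solve for the coefficient vector. A sketch with a hypothetical basis of \(\mathbb{C}^3\):

```python
import numpy as np

# Put the basis vectors as columns of B; expanding |v> in the basis means
# solving B @ coeffs = v, which has exactly one solution when B is invertible.
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=complex)  # columns: a (hypothetical) basis of C^3
v = np.array([1 + 1j, 2, -1j])

coeffs = np.linalg.solve(B, v)     # the unique expansion coefficients
assert np.allclose(B @ coeffs, v)  # reconstructs |v> exactly
```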