Adjoints and inner products#
Dual vector spaces#
Definition#
Let \(V\) be a vector space over \(\CC\). The dual vector space \(V^*\) is the space of all linear maps \(f: V \to \CC\).
Properties and notation#
\(V^*\) is a vector space.
Consider linear maps \(f_{1,2} \in V^*\) and \(a,b \in \CC\). Then we can define a linear map \(a f_1 + b f_2\) by its action on a vector \(\ket{v}\):

\[(a f_1 + b f_2)(\ket{v}) \equiv a f_1(\ket{v}) + b f_2(\ket{v}).\]
One can show that this defines a linear map: for any \(c,d\in \CC\) and \(\ket{v_{1,2}} \in V\),
which follows from \(f_{1,2}\) being linear maps.
\(\text{dim}(V^*) = \text{dim}(V)\). It is instructive to show this. Consider a basis \(\ket{i}\) of \(V\), \(i = 1,\ldots,d = \text{dim}(V)\). A general vector \(\ket{v}\) can be expressed as

\[\ket{v} = \sum_{i=1}^d c_i \ket{i}\]
for a unique set of coefficients \(c_i \in \CC\). Now
Thus, the map \(f\) is completely specified by the \(d\) complex numbers \(f(\ket{i}) \in \CC\). Thus, if we define \(f_i\) by \(f_i(\ket{j}) = \delta_{ij}\), we can show that each \(f_i\) is a linear map. Furthermore, \(f_i\) is linearly independent of \(f_{j \neq i}\) (you should convince yourself of this).
Any \(f \in V^*\) can always be written as \(f = \sum_{i = 1}^d f(\ket{i})\, f_i\), so this basis is maximal, and the \(f_i\) form a complete basis for \(V^*\). There are \(d\) such independent basis maps, so \(\text{dim}(V^*) = d\).
Notation. We can express \(f \in V^*\) as a “bra vector” \(\bra{f}\). We then call elements \(\ket{v} \in V\) “ket vectors”. We can then write

\[f(\ket{v}) \equiv \brket{f}{v}\]
as a “bra(c)ket”. I didn’t do this, please blame Dirac. Anyhow, the notation is unfortunately standard. With this notation we can define the linear structure of \(V^*\) as
Finally, the dual vector space of \(V^*\) is \(V\) itself, i.e. \((V^*)^* = V\): any \(\ket{v}\) defines a map \(V^* \to \CC\) via \(\ket{v}: \bra{f} \to \brket{f}{v}\).
Example#
If \(V = \CC^3\) is represented as the space of column vectors, we can represent \(V^*\) as the set of row vectors. That is, consider a basis
We can define any linear map \(f\) by \(f(\ket{i}) = c_i\). Then if \(\ket{v} = \sum_i a_i \ket{i}\),
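As a minimal numerical sketch of this example (the particular values of \(c_i\) and \(a_i\) below are my own illustration, not from the notes), a dual vector is a row vector, a ket is a column vector, and evaluating \(f(\ket{v})\) is row-times-column multiplication:

```python
import numpy as np

# A dual vector f is fixed by the numbers c_i = f(|i>); as a row vector
# its entries are just the c_i (illustrative values).
c = np.array([2.0, -1.0, 3.0j])
f = c.reshape(1, 3)

# A ket |v> = sum_i a_i |i> is a column vector with entries a_i.
a = np.array([1.0, 1.0j, 2.0])
v = a.reshape(3, 1)

# f(|v>) = sum_i a_i c_i is the row-times-column product.
fv = (f @ v).item()
assert np.isclose(fv, np.sum(a * c))
```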
Adjoint maps#
Since \(\dim V = \dim V^*\), we expect that there is an isomorphism (a map that is one-to-one and onto) between them. Choosing such a map leads to a choice of “inner product” on \(V\) itself: a way of assigning to \(\ket{v}\) a number corresponding to some notion of its length.
Definition#
Let \(V\) be a vector space over \(\CC\). An adjoint map is a map \({\cal A}: V \to V^*\), which we denote by \({\cal A}\ket{v} \equiv \bra{f_v}\), with the properties
Skew symmetry: \(\brket{f_w}{v} = \brket{f_v}{w}^*\).
Positive semi-definiteness: \(\brket{f_v}{v} \geq 0\).
In general we write \(\bra{f_v} = \bra{v}\).
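The two defining properties are easy to check for the standard adjoint map on \(\CC^3\) (this concrete choice of inner product is an assumption for illustration; `np.vdot` conjugates its first argument, which is exactly the bra):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

def braket(x, y):
    # Standard adjoint map on C^3: <x|y> = sum_i x_i^* y_i.
    return np.vdot(x, y)

# Skew symmetry: <w|v> = <v|w>^*
assert np.isclose(braket(w, v), np.conj(braket(v, w)))

# Positive semi-definiteness: <v|v> is real and non-negative.
assert np.isclose(braket(v, v).imag, 0.0)
assert braket(v, v).real >= 0
```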
Properties#
Antilinearity. Using skew symmetry you can show that for \(\ket{v_{1,2}} \in V\), \(a, b \in \CC\),

\[{\cal A}: a \ket{v_1} + b \ket{v_2} \to a^* \bra{v_1} + b^* \bra{v_2}.\]
Schwarz inequality
Triangle inequality
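Both inequalities can be spot-checked numerically for the standard inner product on \(\CC^4\) (the vectors below are random illustrations, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)

def norm(x):
    # ||x|| = sqrt(<x|x>); the inner product of x with itself is real.
    return np.sqrt(np.vdot(x, x).real)

# Schwarz inequality: |<v|w>| <= ||v|| ||w||
assert abs(np.vdot(v, w)) <= norm(v) * norm(w)

# Triangle inequality: ||v + w|| <= ||v|| + ||w||
assert norm(v + w) <= norm(v) + norm(w)
```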
Examples#
\(V = \CC^3\).
If
then
\(V = M_2(\CC)\).
\(V = L^2(\CR)\), the space of complex square-integrable functions on the real line, where \(\ket{\psi}\) is represented by the function \(\psi(x)\). A good inner product, which defines an adjoint map, is
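Numerically, the \(L^2\) inner product becomes a discretized integral. A quick sketch (the grid, cutoff, and the choice of a normalized Gaussian test function are my own illustration):

```python
import numpy as np

# Grid for a trapezoid-rule approximation of integrals over the real line.
x = np.linspace(-10.0, 10.0, 4001)

def inner(chi, psi):
    # <chi|psi> = integral of chi(x)^* psi(x) dx (trapezoid rule).
    h = np.conj(chi) * psi
    return np.sum((h[1:] + h[:-1]) / 2 * np.diff(x))

# A normalized Gaussian: psi(x) = pi^(-1/4) exp(-x^2 / 2)
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
assert np.isclose(inner(psi, psi), 1.0)
```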
Additional definitions and a comment#
\(\brket{v}{w}\) is the inner product of \(\ket{v}\), \(\ket{w}\).
\(||v||^2 = \brket{v}{v}\) is called the norm of \(\ket{v}\).
\(V\) with an adjoint map is called an inner product space.
An inner product space (over \(\CC\)) is called a Hilbert space if either:
\(\text{dim}(V) < \infty\), or
Cauchy sequences in \(V\) are complete.
To explain the last possibility, note that \(\ket{v_i}\), \(i = 1,\ldots,\infty\), is a Cauchy sequence if for any \(\eps > 0\) there exists an integer \(N\) such that
Such a sequence is complete if it converges to a vector in \(V\).
There is no unique adjoint map.
Actions of operators#
Given a linear operator \(A\) and \(\ket{v} \in V\), \(A\ket{v}\) is a vector and \(\bra{w} A \ket{v}\) is a complex number. We can therefore define \(\bra{A w} \equiv \bra{w} A\) such that \(\brket{A w}{v} = \bra{w} A \ket{v}\).
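Concretely, the number \(\bra{w} A \ket{v}\) can be computed either by first acting with \(A\) on the ket or by first acting on the bra; associativity of matrix multiplication makes the two agree (the matrix and vectors below are illustrative):

```python
import numpy as np

A = np.array([[1, 2j], [0, 3]], dtype=complex)
v = np.array([1, 1j])
w = np.array([2, -1], dtype=complex)

# <w|A|v>: apply A to the ket, then pair with the bra.
sandwich = np.conj(w) @ (A @ v)

# Equivalently, act with the bra first: (<w|A)|v>.
assert np.isclose(sandwich, (np.conj(w) @ A) @ v)
```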
Orthonormal bases#
Definitions#
Let \(V\) be a vector space over \(\CC\).
\(\ket{v} \in V\) is a normalized vector if \(||v||^2 = \brket{v}{v} = 1\).
\(\ket{v},\ket{w} \in V\) are orthogonal if \(\brket{v}{w} = 0\).
An orthonormal basis is a basis \(\ket{i} \in V\), \(i = 1,\ldots,d = \text{dim} V\), such that for \({\cal A}: \ket{i} \to \bra{i}\), \(\brket{i}{j} = \delta_{ij}\).
Examples#
We can write \(\ket{v} = \sum_i v_i \ket{i}\); the antilinearity of the adjoint map means that \(\bra{v} = \sum_i \bra{i} v^*_i\). This means that
Similarly, for \(\ket{w} = \sum_i w_i \ket{i}\),
This works if we identify
and thus
The basis element \(\ket{i}\) is a column vector with all zeros except a \(1\) in the \(i\)th row.
If \(V = M_2(\CC)\), the space of \(2\times 2\) complex matrices, a natural inner product is
where \(m,n\) are \(2\times 2\) matrices. This clearly defines an adjoint map, taking each ket \(\ket{m}\) to a linear map acting on \(\ket{n}\). An orthonormal basis is:
Consider the vector space of complex functions on the interval \([0,L]\) with Dirichlet boundary conditions. You can convince yourself that the basis
is orthonormal with respect to the inner product (176)
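The orthonormality of the Dirichlet sine basis \(\sqrt{2/L}\,\sin(n\pi x/L)\) can be verified by discretizing the integral (grid resolution below is my own choice):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)

def phi(n):
    # Basis function sqrt(2/L) sin(n pi x / L), vanishing at x = 0 and x = L.
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

def inner(f, g):
    # Trapezoid-rule approximation of integral_0^L f(x)^* g(x) dx.
    h = np.conj(f) * g
    return np.sum((h[1:] + h[:-1]) / 2 * np.diff(x))

assert np.isclose(inner(phi(1), phi(1)), 1.0, atol=1e-6)  # normalized
assert np.isclose(inner(phi(1), phi(2)), 0.0, atol=1e-6)  # orthogonal
```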
The Gram-Schmidt machine#
Theorem: every finite-dimensional inner product space, or infinite-dimensional inner product space with a countable basis, has an orthonormal basis.
Proof (partial): Given a basis \(\ket{v_1},\ket{v_2},\ldots,\ket{v_d}\), we can construct an orthonormal basis iteratively. Define
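The iteration can be sketched in a few lines of numpy: subtract from each new vector its projections onto the basis kets built so far, then normalize (the input vectors below are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent kets (1-D arrays)."""
    basis = []
    for v in vectors:
        # Remove the component along each previously built basis ket.
        for e in basis:
            v = v - np.vdot(e, v) * e
        # Normalize what remains.
        basis.append(v / np.sqrt(np.vdot(v, v).real))
    return basis

vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(vecs)

# The Gram matrix of inner products should be the identity.
G = np.array([[np.vdot(a, b) for b in e] for a in e])
assert np.allclose(G, np.eye(3))
```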
Matrix elements of operators#
Since the \(\ket{i}\) form a basis, we can write the action of operators in this basis: \(A\ket{j} = A_{ij}\ket{i}\). As notation, we will sometimes write
We understand this to mean
where \(\ket{v} = \sum_i v_i \ket{i}\), and for dual vectors \(\bra{v} = \sum_i \bra{i} v_i^*\),
Thus
A particularly important example is the identity operator \(\bf{1}\) for which \(\bf{1}_{ij} = \delta_{ij}\). This can be represented as above by:
for any orthonormal basis. This is called a resolution of the identity, associated to a given basis.
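The resolution of the identity, \(\sum_i \ket{i}\bra{i} = {\bf 1}\), is easy to see in matrix form: each \(\ket{i}\bra{i}\) is an outer product, and the claim holds for any orthonormal basis (the dimension and the random unitary below are illustrative):

```python
import numpy as np

d = 4
# Standard orthonormal basis |i> of C^4: columns of the identity.
kets = [np.eye(d, dtype=complex)[:, i] for i in range(d)]

# Resolution of the identity: sum_i |i><i| = 1
P = sum(np.outer(k, np.conj(k)) for k in kets)
assert np.allclose(P, np.eye(d))

# The same holds for any orthonormal basis, e.g. the columns of a unitary.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
P2 = sum(np.outer(Q[:, i], np.conj(Q[:, i])) for i in range(d))
assert np.allclose(P2, np.eye(d))
```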
In this basis, an important operation on an operator \(A\) is the transpose. That is, given a linear operator \(A\), we can define the transpose \(A^T\) via its matrix elements
In particular, we can write
Adjoints of operators#
The vector \(A\ket{v} \equiv \ket{Av} = A_{lk} v_k \ket{l}\) has a natural adjoint
which defines the Hermitian conjugate \(A^{\dagger}\). We can either define it as \({\cal A}: A\ket{v} \to \bra{v} A^{\dagger}\) or via its matrix elements in an orthonormal basis,
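In an orthonormal basis the Hermitian conjugate is just the conjugate transpose, \((A^{\dagger})_{ij} = A_{ji}^*\), and it satisfies \(\bra{w} A \ket{v} = \bra{v} A^{\dagger} \ket{w}^*\). A quick numerical check (matrix and vectors are illustrative):

```python
import numpy as np

A = np.array([[1, 2j], [3, 4]], dtype=complex)
Adag = A.conj().T   # (A^dagger)_{ij} = A_{ji}^*

rng = np.random.default_rng(2)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=2) + 1j * rng.normal(size=2)

# <w|A|v> = (<v|A^dagger|w>)^*
assert np.isclose(np.vdot(w, A @ v), np.conj(np.vdot(v, Adag @ w)))
```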
Hermitian and unitary operators#
Definition. A Hermitian operator is an operator \(A = A^{\dagger}\).
Note that this does not mean the operator has real matrix elements. The following operator on \(\CC^2\) is Hermitian:
Definition. A unitary operator is an operator \(U\) such that \(U^{\dagger} = U^{-1}\).
An important property of such an operator is that it is norm-preserving:

\[||U v||^2 = \bra{v} U^{\dagger} U \ket{v} = \brket{v}{v} = ||v||^2.\]
An example of a unitary operator acting on \(\CC^2\):
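A rotation matrix on \(\CC^2\) is a simple unitary (this particular example is my own illustration); both the defining property \(U^{\dagger} U = {\bf 1}\) and norm preservation can be checked directly:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# U^dagger U = 1
assert np.allclose(U.conj().T @ U, np.eye(2))

# Norm preservation: <Uv|Uv> = <v|v>
v = np.array([1.0, 2j])
assert np.isclose(np.vdot(U @ v, U @ v), np.vdot(v, v))
```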
Two nontrivial examples for \(L^2(\CR)\):
The position operator \({\hat x}: \psi(x) \to x \psi(x)\). Since

\[\bra{\chi} {\hat x} \ket{\psi} = \int dx\, \chi^*(x)\, x\, \psi(x) = \left( \int dx\, \psi^*(x)\, x\, \chi(x) \right)^* = \bra{\psi} {\hat x} \ket{\chi}^*,\]
as expected for a Hermitian operator.
The operator \(\hat{p} = - i\hbar \frac{\del}{\del x}\), acting on \(\psi(x)\).
The second line follows from integration by parts, and the boundary terms vanish because \(\psi\) is square-integrable. In other words, for every \(\ket{\psi},\ket{\chi}\), \(\bra{\chi} {\hat p} \ket{\psi} = \bra{\chi} {\hat p}^{\dagger} \ket{\psi}\). From this we can deduce that \({\hat p} = {\hat p}^{\dagger}\).
The same argument follows for the case of complex functions with periodic boundary conditions. For Dirichlet boundary conditions, \({\hat p}\) fails to be an operator on the Hilbert space, as the derivative of a function with Dirichlet boundary conditions does not in general satisfy Dirichlet boundary conditions. (Similarly for Neumann boundary conditions.)
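The periodic case can be illustrated with a finite-dimensional stand-in (a sketch under assumptions: \(\hbar = 1\), unit grid spacing, central differences). Discretizing \(\hat p = -i\, d/dx\) with periodic boundary conditions gives an exactly Hermitian matrix, mirroring the integration-by-parts argument above:

```python
import numpy as np

# Discretize p = -i d/dx on N grid points with periodic boundary conditions,
# using a central difference (hbar = 1, grid spacing h = 1 for simplicity).
N = 8
D = np.zeros((N, N), dtype=complex)
for j in range(N):
    D[j, (j + 1) % N] = 0.5    # forward neighbor (wraps around)
    D[j, (j - 1) % N] = -0.5   # backward neighbor (wraps around)
p = -1j * D

# D is real and antisymmetric, so p = -iD equals its conjugate transpose.
assert np.allclose(p, p.conj().T)
```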