The rotation group ends up being a particularly powerful tool for studying three-dimensional systems, most famously atomic systems, for which the Coulomb potential is rotationally invariant (independent of the direction of the separation between the charged objects).
In general, rotations acting on vectors in $\mathbb{R}^d$ are defined as real linear transformations that preserve the norm. Thus
$$|R\vec{V}|^2 = (R\vec{V})^T (R\vec{V}) = \vec{V}^T R^T R\, \vec{V} = \vec{V}^T \vec{V} = |\vec{V}|^2.$$
The last equality holds for every $\vec{V}$ if and only if $R^T R = 1$, that is, if $R^T = R^{-1}$. Such matrices are called orthogonal.
Successive rotations are implemented by multiplying the corresponding matrices. You can convince yourself quickly that the product of two orthogonal matrices is itself orthogonal:
$$(R_1 R_2)^T (R_1 R_2) = R_2^T R_1^T R_1 R_2 = R_2^T R_2 = 1.$$
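As a quick numerical illustration (a sketch in NumPy; the helper `rot_z` and the particular angles are my own choices, not anything from the text), we can check that a rotation satisfies $R^T R = 1$, preserves norms, and that the product of two orthogonal matrices is again orthogonal:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R1, R2 = rot_z(0.7), rot_z(-1.3)

# Orthogonality: R^T R = 1, so norms are preserved.
assert np.allclose(R1.T @ R1, np.eye(3))
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(np.linalg.norm(R1 @ v), np.linalg.norm(v))

# Closure: the product of two orthogonal matrices is itself orthogonal.
R12 = R1 @ R2
assert np.allclose(R12.T @ R12, np.eye(3))
print("orthogonality and closure checks passed")
```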
The corresponding group of rotations is thus the group of orthogonal matrices under matrix multiplication, which is known as $O(d)$ or the “orthogonal group”.
Since $\det(R^T R) = (\det R)^2 = 1$, we have $\det R = \pm 1$. The matrices with opposite determinants are not continuously deformable into each other, and only the matrices with positive determinant are continuously deformable to the identity matrix. The group of $d\times d$ orthogonal matrices with unit determinant is called $SO(d)$ or the “special orthogonal group”. It is this group which is usually taken to be the rotation group.
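The two determinant classes are easy to see concretely; in the sketch below (the specific matrices are illustrative), a rotation has determinant $+1$, a reflection of one axis has determinant $-1$, and multiplying the two stays in the $-1$ class:

```python
import numpy as np

c, s = np.cos(0.4), np.sin(0.4)
rotation = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])     # an element of SO(3)
reflection = np.diag([1.0, 1.0, -1.0])     # orthogonal, but it flips the z-axis

for M in (rotation, reflection, rotation @ reflection):
    assert np.allclose(M.T @ M, np.eye(3))    # all three matrices lie in O(3)
    print(f"det = {np.linalg.det(M):+.0f}")   # prints +1, -1, -1
```

Only the $\det R = +1$ class contains the identity, which is why $SO(d)$, rather than all of $O(d)$, is identified with the rotations.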
We can describe the general matrix more explicitly as follows. I claim that any rotation can be described as the rotation of a vector about some axis by some angle $\theta$. We will specify the axis by the unit vector $\hat{n}$.
As shown in the figure above, we can break up the vector $\vec{V} = \vec{V}_\parallel + \vec{V}_\perp$, where parallel and perpendicular are with respect to $\hat{n}$:
$$\vec{V}_\parallel = (\hat{n}\cdot\vec{V})\,\hat{n}, \qquad \vec{V}_\perp = \vec{V} - (\hat{n}\cdot\vec{V})\,\hat{n}.$$
We will express $\vec{V}'$ in terms of its projection onto three orthonormal vectors: $\hat{n}$, $\hat{V}_\perp = \vec{V}_\perp / |\vec{V}_\perp|$, and a third vector orthogonal to both of them. Since $\hat{n}\times\vec{V}_\parallel = 0$, the vector $\hat{n}\times\vec{V} = \hat{n}\times\vec{V}_\perp$ is orthogonal to both $\hat{n}$ and $\hat{V}_\perp$. Thus, we can write
$$\vec{V}' = \vec{V}_\parallel + \cos\theta\,\vec{V}_\perp + \sin\theta\,(\hat{n}\times\vec{V}),$$
or, in components, $V'_I = R_{IJ}(\hat{n},\theta)\, V^J$ with
$$R_{IJ}(\hat{n},\theta) = \cos\theta\,\delta_{IJ} + (1-\cos\theta)\, n_I n_J - \sin\theta\,\epsilon_{IJK}\, n^K.$$
Here $\epsilon_{IJK}$ is the totally antisymmetric tensor defined in the appendices. Note that I have raised and lowered various indices. In the present case, this operation does not change the numerical value of any of the vectors or tensors. I leave it as an exercise to show that this is an orthogonal matrix.
You can see that there is a 3-parameter family of such matrices, labeled by the angle $\theta$ and by the unit vector $\hat{n}$ (which itself carries two independent parameters). As it happens, the demand of orthogonality leaves only 3 independent parameters specifying a $3\times 3$ orthogonal matrix, and in fact all members of $SO(3)$ can be written in this way.
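The construction above is straightforward to implement; here is a sketch in NumPy (the helper name `rotation_matrix` is mine, and I use the right-hand-rule sign convention of the formula above), which also carries out the exercise of checking numerically that the result is orthogonal with unit determinant:

```python
import numpy as np

def rotation_matrix(n, theta):
    """Rotation by theta about the unit axis n:
    R = cos(theta) 1 + (1 - cos(theta)) n n^T + sin(theta) [n]_x,
    where [n]_x v = n x v."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])    # cross-product matrix: K @ v == n x v
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * K)

n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
theta = 0.9
R = rotation_matrix(n, theta)

# The exercise: R is orthogonal with unit determinant, i.e. an element of SO(3).
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# Check the geometric decomposition V' = V_par + cos(theta) V_perp + sin(theta) (n x V).
V = np.array([0.3, -2.0, 1.1])
V_par = np.dot(n, V) * n
V_perp = V - V_par
assert np.allclose(R @ V, V_par + np.cos(theta) * V_perp + np.sin(theta) * np.cross(n, V))
print("axis-angle rotation checks passed")
```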
For an infinitesimal angle $\theta$, the matrix above reduces to $R_{IJ} \approx \delta_{IJ} - \theta\,\epsilon_{IJK}\, n^K$. Since this is linear in $\hat{n}$, and we can write a general unit vector $\hat{n}$ as a linear combination of $\hat{x},\hat{y},\hat{z}$, we can generate general infinitesimal transformations from rotations about each of these axes. Now writing $R(\hat{n}) = 1 - \frac{i}{\hbar}\theta\, J_I n^I$, we have
$$(J_K)_{IJ} = -i\hbar\,\epsilon_{IJK}.$$
We will see later why we use this normalization by $\hbar$. Finally, recall that the group structure near the identity can be built from the commutation relations of the individual matrices. For these, you can show via brute force that
$$[J_I, J_J] = i\hbar\,\epsilon_{IJK}\, J_K.$$
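Brute force is easy to delegate to the computer; the sketch below (in units where $\hbar = 1$; the array layout is my own choice) builds the generators $(J_K)_{IJ} = -i\hbar\,\epsilon_{IJK}$ and verifies both the infinitesimal form of the rotation and the commutation relations:

```python
import numpy as np

hbar = 1.0                                   # work in units where hbar = 1
eps = np.zeros((3, 3, 3))                    # totally antisymmetric tensor eps_{IJK}
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Generators in the convention R(n) = 1 - (i/hbar) theta J_I n^I: (J_K)_{IJ} = -i hbar eps_{IJK}.
J = [-1j * hbar * eps[:, :, K] for K in range(3)]

# A small rotation about the z-axis agrees with 1 - (i/hbar) theta J_3 to first order.
theta = 1e-6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
assert np.allclose(Rz, np.eye(3) - 1j * theta * J[2] / hbar, atol=1e-10)

# Brute-force check of [J_I, J_J] = i hbar eps_{IJK} J_K.
for a in range(3):
    for b in range(3):
        commutator = J[a] @ J[b] - J[b] @ J[a]
        expected = 1j * hbar * sum(eps[a, b, c] * J[c] for c in range(3))
        assert np.allclose(commutator, expected)
print("commutation relations verified")
```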
Given all of this, we can build up any finite rotation from these matrices by a succession of infinitesimal rotations. That is, consider a rotation about $\hat{n}$ by an angle $\theta$. We can achieve this with $N$ successive rotations about the same axis, each with angle $\theta/N$. As $N\to\infty$, each rotation is well approximated by
$$R(\hat{n},\theta/N) \approx 1 - \frac{i}{\hbar}\frac{\theta}{N}\, J_I n^I,$$
so that
$$R(\hat{n},\theta) = \lim_{N\to\infty}\left(1 - \frac{i}{\hbar}\frac{\theta}{N}\, J_I n^I\right)^{N} = e^{-i\theta J_I n^I/\hbar}.$$
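As a numerical check of this limit (a sketch; the axis, angle, tolerances, and the use of `scipy.linalg.expm` are my own choices), exponentiating the generators indeed produces a finite rotation about $\hat{n}$ by $\theta$, and compounding many small rotations converges to the same matrix:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
J = np.array([-1j * hbar * eps[:, :, K] for K in range(3)])   # (J_K)_{IJ} = -i hbar eps_{IJK}

n = np.array([0.0, 0.6, 0.8])      # a unit axis
theta = 2.0
nJ = np.einsum('k,kij->ij', n, J)  # J_I n^I

# The finite rotation as the exponential of the generators.
R = expm(-1j * theta * nJ / hbar)
assert np.allclose(R.imag, 0.0)                   # the result is a real matrix ...
R = R.real
assert np.allclose(R.T @ R, np.eye(3))            # ... which is orthogonal,
assert np.isclose(np.linalg.det(R), 1.0)          # ... has unit determinant,
assert np.allclose(R @ n, n)                      # ... leaves the axis fixed,
assert np.isclose(np.trace(R), 1.0 + 2.0 * np.cos(theta))   # ... and rotates by theta.

# The same matrix as the limit of N small rotations.
N = 10_000
step = np.eye(3) - 1j * (theta / N) * nJ / hbar
assert np.allclose(np.linalg.matrix_power(step, N).real, R, atol=1e-3)
print("exp(-i theta J.n / hbar) reproduces the finite rotation")
```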
The group of $n\times n$ unitary matrices is called $U(n)$. In general, the determinant of such a matrix is a pure phase. The special unitary group $SU(n)$ consists of all such matrices with determinant 1. Since the determinant of a product of matrices is the product of their determinants, this is a genuine subgroup of $U(n)$.
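These statements are easy to check numerically; in the sketch below (the QR-based construction of a random unitary matrix and the helper names are my own choices), we verify unitarity, the pure-phase determinant, and closure of $SU(n)$ under multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random n x n unitary matrix from the QR decomposition of a complex Gaussian matrix."""
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

def to_special(U):
    """Divide out an n-th root of the determinant so that det(U) = 1."""
    n = U.shape[0]
    return U / np.linalg.det(U) ** (1.0 / n)

U = random_unitary(3)
assert np.allclose(U.conj().T @ U, np.eye(3))     # unitarity: U^dagger U = 1
assert np.isclose(abs(np.linalg.det(U)), 1.0)     # the determinant is a pure phase

V1, V2 = to_special(U), to_special(random_unitary(3))
assert np.isclose(np.linalg.det(V1), 1.0)         # V1 and V2 lie in SU(3) ...
assert np.isclose(np.linalg.det(V1 @ V2), 1.0)    # ... and so does their product
print("U(n) and SU(n) checks passed")
```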
We wish to describe the simplest nontrivial special unitary group, $SU(2)$. Near the identity, all such matrices can be written in terms of the Pauli matrices $\sigma_I$ as
$$U \approx 1 - \frac{i}{\hbar}\theta\, J_I n^I, \qquad J_I \equiv \frac{\hbar}{2}\sigma_I,$$
which takes the same form as the infinitesimal rotation matrices in $SO(3)$, and the generators $J_I$ obey the same commutation relations, $[J_I, J_J] = i\hbar\,\epsilon_{IJK} J_K$. It is tempting to conclude that $SO(3)$ and $SU(2)$ are the same group. As it happens, this is not quite true; rather (as we will see), $SU(2)$ is the double cover of $SO(3)$: there is a natural map from $SU(2)$ onto $SO(3)$ that is 2-to-1, with $U$ and $-U$ corresponding to the same rotation.
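The 2-to-1 map can be made explicit with the standard correspondence $R_{IJ} = \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma_I\, U \sigma_J\, U^\dagger\right)$. In the sketch below (the function names and the particular axis and angle are my own choices), $U$ and $-U$ map to the same rotation, and a $2\pi$ rotation returns to the identity in $SO(3)$ but to $-1$ in $SU(2)$:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma_1, sigma_2, sigma_3.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2(n, theta):
    """SU(2) element exp(-i theta n.sigma / 2) for a unit axis n."""
    return expm(-0.5j * theta * np.einsum('k,kab->ab', n, sigma))

def to_so3(U):
    """The 2-to-1 map SU(2) -> SO(3): R_IJ = (1/2) tr(sigma_I U sigma_J U^dagger)."""
    return np.real(0.5 * np.einsum('iab,bc,jcd,da->ij', sigma, U, sigma, U.conj().T))

n = np.array([1.0, 0.0, 0.0])
theta = 1.2
U = su2(n, theta)
R = to_so3(U)

# R is a genuine rotation about n by theta.
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R @ n, n)
assert np.isclose(np.trace(R), 1.0 + 2.0 * np.cos(theta))

# The map is 2-to-1: U and -U give the same rotation.
assert np.allclose(to_so3(-U), R)

# A 2 pi rotation is the identity in SO(3) but equals -1 in SU(2).
assert np.allclose(to_so3(su2(n, theta + 2.0 * np.pi)), R)
assert np.allclose(su2(n, 2.0 * np.pi), -np.eye(2))
print("SU(2) is a double cover of SO(3)")
```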