From: Chris Hillman
Newsgroups: sci.physics.relativity
Subject: What is Lie Theory?
Date: Sun, 17 Sep 2000 17:46:06 -0700
This post is an abbreviated repost of an old post to sci.physics.research
(if memory serves correctly). I am reposting it because this stuff forms
part of the background of my planned series of posts on Cartanian
manifolds and how these can be used to construct "classical" Yang-Mills
theories, which are rather beautiful classical field theories which
generalize Maxwell's theory of electromagnetism, and which also mesh
neatly with our grand phenomenological theory of gravitation, aka gtr.
A Lie group is a group which is also a smooth manifold, such that the maps
(g1, g2) -> g1 g2     and     g -> g^(-1)
are smooth. The most fundamental and amazing fact about Lie groups is
that every Lie group (a -nonlinear- creature!) is almost completely
determined by an associated object called its Lie algebra (a -linear-
creature, and thus much easier to work with). What I want to do in this
post is to give readers some intuition for various operations which one
encounters in elementary Lie theory, usually (alas) without being given
the simple and memorable intuitive meaning of each operation.
Let's start with a simple but nontrivial example: SO(2), the group of all
rotations around the origin of E^2, is a Lie group. As a manifold, it is
nothing other than the circle! To see this, notice that you can
parametrize SO(2) by a single parameter t:
[ cos t -sin t ] (0)
[ sin t cos t ]
And of course ( cos t, -sin t, sin t, cos t) is literally a circle in E^4.
To find the Lie algebra so(2) of SO(2), take equation (0), differentiate
with respect to t, and set t = 0, which gives the "generator" of the Lie
algebra so(2), namely
A = [ 0 -1 ]
[ 1 0 ]
The Lie algebra so(2) consists of all real multiples tA. We can get from
the Lie algebra (which is a vector space) back to the Lie group by
exponentiation. That is, plug tA into the Maclaurin series
exp(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + ....
Notice that A^2 = -I, so
exp tA = I + tA + (tA)^2/2 + (tA)^3/6 + (tA)^4/24 + ....
= I + tA + t^2/2 A^2 + t^3/6 A^3 + ....
= I + tA - t^2/2 I - t^3/6 A + ...
= (1 - t^2/2 + ...) I + (t - t^3/6 + ...) A
= cos t I + sin t A
= [ cos t -sin t ]
[ sin t cos t ]
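If you want to check this numerically, here is a minimal pure-Python sketch (the helper names mat_mul and mat_exp are my own, not library functions) which sums a truncated Maclaurin series for exp(tA) and compares it with the rotation matrix:

```python
import math

# Numerical check of the derivation above: summing the (truncated)
# Maclaurin series for exp(tA), where A generates so(2), should
# reproduce the rotation matrix [[cos t, -sin t], [sin t, cos t]].

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # exp(A) ~ I + A + A^2/2! + ... + A^(terms-1)/(terms-1)!
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

t = 0.7
A = [[0.0, -1.0], [1.0, 0.0]]
E = mat_exp([[t * x for x in row] for row in A])
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
err = max(abs(E[i][j] - R[i][j]) for i in range(2) for j in range(2))
```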
The essential point here is that exponentiating matrices is a great idea,
because "fractional powers" make perfect sense: if Q = exp(A), then
Q^t = (exp A)^t = exp(tA)
Also, just as for real numbers, exponentiation converts a linear operation
(addition) into a nonlinear one (multiplication):
Q^(s+t) = exp((s+t)A) = (exp A)^s (exp A)^t = Q^s Q^t
If you have had a course in ODE's, you might see that this is connected to
the notion of the "flow lines" in E^2 corresponding to the action by SO(2)
on E^2. Namely, concentric circles around the origin. In the study of
ODEs, you might remember that given x** = -x, which is a -second- order
equation, we can put y = -x* to get a system of -first- order equations in
"phase space" (x,y), namely:
x* = -y
y* = x
or
[x]* [0 -1] [x]
[y] = [1 0] [y]
where you recognize our friend the matrix A. Even better, A^2 = -I is
like x** = -x, and we can solve x** = -x by using matrix exponentiation:
[x(t)]            [x(0)]   [ cos t  -sin t ] [x(0)]
[y(t)]  = exp(tA) [y(0)] = [ sin t   cos t ] [y(0)]
where you recognize exp(tA) from before. That is, the solution to x** = -x is
x(t) = a cos(t) + b sin(t)
But--- and this is the point--- simple harmonic motions, which obey the
second order equation x** = -x, correspond to flowing at unit speed around
concentric circles in phase space (x,x*). And we got from the Lie algebra
so(2) to the action by SO(2) on phase space by exponentiating. This is
getting close to the historical origin of Lie theory!
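The phase-space picture can also be checked directly. The sketch below (same hand-rolled helpers as before; all names are mine) flows the initial data with exp(tA), using the sign convention y = -x* so that the matrix A appears, and compares the first component with a cos(t) + b sin(t) where a = x(0) and b = x*(0):

```python
import math

# Flowing in phase space with exp(tA) solves x** = -x: with y = -x*,
# the state (x, y) evolves by the rotation matrix exp(tA).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # truncated Maclaurin series for the matrix exponential
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

t = 1.1
x0, v0 = 1.3, -0.5               # initial position x(0) and velocity x*(0)
y0 = -v0                         # phase-space coordinate y = -x*
E = mat_exp([[0.0, -t], [t, 0.0]])       # exp(tA)
xt = E[0][0] * x0 + E[0][1] * y0         # first component of exp(tA)(x0, y0)
closed_form = x0 * math.cos(t) + v0 * math.sin(t)   # a cos t + b sin t
```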
Exercise: find exp(tA) when
[ 0 1 ] [ 0 1 ] [ 1 0 ] [ 1 0 ]
(i) A = [ 1 0 ] (ii) A = [ 0 0 ] (iii) A = [ 0 -1 ] (iv) A = [ 0 1 ]
Show that the sets {exp(tA):t in R} define four groups, and describe how
these groups act on vectors in R^2. Are any of these four groups in fact
isomorphic to each other as Lie groups?
Another crucial point to appreciate: the power series
P^t = exp tA = I + tA + t^2/2 A^2 + ...
tells us that when t is small
_________________
| |
| P^t ~ I + tA |
| | (1)
| P = exp A |
|_________________|
Now, the matrices P^t act on vectors in R^2 (rotating them
counterclockwise as t increases), and (1) says that the element A of the
Lie algebra can be thought of as an "infinitesimal rotation".
But to see the real power of this stuff, we have to look at something a
bit more complicated, like SO(3), the group of rotations about the origin
in E^3. You might recall that this is the set of all real three by three
orthogonal matrices of determinant one (P is orthogonal if it is invertible
and if the inverse of P equals the transpose of P). SO(2) is an abelian
group; that means that for any P,Q in SO(2), PQ=QP. But SO(3) is
nonabelian: for most pairs of P,Q in SO(3), PQ =/= QP. However, if P,Q
belong to the same one dimensional subgroup (a circle) of SO(3), then they
commute.
If you have a desktop calendar cube or Rubik cube handy, try rotating
around the z axis and then around the x axis (holding the axes mentally
fixed in space as you rotate the cube), and compare with doing these
operations in the other order. You don't get the same result!
Now, it is natural to inquire whether we can generalize from scalar
multiples of one matrix to sums of matrices: does
exp(A+B) = (exp A) (exp B) (2)
for all matrices A,B in the Lie algebra? Well, it turns out that the
answer is in general "no"; this rule works only for abelian Lie groups,
and positive real numbers form an -abelian- Lie group, which is why
exp(a+b) = (exp a) (exp b)
is always true. But we can still try to fix up (2) to make a true
statement, and this leads naturally to the extra structure a Lie algebra
possesses which makes it more than just a vector space: it also has a kind
of "multiplication" of its own, called the Lie product. To see how this
new idea arises, go back to power series. We know that
P^s Q^t = (exp A)^s (exp B)^t = exp(sA) exp(tB)
and
Q^t P^s = (exp B)^t (exp A)^s = exp(tB) exp(sA)
are not the same, but for small s,t they are -very nearly- the same. So
let's see if we can determine how quickly they diverge from each other
exp(sA) exp(tB) = (I + sA + s^2/2 A^2 +...)(I + tB + t^2/2 B^2 + ...)
= I + (sA + tB) + (s^2/2 A^2 + st AB + t^2/2 B^2) + ...
first order second order
whereas
exp(tB) exp(sA) = (I + tB + t^2/2 B^2 +...)(I + sA + s^2/2 A^2 + ...)
= I + (tB + sA) + (t^2/2 B^2 + st BA + s^2/2 A^2) + ...
first order second order
Up to first order, these agree, and the only second order difference is st
AB instead of st BA. Indeed, subtracting the second series from the first
gives, up to second order,
st (AB - BA)
Here, [A,B] = AB-BA is called the Lie product. This operation is not
associative, but it is bilinear and antisymmetric and obeys the Jacobi
identity [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0.
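Here is a tiny check of that second order difference, using two nilpotent 2x2 matrices of my own choosing, for which the exponentials are exact since A^2 = B^2 = 0:

```python
# For A = [[0,1],[0,0]] and B = [[0,0],[1,0]] we have A^2 = B^2 = 0, so
# exp(sA) = I + sA and exp(tB) = I + tB exactly.  The difference
# exp(sA)exp(tB) - exp(tB)exp(sA) should come out as st(AB - BA) = st[A,B].

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

s, t = 0.01, 0.02
expA = [[1.0, s], [0.0, 1.0]]    # exp(sA), exact
expB = [[1.0, 0.0], [t, 1.0]]    # exp(tB), exact
diff = mat_sub(mat_mul(expA, expB), mat_mul(expB, expA))
expected = [[s * t, 0.0], [0.0, -s * t]]   # st [A,B], here [A,B] = diag(1,-1)
err = max(abs(diff[i][j] - expected[i][j])
          for i in range(2) for j in range(2))
```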
This is a good time to pause and figure out what the Lie algebra so(3)
of the Lie group SO(3) is. Remember that P is in SO(3) if PP' = I where
prime denotes transpose, and if det P = 1. Focus attention on PP' = I and
recall how we obtained so(2) from SO(2): we parametrized the group,
differentiated with respect to our parameter, and took the derivative at
zero. This isn't yet obvious, but SO(3) is a three dimensional manifold,
so parameterizing it requires three real numbers. Do we need to deal with
partial derivatives? Fortunately, no! If we just consider a
-curve- in SO(3), that is, a one dimensional submanifold parameterized by
some real number t, and differentiate with respect to t, we get the result
we need. By assumption,
P(t) P(t)' = I
Differentiating gives
P*(t) P(t)' + P(t) P*(t)' = 0
and setting t = 0 gives
P*(0) I + I P*(0)' = 0
or, setting A = P*(0),
A = -A'
In other words, the Lie algebra so(3) consists of skew-adjoint matrices.
As the whole world knows, the three by three skew adjoint matrices form a
three dimensional vector space (because there are only three independent
components in each three by three skew adjoint matrix), but what only
students of Lie theory know :-) is that they form a -Lie algebra-, and
even better, the Lie product
[A,B] = AB-BA
can be identified with the vector cross product in E^3!
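This identification can be spot-checked. With the usual "hat map" sending a vector u to the skew matrix satisfying hat(u) w = u x w (the function names below are my own), the bracket of two hatted vectors equals the hat of their cross product:

```python
# Check [hat(u), hat(v)] = hat(u x v) for the standard identification
# of R^3 with the skew-adjoint 3x3 matrices.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    # the Lie product [A,B] = AB - BA
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

def hat(u):
    # skew matrix with hat(u) w = u x w
    return [[0.0, -u[2], u[1]],
            [u[2], 0.0, -u[0]],
            [-u[1], u[0], 0.0]]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

u, v = [1.0, 2.0, 3.0], [-0.5, 4.0, 1.0]
lhs = bracket(hat(u), hat(v))
rhs = hat(cross(u, v))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
```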
Another good example: SL_2(R) consists of all two by two real matrices
with determinant one. Take a parameterized curve A(t) in SL_2(R) and
compute A'(0) to find out what the Lie algebra sl_2(R) is! If you know
about linear fractional transformations (aka Moebius transformations),
note that SL_2(R) acts on the upper half plane, which may be identified
with H^2, the velocity space of particles in E^(1,2). The Lie product
here can be identified with a lesser known vector cross product in
E^(1,2). However, these vector cross products are purely "accidents of
small dimensions", while the Lie theory we are discussing works
much more generally.
Something very important I am not going to try to explain here: SO(3) is
-not- simply connected as a manifold, but it has a double cover which -is-
simply connected, and is therefore what is called a universal cover in the
sense of homotopy theory. This group is called SU(2) and it gives the
spinorial representation of SO(3) in terms of conjugation. There is a
beautiful explanation of the half angle formulas which come up here in
terms of reflections, which is due to Rodrigues (sometime around 1830),
and which is probably the same way Gauss found out how rotations combine
even earlier. But I'll just say one thing about this: SO(3) and its
universal cover SU(2) share the same Lie algebra. SU(2) is the group of
unit norm quaternions, and one can write a formula analogous to Euler's
formula e^(it) = cos(t) + i sin(t), which is due to Steenrod ("twist
algebra"). SU(2) is a double cover of SO(3), so there are two unit norm
quaternions representing each element of SO(3). The simplest way to
understand why this "extra bit" of information is incredibly useful is to
observe that there is something funny about half turns, i.e. rotations
through pi about a line L through the origin: the rotation about L through
pi is the same, "as far as SO(3) knows", as the rotation about L through
-pi. Passing to the double cover SU(2) removes that ambiguity in the half
turns! It's all very interesting and beautiful, and there are many ways to
understand this--- I think the most elementary way is in terms of linear
fractional transformations. Similar remarks hold for spinorial
representations of the two dimensional Lorentz group SO(1,2) and its
double cover SU(1,1). Indeed, SL_2(C) gives us a spinorial representation
of the (proper orthochronous) Lorentz group SO(1,3). The Moebius group
PSL_2(C) is actually isomorphic to SO(1,3), which is a starting point for
twistor theory.
Something else which eventually becomes unavoidable: you can go a long way
in Lie theory using just matrix groups, but eventually you encounter the
fact that not all Lie groups are matrix groups. The general theory of Lie
groups is quite a bit more sophisticated than the simple ideas which work
for matrix groups, but the basic intuition for the operations we are
discussing here remains the same.
Back to what the Lie product does for us in terms of understanding
noncommutativity in SO(3), or indeed in any nonabelian Lie group. (What
about abelian Lie groups? Well, of course in an abelian Lie group, AB-BA
= 0 always, so the Lie product is trivial; this again shows that the whole
"reason for being" of the Lie product is to come as close as we can to
"linearizing" noncommutation in nonabelian Lie groups). Suppose we take A
in the Lie algebra and P in the corresponding Lie group, and look at the
"conjugate"
P^s Q P^(-s)
where P = exp A, so P^s = exp(sA). If you've taken a good linear algebra
course, you'll know how important this operation of "conjugation" is, for
instance in the theory of "eigenthings" of Q, which concerns properties of
Q which are invariant under conjugation by P. Change of basis is another
fundamental place where conjugation is the essential concept. And if you
know about spinors, you may know that conjugation in another Lie group,
SU(2) (which is as a matter of fact -very- closely related to SO(3), but I
don't want to get into that) is essential in using quaternions to study
how rotations combine in E^3.
Once again we use power series to find out what the leading terms in t are
like:
P^s Q P^(-s) = (I + sA + s^2/2 A^2 + ...) Q (I - sA + s^2/2 A^2 - ...)
             = (Q + s AQ + s^2/2 A^2 Q + ...)(I - sA + s^2/2 A^2 - ...)
             = Q + (s AQ - s QA) + (s^2/2 A^2 Q - s^2 AQA + s^2/2 QA^2) + ...
                    first order          second order
             = Q + s[A,Q] + ...
So, the Lie product determines the linear approximation to conjugation:
_____________________________
| |
| P^s Q P^(-s) ~ Q + s [A,Q] |
| | (3)
| P = exp A |
|_____________________________|
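Approximation (3) is easy to test with nilpotent 2x2 matrices (my choice, so the exponentials are exact): the conjugate exp(sA) Q exp(-sA) should differ from Q + s[A,Q] only at order s^2.

```python
# Check P^s Q P^(-s) ~ Q + s[A,Q]: with A = [[0,1],[0,0]] (A^2 = 0, so
# exp(sA) = I + sA exactly) and Q = exp(B) for B = [[0,0],[1,0]], the
# residual of the first-order approximation should be of order s^2.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s = 0.001
A   = [[0.0, 1.0], [0.0, 0.0]]
Ps  = [[1.0, s], [0.0, 1.0]]     # exp(sA)
Psi = [[1.0, -s], [0.0, 1.0]]    # exp(-sA)
Q   = [[1.0, 0.0], [1.0, 1.0]]   # exp(B), exact since B^2 = 0
conj = mat_mul(mat_mul(Ps, Q), Psi)
AQ, QA = mat_mul(A, Q), mat_mul(Q, A)
approx = [[Q[i][j] + s * (AQ[i][j] - QA[i][j]) for j in range(2)]
          for i in range(2)]
residual = max(abs(conj[i][j] - approx[i][j])
               for i in range(2) for j in range(2))   # should be ~ s^2
```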
Even more: for small s,t we expect that P^s Q^t should agree very nearly
with Q^t P^s, so P^s Q^t (Q^t P^s)^(-1) should agree very nearly with I.
Let's check it out: writing exp(A) = P, exp(B) = Q, we have
P^s Q^t P^(-s) Q^(-t) = { P^s Q^t P^(-s) } Q^(-t)
                      = { Q^t + s[A,Q^t] + ... } Q^(-t)
                      = I + s [A,Q^t] Q^(-t) + ...
                      = I + s A - s Q^t A Q^(-t) + ...
                      = I + s A - s {A + t[B,A] + ...} + ...
                      = I - st [B,A] + ....
This gives a -second order- approximation to the commutator:
______________________________________
| |
| P^s Q^t P^(-s) Q^(-t) ~ I + st [A,B] |
| | (4)
| P = exp(A), Q = exp(B) |
|______________________________________|
Notice that the absence of -first order- terms on the right hand side
means that "all Lie groups commute up to first order"! Moreover, "the
second order correction term" is exactly the Lie product!
The Lie algebra so(3) is a very good place to test this stuff out in a
concrete setting. Three linearly independent elements of so(3) are
[ 0 0 0 ]
X = [ 0 0 -1 ]
[ 0 1 0 ]
[ 0 0 1 ]
Y = [ 0 0 0 ]
[ -1 0 0 ]
[ 0 -1 0 ]
Z = [ 1 0 0 ]
[ 0 0 0 ]
These generate counterclockwise rotations about the x,y,z axes, as you can
confirm by exponentiating them. Computing the commutators, we find
[X,Y] = Z [X,Z] = -Y [Y,Z] = X
Let's let P = exp(X), Q = exp(Y). With t = 0.01,
using Mathematica I find:
      [ 1.00000 0.00000  0.00000 ]
P^t = [ 0.00000 0.99995 -0.01000 ]
      [ 0.00000 0.01000  0.99995 ]
[ 0.99995 0.00000 0.01000 ]
Q^t = [ 0.00000 1.00000 0.00000 ]
[ -0.01000 0.00000 0.99995 ]
The commutator is
[ 1.00000 -0.00010 0.00000 ]
P^t Q^t P^(-t) Q^(-t) = [ 0.00010 1.00000 0.00000 ]
[ 0.00000 0.00000 1.00000 ]
Next, [X,Y] = Z. Letting R = exp(Z), using Mathematica I find
[ 1.00000 -0.00010 0.00000 ]
R^(t^2) = [ 0.00010 1.00000 0.00000 ]
[ 0.00000 0.00000 1.00000 ]
which agrees to five decimal places with the commutator.
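Here is a pure-Python rerun of that Mathematica computation (the truncated-series mat_exp and other helper names are mine): it checks that the group commutator agrees with exp(t^2 Z) to well within five decimal places.

```python
# Reproduce the numerical check: with X, Y, Z the so(3) generators and
# t = 0.01, the group commutator exp(tX) exp(tY) exp(-tX) exp(-tY)
# should agree with exp(t^2 Z) up to terms of order t^3.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # truncated Maclaurin series for the matrix exponential
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def scale(c, A):
    return [[c * x for x in row] for row in A]

X = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
Y = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
Z = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]
t = 0.01
Pt,  Qt  = mat_exp(scale(t, X)),  mat_exp(scale(t, Y))
Pti, Qti = mat_exp(scale(-t, X)), mat_exp(scale(-t, Y))
comm = mat_mul(mat_mul(Pt, Qt), mat_mul(Pti, Qti))
Rt2 = mat_exp(scale(t * t, Z))   # exp(t^2 Z) = R^(t^2)
err = max(abs(comm[i][j] - Rt2[i][j]) for i in range(3) for j in range(3))
```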
Exercise: find a basis for sl(2,R) and use these three matrices to verify
the conjugation and commutator approximations in this group. Can you do
the same for so(1,2)?
Hint: if you get stuck, try to find three generators for SL(2,R) and
SO(1,2) and look back at what we did with equation (0).
You might recall at this point that curvature is a -second order
effect-, and we just found a -second order approximation- to something.
Recall the definition of the Riemann curvature operator as
D_X D_Y - D_Y D_X - D_[X,Y]
This operator applied to a vector Z gives the difference between Z and the
vector we get if we parallel transport Z around a pentagon with edges
sX, tY, st[X,Y], -tY, -sX. Of course, this description only makes sense
because the commutator closes up the pentagonal loop to second order! The
fact that this works, that the pentagon really is a loop up to second
order, is a generalization of the second order approximation for a
commutator in a Lie group which we have just been discussing--- indeed,
Lie groups are Riemannian manifolds in a natural way, with "constant"
Riemann curvature tensors (they are "homogeneous").
If you know about linear fractional transformations and the representation
of SO(1,2) by complex two by two matrices, and if you know about
hyperbolic and elliptic LFT's and their fixed points, and about the action
by SO(1,2) on the unit disk model of H^2, another excellent way to test
out (4) is to consider small boosts in the x,y directions. A boost by t
(small) along x followed by a boost by t along y followed by a rotation by
t^2 about the origin agrees very nearly with a boost by t along y followed
by a boost by t along x. A closely related fact is responsible for the
Thomas precession: the composition of two small boosts in different
directions is not a boost, but a boost composed with a very small
rotation.
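Under one common convention for E^(1,2) (metric matrix diag(-1,1,1); the generator names K1, K2, J below are my own choices, not from the original post), this can be checked numerically: the commutator of two small boosts agrees, to second order, with a small spatial rotation.

```python
# In so(1,2) with metric diag(-1,1,1): K1, K2 generate boosts along x, y
# and J generates rotations in the spatial plane.  The group commutator
# of two small boosts should agree with exp(t^2 [K1,K2]), and [K1,K2]
# turns out to be -J, a pure rotation generator: the germ of the
# Thomas precession.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def scale(c, A):
    return [[c * x for x in row] for row in A]

def bracket(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

K1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]   # boost along x
K2 = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]   # boost along y
J  = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]  # spatial rotation
K12 = bracket(K1, K2)                    # expected: -J
is_minus_J = all(K12[i][j] == -J[i][j] for i in range(3) for j in range(3))
t = 0.01
comm = mat_mul(mat_mul(mat_exp(scale(t, K1)), mat_exp(scale(t, K2))),
               mat_mul(mat_exp(scale(-t, K1)), mat_exp(scale(-t, K2))))
approx = mat_exp(scale(t * t, K12))
err = max(abs(comm[i][j] - approx[i][j]) for i in range(3) for j in range(3))
```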
Another reasonable question to ask is: how does P^t Q^t differ from
(PQ)^t? The famous Campbell-Baker-Hausdorff formula gives the answer to
that, also in terms of the Lie product! Lie theory is great stuff!
And there's more: the Lie algebra is the tangent space to the Lie group at
the element I (the identity matrix), in the sense of the theory of
manifolds! Now, Lie groups are smooth manifolds, and vectors on smooth
manifolds correspond to first order linear differential operators. This
again brings us back to something close to the historical origins of Lie
theory in the study of partial differential equations. Lie wanted to come
up with solutions to specific differential equations by taking advantage
of "internal symmetries", in much the same spirit that Galois showed that
the classical solution to the cubic equation t^3 - p t + q = 0 arises by
taking advantage of symmetries which do not exist for some quintic
equations. And indeed, "solvable Lie algebras" play a role here, just as
solvable groups play a role in Galois theory. The concept analogous to
"normal subgroup" is, as you would expect, defined in terms of a kind of
"closure" under the Lie product. This line of thought leads in several
different directions, all fascinating. Interestingly enough, Lie's
programme of solving differential equations by symmetry is still being
worked out even today, and one of the interesting applications is to
solving the EFE or geodesic equations in closed form. See in particular
the recent book by Stephani.
Chris Hillman
======================================================================
From: Chris Hillman
Newsgroups: sci.physics.relativity
Subject: The Joy of Forms: Invariant ONB's for Lie Groups
Date: Fri, 27 Oct 2000 17:16:18 -0700
In this post, I will discuss some material which can also be found in
Flanders, but I will give examples of greater interest to students in gtr
(in particular, I'll discuss Lie groups whose Lie algebras include all
one, two, and three dimensional Lie groups). This post not only will show
yet another place where differential forms are really useful, but it will
look ahead to my planned posts on Cartan manifolds and their connection
(no pun intended) to gauge theories in physics.
Let me start with a brief explanation in terms of matrix groups, which
hopefully will be familiar to most readers, of the core concepts of Lie
theory. (This part of the post will repeat with little change a post
which originally appeared in sci.physics.research. I'd like to point out
that there happens to be a thread in that group right now called "Lie
Groups for Grunts" which should nicely complement what I'm about to say
here.)
A Lie group is a group which is also a smooth manifold, such that the maps
(g1, g2) -> g1 g2     and     g -> g^(-1)
are smooth. A fundamental aspect of the theory of Lie groups is that
every Lie group (a nonlinear creature, if you like) is almost completely
determined by an associated object called its Lie algebra (a linear
creature, if you like, and thus much easier to work with). An example:
SO(2), the group of all rotations around the origin of E^2, is a Lie
group. As a manifold, it is nothing other than the circle! To see this,
notice that you can parametrize SO(2) by a single parameter t by
[ cos t -sin t ] (0)
[ sin t cos t ]
and that ( cos t, -sin t, sin t, cos t) is literally a circle in E^4. To
find the Lie algebra so(2) of SO(2), take equation (0), differentiate with
respect to t, and set t = 0, which gives the "generator" of the Lie
algebra so(2), namely
A = [ 0 -1 ]
[ 1 0 ]
The Lie algebra so(2) consists of all real multiples tA. We can get from
the Lie algebra (which is a vector space) back to the Lie group by
exponentiation. That is, plug tA into the Maclaurin series
exp(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + ....
Notice that A^2 = -I, so
exp tA = I + tA + (tA)^2/2 + (tA)^3/6 + (tA)^4/24 + ....
= I + tA + t^2/2 A^2 + t^3/6 A^3 + ....
= I + tA - t^2/2 I - t^3/6 A + ...
= (1 - t^2/2 + ...) I + (t - t^3/6 + ...) A
= cos t I + sin t A
= [ cos t -sin t ]
[ sin t cos t ]
The essential point here is that exponentiating matrices is a great idea,
because "fractional powers" make perfect sense: if Q = exp(A), then
Q^t = (exp A)^t = exp(tA)
Also, just as for real numbers, exponentiation converts a linear operation
(addition) into a nonlinear one (multiplication):
Q^(s+t) = exp((s+t)A) = (exp A)^s (exp A)^t = Q^s Q^t
If you have had a course in ODE's, you might see that this is connected to
the notion of the "flow lines" in E^2 corresponding to the action by SO(2)
on E^2. Namely, concentric circles around the origin. In the study of
ODEs, you might remember that given x** = -x, which is a -second- order
equation, we can put y = -x* to get a system of -first- order equations in
"phase space" (x,y), namely:
x* = -y
y* = x
or
[x]* [0 -1] [x]
[y] = [1 0] [y]
where you recognize our friend the matrix A. Even better, A^2 = -I is
like x** = -x, and we can solve x** = -x by using matrix exponentiation:
[x(t)]            [x(0)]   [ cos t  -sin t ] [x(0)]
[y(t)]  = exp(tA) [y(0)] = [ sin t   cos t ] [y(0)]
where you recognize exp(tA) from before. That is, the solution to x** = -x is
x(t) = a cos(t) + b sin(t)
But--- and this is the point--- simple harmonic motions, which obey the
second order equation x** = -x, correspond to flowing at unit speed around
concentric circles in phase space (x,x*). And we got from the Lie algebra
so(2) to the action by SO(2) on phase space by exponentiating. This is
getting close to the historical origin of Lie theory!
Here is the crucial point lying behind all of this stuff: the power series
P^t = exp tA = I + tA + t^2/2 A^2 + ...
tells us that when t is small
_________________
| |
| P^t ~ I + tA |
| | (1)
| P = exp A |
|_________________|
Thus, not only can A be thought of as an infinitesimal motion, but the
tangent line to the curve Q(t) = P^t in G at the identity I of G is given
by I + tA. IOW, the subspace generated by A in the Lie algebra is a
linear approximation to the one-parameter (one dimensional) subgroup
generated by P; in fact, it is the Lie algebra of this one dimensional Lie
group. In
other words, recalling that a Lie group is a smooth manifold and therefore
has a tangent space at each point, the Lie algebra of the group G is
nothing other than the tangent space at the identity I. Of course,
all the other tangent spaces must look just like this one--- G is a
"homogeneous space" in the sense of manifold theory.
To see the real power of this stuff, we have to look at something a bit
more complicated, like SO(3), the group of rotations about the origin in
E^3. You might recall that this is the set of all real three by three
orthogonal matrices of determinant one (P is orthogonal if it is invertible
and if the inverse of P equals the transpose of P). SO(2) is an abelian
group; that means that for any P,Q in SO(2), PQ=QP. But SO(3) is
nonabelian: for most pairs of P,Q in SO(3), PQ =/= QP. However, if P,Q
belong to the same one dimensional subgroup (a circle) of SO(3), then they
commute.
If you have a desktop calendar cube or Rubik cube handy, try rotating
around the z axis and then around the x axis (holding the axes mentally
fixed in space as you rotate the cube), and compare with doing these
operations in the other order. You don't get the same result!
Now, it is natural to inquire whether we can generalize from scalar
multiples of one matrix to sums of matrices: does
exp(A+B) = (exp A) (exp B) (2)
for all matrices A,B in the Lie algebra? Well, it turns out that the
answer is in general "no"; this rule works only for abelian Lie groups,
and positive real numbers form an -abelian- Lie group, which is why
exp(a+b) = (exp a) (exp b)
is always true. But we can still try to fix up (2) to make a true
statement, and this leads naturally to the extra structure a Lie algebra
possesses which makes it more than just a vector space: it also has a kind
of "multiplication" of its own, called the Lie product. To see how this
new idea arises, go back to power series. We know that
P^s Q^t = (exp A)^s (exp B)^t = exp(sA) exp(tB)
and
Q^t P^s = (exp B)^t (exp A)^s = exp(tB) exp(sA)
are not the same, but for small s,t they are -very nearly- the same. So
let's see if we can determine how quickly they diverge from each other
exp(sA) exp(tB) = (I + sA + s^2/2 A^2 +...)(I + tB + t^2/2 B^2 + ...)
= I + (sA + tB) + (s^2/2 A^2 + st AB + t^2/2 B^2) + ...
first order second order
whereas
exp(tB) exp(sA) = (I + tB + t^2/2 B^2 +...)(I + sA + s^2/2 A^2 + ...)
= I + (tB + sA) + (t^2/2 B^2 + st BA + s^2/2 A^2) + ...
first order second order
Up to first order, these agree, and the only second order difference is st
AB instead of st BA. Indeed, subtracting the second series from the first
gives, up to second order,
st (AB - BA)
Here, [A,B] = AB-BA is called the Lie product. This operation is not
associative, but it obeys some other algebraic laws.
This is a good time to pause and figure out what the Lie algebra so(3)
of the Lie group SO(3) is. Remember that P is in SO(3) if PP' = I where
prime denotes transpose, and if det P = 1. Focus attention on PP' = I and
recall how we obtained so(2) from SO(2): we parametrized the group,
differentiated with respect to our parameter, and took the derivative at
zero. This isn't yet obvious, but SO(3) is a three dimensional manifold,
so parameterizing it requires three real numbers. Do we need to deal with
partial derivatives? Fortunately, no! If we just consider a
-curve- in SO(3), that is, a one dimensional submanifold parameterized by
some real number t, and differentiate with respect to t, we get the result
we need. By assumption,
P(t) P(t)' = I
Differentiating gives
P*(t) P(t)' + P(t) P*(t)' = 0
and setting t = 0 gives
P*(0) I + I P*(0)' = 0
or, setting A = P*(0),
A = -A'
In other words, the Lie algebra so(3) consists of skew-adjoint matrices.
As the whole world knows, the three by three skew adjoint matrices form a
three dimensional vector space (because there are only three independent
components in each three by three skew adjoint matrix), but what only
students of Lie theory know :-) is that they form a -Lie algebra-, and
even better, the Lie product
[A,B] = AB-BA
can be identified with the vector cross product in E^3!
Let's spell this out. Three linearly independent elements of so(3) are
[ 0 0 0 ]
X = [ 0 0 -1 ]
[ 0 1 0 ]
[ 0 0 1 ]
Y = [ 0 0 0 ]
[ -1 0 0 ]
[ 0 -1 0 ]
Z = [ 1 0 0 ]
[ 0 0 0 ]
These generate counterclockwise rotations about the x,y,z axes, as you can
confirm by exponentiating them. Computing the commutators, we find
[X,Y] = Z [X,Z] = -Y [Y,Z] = X
This defines the Lie algebra so(3). Now, we can exponentiate to obtain
three generators of the Lie group SO(3):
[ 1 0 0 ]
exp(x X) = [ 0 cos(x) -sin(x) ]
[ 0 sin(x) cos(x) ]
[ cos(y) 0 sin(y) ]
exp(y Y) = [ 0 1 0 ]
[ -sin(y) 0 cos(y) ]
[ cos(z) -sin(z) 0 ]
exp(z Z) = [ sin(z) cos(z) 0 ]
[ 0 0 1 ]
If we multiply these in the order listed we obtain a parameterization of
SO(3) which is too awkward for me to write out here. (Exercise: use Maple
or Mathematica to carry out the multiplication and simplify the result.
If you have done this right, taking the derivative wrt x at x = 0 should
give back X and so forth.)
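A cheap numerical version of that derivative check (finite differences; the helper names are mine): with y = z = 0 the triple product reduces to exp(xX), and a central difference at x = 0 should recover the generator X.

```python
# Central-difference check that d/dx exp(xX) at x = 0 recovers X.
# (With y = z = 0, the product exp(xX)exp(yY)exp(zZ) is just exp(xX).)

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

def scale(c, A):
    return [[c * x for x in row] for row in A]

X = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
h = 1e-4
Ep, Em = mat_exp(scale(h, X)), mat_exp(scale(-h, X))
D = [[(Ep[i][j] - Em[i][j]) / (2 * h) for j in range(3)] for i in range(3)]
err = max(abs(D[i][j] - X[i][j]) for i in range(3) for j in range(3))
```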
Exercise: recall that the O(1,2) adjoint of a matrix A is given by
A# = L^(-1) A' L
where A' is the ordinary transpose and L = diag(-1,1,1) is the matrix of
the E^(1,2) inner product. Use this to determine the Lie algebra
so(1,2) of the Lie group SO(1,2).
Exercise: SL_2(R) consists of all two by two real matrices with
determinant one. Take a parameterized curve A(t) in SL_2(R) and compute
A'(0) to find out what the Lie algebra sl_2(R) is.
Remark: if you know about linear fractional transformations (aka Moebius
transformations), note that SL_2(R) acts on the upper half plane, which
may be identified with H^2, the velocity space of particles in E^(1,2).
The Lie product here can be identified with a lesser known vector cross
product in E^(1,2). However, these vector cross products are purely
"accidents of small dimensions", while the Lie theory we are discussing
works much more generally. Also, notice that the remarks above suggest
that SO(1,2) is closely related to SL(2,R); in fact the identity component
of SO(1,2) is isomorphic to PSL(2,R) = SL(2,R)/{+I,-I}.
Something very important I am not going to try to explain here: SO(3) and
SO(1,2) are not simply connected as manifolds, but they possess double
covers which -are- simply connected, and which are therefore what are
called universal covers in the sense of homotopy theory. For SO(3) this
cover is called SU(2) and it gives the spinorial representation of SO(3)
in terms of
conjugation. There is a beautiful explanation of the half angle formulas
which come up here in terms of reflections, which is due to Rodrigues
(sometime around 1830), and which is probably the same way Gauss found out
how rotations combine even earlier. But I'll just say one thing about
this: SO(3) and its universal cover SU(2) share the same Lie algebra.
SU(2) is the group of unit norm quaternions, and one can write a formula
analogous to Euler's formula e^(it) = cos(t) + i sin(t), which is due to
Steenrod ("twist algebra"). SU(2) is a double cover of SO(3), so there
are two unit norm quaternions representing each element of SO(3). The
simplest way to understand why this "extra bit" of information is
incredibly useful is to observe that there is something funny about half
turns, i.e. rotations through pi about a line L through the origin: the
rotation about L through pi is the same, "as far as SO(3) knows", as the
rotation about L through -pi. Passing to the double cover SU(2) removes
that ambiguity in the half turns. It's all very interesting and
beautiful, and there are many ways to understand this--- I think the most
elementary way is in terms of linear fractional transformations.
Similar remarks hold for spinorial representations of the two dimensional
Lorentz group SO(1,2) and its double cover SU(1,1). Indeed, SL_2(C) gives
us a spinorial representation of the (proper orthochronous) Lorentz group
SO(1,3). The Moebius group PSL_2(C) is actually isomorphic to SO(1,3),
which is a starting point for twistor theory. One caveat: you can go a
long way in Lie theory using just matrix groups, but eventually you
encounter the fact that not all Lie groups are matrix groups. In
particular, the universal cover of SL(2,R) is not SU(1,1) and in fact is
not isomorphic to any matrix Lie group! The general theory of Lie groups
is quite a bit more sophisticated than the simple ideas which work for
matrix groups. A very good undergraduate level introduction can be found
in the book
Roger Carter, Graeme Segal, and Ian Macdonald
Lectures on Lie groups and Lie algebras
Cambridge University Press, 1995
The information you will find there complements very nicely what I am
saying here, and I highly recommend this book.
The central theorem in Lie theory is that the relationship between G and
its Lie algebra g = T_I G is functorial: if G is isomorphic as a Lie group
to H, then the Lie algebras g and h are also isomorphic. Contrapositively,
if g and h are nonisomorphic Lie algebras, then any Lie groups G,H having
g,h respectively as their Lie algebras are nonisomorphic. This means that
the classification of Lie algebras up to isomorphism does most of the work
needed to classify Lie groups up to isomorphism. For a sketch proof of
this and other fundamental theorems of Lie theory, see the above book.
Back to what the Lie product does for us in terms of understanding
noncommutativity in SO(3), or indeed in any nonabelian Lie group. Suppose
we take A in the Lie algebra and Q in the corresponding Lie group, and
look at the "conjugate"
P^s Q P^(-s)
where P = exp A, so P^s = exp(sA). If you've taken a good linear algebra
course, you'll know how important this operation of "conjugation" is, for
instance in the theory of "eigenthings" of Q, which concerns properties of
Q which are invariant under conjugation by P. Change of basis is another
fundamental place where conjugation is the essential concept. And if you
know about spinors, you may know that conjugation in another Lie group,
SU(2) (which is as a matter of fact -very- closely related to SO(3), but I
don't want to get into that) is essential in using quaternions to study
how rotations combine in E^3.
Once again we use power series to find out what the leading terms in s are
like:
P^s Q P^(-s) = (I + sA + s^2/2 A^2 + ..) Q (I - sA + s^2/2 A^2 - ...)
             = (Q + s AQ + s^2/2 A^2 Q + ...)(I - sA + s^2/2 A^2 - ...)
             = Q + s(AQ - QA) + s^2(-AQA + A^2 Q/2 + Q A^2/2) + ...
                   first order        second order
             = Q + s[A,Q] + ...
So, the Lie product determines the linear approximation to conjugation:
_____________________________
| |
| P^s Q P^(-s) ~ Q + s [A,Q] |
| | (3)
| P = exp A |
|_____________________________|
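If you want to check (3) numerically rather than by hand, here is a quick
sketch in Python with numpy (Maple or Mathematica would do just as well);
the expm helper simply sums the Maclaurin series, and A, B are two
noncommuting rotation generators, the so(3) basis elements X, Y that
appear later in this post:

```python
import numpy as np

# Numerical check of (3): P^s Q P^(-s) ~ Q + s [A,Q] for small s.
# expm sums the Maclaurin series for the matrix exponential, as in the text.
def expm(M, terms=30):
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two noncommuting generators (rotations about the x- and y-axes of E^3).
A = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
B = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

s = 0.001
Q = expm(B)                             # a fixed group element Q = exp(B)
conj = expm(s * A) @ Q @ expm(-s * A)   # P^s Q P^(-s), with P = exp(A)
approx = Q + s * (A @ Q - Q @ A)        # Q + s [A,Q]
err = np.abs(conj - approx).max()       # should be O(s^2)
```

The error is of order s^2, a thousand times smaller than the O(s)
correction term itself.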
Even more: for small s,t we expect that P^s Q^t should agree very nearly
with Q^t P^s, so P^s Q^t (Q^t P^s)^(-1) should agree very nearly with I.
Let's check it out! Writing exp(A) = P, exp(B) = Q, we have
P^s Q^t P^(-s) Q^(-t) = { P^s Q^t P^(-s) } Q^(-t)
                      = { Q^t + s[A,Q^t] + ... } Q^(-t)
                      = I + s [A,Q^t] Q^(-t) + ...
                      = I + s A - s Q^t A Q^(-t) + ...
                      = I + s A - s {A + t[B,A] + ...} + ...
                      = I - st [B,A] + ...
or
______________________________________
| |
| P^s Q^t P^(-s) Q^(-t) ~ I + st [A,B] |
| | (4)
| P = exp(A), Q = exp(B) |
|______________________________________|
So, "all Lie groups commute up to first order"! Moreover, "the second
order correction term" is exactly the Lie product!
(What about -abelian- Lie groups? Well, of course in an abelian Lie
group, AB-BA = 0 always, so the Lie product is trivial; this again shows
that the whole "reason for being" of the Lie product is to come as close
as we can to "linearizing" noncommutation in nonabelian Lie groups).
The Lie algebra so(3) is a very good place to test this stuff out in a
concrete setting. Recall our basis for so(3):
[ 0 0 0 ]
X = [ 0 0 -1 ]
[ 0 1 0 ]
[ 0 0 1 ]
Y = [ 0 0 0 ]
[ -1 0 0 ]
[ 0 -1 0 ]
Z = [ 1 0 0 ]
[ 0 0 0 ]
where the commutators are
[X,Y] = Z [X,Z] = -Y [Y,Z] = X
Let's set P = exp(X), Q = exp(Y), so that P^s = exp(sX) and Q^t = exp(tY).
With s = t = 0.01, using Mathematica or Maple or even a hand computation,
you can verify that:
      [ 1.00000  0.00000  0.00000 ]
P^s = [ 0.00000  0.99995 -0.01000 ]
      [ 0.00000  0.01000  0.99995 ]
      [ 0.99995  0.00000  0.01000 ]
Q^t = [ 0.00000  1.00000  0.00000 ]
      [-0.01000  0.00000  0.99995 ]
The commutator is
                        [ 1.00000 -0.00010  0.00000 ]
P^s Q^t P^(-s) Q^(-t) = [ 0.00010  1.00000  0.00000 ]
                        [ 0.00000  0.00000  1.00000 ]
Next, [X,Y] = Z, and using Maple or Mathematica you can verify that
[ 1.00000 -0.00010 0.00000 ]
I + s t Z = [ 0.00010 1.00000 0.00000 ]
[ 0.00000 0.00000 1.00000 ]
which agrees to five decimal places with the commutator.
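The same verification can be done in Python with numpy instead of Maple
or Mathematica:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via the truncated Maclaurin series."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# The so(3) basis from the text.
X = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Y = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

s = t = 0.01
Ps, Qt = expm(s * X), expm(t * Y)
comm = Ps @ Qt @ expm(-s * X) @ expm(-t * Y)   # P^s Q^t P^(-s) Q^(-t)
approx = np.eye(3) + s * t * Z                 # I + st [X,Y] = I + st Z
err = np.abs(comm - approx).max()              # agreement to ~five decimals
```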
OK, now that we have obtained some intuition for the most essential
concepts of Lie theory, let's see if we can enumerate the lowest
dimensional real Lie algebras g and for each, find a Lie group G whose Lie
algebra is g.
Starting with one-dimensional Lie algebras: g is in this instance a
one-dimensional real vector space, i.e. a copy of R, and of course for any
two real numbers [s,t] = st-ts = 0, so the Lie product is trivial. Not
very interesting, but there -is- something interesting about the
one-dimensional Lie groups. Up to isomorphism, there are only two
connected ones: SO(2), which as a manifold is the circle, and SO(1,1),
which as a manifold is the real line. Thus, already in one dimension we
can see the distinction
between local and global structure: in a sufficiently small neighborhood
of any point, SO(2) and SO(1,1) are indistinguishable as manifolds and as
groups (since they share the same Lie algebra, which determines small
motions in the group). But their topology is completely different: SO(2)
is compact (for our purposes, it suffices to think of a compact manifold
as being one which is closed and has finite volume, in a sense to be
explained below); SO(1,1) is not. It turns out that the distinction
between compact
and noncompact Lie groups is as important as the distinction between
simply connected and nonsimply connected ones. (There are still other
important distinctions, e.g. simple, semisimple; for these see the book
recommended above).
On to two-dimensional Lie algebras! Now g is R^2, as a vector space, and
one possibility is again the trivial Lie product [X,Y] = 0 for all X,Y in
g. It's easy to verify that the two by two matrices
[ 1 0 ] [ 0 0 ]
X1 = [ 0 0 ] X2 = [ 0 1 ]
satisfy [X1,X2] = 0, so they give a basis for our Lie algebra
g. Exponentiating, we find that we can write G as
[ exp(x) 0 ]
[ 0 exp(y) ]
This is of course the group of diagonal matrices with positive entries on
the diagonal. Geometrically this corresponds to the positive quadrant of
R^2, which is of course homeomorphic to R^2 itself, so G is simply
connected. However, consider the four by four matrices of form
[ cos(x) -sin(x) 0 0 ]
[ sin(x) cos(x) 0 0 ]
[ 0 0 cos(y) -sin(y) ]
[ 0 0 sin(y) cos(y) ]
Exercise: verify that this is abelian, and thus has the same Lie algebra.
Verify also that as a manifold it is homeomorphic to S^1 x S^1, i.e. the
torus T^2. This is compact, but of course not simply connected.
Exercise: write down a matrix Lie group homeomorphic to R x S^1 as a
manifold, and verify that this group also has the trivial Lie algebra.
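For the first exercise, a numerical spot check is easy: the
block-rotation matrices commute, and the two rotation angles simply add
under multiplication (so the group is T^2 with its usual addition):

```python
import numpy as np

def rot_block(x, y):
    """A point of the torus group: two independent plane rotations."""
    M = np.zeros((4, 4))
    M[:2, :2] = [[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]]
    M[2:, 2:] = [[np.cos(y), -np.sin(y)], [np.sin(y), np.cos(y)]]
    return M

A = rot_block(0.3, 1.2)
B = rot_block(-0.7, 0.5)
err = np.abs(A @ B - B @ A).max()   # abelian, so this vanishes
prod = A @ B                        # angles add: (0.3 - 0.7, 1.2 + 0.5)
```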
Fortunately, there is one more two-dimensional real Lie algebra. The
easiest way to find it is to recall the affine group of the real line,
denoted A(1), which can be parameterized as follows:
[ exp(y) x ]
[ 0 1 ]
A(1) acts on vectors like
[ exp(y) x ] [ v ] [ exp(y) v + x ]
[ 0 1 ] [ 1 ] = [ 1 ]
Exercise: verify that the generators of a(1) are
[ 1 0 ] [ 0 1 ]
Y = [ 0 0 ] X = [ 0 0 ]
with the Lie product given by
[Y,X] = X
Can you figure out what is the topology of A(1)?
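The bracket above is a one-line computation; in Python with numpy:

```python
import numpy as np

# Generators of a(1): Y generates dilations, X generates translations.
Y = np.array([[1., 0.], [0., 0.]])
X = np.array([[0., 1.], [0., 0.]])
bracket = Y @ X - X @ Y   # [Y,X] = X: the nonabelian 2-d Lie algebra
```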
Before we go on to three dimensional Lie groups, I want to pause to
explain the connection with exterior calculus. The problem is this: how
can we construct an ONB on a given Lie group G which is invariant under
the group multiplication? IOW, how can we construct an explicit invariant
metric which turns G into a homogeneous Riemannian manifold or "space
form"? The answer is delightfully simple (it was found by Cartan and
independently by Maurer): if we parametrize the group G and write a
typical element as L, then we can compute dL componentwise, and then
L^(-1) dL
is a g-valued one-form which is invariant under multiplication from the
-left-, whereas
dL L^(-1)
is a g-valued 1-form which is invariant under multiplication on the
-right-. To see what this means, let K be a -fixed- element of G, and L
be the parametrized matrix representing a general element of G as above.
Then
(KL)^(-1) d(KL) = L^(-1) K^(-1) K dL = L^(-1) dL
(left invariance) and
d(LK) (LK)^(-1) = dL K K^(-1) L^(-1) = dL L^(-1)
So, we have left and right invariant g-valued one-forms, and as the reader
has probably already realized, each component (we are interested in the
nonzero ones, of course) gives a left or right invariant real valued
one-form. We can read off a left and right invariant ONB from these.
The process, which works like magic, is best explained by example. Let's
take A(1): we have the parameterization
[ exp(y) x ]
L = [ 0 1 ]
Then
[ exp(y) dy dx ]
dL = [ 0 0 ]
and
[ dy exp(-y) dx ]
L^(-1) dL = [ 0 0 ]
Notice this is an element of the Lie algebra a(1)! Moreover, if
[ exp(b) a ]
K = [ 0 1 ]
where a,b are -constants-, then you can verify
(KL)^(-1) d(KL) = L^(-1) K^(-1) K dL = L^(-1) dL
Thus, if we take
o^1 = dy o^2 = exp(-y) dx
we obtain a left-invariant ONB of one-forms on the affine group A(1).
Exercise: verify that the corresponding left-invariant volume form is
o^1 /\ o^2 = exp(-y) dx /\ dy
that the corresponding left-invariant Riemannian metric on A(1) is
ds^2 = exp(-2y) dx^2 + dy^2
-infty < x, y < infty
that the connection one-forms are
w^1_2 = -w^2_1 = exp(-y) dx
and that the curvature two-forms are
R^1_2 = -R^2_1 = -(o^1 /\ o^2)
In short, A(1) is locally isometric to H^2, the hyperbolic plane.
As we have seen, the Killing vectors of a Riemannian manifold are always
an extremely important part of its invariant characterization. It is easy
to see that since the left-invariant metric does not depend upon the
coordinate x, d/dx is a Killing vector for this metric. IOW, the
unipotent subgroup generated by d/dx is a symmetry of A(1) wrt the
left-invariant metric. Are there any more? Recall that Killing's
equation reads, in tensor index gymnastics notation:
X_(a;b) + X_(b;a) = 0
If we write X = f(x,y) d/dx + g(x,y) d/dy, this becomes
exp(-2y) (f_x - g) = 0
exp(-2y) f_y + g_x = 0
g_y = 0
From the last, we find that g(x,y) = g(x). Plugging this back into the
Killing equation gives
f_x = g
or f(x,y) = int g(x) dx + h(y). Plugging this back in gives
g'(x) = -exp(-2y) h'(y)
Of course, each side here must be a constant in order for this to hold, so
g(x) = ax + c and then h(y) = -(a/2) exp(2y) + b. Thus, the general
solution is
X = [a x^2/2 + c x - (a/2) exp(2y) + b] d/dx + (ax + c) d/dy
Thus, A(1) has a three dimensional isometry group generated by
d/dx
x d/dx + d/dy
(x^2 - exp(2y)) d/dx + 2x d/dy
as befits H^2, which is maximally symmetric.
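A sympy spot check of the Killing equations: each of d/dx,
(x^2 - exp(2y)) d/dx + 2x d/dy, and also the dilation-type vector
x d/dx + d/dy (which comes from the constant of integration in g)
satisfies all three equations, as befits H^2 with its three-dimensional
isometry algebra:

```python
import sympy as sp

x, y = sp.symbols('x y')

def killing_residuals(f, g):
    """Residuals of the three Killing equations for X = f d/dx + g d/dy
    wrt the left-invariant metric ds^2 = exp(-2y) dx^2 + dy^2."""
    return (sp.simplify(sp.diff(f, x) - g),
            sp.simplify(sp.exp(-2 * y) * sp.diff(f, y) + sp.diff(g, x)),
            sp.simplify(sp.diff(g, y)))

candidates = [
    (sp.Integer(1), sp.Integer(0)),      # d/dx
    (x, sp.Integer(1)),                  # x d/dx + d/dy (a dilation)
    (x**2 - sp.exp(2 * y), 2 * x),       # (x^2 - exp(2y)) d/dx + 2x d/dy
]
residuals = [killing_residuals(f, g) for f, g in candidates]   # all zero
```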
Exercise: verify that the right-invariant a(1)-valued one-form is
            [ dy   dx - x dy ]
dL L^(-1) = [  0       0     ]
Read off the right-invariant ONB and the right-invariant metric, compute
the connection one-forms, and the curvature two-forms, and find that again
A(1) is locally isometric to H^2. Solve the Killing equation.
Now let's look at the three dimensional Lie groups. There are two ways to
proceed here. The usual procedure is that followed by Bianchi, one of the
pioneers of Lie theory and Riemannian geometry, along with Gauss, Riemann,
Lie, Killing, Klein, Poincare, Frobenius, Christoffel, Codazzi (all in the
nineteenth century) and later Elie Cartan, Weyl, Hausdorff, Borel, Weil,
Casimir, Chevalley, Chern, Harish-Chandra (to name some of the most
important early to mid twentieth century pioneers). In this procedure, we
write down a basis for a three dimensional Lie algebra X1, X2, X3, as yet
undetermined, and figure out what the possible commutators are, up to
change of basis. This determines the possible three dimensional real Lie
algebras, up to Lie algebra isomorphism. We can then try to find matrix
representations of these Lie algebras (this is a whole subject in itself,
in general, and a very important one!) and then exponentiate to find
corresponding matrix Lie groups. The enumeration of the Lie algebras is
carried out following Bianchi's reasoning in many books, for example
Stephani, General Relativity, so I'll just give the results here:
Lie Algebra g: Lie
Bianchi class [X1, X2] [X2, X3] [X3,X1] Groups G
I 0 0 0 R^3, T^3, etc.
II X3 0 0
III X2-X3 0 -X3
IV X2+X3 0 -X3
V X2 0 -X3
VI_a a X2 - X3 0 X2 - a X3
VI_0 -X3 0 X2
VII_a a X2 + X3 0 X2 - a X3
VII_0 X3 0 X2
VIII -X3 X1 X2
IX X3 X1 X2
where we'll fill in the rightmost column later (I've put in the first
entry because we already know what all the abelian Lie groups look like).
We could now try to find matrices X1, X2, X3 satisfying the above
requirements for each Bianchi class, as I said, and the reader is invited
to try that for a few classes.
A less systematic but easier procedure is to simply cook up some three
dimensional matrix Lie groups, compute a basis for their Lie algebra, and
see if we can change to a new basis which satisfies the above conditions
for one of the Bianchi classes; we'll call such a basis for g a -Bianchi
basis-.
So... hmm... how about the upper triangular matrices, UT(3,R)? This is
the group
[ 1 x z ]
L = [ 0 1 y ]
[ 0 0 1 ]
It's easy to see that the Lie algebra has the basis
[ 0 1 0 ] [ 0 0 1 ] [ 0 0 0 ]
X1 = [ 0 0 0 ] X2 = [ 0 0 0 ] X3 = [ 0 0 1 ]
[ 0 0 0 ] [ 0 0 0 ] [ 0 0 0 ]
and that, after swapping the labels of X2 and X3, this is a Bianchi basis;
ut(3) is a three dimensional Lie algebra of Bianchi class II.
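A quick check of the ut(3) brackets in numpy:

```python
import numpy as np

# Basis of ut(3) from the text: the three strictly upper triangular units.
X1 = np.zeros((3, 3)); X1[0, 1] = 1.
X2 = np.zeros((3, 3)); X2[0, 2] = 1.
X3 = np.zeros((3, 3)); X3[1, 2] = 1.

br = lambda A, B: A @ B - B @ A

b13 = br(X1, X3)   # = X2, the only nonvanishing bracket
b12 = br(X1, X2)   # = 0
b23 = br(X2, X3)   # = 0
# Swapping the labels X2 <-> X3 turns this into the Bianchi II relations.
```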
Exercise: show that the left-invariant ONB is
o^1 = dx o^2 = dy o^3 = dz - x dy
Compute the connection one-forms and show that the curvature two-forms are
R^1_2 = -3/4 o^1 /\ o^2
R^1_3 = 1/4 o^1 /\ o^3
R^2_3 = 1/4 o^2 /\ o^3
Show that d/dy, d/dz, d/dx + y d/dz are three independent Killing vectors
wrt the left invariant metric on UT(3).
Exercise (for those familiar with manifolds and orbifolds): observe that
the subgroup H = UT(3,Z) (integer entries) gives a compact quotient G/H
which geometrically can be visualized as a solid three-cube with faces
identified in an appropriate manner. Is this a manifold or an orbifold?
Note that the universal cover is the Thurston homogeneous Riemannian
manifold denoted NIL. Thus, UT(3) is locally isometric to NIL. Is it
also globally isometric to NIL?
Exercise: show that the right invariant ONB of one-forms on UT(3,R) is
o^1 = dx o^2 = dy o^3 = dz - y dx
Compute the connection one-forms and curvature two-forms and show that we
again obtain NIL geometry. Solve the Killing equation.
Exercise: compute the Einstein tensor for the spacetime
ds^2 = -(dt + dy/t)^2 + t^2 [dx^2 + dy^2 + (dz-y dx)^2]
What does this spacetime represent, physically? Compute the expansion and
vorticity tensors and acceleration vector for X = e_1, where
e_1,e_2,e_3,e_4 is the ONB of vectors dual to the obvious ONB of
one-forms. Compute the electrogravitic and magnetogravitic tensors and the
three dimensional Riemann tensor of the hyperslices t = t0 (why is this
last demand not an impossible task?).
Hmm... what about E(2)? That is, the euclidean group generated by
translations in R^2 and rotations about the origin. We can write
[ cos(z) -sin(z) x ]
L = [ sin(z) cos(z) y ]
[ 0 0 1 ]
Note E(2) as written in this way acts on R^2 from the left via
[ u ] [ cos(z) -sin(z) x ] [ u ] [ u cos(z) - v sin(z) + x ]
[ v ] -> [ sin(z) cos(z) y ] [ v ] = [ u sin(z) + v cos(z) + y ]
[ 1 ] [ 0 0 1 ] [ 1 ] [ 1 ]
Exercise: show that the left-invariant ONB is
o^1 = cos(z) dx + sin(z) dy
o^2 = -sin(z) dx + cos(z) dy
o^3 = dz
Compute the connection one-forms and curvature two-forms. What familiar
Riemannian three-manifold is E(2) locally isometric to? Show that
d/dx, d/dy, y d/dx - x d/dy
are Killing vectors. To what unipotent subgroups of E(2) do they
correspond? Write down the obvious basis for the Lie algebra e(2) and
compute the commutators. What Bianchi class is e(2)?
Exercise (for readers who have studied principal bundles): notice that the
coordinate z is periodic, so that a typical chart covers
-infty < x,y < infty, -Pi < z < Pi
Conclude that E(2) is a circle bundle over R^2. Show further that E(2) is
the semidirect product of R^2 (as an additive abelian group) and SO(2).
Can you draw a picture showing the normal subgroup R^2 and the non-normal
subgroup SO(2)? Explain the geometric meaning of the commutators.
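For the exercise on the Lie algebra e(2), here is a quick numpy check
with one possible choice of basis (X1 the rotation generator, X2, X3 the
two translation generators; this labeling is my choice, so treat it as a
hint rather than the unique answer):

```python
import numpy as np

# A candidate Bianchi basis for e(2).
X1 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])  # rotation
X2 = np.zeros((3, 3)); X2[0, 2] = 1.                        # x-translation
X3 = np.zeros((3, 3)); X3[1, 2] = 1.                        # y-translation

br = lambda A, B: A @ B - B @ A

# Bianchi class VII_0 requires [X1,X2] = X3, [X2,X3] = 0, [X3,X1] = X2.
ok = (np.array_equal(br(X1, X2), X3)
      and np.all(br(X2, X3) == 0)
      and np.array_equal(br(X3, X1), X2))
```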
Exercise (for readers who have studied dynamical systems): why does it
make sense, intuitively, that E(2) should be locally isometric to the
manifold you found above? Make sense of the demand: compute Lyapunov
exponents for the obvious flows associated with the obvious unipotent
subgroups of E(2), and characterize these as dynamical systems.
(To be continued...)
==================================================================
From: Chris Hillman
Newsgroups: sci.physics.relativity
Subject: Re: The Joy of Forms: Invariant ONB's for Lie Groups
Date: Sat, 28 Oct 2000 16:53:34 -0700
If you've done the exercises, you found that e(2) is a three dimensional
real Lie algebra of Bianchi class VII_0. What about VII_a with a =/= 0?
It's not too hard to guess that to obtain this we can try replacing the
rotation with a loxodromic motion which dilates and rotates, so that our
new Lie group, which I'll call Lox(2,a), is parameterized like this:
[ exp(a*z)*cos(z) -exp(a*z)*sin(z) x ]
L = [ exp(a*z)*sin(z) exp(a*z)*cos(z) y ]
[ 0 0 1 ]
Exercise: Find a Bianchi basis for the Lie algebra lox(2,a) (not hard!)
and verify that this is in the Bianchi Lie algebra isomorphism class VII_a.
Verify further that the left-invariant lox(2,a)-valued Maurer-Cartan
form is
[ a*o^3 -o^3 o^1 ]
L^(-1) dL = [ o^3 a*o^3 o^2 ]
[ 0 0 0 ]
where the left-invariant ONB of one-forms is
o^1 = dx + (ax-y) dz
o^2 = dy + (ay+x) dz
o^3 = dz
Compute the connection one-forms and curvature two-forms. To what
familiar three-dimensional Riemannian manifold is Lox(2,a) locally
isometric? Does this make sense intuitively? What about global isometry?
In particular, is Lox(2,a) simply connected and thus a universal cover?
Does Lox(2,a) have any compact left coset spaces (homogeneous Riemannian
manifolds with the same local geometry)? How about orbifolds?
Exercise: substitute hyperbolic trig functions (with the appropriate
signs!) for the circular trig functions appearing in Lox(2,a) and E(2), to
obtain the groups Lox(1,1,a) and E(1,1). Show that e(1,1) is of Bianchi
class VI_0 and that lox(1,1,a), where a =/= 0, is of Bianchi class VI_a.
The Riemannian manifold E(1,1) is locally isometric to the Thurston
homogeneous manifold called SOL. The Riemannian manifold Lox(1,1,a) is a
three dimensional Riemannian manifold, but it turns out not to admit any
compact quotients, so it is not a Thurston manifold.
Tardy extended remark:
I should have explained briefly what this Thurston manifold business is
all about. First, if H is a subgroup of a group G, then the left cosets
G/H = {gH: g \in G} form a "homogeneous G-set". If G is a Lie group and H
is a -closed- subgroup, then G/H is a homogeneous manifold. If G has been
given a Riemannian structure, G/H is a homogeneous Riemannian manifold.
If H is a -discrete- subgroup of G, and an additional technical condition
is satisfied, then G will be a covering space of G/H (see for example the
textbook by Boothby). Similarly for right cosets H\G.
For example, SU(2) is a covering space of SO(3), a double cover, in fact.
The case when G/H is -compact- is of particular interest because compact
three dimensional Riemannian homogeneous manifolds play a central role in
Thurston's programme for classifying all three dimensional topological
manifolds up to homeomorphism by smoothly evolving them into a "canonical
form" which consists of a sort of connected sum of standard pieces which
are compact quotients of one of the eight three dimensional Riemannian
homogeneous manifolds which admits compact quotients. These are E^3, H^3,
S^3, R x H^2, S^2 x R, NIL, SOL, and the universal cover of SL(2,R). It
turns out that all but S^2 x R turn up when we study three dimensional Lie
groups. Several of these turn up more than once, in fact, because they
have more than three independent Killing vectors (these manifolds are not
only -homogeneous- but also -isotropic-). S^2 x R does not turn up as a
three dimensional Lie group because it admits only two independent Killing
vectors, but it can be considered a circle bundle over one of our two
dimensional Lie groups. On the other hand, the Bianchi types VI_a and IV
do not give rise to Lie groups which admit compact quotients, so these
groups are three dimensional Riemannian homogeneous manifolds, but not
Thurston manifolds.
One last very important point: it turns out that whenever G is a compact
Lie group, the left and right invariant ONBs agree, so that we have a
"bi-invariant metric". This is very important in harmonic analysis; one
of the nice things about compact Lie groups like SO(2) and SU(2) is that
because they have finite volume we can average over them. This is why
Fourier series are so useful--- we can represent an appropriate function
by taking its moments, i.e. by averaging it against a kernel over SO(2).
You can
perform a similar "harmonic analysis" by averaging against an appropriate
kernel over other compact groups too--- this is the subject of modern
harmonic analysis. The point is that the eigenfunctions of important
partial differential equations are precisely the basis functions appearing
in the harmonic analysis.
Exercise (for those with Maple or Mathematica, who have studied
manifolds): starting with the Lie algebras so(3) and so(1,2), parameterize
SO(3) and SO(1,2). Find left and right invariant ONBs. Show that SO(3)
is locally isometric to S^3, but globally isometric to RP^3, the quotient
of S^3 obtained by identifying antipodal points. On the other hand,
SO(1,2) is locally isometric to the Thurston manifold which is the
universal cover of SL(2,R). (Remark: someone complained the other day in
sci.physics.research that this is an awkward name, but as John Baez
replied, the universal cover of SL(2,R), despite its importance, does not
seem to have acquired a name of its own! It could also with equal justice
be called the universal cover of SO(1,2), of course. See the book by
Segal et al. mentioned in my previous post for a nice picture of this
manifold,
which turns out to be a Lie group which is not isomorphic to any matrix
Lie group.)
Exercise: looking ahead to my discussion of Cartan manifolds and gauge
theories, consider the three dimensional Lie group which I'll call Hom(2),
which can be parameterized like this:
[ exp(z) 0 x ]
L = [ 0 exp(z) y ]
[ 0 0 1 ]
This acts on R^2 by translations (x,y components) and dilations (z
components). Verify that a left-invariant ONB for Hom(2) is
o^1 = exp(-z) dx
o^2 = exp(-z) dy
o^3 = dz
Verify further that the two unipotent subgroups associated with the
Killing vectors d/dx, d/dy are translations along x,y respectively in R^2.
Verify that the third Killing vector d/dz + x d/dx + y d/dy is associated
with the unipotent subgroup of dilations. Compute the left-invariant
connection one-forms and curvature two-forms of Hom(2). To what familiar
three dimensional Riemannian homogeneous manifold is it isometric? Find a
Bianchi basis for the Lie algebra hom(2) (easy!) and verify that it is of
Bianchi class V. Thus, the three dimensional Lie algebras of Bianchi
class V and Bianchi class VII_a, a =/= 0, each give rise to the same three
dimensional Riemannian homogeneous manifold, which in fact has six
independent Killing vectors: like E^3, it is not only homogeneous but also
isotropic. Repeat, for the right-invariant ONB, connection, and curvature
of Hom(2).
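The left-invariant Maurer-Cartan form of Hom(2) is again easy to check
symbolically in sympy:

```python
import sympy as sp

x, y, z, dx, dy, dz = sp.symbols('x y z dx dy dz')

L = sp.Matrix([[sp.exp(z), 0, x], [0, sp.exp(z), y], [0, 0, 1]])
# Differential of L, with dx, dy, dz as formal one-form symbols.
dL = sp.Matrix([[sp.exp(z) * dz, 0, dx],
                [0, sp.exp(z) * dz, dy],
                [0, 0, 0]])
mc = sp.simplify(L.inv() * dL)   # left-invariant Maurer-Cartan form
# Expect o^1 = exp(-z) dx and o^2 = exp(-z) dy in the last column,
# with o^3 = dz on the diagonal.
```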
Exercise: consider the three dimensional Lie algebra sp(1), which consists
of all two by two matrices of form
[ z x ]
A = [ y -z ]
The obvious basis is not a Bianchi basis. What is the Bianchi class of
sp(1)? Parameterize the Lie group Sp(1) by ZYX where
    [ exp(z/2)     0      ]
Z = [    0     exp(-z/2)  ]
[ 1 x ] [ 1 0 ]
X = [ 0 1 ] Y = [ y 1 ]
Compute the left-invariant ONB of one-forms, the left-invariant connection
one-forms and the left-invariant curvature two-forms. Find three
independent Killing vectors for Sp(1) given the left-invariant metric.
Repeat, for the right-invariant ONB.
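A numerical sanity check that the ZYX product really lands in SL(2,R)
(note the Z factor must be diag(exp(z/2), exp(-z/2)) to have determinant
one), together with the standard brackets of the algebra of trace-free
two by two matrices:

```python
import numpy as np

# Sample values of the parameters z, x, y.
z0, x0, y0 = 0.4, -1.3, 0.7
Z = np.array([[np.exp(z0 / 2), 0.], [0., np.exp(-z0 / 2)]])  # det = 1
Xm = np.array([[1., x0], [0., 1.]])
Ym = np.array([[1., 0.], [y0, 1.]])
L = Z @ Ym @ Xm              # the ZYX parameterization
det = np.linalg.det(L)       # = 1: the product lies in SL(2,R)

# Standard basis of the algebra [[z,x],[y,-z]] and its brackets.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
br = lambda A, B: A @ B - B @ A   # [H,E] = 2E, [H,F] = -2F, [E,F] = H
```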
Exercise: consider the Lie group parameterized by
[ exp(x) 0 z ]
L = [ 0 exp(y) 0 ]
[ 0 0 1 ]
What Bianchi class is its Lie algebra? Compute the left-invariant ONB of
one-forms, the left-invariant connection one-forms, and the left-invariant
curvature two-forms. To what Thurston manifold is this group locally
isometric? Show that wrt the left-invariant metric, three linearly
independent Killing vectors are d/dy, d/dz, d/dx + z d/dz. To what
unipotent subgroups do these correspond?
Exercise: which Bianchi class has been left unaccounted for so far? Can
you find a matrix group which yields the missing Lie algebra? If not, can
you follow Bianchi's path, and find a matrix representation of the Lie
algebra and then exponentiate to find the missing three dimensional Lie
group?
If you've gotten this far, you now know an awful lot about three
dimensional Lie groups! I won't attempt to classify the four dimensional
Lie groups, but here is one example which will be the source of great fun
when we get to Cartan manifolds and gauge theory:
Exercise: consider the group parameterized by
[ exp(w) cos(z) -exp(w) sin(z) x ]
L = [ exp(w) sin(z) exp(w) cos(z) y ]
[ 0 0 1 ]
Notice that this is a semidirect product (normal subgroup R^2 as an
additive abelian group) and that it acts on R^2 in the usual way.
Compute the left-invariant and right-invariant ONB's of one-forms,
connection one-forms and curvature two-forms. To what easily described
Riemannian four manifold is it locally isometric? How about global
isometry?
In the next post in this thread, I plan to describe Cartan geometry, a
beautiful and elegant unification of Klein's unification in the late
nineteenth century of the "classical geometries" (elliptic, parabolic,
hyperbolic, projective geometries, Peuceaux geometry, Lie sphere geometry,
etc.) using the notion of homogeneous spaces G/H where G is a Lie group
and H is a closed subgroup, with the equally elegant theory of Gauss and
Riemann in which the geometry is "inhomogeneous". As usual, I'll provide
plenty of simple but nontrivial examples.
Chris Hillman
Home Page: http://www.math.washington.edu/~hillman/personal.html