## Morse Theory and Bott Periodicity

### Pei-Hsuan Chang

In this article, we will go through the basics of Morse theory, which Bott calls “baby Morse theory”. It gives us a way to recover the homotopy type of a manifold. After proving the main results of Morse theory, we will prove the Bott periodicity theorem as an application to compact Lie groups; it calculates the homotopy groups of the unitary group in arbitrary dimension.

We will follow the methods in J. Milnor's *Morse Theory* (§1–3, §23, and part of §20, §22). However, there is a more elementary proof of the Bott periodicity theorem which does not involve Morse theory; it can be found in M. Atiyah and R. Bott, "On the Periodicity Theorem for Complex Vector Bundles", Acta Mathematica, Vol. 112 (1964), pp. 229–247.

### 1 Basic Morse Theory

The fundamental concept in Morse theory is that a “good function” $f : M \to \mathbb{R}$ encodes a lot of information about $M$. To be more specific, Morse theory studies the critical points of good functions on $M$, and gives a nice way to recover the homotopy type of $M$.

Consider a smooth function $f : M \to \mathbb{R}$. At each point $p$ of $M$, $f$ induces a map $f_* : T_pM \to T_{f(p)}\mathbb{R}$ between the tangent spaces of $M$ at $p$ and of $\mathbb{R}$ at $f(p)$.

Definition 1.1. A point $p \in M$ is a *critical point* of $f$ if the induced map $f_*$ is zero. More specifically, in a local coordinate system $(x^1, \cdots, x^n)$, $p$ satisfies

$$\frac{\partial f}{\partial x^1}(p) = \frac{\partial f}{\partial x^2}(p) = \cdots = \frac{\partial f}{\partial x^n}(p) = 0.$$

Definition 1.2. The *Hessian* $H_f(p)$ (or $f_{**}$) of a function $f : M \to \mathbb{R}$ at $p$ is the $n \times n$ symmetric matrix whose $ij$-th entry is $\frac{\partial^2 f}{\partial x^i \partial x^j}(p)$, where $(x^1, \cdots, x^n)$ is a local coordinate system at $p$. We say a critical point $p$ is *nondegenerate* if the matrix $H_f(p)$ is nonsingular.


Definition 1.3. The *index* of a bilinear form $H$ on a vector space $V$ is the maximal dimension of a subspace of $V$ on which $H$ is negative definite.

The point $p$ is a nondegenerate critical point of $f$ if and only if $H_f(p)$ has nullity equal to $0$. The index of $H_f(p)$ on $T_pM$ will be referred to simply as the *index* of $f$ at $p$.

As mentioned above, we are going to study a “good function” on $M$. The notion of a “good function” is formalized to mean a Morse function.

Definition 1.4. A map $f : M \to \mathbb{R}$ is a *Morse function* if all the critical points of $f$ are nondegenerate; that is, if $H_f(p)$ is nonsingular at each critical point $p$.

To reach our goal of studying a critical point $p$, we need a nice coordinate system to work with near $p$. This important tool is the Morse lemma. Notably, we only need to know the index at $p$ to apply this proposition.

Proposition 1.5 (Morse lemma). If $p$ is a nondegenerate critical point of $f$ and the index of $f$ at $p$ is $\lambda$, then there exist local coordinates $(y^1, y^2, \cdots, y^n)$ in a neighborhood $U$ of $p$ with $y^i(p) = 0$ for all $i$ and such that the identity

$$f = f(p) - (y^1)^2 - \cdots - (y^\lambda)^2 + (y^{\lambda+1})^2 + \cdots + (y^n)^2$$

holds throughout $U$.

Before we prove the Morse lemma, we first show the following:

Lemma 1.6. Let $f$ be a smooth function in a convex neighborhood $V$ of $0$ in $\mathbb{R}^n$, with $f(0) = 0$. Then

$$f(x_1, \cdots, x_n) = \sum_{i=1}^n x_i \, g_i(x_1, \cdots, x_n)$$

for suitable smooth functions $g_i$ defined in $V$, with $g_i(0) = \frac{\partial f}{\partial x_i}(0)$.

Proof.

$$f(x_1, \cdots, x_n) = \int_0^1 \frac{df(tx_1, \cdots, tx_n)}{dt}\, dt = \int_0^1 \sum_{i=1}^n \frac{\partial f}{\partial x_i}(tx_1, \cdots, tx_n)\, x_i \, dt.$$

Just define $g_i(x_1, \cdots, x_n) = \int_0^1 \frac{\partial f}{\partial x_i}(tx_1, \cdots, tx_n)\, dt$; then we get the result.
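Lemma 1.6 is easy to check symbolically. The following sketch (the function `f` below is an arbitrary example of ours, not from the text; it assumes sympy) verifies both the decomposition and the value $g_i(0) = \frac{\partial f}{\partial x_i}(0)$:

```python
import sympy as sp

# Hypothetical test function with f(0) = 0 (our own example)
x1, x2, t = sp.symbols('x1 x2 t')
f = x1 + x1**2 * x2

def g(var):
    # g_i(x) = \int_0^1 (∂f/∂x_i)(t x) dt, as in the proof of Lemma 1.6
    df = sp.diff(f, var).subs({x1: t * x1, x2: t * x2})
    return sp.integrate(df, (t, 0, 1))

g1, g2 = g(x1), g(x2)

# f = x1*g1 + x2*g2 throughout a convex neighborhood of 0
assert sp.simplify(x1 * g1 + x2 * g2 - f) == 0
# g_i(0) = (∂f/∂x_i)(0)
assert g1.subs({x1: 0, x2: 0}) == sp.diff(f, x1).subs({x1: 0, x2: 0})
```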

Now, we return to the proof of the Morse lemma.


Proof of the Morse lemma. By linear algebra, we can easily show that for any such expression for $f$, $\lambda$ must be the index of $f$ at $p$.

It remains to show that such a suitable coordinate system $(y^1, \cdots, y^n)$ exists.

Without loss of generality, we may assume that $p$ is the origin of $\mathbb{R}^n$ and that $f(p) = f(0) = 0$. By the previous lemma, we can write

$$f(x_1, \cdots, x_n) = \sum_{i=1}^n x_i \, g_i(x_1, \cdots, x_n)$$

for $(x_1, \cdots, x_n)$ in some neighborhood of $0$. Since $0$ is assumed to be a critical point, $g_i(0) = \frac{\partial f}{\partial x_i}(0) = 0$.

By applying the lemma to the $g_i$'s, we get

$$g_i(x_1, \cdots, x_n) = \sum_{j=1}^n x_j \, h_{ij}(x_1, \cdots, x_n)$$

for some smooth $h_{ij}$ with

$$h_{ij}(0) = \frac{\partial g_i}{\partial x_j}(0) = \int_0^1 \frac{\partial^2 f}{\partial x_i \partial x_j}(tx_1, \cdots, tx_n)\, t \, dt \,\bigg|_{x=0} = \frac{1}{2}\frac{\partial^2 f}{\partial x_i \partial x_j}(0).$$

It follows that

$$f(x_1, \cdots, x_n) = \sum_{i,j=1}^n x_i x_j \, h_{ij}(x_1, \cdots, x_n).$$

Let $\bar h_{ij} = \frac{1}{2}(h_{ij} + h_{ji})$; then $\bar h_{ij} = \bar h_{ji}$ and $f = \sum_{i,j} x_i x_j \bar h_{ij}$. Moreover, the matrix $(\bar h_{ij}(0))$ is equal to $\big(\frac{1}{2}\frac{\partial^2 f}{\partial x^i \partial x^j}(0)\big)$ and hence is nonsingular.

This gives us the desired expression for $f$, in a perhaps smaller neighborhood of $0$. To complete the proof, we imitate the usual diagonalization argument for quadratic forms. The key step is as follows:

We proceed by induction. Suppose that there are coordinates $(u_1, \cdots, u_n)$ in a neighborhood $U_1$ of $0$ such that

$$f = \pm(u_1)^2 \pm \cdots \pm (u_{r-1})^2 + \sum_{i,j \ge r} u_i u_j \, H_{ij}(u_1, \cdots, u_n)$$

throughout $U_1$, where the matrix $(H_{ij})$ is symmetric. After a linear change in the last $n - r + 1$ coordinates, we may assume that $H_{rr}(0) \neq 0$.


Now, let $g(u_1, \cdots, u_n)$ be the square root of $|H_{rr}(u_1, \cdots, u_n)|$. Then $g$ is a smooth, non-zero function throughout some smaller neighborhood $U_2 \subset U_1$ of $0$. Next, we introduce new coordinates $(v_1, \cdots, v_n)$ by $v_i = u_i$ for $i \neq r$ and

$$v_r = g \cdot \left[ u_r + \frac{1}{H_{rr}} \sum_{i > r} u_i H_{ir} \right].$$
By the inverse function theorem, $(v_1, \cdots, v_n)$ serve as coordinate functions within a sufficiently small neighborhood $U_3$ of $0$. Then $f$ can be expressed as

$$f = \sum_{i \le r} \pm(v_i)^2 + \sum_{i,j > r} v_i v_j H'_{ij}$$

throughout $U_3$. This completes the induction and the proof of the Morse lemma.

Corollary 1.7. Nondegenerate critical points are isolated.
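As a concrete illustration of Definitions 1.1–1.4, the following sketch (our own example function, assuming sympy) finds the critical points of a Morse function on $\mathbb{R}^2$ and computes the index at each:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2   # a hypothetical Morse function on R^2

# Critical points: where all partial derivatives vanish
crits = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
H = sp.hessian(f, (x, y))

indices = []
for p in sorted(crits, key=lambda s: s[x]):
    Hp = H.subs(p)
    assert Hp.det() != 0   # every critical point is nondegenerate, so f is Morse
    # index = number of negative eigenvalues (with multiplicity) of the Hessian
    idx = sum(m for ev, m in Hp.eigenvals().items() if ev < 0)
    indices.append(idx)

# (-1, 0) is a saddle of index 1; (1, 0) is a local minimum of index 0
assert indices == [1, 0]
```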

Another important tool in the proofs below is the notion of a 1-parameter group of diffeomorphisms. It gives us a way to construct the deformations we need.

Definition 1.8. A *1-parameter group of diffeomorphisms* of a manifold $M$ is a smooth map $\varphi : \mathbb{R} \times M \to M$ such that

1. for each $t \in \mathbb{R}$ the map $\varphi_t : M \to M$ defined by $\varphi_t(q) = \varphi(t, q)$ is a diffeomorphism of $M$ onto itself;

2. for all $t, s \in \mathbb{R}$ we have $\varphi_{t+s} = \varphi_t \circ \varphi_s$.

Definition 1.9. Given a 1-parameter group $\varphi$ of diffeomorphisms of $M$, we define a vector field $X$ on $M$ as follows. For every smooth real-valued function $f$, let

$$X_q(f) = \lim_{h \to 0} \frac{f(\varphi_h(q)) - f(q)}{h}.$$

This vector field $X$ is said to *generate* the group $\varphi$.

Lemma 1.10. A smooth vector field on $M$ which vanishes outside of a compact set $K \subset M$ generates a unique 1-parameter group of diffeomorphisms of $M$.


Proof. Given any smooth curve $t \mapsto c(t) \in M$, we can define the velocity vector $\frac{dc}{dt} \in T_{c(t)}M$ by the identity $\frac{dc}{dt}(f) = \lim_{h \to 0} \frac{f(c(t+h)) - f(c(t))}{h}$. Now, let $\varphi$ be a 1-parameter group of diffeomorphisms generated by the vector field $X$. Then for each fixed $q$, the curve $t \mapsto \varphi_t(q)$ satisfies the differential equation $\frac{d\varphi_t(q)}{dt} = X_{\varphi_t(q)}$ with initial condition $\varphi_0(q) = q$. This is true since

$$\frac{d\varphi_t(q)}{dt}(f) = \lim_{h \to 0} \frac{f(\varphi_{t+h}(q)) - f(\varphi_t(q))}{h} = \lim_{h \to 0} \frac{f(\varphi_h(p)) - f(p)}{h} = X_p(f),$$

where $p = \varphi_t(q)$. Also, such a differential equation locally has a unique solution which depends smoothly on the initial condition.

Thus, for each point of $M$ there exists a neighborhood $U$ and a number $\varepsilon > 0$ such that this differential equation has a unique solution for $q \in U$ and $|t| < \varepsilon$.

By the compactness of $K$, we can cover it by a finite number of such neighborhoods $U$. Let $\varepsilon_0 > 0$ be the smallest of the corresponding $\varepsilon$. Setting $\varphi_t(q) = q$ for $q \notin K$, the differential equation has a unique solution $\varphi_t(q)$ for $|t| < \varepsilon_0$ and for all $q \in M$. This solution is smooth as a function of both variables. Moreover, it is clear that $\varphi_{t+s} = \varphi_t \circ \varphi_s$ for $|t|, |s|, |t+s| < \varepsilon_0$, and each $\varphi_t$ is a diffeomorphism.

It remains to define $\varphi_t$ for $|t| \ge \varepsilon_0$. Any $t$ can be expressed as $t = k(\varepsilon_0/2) + r$ with $k \in \mathbb{Z}$ and $|r| < \varepsilon_0/2$. If $k \ge 0$, set

$$\varphi_t = \varphi_{\varepsilon_0/2} \circ \varphi_{\varepsilon_0/2} \circ \cdots \circ \varphi_{\varepsilon_0/2} \circ \varphi_r,$$

where $\varphi_{\varepsilon_0/2}$ is iterated $k$ times. If $k < 0$, we just replace $\varphi_{\varepsilon_0/2}$ by $\varphi_{-\varepsilon_0/2}$. It is easy to see that $\varphi_t$ is well-defined, smooth, and satisfies $\varphi_{t+s} = \varphi_t \circ \varphi_s$.

Remark. The hypothesis that $X$ vanishes outside a compact set is essential. For example, let $M$ be the open interval $(0, 1) \subset \mathbb{R}$, and let $X = \frac{d}{dt}$ be the standard vector field on $M$. Then $X$ does NOT generate any 1-parameter group of diffeomorphisms of $M$: the flow would push points out of the interval in finite time.

We now give the proof of the main results of Morse theory, which are contained in the following two theorems.

For each $a \in \mathbb{R}$, we denote the set $\{p \in M \mid f(p) \le a\}$ by $M^a$. The following theorem is significant, as it shows that the homotopy type of $M^a$ can only change when $a$ moves past a critical point; we will investigate the effect of $a$ moving past a critical point after this theorem.

Theorem 1.11. If $f$ is a smooth real-valued function on $M$, $a \le b$, and $f^{-1}[a, b]$ is compact and contains no critical points of $f$, then $M^a$ is diffeomorphic to $M^b$. In fact, $M^a$ is a deformation retract of $M^b$.

Proof. The idea of the proof is to push $M^b$ down to $M^a$ along the orthogonal trajectories of the hypersurfaces $f = \text{constant}$.

Notice that the vector field $\operatorname{grad} f$ can be characterized by the identity

$$\langle X, \operatorname{grad} f \rangle = X(f)$$

for any vector field $X$; $\operatorname{grad} f$ vanishes precisely at the critical points of $f$. Also, for a curve $c : \mathbb{R} \to M$ with velocity vector $\frac{dc}{dt}$, we have

$$\left\langle \frac{dc}{dt}, \operatorname{grad} f \right\rangle = \frac{dc}{dt}(f) = \frac{d(f \circ c)}{dt}.$$

Let $\rho : M \to \mathbb{R}$ be a smooth function equal to $\langle \operatorname{grad} f, \operatorname{grad} f \rangle^{-1}$ throughout the compact set $f^{-1}[a, b]$, and which vanishes outside of a compact neighborhood of this set. Define a vector field $X$ by

$$X_q = \rho(q)\,(\operatorname{grad} f)_q.$$

Then $X$ satisfies the conditions of Lemma 1.10, so $X$ generates a 1-parameter group of diffeomorphisms $\varphi_t$.

For each $q \in M$, consider the function $g_q(t) = f(\varphi_t(q))$. If $\varphi_t(q) \in f^{-1}[a, b]$, then

$$\frac{dg_q(t)}{dt} = \frac{df(\varphi_t(q))}{dt} = \left\langle \frac{d\varphi_t(q)}{dt}, \operatorname{grad} f \right\rangle = \langle X, \operatorname{grad} f \rangle = +1.$$

Therefore, $g_q(t)$ is linear with derivative $+1$ as long as $\varphi_t(q) \in f^{-1}[a, b]$, so $f(\varphi_t(q)) = t + f(q)$ whenever $f(\varphi_t(q)) \in [a, b]$. Thus, $\varphi_{b-a} : M \to M$ is a diffeomorphism carrying $M^a$ to $M^b$.

To see that $M^a$ is a deformation retract of $M^b$, define a 1-parameter family of maps $r_t : M^b \to M^b$ by

$$r_t(q) = \begin{cases} q & \text{if } f(q) \le a, \\ \varphi_{t(a - f(q))}(q) & \text{if } a \le f(q) \le b. \end{cases}$$

It is easy to see that $r_0$ is the identity and $r_1$ is a retraction from $M^b$ onto $M^a$. This completes the proof.


With the next theorem, we will have completely characterized the homotopy type of M based on a Morse function f defined on it.

Theorem 1.12. Let $f : M \to \mathbb{R}$ be a smooth function, and let $p$ be a nondegenerate critical point of $f$ with index $\lambda$. Set $f(p) = c$, and suppose that for some $\varepsilon > 0$, $f^{-1}[c - \varepsilon, c + \varepsilon]$ is compact and contains no critical points of $f$ other than $p$. Then for all sufficiently small $\varepsilon$, $M^{c+\varepsilon}$ has the homotopy type of $M^{c-\varepsilon}$ with a $\lambda$-cell attached.

Proof. We first modify $f$ to a new function $F$ that agrees with $f$ except in a small neighborhood of $p$. Then, when we look at those $q$ such that $F(q) \le c - \varepsilon$, there will be an extra portion that $M^{c-\varepsilon}$ does not have. Studying this extra portion will allow us to prove the theorem. By the Morse lemma, we may write

$$f = c - (x^1)^2 - \cdots - (x^\lambda)^2 + (x^{\lambda+1})^2 + \cdots + (x^n)^2,$$

where $(x^1, \cdots, x^n)$ are local coordinates in a neighborhood $U$ of $p$ such that $x^1(p) = \cdots = x^n(p) = 0$.

Next, choose $\varepsilon > 0$ sufficiently small so that the image of $U$ under the diffeomorphic embedding $(x^1, \cdots, x^n) : U \to \mathbb{R}^n$ contains the closed ball $\{(x^1, \cdots, x^n) \mid \sum (x^i)^2 \le 2\varepsilon\}$, and $f^{-1}[c - \varepsilon, c + \varepsilon]$ is compact and contains no critical points other than $p$.

We now let $g$ be a smooth function such that:

1. $g(0) > \varepsilon$;

2. $g(r) = 0$ for $r \ge 2\varepsilon$;

3. $-1 < g'(r) \le 0$ for all $r$.

We define $F$ to agree with $f$ outside $U$, and within $U$ let

$$F = f - g\big((x^1)^2 + \cdots + (x^\lambda)^2 + 2(x^{\lambda+1})^2 + \cdots + 2(x^n)^2\big).$$

It is convenient to define two functions $\xi, \eta : U \to [0, \infty)$ by

$$\xi = (x^1)^2 + \cdots + (x^\lambda)^2 \quad \text{and} \quad \eta = (x^{\lambda+1})^2 + \cdots + (x^n)^2.$$

Then $f = c - \xi + \eta$ and $F = c - \xi + \eta - g(\xi + 2\eta)$. By the construction of $g$, we have $g(r) \ge 0$ for all $r \ge 0$, and $g(r) = 0$ when $r \ge 2\varepsilon$. So we find that $F \le f$ when $\xi + 2\eta \le 2\varepsilon$, and $F = f$ when $\xi + 2\eta \ge 2\varepsilon$.

Claim 1. The region $F^{-1}(-\infty, c + \varepsilon]$ coincides with the region $M^{c+\varepsilon}$.

Proof of the claim. If $\xi + 2\eta \ge 2\varepsilon$, then $F = f$. If $\xi + 2\eta \le 2\varepsilon$, we have

$$F \le f = c - \xi + \eta \le c + \tfrac{1}{2}\xi + \eta \le c + \varepsilon.$$

So the region $\xi + 2\eta \le 2\varepsilon$ lies in both $F^{-1}(-\infty, c + \varepsilon]$ and $M^{c+\varepsilon}$. $\square$
Claim 2. The critical points of $F$ in $U$ are the same as those of $f$ in $U$.

Proof of the claim. Notice that

$$\frac{\partial F}{\partial \xi} = -1 - g'(\xi + 2\eta) < 0, \qquad \frac{\partial F}{\partial \eta} = 1 - 2g'(\xi + 2\eta) \ge 1,$$

and

$$dF = \frac{\partial F}{\partial \xi}\, d\xi + \frac{\partial F}{\partial \eta}\, d\eta.$$

So $dF = 0$ in the region $\xi + 2\eta \le 2\varepsilon$ if and only if $d\xi$ and $d\eta$ are both $0$. Then $F$ has no critical points in $U$ other than the origin, which was the only critical point of $f$ in $U$. $\square$

Claim 3. The region $F^{-1}(-\infty, c - \varepsilon]$ is a deformation retract of $M^{c+\varepsilon}$.

Proof of the claim. Consider $F^{-1}[c - \varepsilon, c + \varepsilon]$. By Claim 1 and $F \le f$, we get

$$F^{-1}[c - \varepsilon, c + \varepsilon] \subset f^{-1}[c - \varepsilon, c + \varepsilon].$$

Therefore, $F^{-1}[c - \varepsilon, c + \varepsilon]$ is compact. Also,

$$F(p) = c - g(0) < c - \varepsilon,$$

so $p$, the only possible critical point of $F$, is not in $F^{-1}[c - \varepsilon, c + \varepsilon]$. Thus, we can apply Theorem 1.11, which gives the desired result. $\square$


For convenience, we denote the region $F^{-1}(-\infty, c - \varepsilon]$ by $M^{c-\varepsilon} \cup H$, where $H$ denotes the closure of $F^{-1}(-\infty, c - \varepsilon] \setminus M^{c-\varepsilon}$.

Now consider the cell $e^\lambda$ consisting of all points $q$ with $\xi(q) \le \varepsilon$ and $\eta(q) = 0$. Note that $e^\lambda$ is contained in the “handle” $H$: since $\frac{\partial F}{\partial \xi} < 0$, we have

$$F(q) \le F(p) < c - \varepsilon,$$

but $f(q) = c - \xi(q) + \eta(q) \ge c - \varepsilon$ for $q \in e^\lambda$. So $e^\lambda \subset F^{-1}(-\infty, c - \varepsilon] \setminus M^{c-\varepsilon} \subset H$.

Claim 4. $M^{c-\varepsilon} \cup e^\lambda$ is a deformation retract of $M^{c-\varepsilon} \cup H$.

Proof of the claim. Define $r_t$ to be the identity outside $U$, and define $r_t$ within $U$ as follows.

Case 1. On the region $\xi \le \varepsilon$, define $r_t$ by

$$(x^1, \cdots, x^n) \mapsto (x^1, \cdots, x^\lambda, tx^{\lambda+1}, \cdots, tx^n).$$

It is easy to check that $r_t$ maps $F^{-1}(-\infty, c - \varepsilon]$ into itself, since $\frac{\partial F}{\partial \eta} > 0$. Also, $r_1$ is the identity and $r_0$ maps this region into $e^\lambda$.

Case 2. On the region $\varepsilon \le \xi \le \eta + \varepsilon$, define $r_t$ by

$$(x^1, \cdots, x^n) \mapsto (x^1, \cdots, x^\lambda, s_t x^{\lambda+1}, \cdots, s_t x^n),$$

where the number $s_t \in [0, 1]$ is defined by

$$s_t = t + (1 - t)\sqrt{\frac{\xi - \varepsilon}{\eta}}.$$

Thus, $r_1$ is again the identity, and $r_0$ maps this region into $f^{-1}(c - \varepsilon)$. Notice that $r_t$ is continuous as $\xi \to \varepsilon$, $\eta \to 0$, and this definition coincides with that of Case 1 when $\xi = \varepsilon$.

Case 3. On the region $\eta + \varepsilon \le \xi$ (i.e. on $M^{c-\varepsilon}$), let $r_t$ be the identity. This coincides with Case 2 when $\xi = \eta + \varepsilon$.

Hence, we get the desired maps $r_t$. $\square$

Putting all four claims together, we complete the proof of the theorem.

It is amazing that a study of the local behavior of a Morse function $f$ can determine the homotopy type of $M$.


### 2 Conjugate Points and Path Space

To prove the Bott periodicity theorem, we need several tools. Some of them are applications of Morse theory.

Let $M$ be a smooth manifold and let $p$ and $q$ be two (not necessarily distinct) points of $M$. By a *piecewise smooth path* from $p$ to $q$ we mean a map $\omega : [0, 1] \to M$ such that

1. there exists a subdivision $0 = t_0 < t_1 < \cdots < t_k = 1$ of $[0, 1]$ so that each $\omega|_{[t_{i-1}, t_i]}$ is smooth;

2. $\omega(0) = p$ and $\omega(1) = q$.

We denote the set of all piecewise smooth paths from $p$ to $q$ in $M$ by $\Omega(M; p, q)$, or briefly by $\Omega(M)$ or $\Omega$.

Suppose now that $M$ is a Riemannian manifold. The length of a vector $v \in T_pM$ will be denoted by $\|v\| = \langle v, v \rangle^{1/2}$. For $\omega \in \Omega$, define the *energy* of $\omega$ from $a$ to $b$ (where $0 \le a < b \le 1$) as

$$E_a^b(\omega) = \int_a^b \left\| \frac{d\omega}{dt} \right\|^2 dt.$$

We denote $E_0^1$ by $E$.
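As a quick numerical illustration (our own example, assuming numpy, with $M = \mathbb{R}^2$ flat): a half circle of radius $1$ traversed at constant speed $\pi$ has energy $E = \int_0^1 \|d\omega/dt\|^2\, dt = \pi^2$:

```python
import numpy as np

# omega(t): half circle in the plane, traversed at constant speed pi
t = np.linspace(0.0, 1.0, 100001)
omega = np.stack([np.cos(np.pi * t), np.sin(np.pi * t)])

v = np.gradient(omega, t, axis=1)    # velocity d(omega)/dt
speed2 = (v**2).sum(axis=0)          # ||d(omega)/dt||^2

# trapezoidal rule for E = \int_0^1 ||d(omega)/dt||^2 dt
E = float(np.sum(0.5 * (speed2[1:] + speed2[:-1]) * np.diff(t)))
assert np.isclose(E, np.pi**2, rtol=1e-4)
```

Since the path has constant speed, the Schwarz inequality $E \ge L^2$ (with $L$ the length) is attained with equality here.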

Definition 2.1. Let $p = \gamma(a)$ and $q = \gamma(b)$ be two points on a geodesic $\gamma$ with $a \neq b$. Then $p$ and $q$ are said to be *conjugate along* $\gamma$ if there is a non-zero Jacobi field $J$ along $\gamma$ which vanishes at $t = a$ and $t = b$. The *multiplicity* of the conjugate points is the dimension of the vector space of all the Jacobi fields satisfying this condition.

Notice that the index of the Hessian of $E$,

$$E_{**} : T_\gamma\Omega \times T_\gamma\Omega \to \mathbb{R},$$

is defined to be the maximum dimension of a subspace of $T_\gamma\Omega$ on which $E_{**}$ is negative definite.

To compute the index of a geodesic, we state the following theorem without proof. This theorem allows us to compute the index by counting the multiplicities of all the conjugate points.

Theorem 2.2 (Morse). The index of $E_{**}$ is equal to the number of points $\gamma(t)$, with $0 < t < 1$, such that $\gamma(t)$ is conjugate to $\gamma(0)$ along $\gamma$, each such conjugate point being counted with its multiplicity. This index is always finite.


Now, we introduce a useful tool connecting the multiplicities of conjugate points with the eigenvalues of a special linear transformation on $T_pM$.

Theorem 2.3. Let $\gamma : \mathbb{R} \to M$ be a geodesic in a locally symmetric manifold. Let $V = \frac{d\gamma}{dt}(0)$ be the velocity vector at $p = \gamma(0)$. Define a linear transformation $K_V : T_pM \to T_pM$ by $K_V(W) = R(V, W)V$. Let $e_1, \cdots, e_n$ denote the eigenvalues of $K_V$. The conjugate points to $p$ along $\gamma$ are the points $\gamma(\pi k/\sqrt{e_i})$, where $k$ is any non-zero integer and $e_i$ is any positive eigenvalue of $K_V$. The multiplicity of $\gamma(t)$ as a conjugate point is equal to the number of $e_i$ such that $t$ is a multiple of $\pi/\sqrt{e_i}$.

Proof. First observe that $K_V$ is self-adjoint:

$$\langle K_V(W), W' \rangle = \langle W, K_V(W') \rangle.$$

This follows immediately from the symmetry relation

$$\langle R(X, Y)Z, W \rangle = \langle R(Z, W)X, Y \rangle.$$

Therefore we may choose an orthonormal basis $U_1, \cdots, U_n$ for $T_pM$ so that

$$K_V(U_i) = e_i U_i,$$

where $e_1, \cdots, e_n$ are the eigenvalues. Extend the $U_i$ to vector fields along $\gamma$ by parallel translation. Then, since $M$ is locally symmetric, the condition

$$R(V, U_i)V = e_i U_i$$

remains true along $\gamma$. Any vector field $W$ along $\gamma$ may be expressed uniquely as $W(t) = W_1(t) U_1(t) + \cdots + W_n(t) U_n(t)$.

Then the Jacobi equation $\frac{D^2 W}{dt^2} + K_V(W) = 0$ takes the form

$$\sum_i \frac{d^2 W_i}{dt^2}\, U_i + \sum_i e_i W_i U_i = 0.$$

Since the $U_i$'s are everywhere linearly independent, this is equivalent to the system of $n$ equations

$$\frac{d^2 W_i}{dt^2} + e_i W_i = 0.$$

If $e_i > 0$, then

$$W_i(t) = c_i \sin(\sqrt{e_i}\, t)$$

for some constant $c_i$, and the zeros of $W_i(t)$ are at the multiples of $t = \pi/\sqrt{e_i}$. If $e_i = 0$, then $W_i(t) = c_i t$, and if $e_i < 0$, then $W_i(t) = c_i \sinh(\sqrt{|e_i|}\, t)$, for some constant $c_i$. Thus, if $e_i \le 0$, $W_i(t)$ vanishes only at $t = 0$. This completes the proof.
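The scalar Jacobi equation above can be checked with a computer algebra system. This sketch (assuming sympy, with $e = 4$ as an arbitrary positive eigenvalue of ours) confirms that the solution vanishing at $t = 0$ is a multiple of $\sin(\sqrt{e}\, t)$, whose zeros are the multiples of $\pi/\sqrt{e}$:

```python
import sympy as sp

t = sp.symbols('t')
W = sp.Function('W')
e = 4  # an arbitrary positive eigenvalue e_i (our choice)

# Solve W'' + e W = 0 with W(0) = 0, W'(0) = 2
sol = sp.dsolve(W(t).diff(t, 2) + e * W(t), W(t),
                ics={W(0): 0, W(t).diff(t).subs(t, 0): 2})

# The solution is sin(sqrt(e) t) = sin(2 t)
assert sp.simplify(sol.rhs - sp.sin(2 * t)) == 0
# It vanishes at t = pi/sqrt(e) = pi/2
assert sol.rhs.subs(t, sp.pi / 2) == 0
```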

Let $\sqrt{d}$ be the length of a minimal geodesic from $p$ to $q$, and denote $E^{-1}[0, d]$ by $\Omega^d$. The next theorem gives conditions under which the relative homotopy groups $\pi_i(\Omega, \Omega^d)$ vanish; these conditions involve the indices of the geodesics.

Theorem 2.4. If the space of minimal geodesics from $p$ to $q$ is a topological manifold, and if every non-minimal geodesic from $p$ to $q$ has index at least $\lambda_0$, then the relative homotopy group $\pi_i(\Omega, \Omega^d)$ is zero for $0 \le i \le \lambda_0$.

The proof will be based on the following lemmas:

Let $K$ be a compact subset of $\mathbb{R}^n$, let $U$ be a neighborhood of $K$, and let $f : U \to \mathbb{R}$ be a smooth function such that all critical points of $f$ in $K$ have index $\ge \lambda_0$.

Lemma 2.5. If $g : U \to \mathbb{R}$ is a smooth function such that

$$\left| \frac{\partial g}{\partial x_i} - \frac{\partial f}{\partial x_i} \right| < \varepsilon \quad \text{and} \quad \left| \frac{\partial^2 g}{\partial x_i \partial x_j} - \frac{\partial^2 f}{\partial x_i \partial x_j} \right| < \varepsilon,$$

for all $i, j$ uniformly throughout $K$, for some sufficiently small $\varepsilon$, then all the critical points of $g$ in $K$ have index $\ge \lambda_0$.

Proof. Let

$$K_g(x) = \sum_i \left| \frac{\partial g}{\partial x_i}(x) \right| \ge 0.$$

Let $e_g^1(x) \le \cdots \le e_g^n(x)$ be the $n$ eigenvalues of the matrix with $ij$-th entry $\frac{\partial^2 g}{\partial x_i \partial x_j}(x)$. A critical point $x$ of $g$ has index at least $\lambda_0$ if and only if $e_g^{\lambda_0}(x)$ is negative. Note that these functions are continuous, as the eigenvalues of a matrix depend continuously on the entries of the matrix.

Now, consider $m_g(x) = \max\{K_g(x), -e_g^{\lambda_0}(x)\}$ and define $m_f(x)$ similarly. As the critical points of $f$ have index at least $\lambda_0$, we must have $-e_f^{\lambda_0}(x) > 0$ whenever $K_f(x) = 0$. So $m_f(x) > 0$ for all $x \in K$. Now, let $\delta$ be the minimum of $m_f$ on $K$. Suppose $g$ is so “close” to $f$ that

$$|K_g(x) - K_f(x)| < \delta \quad \text{and} \quad |e_g^{\lambda_0}(x) - e_f^{\lambda_0}(x)| < \delta. \tag{$\star$}$$


Then $m_g$ is always positive on $K$; hence, every critical point of $g$ in $K$ has index at least $\lambda_0$.

Finally, it is easy to show that $(\star)$ will be satisfied provided that

$$\left| \frac{\partial g}{\partial x_i} - \frac{\partial f}{\partial x_i} \right| < \varepsilon \quad \text{and} \quad \left| \frac{\partial^2 g}{\partial x_i \partial x_j} - \frac{\partial^2 f}{\partial x_i \partial x_j} \right| < \varepsilon$$

for sufficiently small $\varepsilon$. This proves the lemma.

We can now show a special case of the desired theorem.

Lemma 2.6. Let $f : M \to \mathbb{R}$ be a smooth function with minimum $0$, such that each $M^c = f^{-1}[0, c]$ is compact. If $M^0$ is a manifold, and the critical points of $f$ in $M \setminus M^0$ have index at least $\lambda_0$, then $\pi_r(M, M^0) = 0$ for $0 \le r \le \lambda_0$.

Proof. Choose a neighborhood around each point of $M^0$ so that $M^0$ is a retract of the open set $U$ which is the union of these neighborhoods. We may assume that each point of $U$ is joined to the point of $M^0$ whose neighborhood contains it (shrinking the neighborhoods so that each contains only one point of $M^0$, if necessary).

Let $I^r$ be the unit cube of dimension $r < \lambda_0$. Consider a map

$$h : (I^r, \partial I^r) \to (M, M^0).$$

We are going to show that $h$ is homotopic to a map $h'$ with $h'(I^r) \subset M^0$.

First, we choose a $g$ that approximates $f$ on $M^c$, where $c$ is the maximum of $f$ on $h(I^r)$. By the previous lemma, we can choose $g$ such that it has no degenerate critical points and each of its critical points has index at least $\lambda_0$.

Let $\delta$ be the minimum of $f$ on $M \setminus U$; then $g^{-1}(-\infty, c]$ has the homotopy type of the union of $g^{-1}(-\infty, \delta]$ and cells of dimension $\ge \lambda_0$. Then consider

$$h : (I^r, \partial I^r) \to (M^c, M^0) \subset (g^{-1}(-\infty, c + \varepsilon), M^0).$$

Since $r < \lambda_0$, $h$ is homotopic to some $h'$ that maps into $(g^{-1}(-\infty, \delta), M^0)$; this is true because all the critical points of $g$ have index $\ge \lambda_0 > r$. However, $g^{-1}(-\infty, \delta]$ is contained in $U$, and $U$ can be deformed into $M^0$, so we have $\pi_r(M, M^0) = 0$.

Proof of Theorem 2.4. We use the energy function restricted to $\operatorname{Int}\Omega(t_0, \cdots, t_k)$, the space of broken geodesics, to relate the previous lemma to geodesics. Note that the energy function satisfies all the hypotheses of the previous lemma except that it does not range over $[0, \infty)$. We can fix this by applying a diffeomorphism that takes the range of $E$ into $[0, \infty)$. Call such a diffeomorphism $f$; then applying the previous lemma to the function $f \circ E$ gives $\pi_i(\operatorname{Int}\Omega(t_0, \cdots, t_k), \Omega^d) = 0$, as desired.


There is a more useful form of Theorem 2.4; in fact, this is what we will use to prove the Bott periodicity theorem.

Corollary 2.7. If the space of minimal geodesics is a topological manifold, and if every non-minimal geodesic has index at least $\lambda_0$, then $\pi_i(\Omega^d)$ is isomorphic to $\pi_{i+1}(M)$ for $i \le \lambda_0 - 2$.

Proof. $\pi_i(\Omega^d)$ is isomorphic to $\pi_i(\Omega)$ for $i \le \lambda_0 - 2$ because the relative homotopy groups vanish, and $\pi_i(\Omega)$ is isomorphic to $\pi_{i+1}(M)$.

### 3 Bott Periodicity Theorem

Let $\mathbb{C}^n$ be the space of $n$-tuples of complex numbers, equipped with the standard Hermitian inner product. The unitary group $U(n)$ is the group of all linear transformations $S : \mathbb{C}^n \to \mathbb{C}^n$ which preserve this inner product. Equivalently, in matrix representation, $U(n)$ is the group of all $n \times n$ complex matrices $S$ such that $SS^* = I$, where $S^*$ denotes the conjugate transpose of $S$.

For any $n \times n$ complex matrix $A$, the exponential of $A$ is defined by the convergent power series

$$\exp A = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots.$$

The following properties are easily verified:

1. $\exp(A^*) = (\exp A)^*$ and $\exp(TAT^{-1}) = T(\exp A)T^{-1}$;

2. if $A$ and $B$ commute, then $\exp(A + B) = (\exp A)(\exp B)$; in particular, $(\exp A)(\exp(-A)) = I$;

3. the function $\exp$ maps a neighborhood of $0$ in the space of $n \times n$ matrices diffeomorphically onto a neighborhood of $I$.

It follows from the above that $A$ is skew-Hermitian (i.e. $A + A^* = 0$) if and only if $\exp A$ is unitary. It is also easy to see that:

4. $U(n)$ is a smooth submanifold of the space of $n \times n$ matrices;

5. the tangent space $T_I U(n)$ can be identified with the space of $n \times n$ skew-Hermitian matrices.
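This correspondence is easy to check numerically. The sketch below (our own example, assuming numpy and scipy) exponentiates a random skew-Hermitian matrix and verifies that the result is unitary, along with property 1:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B - B.conj().T                 # A + A* = 0: skew-Hermitian

U = expm(A)                        # exp A, via the power series
assert np.allclose(U @ U.conj().T, np.eye(3))       # exp A is unitary
assert np.allclose(expm(A.conj().T), U.conj().T)    # exp(A*) = (exp A)*
```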


Thus, the Lie algebra $\mathfrak{g}$ of $U(n)$ can be identified with the space of skew-Hermitian matrices. Any tangent vector at $I$ extends uniquely to a left invariant vector field on $U(n)$. A direct computation shows that the Lie bracket of left invariant vector fields agrees with the Lie bracket of matrices, $[A, B] = AB - BA$.

Since $U(n)$ is compact, it possesses a left and right invariant Riemannian metric.

Notice that the map $\exp : T_I U(n) \to U(n)$ defined by exponentiation of matrices coincides with the exponential map defined by geodesics on the resulting Riemannian manifold. In fact, for each skew-Hermitian matrix $A$, the map $t \mapsto \exp(tA)$ defines a 1-parameter subgroup of $U(n)$ and hence defines a geodesic.

We now define an inner product on $\mathfrak{g}$ by

$$\langle A, B \rangle = \operatorname{Re}(\operatorname{trace}(AB^*)) = \operatorname{Re}\sum_{i,j} A_{ij}\bar{B}_{ij}.$$

It is clearly positive definite ($\langle A, A \rangle = 0 \iff A = 0$), conjugate symmetric, and linear on $\mathfrak{g}$. This inner product on $\mathfrak{g}$ induces a left invariant Riemannian metric on $U(n)$. To check that the resulting metric is also right invariant, we check that it is invariant under the adjoint action of $U(n)$ on $\mathfrak{g}$.

Definition 3.1. Each $S \in U(n)$ determines an automorphism $X \mapsto SXS^{-1} = (L_S R_{S^{-1}})X$ of $U(n)$; the induced mapping $(L_S R_{S^{-1}})_*$ is called the *adjoint action* and is denoted $\operatorname{Ad}_S$. As $\exp(TAT^{-1}) = T\exp(A)T^{-1}$, we have $\operatorname{Ad}_S A = SAS^{-1}$.

We see that the inner product is invariant under $\operatorname{Ad}_S$ by a direct computation:

$$\begin{aligned}
\langle \operatorname{Ad}_S A, \operatorname{Ad}_S B \rangle &= \operatorname{Re}(\operatorname{trace}((\operatorname{Ad}_S A)(\operatorname{Ad}_S B)^*)) \\
&= \operatorname{Re}(\operatorname{trace}(SAS^{-1}(SBS^{-1})^*)) \\
&= \operatorname{Re}(\operatorname{trace}(SAS^{-1}(S^{-1})^* B^* S^*)) \\
&= \operatorname{Re}(\operatorname{trace}(SAB^*S^*)) \qquad (\because S \in U(n)) \\
&= \operatorname{Re}(\operatorname{trace}(AB^*)) = \langle A, B \rangle.
\end{aligned}$$

It follows that the corresponding left invariant metric on $U(n)$ is also right invariant.

Given $A \in \mathfrak{g}$, we know that there exists $T \in U(n)$ such that $TAT^{-1}$ is in diagonal form:

$$TAT^{-1} = \begin{pmatrix} ia_1 & & \\ & \ddots & \\ & & ia_n \end{pmatrix},$$


where the $a_i$'s are real. Also, for any $S \in U(n)$, there exists $T \in U(n)$ such that

$$TST^{-1} = \begin{pmatrix} e^{ia_1} & & \\ & \ddots & \\ & & e^{ia_n} \end{pmatrix},$$

where the $a_i$'s are again real. Hence, $\exp : \mathfrak{g} \to U(n)$ is surjective.

We may treat the special unitary group $SU(n)$ in the same way. $SU(n)$ is defined as the subgroup of $U(n)$ consisting of matrices of determinant $1$. It is easy to show that for $T \in U(n)$ such that $TAT^{-1}$ is diagonal,

$$\det(\exp A) = \det(T(\exp A)T^{-1}) = \det(\exp(TAT^{-1})) = e^{\operatorname{trace}(TAT^{-1})} = e^{\operatorname{trace} A}.$$

This shows that the Lie algebra $\mathfrak{g}'$ of $SU(n)$ is the set of all matrices $A$ such that $A + A^* = 0$ and $\operatorname{trace} A = 0$.

To apply Morse theory, we need to consider the geodesics from $I$ to $-I$. In other words, we consider all $A \in \mathfrak{g} = T_I U(n)$ such that $\exp A = -I$. Suppose $A$ is such a matrix. If it is not of diagonal form, let $T \in U(n)$ be such that $TAT^{-1}$ is diagonal. Then we have

$$\exp(TAT^{-1}) = T(\exp A)T^{-1} = T(-I)T^{-1} = -I.$$

Thus, we may assume that $A$ is diagonal:

$$A = \begin{pmatrix} ia_1 & & \\ & \ddots & \\ & & ia_n \end{pmatrix}.$$

Then

$$\exp A = \begin{pmatrix} e^{ia_1} & & \\ & \ddots & \\ & & e^{ia_n} \end{pmatrix}.$$


So in this case, $\exp A = -I$ if and only if $A$ is of the form

$$A = \begin{pmatrix} ik_1\pi & & \\ & \ddots & \\ & & ik_n\pi \end{pmatrix}$$

for some odd integers $k_1, \cdots, k_n$.

Clearly, the length of the geodesic $t \mapsto \exp(tA)$ from $t = 0$ to $t = 1$ is

$$\|A\| = \sqrt{\operatorname{Re}(\operatorname{trace}(AA^*))} = \sqrt{\operatorname{trace}(AA^*)} = \pi\sqrt{k_1^2 + \cdots + k_n^2}.$$

Thus, $A$ determines a minimal geodesic if and only if each $k_i = \pm 1$, and in this case the length is $\pi\sqrt{n}$.
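Numerically (a sketch with our own choice of odd integers, assuming numpy and scipy): for diagonal $A$ with entries $ik_j\pi$ and odd $k_j$, we get $\exp A = -I$ and $\|A\| = \pi\sqrt{\sum_j k_j^2}$:

```python
import numpy as np
from scipy.linalg import expm

k = np.array([1, -1, 3, -3])           # odd integers (sum zero, so A also lies in su(4))
A = np.diag(1j * np.pi * k)

assert np.allclose(expm(A), -np.eye(4))          # exp A = -I
length = np.sqrt(np.trace(A @ A.conj().T).real)  # |A| = sqrt(trace(AA*))
assert np.isclose(length, np.pi * np.sqrt((k**2).sum()))
```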

Now, treating $A$ as a linear map from $\mathbb{C}^n$ to $\mathbb{C}^n$, $A$ is completely determined by $\operatorname{Eigen}(i\pi)$, the eigenspace for the eigenvalue $i\pi$, and $\operatorname{Eigen}(-i\pi)$, the eigenspace for the eigenvalue $-i\pi$. In fact, since $\mathbb{C}^n$ splits as

$$\operatorname{Eigen}(i\pi) \oplus \operatorname{Eigen}(-i\pi),$$

$A$ is determined by $\operatorname{Eigen}(i\pi)$ alone, which can be an arbitrary subspace of $\mathbb{C}^n$. Hence, the space of all minimal geodesics in $U(n)$ from $I$ to $-I$ can be identified with the space of all sub-vector-spaces of $\mathbb{C}^n$.

Unfortunately, this space is inconvenient to use since its elements have varying dimensions. This difficulty can be removed by replacing $U(n)$ by $SU(n)$ and setting $n = 2m$.

In this case, all the discussion above remains valid, but the additional condition that $k_1 + \cdots + k_{2m} = 0$ with $k_i = \pm 1$ restricts $\operatorname{Eigen}(i\pi)$ to being an arbitrary $m$-dimensional sub-vector-space of $\mathbb{C}^{2m}$. This proves the following:

Lemma 3.2. The space of minimal geodesics from $I$ to $-I$ in $SU(2m)$ is homeomorphic to the complex Grassmann manifold $G_m(\mathbb{C}^{2m})$, consisting of all $m$-dimensional sub-vector-spaces of $\mathbb{C}^{2m}$.


Lemma 3.2 shows that the space of minimal geodesics from $I$ to $-I$ in $SU(2m)$ is a manifold. To apply Corollary 2.7, we also need information about the indices of the non-minimal geodesics.

Lemma 3.3. Every non-minimal geodesic from $I$ to $-I$ in $SU(2m)$ has index $\ge 2m + 2$.

Proof. To compute the index of a non-minimal geodesic from $I$ to $-I$ in $SU(2m)$, let $A \in \mathfrak{g}'$ be a matrix corresponding to a geodesic from $I$ to $-I$ (i.e. the eigenvalues of $A$ have the form $ik_1\pi, \cdots, ik_n\pi$, where the $k_i$'s are odd integers with sum zero).

We need to find the conjugate points to $I$ along the geodesic $t \mapsto \exp(tA)$. According to Theorem 2.3, these are determined by the positive eigenvalues of the linear transformation

$$K_A : \mathfrak{g}' \to \mathfrak{g}', \qquad K_A(W) = R(A, W)A = \frac{1}{4}[[A, W], A].$$

We may assume $A$ is the diagonal matrix

$$A = \begin{pmatrix} ik_1\pi & & \\ & \ddots & \\ & & ik_n\pi \end{pmatrix}$$

with $k_1 \ge k_2 \ge \cdots \ge k_n$. If $W = (w_{j\ell})$, then a direct computation shows that

$$[A, W] = \big(i\pi(k_j - k_\ell)\, w_{j\ell}\big),$$

and hence

$$[[A, W], A] = \big(\pi^2(k_j - k_\ell)^2\, w_{j\ell}\big).$$

So

$$K_A(W) = \frac{\pi^2}{4}\big((k_j - k_\ell)^2\, w_{j\ell}\big).$$

Now we find a basis for g^{0} consisting of eigenvectors of KA, as follows:

1. For each $j < \ell$, let $E_{j\ell}$ be the matrix with $+1$ in the $j\ell$-th entry, $-1$ in the $\ell j$-th entry, and zeros elsewhere. It lies in $\mathfrak{g}'$ and is an eigenvector corresponding to the eigenvalue $\frac{\pi^2}{4}(k_j - k_\ell)^2$.

2. Similarly, for each $j < \ell$, let $E'_{j\ell}$ be the matrix with $+i$ in the $j\ell$-th entry, $+i$ in the $\ell j$-th entry, and zeros elsewhere. It lies in $\mathfrak{g}'$ and is an eigenvector corresponding to the eigenvalue $\frac{\pi^2}{4}(k_j - k_\ell)^2$.

3. Each diagonal matrix in $\mathfrak{g}'$ is an eigenvector with eigenvalue $0$.

Thus, the non-zero eigenvalues of $K_A$ are the numbers $\frac{\pi^2}{4}(k_j - k_\ell)^2$ with $k_j > k_\ell$. Each such eigenvalue is counted twice.
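The eigenvalue computation can be verified directly. The following sketch (our own choice of the $k_i$'s, assuming numpy) checks that $E_{12}$ is an eigenvector of $K_A$ with eigenvalue $\frac{\pi^2}{4}(k_1 - k_2)^2$:

```python
import numpy as np

k = np.array([3, 1, -1, -3])            # odd integers with sum zero, k_1 >= ... >= k_n
A = np.diag(1j * np.pi * k)

def K_A(W):
    # K_A(W) = (1/4) [[A, W], A]
    C = A @ W - W @ A                   # [A, W]
    return 0.25 * (C @ A - A @ C)       # [C, A]

# E_{12}: +1 in the (1,2) entry, -1 in the (2,1) entry
E = np.zeros((4, 4))
E[0, 1], E[1, 0] = 1.0, -1.0
lam = (np.pi**2 / 4) * (k[0] - k[1])**2

assert np.allclose(K_A(E), lam * E)     # E_{12} is an eigenvector with eigenvalue lam
```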

Now consider the geodesic $\gamma(t) = \exp(tA)$. Each eigenvalue $e = \frac{\pi^2}{4}(k_j - k_\ell)^2 > 0$ gives rise to a series of conjugate points along $\gamma$ corresponding to the values

$$t = \frac{\pi}{\sqrt{e}}, \frac{2\pi}{\sqrt{e}}, \frac{3\pi}{\sqrt{e}}, \cdots,$$

that is,

$$t = \frac{2}{k_j - k_\ell}, \frac{4}{k_j - k_\ell}, \frac{6}{k_j - k_\ell}, \cdots.$$

The number of such values of $t$ in the interval $(0, 1)$ is equal to $\frac{k_j - k_\ell}{2} - 1$ (we need to subtract one since the value $t = 1$ is not included).

Now let us apply the Index Theorem. For each $j, \ell$ with $k_j > k_\ell$, we obtain two copies of the eigenvalue $\frac{\pi^2}{4}(k_j - k_\ell)^2$, and hence a contribution of $2(\frac{k_j - k_\ell}{2} - 1)$ to the index. Summing over all $j, \ell$, this gives

$$\lambda = \sum_{k_j > k_\ell} (k_j - k_\ell - 2)$$

for the index of the geodesic $\gamma$.

Now, we divide into three cases:

Case 1. At least $m + 1$ of the $k_i$'s are negative. In this case at least one of the positive $k_i$ must be $\ge 3$, and we have

$$\lambda \ge \sum_{1}^{m+1} \big(3 - (-1) - 2\big) = 2(m + 1).$$

Case 2. At least $m + 1$ of the $k_i$'s are positive. In this case at least one of the negative $k_i$ must be $\le -3$, so

$$\lambda \ge \sum_{1}^{m+1} \big(1 - (-3) - 2\big) = 2(m + 1).$$

Case 3. $m$ of the $k_i$ are positive and $m$ are negative, but not all are $\pm 1$ (since we assume $\gamma$ is non-minimal). Then some $k_i$ is $\ge 3$ and some is $\le -3$, so

$$\lambda \ge \sum_{1}^{m-1} \big(3 - (-1) - 2\big) + \sum_{1}^{m-1} \big(1 - (-3) - 2\big) + \big(3 - (-3) - 2\big) = 4m \ge 2(m + 1).$$


Thus, in every case we have $\lambda \ge 2m + 2$. This proves the lemma.

We can now prove the following:

Theorem 3.4. The inclusion map $G_m(\mathbb{C}^{2m}) \to \Omega(SU(2m); I, -I)$ induces isomorphisms of homotopy groups in dimensions $\le 2m$. Hence,

$$\pi_i G_m(\mathbb{C}^{2m}) \cong \pi_{i+1} SU(2m)$$

for $i \le 2m$.

Proof. By Lemma 3.2, the space of minimal geodesics in $\Omega(SU(2m); I, -I)$ is a topological manifold, since it is homeomorphic to $G_m(\mathbb{C}^{2m})$. Also, every non-minimal geodesic has index at least $\lambda_0 = 2m + 2$, so by Corollary 2.7,

$$\pi_i G_m(\mathbb{C}^{2m}) = \pi_i(\Omega^{\pi^2 n}) \cong \pi_{i+1}(SU(2m))$$

for $i \le \lambda_0 - 2 = 2m$.

We now establish the relation between the homotopy groups of $U(m)$ and those of $SU(m)$.

Lemma 3.5. The group $\pi_i G_m(\mathbb{C}^{2m})$ is isomorphic to $\pi_{i-1} U(m)$ for $i \le 2m$. Moreover,

$$\pi_{i-1} U(m) \cong \pi_{i-1} U(m + k) \quad \text{for } i \le 2m, \ k \in \mathbb{N},$$

and

$$\pi_j(U(m)) \cong \pi_j(SU(m)) \quad \text{for } j \neq 1.$$

Proof. We can choose fibrations

U (m)! U(m + 1) ! S^{2m+1}
and

U (m)! U(2m) ! U(2m)/U(m).

From the first one, we get

· · · ! ⇡^{i}S^{2m+1}! ⇡^{i 1}U (m)! ⇡^{i 1}U (m + 1) ! ⇡^{i 1}S^{2m+1}! · · · ,

and this becomes

$$0 \to \pi_{i-1} U(m) \to \pi_{i-1} U(m+1) \to 0,$$

when $i \leq 2m$.

Also, the second fibration gives

$$\cdots \to \pi_i\, U(2m)/U(m) \to \pi_{i-1} U(m) \to \pi_{i-1} U(2m) \to \pi_{i-1}\, U(2m)/U(m) \to \cdots,$$

which implies $\pi_i(U(2m)/U(m)) = 0$ for $i \leq 2m$.

Notice that the complex Grassmann manifold $G_m(\mathbb{C}^{2m})$ can be identified with $U(2m)/(U(m) \times U(m))$, so we have a fibration:

$$U(m) \to U(2m)/U(m) \to G_m(\mathbb{C}^{2m}).$$

Using this fibration and $\pi_i(U(2m)/U(m)) = 0$ for $i \leq 2m$, we now get

$$\pi_i G_m(\mathbb{C}^{2m}) \cong \pi_{i-1} U(m), \quad \text{for } i \leq 2m.$$

Finally, from the fibration

$$SU(m) \to U(m) \to S^1,$$

we obtain that $\pi_j SU(m) \cong \pi_j U(m)$ for $j \neq 1$. This proves the lemma.

From now on, we use $\pi_i U$ to denote the $i$-th stable homotopy group of the unitary group.

So, we see that:

$$\pi_{i-1} U = \pi_{i-1} U(m) \cong \pi_i G_m(\mathbb{C}^{2m}) \cong \pi_{i+1} SU(2m) \cong \pi_{i+1} U.$$

The first and the third isomorphisms follow from Lemma 3.5, and the second isomorphism comes from Theorem 3.4. This proves the famous Bott Periodicity Theorem.

Theorem 3.6 (Bott Periodicity Theorem). For $i \geq 1$,

$$\pi_{i-1} U \cong \pi_{i+1} U.$$
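Combining periodicity with the classical facts that $U(m)$ is connected and that $\pi_1 U = \pi_1 U(1) \cong \mathbb{Z}$ (standard computations, not spelled out in the text above), one obtains every stable homotopy group of the unitary group:

```latex
\pi_i U \cong
\begin{cases}
\mathbb{Z} & \text{if } i \text{ is odd},\\
0 & \text{if } i \text{ is even}.
\end{cases}
```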


## Final Report on Atiyah–Singer Index Theorem via Twisted Signature Formula

### Chuang, Ping-Hsun

June 2021

### Contents

1 Introduction

2 Local Formula for the Index

3 Invariance Theory

4 Twisted Signature Formula

5 Toward the Atiyah–Singer Index Theorem

6 Original Proof of the Atiyah–Singer Index Theorem

### 1 Introduction

Let $E, F \to M$ be two real or complex vector bundles over a compact manifold $M$. Let $P : C^\infty(M, E) \to C^\infty(M, F)$ be an elliptic operator. The index of $P$ is defined by

$$\operatorname{ind} P := \dim \ker P - \dim \operatorname{coker} P = \dim \ker P^* P - \dim \ker P P^*.$$
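In finite dimensions the second equality is elementary, since $\ker A^*A = \ker A$, $\ker AA^* = \ker A^*$, and $\operatorname{coker} A \cong \ker A^*$. The following numerical sketch (our illustration only, with a hypothetical helper `dim_ker`) checks it for a random matrix:

```python
import numpy as np

# Finite-dimensional analogue of ind P = dim ker P - dim coker P
#                                      = dim ker P*P - dim ker P P*.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank-3 map from C^4 to C^5

def dim_ker(M, tol=1e-8):
    # kernel dimension = number of columns minus rank
    return M.shape[1] - np.linalg.matrix_rank(M, tol=tol)

ind1 = dim_ker(A) - dim_ker(A.conj().T)               # dim ker A - dim coker A
ind2 = dim_ker(A.conj().T @ A) - dim_ker(A @ A.conj().T)
assert ind1 == ind2 == 4 - 5  # by rank-nullity, equals dim domain - dim codomain here
print("index =", ind1)
```

For matrices the index only sees the dimensions of domain and codomain; the analytic content of the index theorem is of course invisible in this toy model.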

We have already seen that the index of an elliptic operator is expressible as an integral over $M$ via the heat kernel. This is known as the McKean–Singer formula:

$$\operatorname{ind} P = \int_M \left( \operatorname{tr}_{E_x} H_{P^*P}(x, x, t) - \operatorname{tr}_{F_x} H_{PP^*}(x, x, t) \right) \mathrm{dvol}_M(x).$$
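In finite dimensions, the supertrace $\operatorname{Tr} e^{-tA^*A} - \operatorname{Tr} e^{-tAA^*}$ is independent of $t$ and equals the index, because the nonzero eigenvalues of $A^*A$ and $AA^*$ agree. A quick check of this model for the McKean–Singer formula (our illustration, not the heat-kernel statement itself; `heat_supertrace` is a hypothetical helper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))  # full rank 4 almost surely

def heat_supertrace(A, t):
    # Tr exp(-t A*A) - Tr exp(-t A A*), computed from the eigenvalues
    # of the two "Laplacians".
    ev1 = np.linalg.eigvalsh(A.T @ A)  # 4 eigenvalues, all nonzero here
    ev2 = np.linalg.eigvalsh(A @ A.T)  # same nonzero spectrum plus 2 zeros
    return np.exp(-t * ev1).sum() - np.exp(-t * ev2).sum()

# Constant in t, equal to dim ker A*A - dim ker A A* = 0 - 2 = -2 here.
for t in (0.1, 1.0, 10.0):
    assert abs(heat_supertrace(A, t) - (-2)) < 1e-6
print("supertrace is constant in t")
```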


Also, in the lecture, we introduced the Dirac operator $D$ on a Clifford module $E$. In order to prove the local index theorem, we need to use the Lichnerowicz formula and the Getzler scaling method.

In this report, we will give a different approach to the Atiyah–Singer index theorem via the twisted signature formula due to Gilkey [Gil95]. Precisely, we will first study the index of the twisted Dirac operator for the vector bundle $E$:

$$D : C^\infty\left(\Lambda^{\pm}(T^*M) \otimes E\right) \to C^\infty\left(\Lambda^{\mp}(T^*M) \otimes E\right).$$
Then, we will prove the twisted signature formula

$$\operatorname{ind} D = \int_M L(M) \operatorname{ch}(E)$$

using invariance theory. Finally, based on the twisted signature formula, we will prove the Atiyah–Singer index theorem for a general elliptic operator $P$ by interpreting the index as a function on the K-ring in the language of K-theory.

In the second section, we will introduce the important invariants for an elliptic operator. These invariants are closely related to the heat kernel $H(x, y, t)$ of the elliptic operator $P$. In the third section, we will study invariance theory on manifolds and vector bundles. Explicitly, we will prove that all the invariants built from derivatives of the metric and the connection one-form are linear combinations of wedge products of Pontryagin classes of $TM$ and Chern classes of $E$. This will help us in proving the twisted signature formula. In the fourth section, we give the proof of the twisted signature formula via these invariants. After that, in the fifth section, we will prove the Atiyah–Singer Index Theorem for general elliptic operators. To achieve this goal, we need to interpret the index of an elliptic operator as a function in the language of K-theory:

$$\operatorname{ind} : K(\Sigma(T^*M); \mathbb{C}) / K(M; \mathbb{C}) \to \mathbb{C},$$

where $\Sigma(T^*M)$ is the fiberwise suspension of the unit sphere bundle $S(T^*M)$. Then, we will prove that the group $K(\Sigma(T^*M); \mathbb{C}) / K(M; \mathbb{C})$ is generated by the special bundles $\{\Pi_+(\Sigma_{P_E})\}_{E \in \operatorname{Vect}(M)}$. Then, the Atiyah–Singer Index Theorem reduces to these special cases, to which we may simply apply the twisted signature formula.

As a supplement, in the last section, we will give a sketch of the original proof of the Atiyah–Singer Index Theorem [AS63]. There are three key points in the proof. First, we introduce the group of elliptic symbols $\operatorname{Ell}(M)$. By analyzing this group, we reduce the Atiyah–Singer Index Theorem to the twisted signature formula. Next, we introduce the cobordism ring $A$ on pairs $(M, E)$, where $E$ is a complex vector bundle on $M$. Using the knowledge of singular integral operators on manifolds, we can prove that the null-cobordant elements in $A$ satisfy the theorem. Therefore, the concept of index is well-defined on the cobordism ring $A$. Finally, generalizing the Thom isomorphism theorem, we see that $A \otimes \mathbb{Q}$ is generated by $(\mathbb{CP}^{2i}, 1)$ and $(S^{2j}, V_j)$ as a polynomial algebra. Eventually, proving the theorem reduces to verifying it for the generators $\mathbb{CP}^{2i}$ and $S^{2j}$ of $A \otimes \mathbb{Q}$.

### 2 Local Formula for the Index

Theorem 1. Let $P$ be a self-adjoint elliptic operator of order $d > 0$ on a vector bundle $E$ over a compact manifold $M^m$ such that the symbol $\sigma_P(x, \xi)$ of $P$ is positive definite for $\xi \neq 0$. Then,

1. If we choose a coordinate system for $M$ near a point $x \in M$ and choose a local frame for $E$, we can define $e_n(x)$ depending on the symbol $\sigma_P(x, \xi)$ such that if $H(t, x, y)$ is the heat kernel of $e^{-tP}$ then

$$H(t, x, x) \sim \sum_{n=0}^{\infty} t^{\frac{n-m}{d}}\, e_n(x) \quad \text{as } t \to 0^+,$$

i.e., given any integer $k$, there exists $n(k)$ such that

$$\left| H(t, x, x) - \sum_{n \leq n(k)} t^{\frac{n-m}{d}}\, e_n(x) \right|_{\infty, k} < C_k t^k \quad \text{for } 0 < t < 1.$$

2. Moreover, $e_n(x) \in \operatorname{End}(E)$ is invariantly defined, independent of the coordinate system and local frame for $E$.

Theorem 2.

(a) Let $P_i : C^\infty(E_i) \to C^\infty(E_i)$ be elliptic self-adjoint differential operators of order $d > 0$ with positive definite symbol. We set $P = P_1 \oplus P_2 : C^\infty(E_1 \oplus E_2) \to C^\infty(E_1 \oplus E_2)$. Then $P$ is an elliptic self-adjoint partial differential operator of order $d > 0$ with positive definite symbol and $e_n(x, P_1 \oplus P_2) = e_n(x, P_1) \oplus e_n(x, P_2)$.

(b) Let $P_i : C^\infty(E_i) \to C^\infty(E_i)$ be elliptic self-adjoint partial differential operators of order $d > 0$ with positive definite symbol defined over compact manifolds $M_i$. We let

$$P = P_1 \otimes 1 + 1 \otimes P_2 : C^\infty(E_1 \otimes E_2) \to C^\infty(E_1 \otimes E_2)$$

over $M = M_1 \times M_2$. Then $P$ is an elliptic self-adjoint partial differential operator of order $d > 0$ with positive definite symbol over $M$ and

$$e_n(x, P) = \sum_{p+q=n} e_p(x_1, P_1) \otimes e_q(x_2, P_2).$$

Proof. These follow from the identities

$$e^{-t(P_1 \oplus P_2)} = e^{-tP_1} \oplus e^{-tP_2}, \qquad e^{-t(P_1 \otimes 1 + 1 \otimes P_2)} = e^{-tP_1} \otimes e^{-tP_2},$$

so the heat kernels satisfy the identities:

$$H(t, x, x, P_1 \oplus P_2) = H(t, x, x, P_1) \oplus H(t, x, x, P_2),$$
$$H(t, x, x, P_1 \otimes 1 + 1 \otimes P_2) = H(t, x_1, x_1, P_1) \otimes H(t, x_2, x_2, P_2).$$

We equate equal powers of $t$ in the asymptotic series:

$$\sum t^{\frac{n-m}{d}}\, e_n(x, P_1 \oplus P_2) \sim \sum t^{\frac{n-m}{d}}\, e_n(x, P_1) \oplus \sum t^{\frac{n-m}{d}}\, e_n(x, P_2),$$
$$\sum t^{\frac{n-m}{d}}\, e_n(x, P_1 \otimes 1 + 1 \otimes P_2) \sim \left\{ \sum t^{\frac{p-m_1}{d}}\, e_p(x_1, P_1) \right\} \otimes \left\{ \sum t^{\frac{q-m_2}{d}}\, e_q(x_2, P_2) \right\}.$$

Hence, the proof is complete.
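In a finite-dimensional matrix model (our illustration only), the two semigroup identities used in the proof can be verified directly, since $P_1 \otimes 1$ and $1 \otimes P_2$ commute:

```python
import numpy as np
from scipy.linalg import expm, block_diag

rng = np.random.default_rng(2)
# Symmetric positive semidefinite matrices standing in for P1 and P2
B1, B2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
P1, P2 = B1 @ B1.T, B2 @ B2.T
I1, I2 = np.eye(3), np.eye(4)
t = 0.7

# exp(-t (P1 (+) P2)) = exp(-t P1) (+) exp(-t P2)
assert np.allclose(expm(-t * block_diag(P1, P2)),
                   block_diag(expm(-t * P1), expm(-t * P2)))

# exp(-t (P1 (x) 1 + 1 (x) P2)) = exp(-t P1) (x) exp(-t P2)
assert np.allclose(expm(-t * (np.kron(P1, I2) + np.kron(I1, P2))),
                   np.kron(expm(-t * P1), expm(-t * P2)))
print("semigroup identities verified")
```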

We define the scalar invariant

$$a_n(x, P) = \operatorname{Tr} e_n(x, P),$$

where the trace is the fiber trace in $E$ over the point $x$. These scalar invariants $a_n(x, P)$ give

$$\operatorname{Tr} e^{-tP} = \int_M \operatorname{Tr}_{E_x} H(t, x, x)\, \mathrm{dvol}_M(x) \sim \sum_{n=0}^{\infty} t^{\frac{n-m}{d}} \int_M a_n(x, P)\, \mathrm{dvol}_M(x).$$

This is a spectral invariant of $P$ which can be computed from local information about the symbol of $P$.
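As a concrete model (ours, not from the source): for the flat Laplacian on the circle $\mathbb{R}/2\pi\mathbb{Z}$, the spectrum is $\{n^2 : n \in \mathbb{Z}\}$, and the expansion reduces to the single term $\operatorname{Tr} e^{-t\Delta} \sim \sqrt{\pi/t} = \operatorname{vol}(S^1)\,(4\pi t)^{-1/2}$, with exponentially small remainder (by the Jacobi theta transformation; all higher $a_n$ vanish for a flat metric):

```python
import numpy as np

def heat_trace(t, N=2000):
    # Tr exp(-t * Laplacian) on the flat circle: sum of exp(-t n^2) over n in Z,
    # truncated at |n| <= N (the tail is utterly negligible for the t used here).
    n = np.arange(-N, N + 1)
    return np.exp(-t * n**2).sum()

# Leading heat-trace asymptotic: agreement to near machine precision as t -> 0+
for t in (0.1, 0.01, 0.001):
    assert abs(heat_trace(t) - np.sqrt(np.pi / t)) < 1e-10
print("heat trace matches sqrt(pi/t)")
```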


Let $P$ be an elliptic operator on the vector bundle $E$ and let $\Delta_i$ be the associated Laplacians ($\Delta_0 = P^*P$, $\Delta_1 = PP^*$). We define

$$a_n(x, P) = \sum_i (-1)^i \operatorname{Tr} e_n(x, \Delta_i);$$

then the McKean–Singer formula gives

$$\operatorname{ind}(P) = \sum_i (-1)^i \operatorname{Tr} e^{-t\Delta_i} \sim \sum_{n=0}^{\infty} t^{\frac{n-m}{d}} \int_M a_n(x, P)\, \mathrm{dvol}_M(x).$$

Letting $t \to 0^+$, we get the following theorem.

Theorem 3. Let $P$ be an elliptic differential operator on the vector bundle $E$ over a compact manifold $M^m$.

(a) $a_n(x, P)$ can be computed in any coordinate system and relative to any local frames, depending on the symbols of $P$ and of $P^*$.

(b)

$$\int_M a_n(x, P)\, \mathrm{dvol}_M(x) = \begin{cases} \operatorname{ind}(P) & \text{if } n = m, \\ 0 & \text{if } n \neq m. \end{cases}$$

### 3 Invariance Theory

We let $\mathcal{P}_m$ denote the ring of all invariant polynomials in $\{g_{ij}, g_{ij;k}, g_{ij;k\ell}, \ldots\}$, the derivatives of the metric, for a manifold $M$ of dimension $m$. We define $\operatorname{ord}(g_{ij;\alpha}) = |\alpha|$; let $\mathcal{P}_{m,n}$ be the subspace of invariant polynomials which are homogeneous of order $n$. Then, we have the following useful coordinate-free characterization:

Lemma 4. Let $P \in \mathcal{P}_m$; then $P \in \mathcal{P}_{m,n}$ if and only if $P(c^2 g)(x_0) = c^{-n} P(g)(x_0)$ for every $c \neq 0$.

Proof. Fix $c \neq 0$ and let $X$ be a normalized coordinate system for the metric $g$ at the point $x_0$. Suppose that $x_0 = (0, \ldots, 0)$ is the center of the coordinate system $X$. Let $Y = cX$ be a new coordinate system; then we have

$$\frac{\partial}{\partial y_i} = c^{-1} \frac{\partial}{\partial x_i}, \qquad c^2 g\left( \frac{\partial}{\partial y_i}, \frac{\partial}{\partial y_j} \right) = g\left( \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right),$$

$$d^\alpha_y = c^{-|\alpha|}\, d^\alpha_x, \qquad g_{ij;\alpha}\left(Y, c^2 g\right) = c^{-|\alpha|}\, g_{ij;\alpha}(X, g).$$

This implies, for any monomial $A$ of $P$, that

$$A\left(Y, c^2 g\right)(x_0) = c^{-\operatorname{ord}(A)} A(X, g)(x_0).$$

Since $Y$ is a normalized coordinate system for the metric $c^2 g$, we have $P(c^2 g)(x_0) = P(Y, c^2 g)(x_0)$ and $P(g)(x_0) = P(X, g)(x_0)$. This proves the lemma.


If $P \in \mathcal{P}_m$, we can always decompose $P = P_0 + \cdots + P_n$ into homogeneous polynomials. The above lemma implies the $P_j$ are all invariant separately. Therefore, $\mathcal{P}_m$ has a direct sum decomposition $\mathcal{P}_m = \mathcal{P}_{m,0} \oplus \mathcal{P}_{m,1} \oplus \cdots \oplus \mathcal{P}_{m,n} \oplus \cdots$ and has the structure of a graded algebra. Using the Gauss lemma and Taylor's theorem, we can always find a metric with the $g_{ij;\alpha}(x_0) = c_{ij,\alpha}$ arbitrary constants for $|\alpha| \geq 2$ and $g_{ij}(x_0) = \delta_{ij}$, $g_{ij;k}(x_0) = 0$. Consequently, if $P \in \mathcal{P}_m$ is non-zero as a polynomial, then we can always find $g$ so that $P(g)(x_0) \neq 0$, so $P$ is non-zero as a formula.

Finally, note that $\mathcal{P}_{m,n}$ is zero if $n$ is odd, since we may take $c = -1$ in the above lemma.
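A standard example of this rescaling behaviour (our illustration): the scalar curvature invariant $\tau = R_{ijij}$ satisfies

```latex
\tau\left(c^{2} g\right)(x_0) = c^{-2}\, \tau(g)(x_0),
```

so $\tau$ is homogeneous of order $2$ by Lemma 4; this is consistent with $\{R_{ijij}\}$ spanning the order-2 invariants in Lemma 6 below.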

Lemma 5. $a_n(x, \Delta_p)$ defines an element of $\mathcal{P}_{m,n}$, where $\Delta_p$ is the Laplacian on $p$-forms.

Proof. First, the Laplacian is defined to be $\Delta = dd^* + d^*d = \pm d \star d \star \pm \star d \star d$. In the flat metric, the Laplacian is given by $\Delta = -\sum_i \frac{\partial^2}{\partial x_i^2}$, which is smooth in the metric $g$. For a general metric, we parametrize the metric and differentiate the matrix representation of the Hodge star $\star$; each derivative applied to $\star$ reduces the order of differentiation by 1 and increases the order of $g_{ij;\alpha}$ by 1. Thus, the linear term and the constant term of the Laplacian are also smooth in $g_{ij}$, $g_{ij;k}$, and $g_{ij;k\ell}$.

Note that $e_n(x, \Delta)$ for the second order elliptic operator is defined by

$$H(t, x, x) \sim \sum_{n \geq 0} t^{\frac{n-m}{2}}\, e_n(x, \Delta),$$

where $H(t, x, x)$ is the heat kernel of $e^{-t\Delta}$. Then $a_n(x, \Delta) = \operatorname{Tr} e_n(x, \Delta)$ is the fiber trace, and $a_n(x, \Delta)$ is homogeneous of order $n$, in $\mathcal{P}_{m,n}$.

Weyl's theorem (see Theorem 10 at the end of this section) on the invariants of the orthogonal group gives a spanning set for the spaces $\mathcal{P}_{m,n}$:

Lemma 6. We introduce formal variables $R_{i_1 i_2 i_3 i_4; i_5 \ldots i_k}$ for the multiple covariant derivatives of the curvature tensor. The order of such a variable is $k - 2$. We consider the polynomials in these variables and contract on pairs of indices. Then, all possible such expressions generate $\mathcal{P}_m$. In particular,

(1) $\{1\}$ spans $\mathcal{P}_{m,0}$.

(2) $\{R_{ijij}\}$ spans $\mathcal{P}_{m,2}$.

(3) $\{R_{ijij;kk},\ R_{ijij} R_{klkl},\ R_{ijik} R_{ljlk},\ R_{ijkl} R_{ijkl}\}$ spans $\mathcal{P}_{m,4}$. This particular spanning set for $\mathcal{P}_{m,4}$ is linearly independent and forms a basis if $m \geq 4$.


If $I = \{1 \leq i_1 < \cdots < i_p \leq m\}$, let $|I| = p$ and $dx^I = dx^{i_1} \wedge \cdots \wedge dx^{i_p}$. A $p$-form valued polynomial is a collection $P = \{P_I\}$ for $|I| = p$ of polynomials $P_I$ in $\{g_{ij}, g_{ij;k}, \cdots\}$. We write $P = \sum_{|I| = p} P_I\, dx^I$ as a formal sum to represent $P$. If all the $\{P_I\}$ are homogeneous of order $n$, we say $P$ is homogeneous of order $n$. We define

$$P(X, g)(x_0) = \sum_I P_I(X, g)\, dx^I \in \Lambda^p(T^*M)$$

to be the evaluation of such a polynomial. We say $P$ is invariant if $P(X, g)(x_0) = P(Y, g)(x_0)$ for every pair of normalized coordinate systems $X$ and $Y$. Similar to the above lemma, we have

Lemma 7. Let $P$ be $p$-form valued and invariant. Then, $P$ is homogeneous of order $n$ if and only if $P(c^2 g)(x_0) = c^{p-n} P(g)(x_0)$ for every $c \neq 0$.

Proof. Fix $c \neq 0$ and let $X$ be a normalized coordinate system for the metric $g$ at the point $x_0$. Suppose that $x_0 = (0, \ldots, 0)$ is the center of the coordinate system $X$. Let $Y = cX$ be a new coordinate system; then we have

$$\frac{\partial}{\partial y_i} = c^{-1} \frac{\partial}{\partial x_i}, \qquad c^2 g\left( \frac{\partial}{\partial y_i}, \frac{\partial}{\partial y_j} \right) = g\left( \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \right),$$

$$d^\alpha_y = c^{-|\alpha|}\, d^\alpha_x, \qquad g_{ij;\alpha}\left(Y, c^2 g\right) = c^{-|\alpha|}\, g_{ij;\alpha}(X, g),$$

$$dy^{1} \wedge \cdots \wedge dy^{p} = c^{p}\, dx^{1} \wedge \cdots \wedge dx^{p}.$$

This implies, for any monomial $A$ of $P$, that

$$A\left(Y, c^2 g\right)(x_0) = c^{p - \operatorname{ord}(A)} A(X, g)(x_0).$$

Since $Y$ is a normalized coordinate system for the metric $c^2 g$, we have $P(c^2 g)(x_0) = P(Y, c^2 g)(x_0)$ and $P(g)(x_0) = P(X, g)(x_0)$. This proves the lemma.

Let $\mathcal{P}_{m,n,p}$ be the space of $p$-form valued invariants which are homogeneous of order $n$.

Let $P_j(g) = p_j(TM)$ be the $j$-th Pontryagin class computed relative to the curvature tensor of the Levi-Civita connection. If we expand $p_j$ in terms of the curvature tensor, then $p_j$ is homogeneous of order $2j$ in the $\{R_{ijkl}\}$ variables, so $p_j$ is homogeneous of order $4j$ and invariant, i.e., $p_j \in \mathcal{P}_{m,4j,4j}$. If $\rho$ is a partition of $k = i_1 + \cdots + i_j$, we define $p_\rho = p_{i_1} \cdots p_{i_j} \in \mathcal{P}_{m,4k,4k}$. The $\{p_\rho\}$ form a basis of the Pontryagin $4k$-forms. Also, by considering products of suitable $4k$-dimensional manifolds with flat tori $T^{m-4k}$, we see that the $\{p_\rho\}$ are linearly independent in $\mathcal{P}_{m,4k,4k}$ if $4k \leq m$.
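For example (our illustration), for $k = 2$ the partitions $2 = 2$ and $2 = 1 + 1$ give

```latex
\{p_\rho\} = \left\{\, p_2,\; p_1^2 \,\right\} \subset \mathcal{P}_{m,8,8},
```

a basis of the Pontryagin $8$-forms when $8 \leq m$.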
