JIA-MING (FRANK) LIOU

Let K be a field. All the vector spaces in this note are over K.

1. Some Linear Algebra

Let V be an n-dimensional vector space over K. A linear functional on V is a linear map
ϕ : V → K. The set of all linear functionals, denoted by V^{∗}, forms a vector space. The
addition and scalar multiplication are given as follows. Let ϕ, ψ ∈ V^{∗} and a ∈ K. Define
ϕ + ψ and aϕ by

(ϕ + ψ)(v) = ϕ(v) + ψ(v), (aϕ)(v) = aϕ(v).

Theorem 1.1. V^{∗} has dimension n.

Proof. Let {v_{1}, · · · , v_{n}} be a basis for V. Let ψ_{i} : V → K be the linear map such that
ψ_{i}(v_{j}) = δ_{ij}. We claim that the set β = {ψ_{i} : 1 ≤ i ≤ n} forms a basis for V^{∗}.

For any ϕ ∈ V^{∗}, one can check that

ϕ = ∑_{i=1}^{n} ϕ(v_{i})ψ_{i}.

Then ϕ is a linear combination of elements of β.

Assume that ∑_{i=1}^{n} a_{i}ψ_{i} = 0 for some a_{1}, · · · , a_{n} ∈ K. Evaluating the sum at v_{k}, we see
that a_{k} = 0. This shows that β is linearly independent.
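For K = R, the dual-basis construction can be verified numerically. In the sketch below (the basis matrix, the functional ϕ, and the use of numpy are illustrative choices, not part of the note), a basis of R^{3} is stored as the columns of a matrix; the rows of the inverse matrix are then exactly the dual-basis functionals ψ_{i}.

```python
import numpy as np

# Columns of V_mat form a (non-standard) basis v_1, v_2, v_3 of R^3.
V_mat = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

# Rows of the inverse are the dual basis: psi_i(v_j) = (V^{-1} V)_{ij} = delta_{ij}.
Psi = np.linalg.inv(V_mat)
print(np.allclose(Psi @ V_mat, np.eye(3)))   # True

# Any functional phi (a row covector) satisfies phi = sum_i phi(v_i) psi_i.
phi = np.array([2.0, -1.0, 5.0])             # phi(x) = 2x_1 - x_2 + 5x_3
coeffs = phi @ V_mat                         # the numbers phi(v_i)
print(np.allclose(coeffs @ Psi, phi))        # True
```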

Remark. The basis {ψ_{i}} for V^{∗} in Theorem 1.1 is called the dual basis of {v_{i}}.

Let ϕ_{1}, ϕ_{2} ∈ V^{∗}. We define a map ϕ_{1} ∧ ϕ_{2} : V × V → K by

(ϕ_{1} ∧ ϕ_{2})(w_{1}, w_{2}) = det [ϕ_{i}(w_{j})]^{2}_{i,j=1}.

We can check that ϕ_{1} ∧ ϕ_{2} : V × V → K is bilinear and skew-symmetric, i.e. ϕ_{2} ∧ ϕ_{1} =
−ϕ_{1} ∧ ϕ_{2}. We call ϕ_{1} ∧ ϕ_{2} the wedge product of ϕ_{1} and ϕ_{2}. Let Λ^{2}V^{∗} be the set of all
skew-symmetric bilinear maps from V × V to K.
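For K = R, the determinant definition and its skew-symmetry can be sanity-checked numerically; the covectors and vectors below are arbitrary illustrative choices, and numpy is assumed only for this check.

```python
import numpy as np

def wedge2(phi1, phi2):
    """(phi1 ∧ phi2)(w1, w2) = det [phi_i(w_j)], with covectors as row vectors."""
    def form(w1, w2):
        return np.linalg.det(np.array([[phi1 @ w1, phi1 @ w2],
                                       [phi2 @ w1, phi2 @ w2]]))
    return form

phi1 = np.array([1.0, 0.0, 2.0])
phi2 = np.array([0.0, 1.0, -1.0])
w1, w2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 3.0])

f12, f21 = wedge2(phi1, phi2), wedge2(phi2, phi1)
print(np.isclose(f21(w1, w2), -f12(w1, w2)))  # True: phi2 ∧ phi1 = -(phi1 ∧ phi2)
print(np.isclose(f12(w2, w1), -f12(w1, w2)))  # True: skew-symmetric in the arguments
print(np.isclose(f12(w1, w1), 0.0))           # True: vanishes on repeated arguments
```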

Theorem 1.2. Λ^{2}V^{∗} is a vector space of dimension \binom{n}{2}.

Proof. Let {v_{i}} be a basis for V and {ψ_{i}} be its dual basis. We claim that the set
{ψ_{i} ∧ ψ_{j} : 1 ≤ i < j ≤ n} forms a basis for Λ^{2}V^{∗}.

Suppose that

∑_{i<j} a_{ij}ψ_{i} ∧ ψ_{j} = 0.

Evaluating the equation at (v_{k}, v_{l}), we obtain a_{kl} = 0.

Let B be a skew-symmetric bilinear map. Denote b_{ij} = B(v_{i}, v_{j}). Then b_{ij} = −b_{ji}. For
any x = ∑_{i=1}^{n} x_{i}v_{i} and y = ∑_{i} y_{i}v_{i}, we see that

B(x, y) = ∑_{i,j} b_{ij}x_{i}y_{j} = ∑_{i<j} b_{ij}x_{i}y_{j} + ∑_{i>j} b_{ij}x_{i}y_{j} = ∑_{i<j} b_{ij}(x_{i}y_{j} − x_{j}y_{i}).

Note that (ψ_{i} ∧ ψ_{j})(x, y) = x_{i}y_{j} − x_{j}y_{i}. Hence we find

B(x, y) = (∑_{i<j} b_{ij}ψ_{i} ∧ ψ_{j})(x, y).

Hence we prove that B = ∑_{i<j} b_{ij}ψ_{i} ∧ ψ_{j}.

Definition 1.1. Let p be a natural number and S_{p} be the symmetric group on p letters.

A map f : ×^{p}_{i=1}V → K is alternating if

f(v_{σ(1)}, · · · , v_{σ(p)}) = (sgn σ)f(v_{1}, · · · , v_{p}),

for any σ ∈ S_{p} and v_{1}, · · · , v_{p} ∈ V. f is called p-linear if for each 1 ≤ k ≤ p,

f(v_{1}, · · · , av_{k} + bw_{k}, · · · , v_{p}) = af(v_{1}, · · · , v_{k}, · · · , v_{p}) + bf(v_{1}, · · · , w_{k}, · · · , v_{p}),

for any a, b ∈ K and v_{1}, · · · , v_{p}, w_{k} ∈ V.

Let 1 ≤ p ≤ n. Given ϕ_{1}, · · · , ϕ_{p} ∈ V^{∗}, we define ϕ_{1} ∧ · · · ∧ ϕ_{p} : ×^{p}_{i=1}V → K by

(ϕ_{1} ∧ · · · ∧ ϕ_{p})(w_{1}, · · · , w_{p}) = det [ϕ_{i}(w_{j})]^{p}_{i,j=1},

for all w_{1}, · · · , w_{p} ∈ V. Then we can check that ϕ_{1} ∧ · · · ∧ ϕ_{p} is p-linear and alternating.
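For K = R, the claim that ϕ_{1} ∧ · · · ∧ ϕ_{p} is alternating can be tested over all permutations. The sketch below uses numpy and random data (both illustrative assumptions); the sign of a permutation is computed by counting inversions.

```python
import numpy as np
from itertools import permutations

def wedge(phis):
    """(phi_1 ∧ ... ∧ phi_p)(w_1, ..., w_p) = det [phi_i(w_j)]."""
    def form(*ws):
        return np.linalg.det(np.array([[phi @ w for w in ws] for phi in phis]))
    return form

def sgn(perm):
    # Sign of a permutation (given as a tuple of indices) via inversion count.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

rng = np.random.default_rng(0)
p, n = 3, 4
phis = rng.standard_normal((p, n))   # three covectors on R^4
ws = rng.standard_normal((p, n))     # three vectors

f = wedge(phis)
base = f(*ws)
ok = all(np.isclose(f(*[ws[s] for s in sigma]), sgn(sigma) * base)
         for sigma in permutations(range(p)))
print(ok)  # True: f(w_sigma(1), ..., w_sigma(p)) = (sgn sigma) f(w_1, ..., w_p)
```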

Let Λ^{p}V^{∗} be the set of all p-linear alternating maps from ×^{p}_{i=1}V to K, and denote
Λ^{1}V^{∗} = V^{∗} and Λ^{0}V^{∗} = K.

Theorem 1.3. For 0 ≤ p ≤ n, the set Λ^{p}V^{∗} forms a vector space of dimension \binom{n}{p}.

Proof. Let {v_{i}} be a basis for V and {ψ_{i}} be the basis for V^{∗} dual to {v_{i}}. We only need to
show that the set {ψ_{i_{1}} ∧ · · · ∧ ψ_{i_{p}} : 1 ≤ i_{1} < · · · < i_{p} ≤ n} forms a basis for Λ^{p}V^{∗}.

2. Tangent Vectors

Let p be a point in R^{n}. We denote (v)_{p} = p + v, where + is the addition in R^{n}. Let T_{p}R^{n}
be the set of all (v)_{p} with v ∈ R^{n}. We define the addition and scalar multiplication on T_{p}R^{n}
as follows. Let (v)_{p}, (w)_{p} ∈ T_{p}R^{n} and a ∈ R. We define

(v)_{p} + (w)_{p} = (v + w)_{p}, a · (v)_{p} = (av)_{p}.

Note that the addition and scalar multiplication here are different from those of R^{n}.
Proposition 2.1. The set T_{p}R^{n} forms an n-dimensional real vector space.

Proof. The proof is obvious. Let {e_{i} : 1 ≤ i ≤ n} be the standard basis for R^{n}. Then
{(e_{i})_{p} : 1 ≤ i ≤ n} forms a basis for T_{p}R^{n}.

Remark. We will use the notation v_{p} for (v)_{p}.

We say that T_{p}R^{n} is the tangent space to R^{n} at p and elements of T_{p}R^{n} are tangent
vectors at p. The dual space of T_{p}R^{n} is denoted by T_{p}^{∗}R^{n} and called the cotangent space of
R^{n} at p.

Let U be an open set in R^{n}. The algebra of (real-valued) smooth functions on U is
denoted by C^{∞}(U). Let p ∈ U and f ∈ C^{∞}(U). We define the directional derivative of f at
p along a vector v ∈ R^{n} by

v_{p}[f] = (d/dt) f(p + tv) |_{t=0}.

Let {e_{i} : 1 ≤ i ≤ n} be the standard basis for R^{n}. Assume that v = ∑_{i=1}^{n} v_{i}e_{i}. From
calculus, we know

v_{p}[f] = ∑_{i=1}^{n} v_{i} (∂f/∂x_{i})(p).

Then we know:

(1) v_{p} : C^{∞}(U) → R is a linear functional, i.e. v_{p}[af + bg] = a v_{p}[f] + b v_{p}[g] for all
a, b ∈ R, and

(2) v_{p}[fg] = v_{p}[f] g(p) + f(p) v_{p}[g].
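Both properties can be verified symbolically. The sketch below uses sympy, with f, g, the point p, and the direction v chosen purely for illustration; it computes the directional derivative directly from the definition d/dt h(p + tv)|_{t=0}.

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
f = x1**2 * sp.sin(x2)
g = sp.exp(x1) + x2
p = {x1: 1, x2: sp.pi / 2}   # base point p = (1, pi/2)
v = (3, -2)                  # direction v

def dir_deriv(h):
    # v_p[h] = d/dt h(p + t v) |_{t=0}
    ht = h.subs({x1: p[x1] + t * v[0], x2: p[x2] + t * v[1]})
    return sp.diff(ht, t).subs(t, 0)

# Agreement with sum_i v_i (∂f/∂x_i)(p):
rhs = v[0] * sp.diff(f, x1).subs(p) + v[1] * sp.diff(f, x2).subs(p)
print(sp.simplify(dir_deriv(f) - rhs) == 0)   # True

# Leibniz rule: v_p[fg] = v_p[f] g(p) + f(p) v_p[g]
leibniz = dir_deriv(f * g) - (dir_deriv(f) * g.subs(p) + f.subs(p) * dir_deriv(g))
print(sp.simplify(leibniz) == 0)              # True
```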

Note that if the tangent vector is given, the directional derivative only depends on the
partial derivatives of the function at the point p. Suppose f, g are two smooth functions
defined on some open sets (not necessarily the same). If f and g agree on an open set
containing p, then v_{p}[f] = v_{p}[g]. This leads to another definition of tangent vectors of R^{n}.
Let V, W be open sets in R^{n} containing p. Let f be a smooth function on V and g a smooth
function on W. We say that the pair (f, V) is equivalent to the pair (g, W) if there exists an
open set Z ⊂ V ∩ W containing p such that f = g on Z. The set of all equivalence classes
[(f, V)] is denoted by C_{p}^{∞}. For simplicity, we will simply denote [(f, V)] by [f]. An
equivalence class [f] is called a germ at p.
Proposition 2.2. The set C_{p}^{∞} forms an algebra over R.

Let [f] ∈ C_{p}^{∞}. We define [f](p) = f(p), where f is a representative of [f]. Then [f](p) is
well-defined. Now, if f_{1}, f_{2} belong to the same germ [f] at p, we can check that v_{p}(f_{1}) =
v_{p}(f_{2}). Hence we can define v_{p}([f]) = v_{p}(f), where f is a representative of [f].

Definition 2.1. A point derivation δ_{p} at p is a linear functional on C_{p}^{∞} such that
δ_{p}([f ][g]) = δ_{p}([f ])[g](p) + [f ](p)δ_{p}([g]).

The set of all point derivations at p is denoted by T_{p}^{0}R^{n}.

Remark. By definition, all tangent vectors are point derivations at p.

Given [f] ∈ C_{p}^{∞}, we define

(∂/∂x_{i})(p)[f] = (∂f/∂x_{i})(p)

for a representative f of [f]. Then we know that (∂/∂x_{i})(p) is a point derivation.

Theorem 2.1. The set {(∂/∂x_{i})(p) : 1 ≤ i ≤ n} forms a basis for T_{p}^{0}R^{n}; hence T_{p}^{0}R^{n} is an
n-dimensional vector space.

Proof. It is easy to verify that the set is linearly independent.

Let δ_{p} ∈ T_{p}^{0}R^{n}. Denote δ_{p}[x_{i}] = v_{i}. Let [f] ∈ C_{p}^{∞}. Choose a representative f of [f].
Consider the Taylor expansion

f(x) = f(p) + ∑_{i} (∂f/∂x_{i})(p)(x_{i} − p_{i}) + ∑_{i,j} (x_{i} − p_{i})(x_{j} − p_{j}) ∫_{0}^{1} (1 − t) (∂^{2}f/∂x_{i}∂x_{j})(p + t(x − p)) dt.

Since δ_{p}[1] = δ_{p}[1 · 1] = 2δ_{p}[1], we have δ_{p}[c] = 0 for every constant c; moreover,
δ_{p}[gh] = 0 whenever g(p) = h(p) = 0, so δ_{p} annihilates the remainder term. Using these
properties of point derivations, we find

δ_{p}[f] = ∑_{i} v_{i} (∂/∂x_{i})(p)[f].

In other words, we find δ_{p} = ∑_{i} v_{i} (∂/∂x_{i})(p).

We know that T_{p}R^{n} is a vector subspace of T_{p}^{0}R^{n}. Since they both have the same dimension,
they must be equal. In fact, by elementary calculus,

(e_{i})_{p}[f] = (∂f/∂x_{i})(p).

We find that (e_{i})_{p} = (∂/∂x_{i})(p) by definition. Hence we conclude that the notion of tangent
vectors is equivalent to the notion of point derivations.

Remark. Since we always choose a representative of a germ at p, from now on, we will simply use the notation f for [f ].

Let f be a germ at p. We define a linear functional df_{p} : T_{p}R^{n} → R by

df_{p}(v_{p}) = v_{p}(f).

Then df_{p} ∈ T_{p}^{∗}R^{n}. Notice that for any v_{p} = (v_{1}, · · · , v_{n})_{p} ∈ T_{p}R^{n}, we have (dx_{i})_{p}(v_{p}) = v_{i}.
This implies that (dx_{i})_{p}((e_{j})_{p}) = δ_{ij} for all i, j. Hence {(dx_{i})_{p} : 1 ≤ i ≤ n} is the dual basis
to {(e_{i})_{p} : 1 ≤ i ≤ n}. We conclude that:

Theorem 2.2. The set {(dx_{i})_{p}: 1 ≤ i ≤ n} forms a basis for T_{p}^{∗}R^{n}.
3. Tangent Maps

Let U be an open subset of R^{n}. Suppose F : U → R^{m} is a smooth map, where F =
(F_{1}, · · · , F_{m}). For each p ∈ U, we can define a linear map, called the tangent map,

dF_{p} : T_{p}R^{n} → T_{F(p)}R^{m}

as follows. Let f be a germ at F(p). Then f ◦ F is a germ at p. Given any v_{p} ∈ T_{p}R^{n}, we set

(dF_{p}(v_{p}))(f) = v_{p}(f ◦ F).

Suppose f = f(y_{1}, · · · , y_{m}). By the chain rule,

(∂/∂x_{i})(p)(f ◦ F) = ∑_{j=1}^{m} (∂f/∂y_{j})(F(p)) (∂F_{j}/∂x_{i})(p).

By definition,

dF_{p}((∂/∂x_{i})(p))(f) = ∑_{j=1}^{m} (∂F_{j}/∂x_{i})(p) (∂f/∂y_{j})(F(p)).

Hence we see that

dF_{p}((∂/∂x_{i})(p)) = ∑_{j=1}^{m} (∂F_{j}/∂x_{i})(p) (∂/∂y_{j})(F(p)).

If we choose the standard bases for T_{p}R^{n} and T_{F(p)}R^{m}, the tangent map dF_{p} is represented
by the Jacobian matrix of F at p.

Let U be an open set in R^{n} and F : U → R^{m} be a smooth map. Assume that V is an
open set in R^{m} containing F(U) and G : V → R^{k} is a smooth map. For any x ∈ U, we have
(chain rule)

(3.1) d(G ◦ F)_{x} = dG_{F(x)} ◦ dF_{x} : T_{x}R^{n} → T_{F(x)}R^{m} → T_{G(F(x))}R^{k}.
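In the standard bases, the chain rule says that the Jacobian matrix of G ◦ F is the product of the Jacobian matrices. This can be checked symbolically; the maps F and G below, and the use of sympy, are illustrative choices.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1, y2, y3 = sp.symbols('y1 y2 y3')

# F : R^2 -> R^3 and G : R^3 -> R^2 (arbitrary smooth maps for the check).
F = sp.Matrix([x1 * x2, x1 + x2, x1**2])
G = sp.Matrix([y1 + y2 * y3, sp.sin(y1)])

JF = F.jacobian([x1, x2])                        # matrix of dF
JG = G.jacobian([y1, y2, y3])                    # matrix of dG
GF = G.subs({y1: F[0], y2: F[1], y3: F[2]})      # G ∘ F
JGF = GF.jacobian([x1, x2])                      # matrix of d(G ∘ F)

# Chain rule: d(G∘F)_x = dG_{F(x)} ∘ dF_x, i.e. the Jacobians multiply.
chain = JG.subs({y1: F[0], y2: F[1], y3: F[2]}) * JF
print(sp.simplify(JGF - chain) == sp.zeros(2, 2))  # True
```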
4. Differential Forms on R^{n}

Let U be an open set in R^{n}. We denote

TU = ⋃_{p∈U} T_{p}R^{n}, T^{∗}U = ⋃_{p∈U} T_{p}^{∗}R^{n}.

Then we find that TU can be identified with U × R^{n} and T^{∗}U can be identified with
U × (R^{n})^{∗}. We have a natural projection π : T^{∗}U → U.

Let f be a smooth function on an open set U ⊂ R^{n}. The total derivative of f has the
formal expression

df = ∑_{i} (∂f/∂x_{i}) dx_{i}.

Since f is smooth, all the partial derivatives of f are smooth. We shall use this idea to
define the notion of a one-form. A smooth one-form ω on U is a formal expression

ω = ∑_{i} ω_{i}dx_{i}

with ω_{i} ∈ C^{∞}(U). Now we can think of ω as a map from U to T^{∗}U. In fact, for each p ∈ U,
ω(p) = ∑_{i} ω_{i}(p)(dx_{i})_{p} ∈ T_{p}^{∗}R^{n}. Moreover, (π ◦ ω)(p) = p for all p ∈ U. Therefore a smooth
one-form ω is a smooth map ω : U → T^{∗}U such that π ◦ ω = 1_{U}.

Let Λ^{k}T^{∗}U = ⋃_{p∈U} Λ^{k}T_{p}^{∗}R^{n}. We can also consider the natural projection^{1} π : Λ^{k}T^{∗}U →
U. We can check that Λ^{k}T^{∗}U can be identified with U × Λ^{k}(R^{n})^{∗}.

Definition 4.1. A smooth k-form on U is a smooth map η : U → Λ^{k}T^{∗}U such that
π ◦ η = 1_{U}. The set of all smooth k-forms on U is denoted by Ω^{k}(U). We denote
Ω^{∗}(U) = ⊕_{k≥0} Ω^{k}(U).

By definition, a smooth k-form has the formal expression

η = ∑_{1≤i_{1}<···<i_{k}≤n} η_{i_{1}···i_{k}} dx_{i_{1}} ∧ · · · ∧ dx_{i_{k}}

with η_{i_{1}···i_{k}} ∈ C^{∞}(U).

Let I = (i_{1}, · · · , i_{k}) be an increasing k-tuple. We write

dx_{I} = dx_{i_{1}} ∧ · · · ∧ dx_{i_{k}}.

A k-form η is also denoted by η = ∑_{I} η_{I}dx_{I}. The sum of two k-forms ω = ∑_{I} ω_{I}dx_{I}
and η = ∑_{I} η_{I}dx_{I} on U is defined to be

ω + η = ∑_{I} (ω_{I} + η_{I})dx_{I}.

^{1}Let us use the notation π for this projection as well, but note that it is different from the projection above.

For any smooth function f on U, we define fω = ∑_{I} (fω_{I})dx_{I}. One can check that:

Proposition 4.1. The set Ω^{k}(U) is a free C^{∞}(U)-module of rank \binom{n}{k}.

On Ω^{∗}(U), we define the exterior (wedge) product as follows. Given an s-form ω =
∑_{I} ω_{I}dx_{I} and a k-form η = ∑_{J} η_{J}dx_{J}, we define an (s + k)-form by

ω ∧ η = ∑_{I,J} ω_{I}η_{J} dx_{I} ∧ dx_{J}.
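The formal expression ∑_{I} ω_{I}dx_{I} suggests a simple data structure: a dictionary keyed by increasing index tuples I. The sketch below (a hypothetical mini-implementation, with constant coefficients for brevity) computes the wedge of two such forms; the only subtlety is the sign obtained when sorting dx_{I} ∧ dx_{J} into increasing order, which is the parity of the number of inversions.

```python
def merge_sign(I, J):
    """Sort the concatenation I + J into increasing order.
    Return (sign, sorted tuple), or (0, None) if an index repeats
    (since dx_i ∧ dx_i = 0)."""
    if set(I) & set(J):
        return 0, None
    seq = list(I) + list(J)
    # Each swap of adjacent dx's flips the sign, so count inversions.
    inv = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
              if seq[a] > seq[b])
    return (-1) ** inv, tuple(sorted(seq))

def wedge(omega, eta):
    """Wedge of forms stored as {I: coefficient} with I an increasing tuple."""
    out = {}
    for I, a in omega.items():
        for J, b in eta.items():
            s, K = merge_sign(I, J)
            if s:
                out[K] = out.get(K, 0) + s * a * b
    return {K: c for K, c in out.items() if c != 0}

# On R^3: (2 dx1 ∧ dx2) ∧ (5 dx3) = 10 dx1 ∧ dx2 ∧ dx3
print(wedge({(1, 2): 2}, {(3,): 5}))   # {(1, 2, 3): 10}
# With k = 1, s = 2: dx1 ∧ (dx2 ∧ dx3) = (-1)^{1·2} (dx2 ∧ dx3) ∧ dx1
print(wedge({(1,): 1}, {(2, 3): 1}) == wedge({(2, 3): 1}, {(1,): 1}))  # True
```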

The exterior product of forms on U has the following properties.

Theorem 4.1. Let ω, η, and θ be k-, s-, and r-forms respectively. Then

(1) (ω ∧ η) ∧ θ = ω ∧ (η ∧ θ),

(2) ω ∧ η = (−1)^{ks}η ∧ ω,

(3) if r = s, then ω ∧ (η + θ) = ω ∧ η + ω ∧ θ.

Proof. Exercise.

Let ω = ∑_{I} ω_{I}dx_{I}. We define the exterior derivative dω of ω by

dω = ∑_{I} dω_{I} ∧ dx_{I}.

Proposition 4.2. d : Ω^{∗}(U) → Ω^{∗}(U) is a linear map such that

(1) d : Ω^{k}(U) → Ω^{k+1}(U),

(2) d(ω ∧ η) = dω ∧ η + (−1)^{k}ω ∧ dη for ω ∈ Ω^{k}(U),

(3) d^{2} = 0.

Proof. Exercise.
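Property (3) comes down to the symmetry of mixed partial derivatives. A symbolic check for a 0-form on R^{3} (sympy and the particular f are illustrative assumptions): d(df) has coefficients ∂²f/∂x_{i}∂x_{j} − ∂²f/∂x_{j}∂x_{i}, which all vanish.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]
f = sp.exp(x1) * sp.sin(x2 * x3) + x1 * x3**2   # an arbitrary smooth 0-form

# df = sum_i (∂f/∂x_i) dx_i, stored as its list of coefficients.
df = [sp.diff(f, xi) for xi in X]

# For omega = sum_i omega_i dx_i, one has
# d(omega) = sum_{i<j} (∂omega_j/∂x_i - ∂omega_i/∂x_j) dx_i ∧ dx_j.
ddf = {(i, j): sp.simplify(sp.diff(df[j], X[i]) - sp.diff(df[i], X[j]))
       for i in range(3) for j in range(i + 1, 3)}
print(all(c == 0 for c in ddf.values()))   # True: d(df) = 0
```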

Let d_{k} = d|_{Ω^{k}(U)}. Then d^{2} = 0 implies that d_{k+1}d_{k} = 0. In other words, Im d_{k−1} ⊂ ker d_{k}.
Definition 4.2. The k-th de Rham cohomology of U is the quotient space defined by

H^{k}(U ) = ker d_{k}/ Im d_{k−1}.

Later, we will study more about the de Rham cohomology of a smooth manifold.

4.1. Pullback of Differential Forms. Let F : U ⊂ R^{n} → R^{m} be a smooth map. Then
F induces a linear map F^{∗} from Ω^{k}(V) to Ω^{k}(U), where V is an open set containing F(U).

Given a k-form ω on V, with k ≥ 1, we define a k-form F^{∗}ω on U by

(F^{∗}ω)(p)(v_{1}, · · · , v_{k}) = ω(F(p))(dF_{p}(v_{1}), · · · , dF_{p}(v_{k})), ∀p ∈ U,

where v_{1}, · · · , v_{k} ∈ T_{p}R^{n}. For k = 0, set F^{∗}g = g ◦ F.

Proposition 4.3. Let F : U ⊂ R^{n} → R^{m} and g ∈ C^{∞}(V). Suppose ω, η are k-forms on V. Then

(1) F^{∗}(ω + η) = F^{∗}ω + F^{∗}η,

(2) F^{∗}(gω) = (F^{∗}g)(F^{∗}ω),

(3) if ϕ_{1}, · · · , ϕ_{k} are one-forms, F^{∗}(ϕ_{1} ∧ · · · ∧ ϕ_{k}) = F^{∗}ϕ_{1} ∧ · · · ∧ F^{∗}ϕ_{k}.

Let us assume F = (F_{1}, · · · , F_{m}), i.e. F_{i} = y_{i} ◦ F. Suppose that ω = ∑_{I} ω_{I}dy_{I}. Then
F^{∗}ω = ∑_{I} (F^{∗}ω_{I}) F^{∗}dy_{I}. Using the properties above, we have F^{∗}dy_{i_{1}} ∧ · · · ∧ F^{∗}dy_{i_{k}} =
dF_{i_{1}} ∧ · · · ∧ dF_{i_{k}}. Then

F^{∗}ω = ∑_{I} ω_{I}(F_{1}, · · · , F_{m}) dF_{i_{1}} ∧ · · · ∧ dF_{i_{k}}.

Using this identity, it is easy for us to prove the following corollary.

Corollary 4.1. Let ω, η be forms. Then F^{∗}(ω ∧ η) = F^{∗}ω ∧ F^{∗}η.

Proposition 4.4. Let F : U ⊂ R^{n} → R^{m} and G : V ⊂ R^{m} → R^{k} be smooth maps, where V
is an open set containing F(U). Then

(G ◦ F)^{∗}ω = F^{∗}(G^{∗}ω)

for all k-forms ω.

Proposition 4.5. Let F : U → R^{m} be a smooth map and ω be a k-form. Then
d(F^{∗}ω) = F^{∗}dω.
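Proposition 4.5 can be checked symbolically in the simplest case of a 0-form ω = g, where it reduces to the chain rule: d(g ◦ F) = ∑_{j} (∂g/∂y_{j})(F) dF_{j}. The sketch below uses sympy; the maps F and g are illustrative choices.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1, y2 = sp.symbols('y1 y2')

# F : R^2 -> R^2 and a 0-form g on the target (arbitrary smooth choices).
F1, F2 = x1**2 - x2, x1 * x2
g = sp.sin(y1) + y1 * y2**2

# F*g = g ∘ F, so d(F*g) has coefficients ∂(g∘F)/∂x_i.
gF = g.subs({y1: F1, y2: F2})
d_pullback = [sp.diff(gF, x1), sp.diff(gF, x2)]

# F*(dg) = sum_j (∂g/∂y_j)(F) dF_j; in coordinates its i-th coefficient is
# sum_j (∂g/∂y_j)(F) * (∂F_j/∂x_i).
gy = [sp.diff(g, y1).subs({y1: F1, y2: F2}),
      sp.diff(g, y2).subs({y1: F1, y2: F2})]
pullback_d = [gy[0] * sp.diff(F1, xi) + gy[1] * sp.diff(F2, xi) for xi in (x1, x2)]

print(all(sp.simplify(a - b) == 0 for a, b in zip(d_pullback, pullback_d)))  # True
```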