All vector spaces in this note are real vector spaces.

The set of n-tuples of real numbers is denoted by R^{n}. Suppose that a is a real number
and x = (x_{1}, · · · , x_{n}) and y = (y_{1}, · · · , y_{n}) are elements of R^{n}. We define the sum x +_{0} y
and the scalar product a ·_{0} x by

x +_{0} y = (x_{1} + y_{1}, · · · , x_{n} + y_{n}), a ·_{0} x = (ax_{1}, · · · , ax_{n}).

It is well known that the set R^{n} together with the addition +_{0} and the scalar multiplication
·_{0} forms a vector space (R^{n}, +_{0}, ·_{0}). In fact, the set R^{n} can be equipped with other vector
space structures. Let p be an element of R^{n}. For each element x of R^{n}, we represent x as a
sum p +_{0} v where v = x −_{0} p. We denote p +_{0} v by v_{p} or (v)_{p}. If x = (v)_{p} and y = (w)_{p} are
elements of R^{n} and a ∈ R is a real number, we define x +_{p} y and a ·_{p} x by

x +_{p} y = (v + w)_{p}, a ·_{p} x = (av)_{p}.

Lemma 1.1. The set R^{n} together with the addition +_{p} and the scalar multiplication ·_{p} forms a vector space.

Proof. This is left to the reader as an exercise.

The vector space (R^{n}, +_{p}, ·_{p}) is denoted by T_{p}R^{n} and called the tangent space to R^{n} at
p. We remark that T_{0}R^{n} is the usual R^{n} we have learned before. One can also see that the
vector space T_{p}R^{n} is isomorphic to R^{n} for each p ∈ R^{n}. This can be proved by constructing
a linear isomorphism from T_{p}R^{n} onto R^{n} by sending v_{p} to v. If {e_{1}, · · · , e_{n}} is the standard
basis for R^{n}, then {(e_{1})_{p}, · · · , (e_{n})_{p}} forms a basis for T_{p}R^{n}.
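The operations +_{p} and ·_{p} can be spot-checked in code. The following is a minimal sketch (the helper names `add_p` and `scale_p` are ours, not from the notes), representing points of R^{n} as tuples:

```python
# Sketch of the vector space structure (R^n, +_p, ._p): a point x is written
# as x = p +_0 v with arrow part v = x -_0 p, and the operations act on the
# arrow parts. Helper names are illustrative, not from the notes.
def add_p(p, x, y):
    """x +_p y = (v + w)_p where v = x - p and w = y - p."""
    return tuple(pi + (xi - pi) + (yi - pi) for pi, xi, yi in zip(p, x, y))

def scale_p(p, a, x):
    """a ._p x = (a v)_p where v = x - p."""
    return tuple(pi + a * (xi - pi) for pi, xi in zip(p, x))

p = (1.0, 1.0)
x, y = (2.0, 3.0), (4.0, 0.0)
assert add_p(p, x, y) == (5.0, 2.0)      # v = (1, 2), w = (3, -1)
assert scale_p(p, 2.0, x) == (3.0, 5.0)
assert add_p(p, x, p) == x               # p = (0)_p is the zero vector of T_p R^n
```

Note that taking p = (0, 0) recovers the usual operations +_{0} and ·_{0}.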

We can also identify T_{p}R^{n} with the subset {(p, v) ∈ R^{n} × R^{n} : v ∈ R^{n}} of R^{n} × R^{n} by
sending v_{p} to (p, v). Instead of using the notation (R^{n}, +_{p}, ·_{p}) for T_{p}R^{n}, we may use the
identification

T_{p}R^{n} = {(p, v) ∈ R^{n} × R^{n} : v ∈ R^{n}}.

On T_{p}R^{n}, we define the addition and the scalar multiplication by

(p, v) +_{p} (p, w) = (p, v + w), a ·_{p} (p, v) = (p, av).

Furthermore, we introduce an inner product on T_{p}R^{n} by

⟨(p, v), (p, w)⟩_{T_{p}R^{n}} = ⟨v, w⟩_{R^{n}}

where the right hand side of the equation is the standard Euclidean inner product of v, w
on R^{n}.

Definition 1.1. Let U be an open subset of R^{n}. The set T U = ⋃_{p∈U} T_{p}R^{n} is called the
tangent bundle over U. A vector field on U is a function V : U → T U such that V (p) ∈ T_{p}R^{n}
for any p ∈ U.

By definition, T U = U × R^{n} as a set.

Let U be an open subset of R^{n} and f : U → R be a C^{1}-function. Let p ∈ U and
v_{p} ∈ T_{p}R^{n}. We define df_{p}(v_{p}) to be the directional derivative of f at p along v, i.e.

df_{p}(v_{p}) = d/dt f(p +_{0} tv)|_{t=0}.

By the chain rule, we know that

df_{p}(v_{p}) = Df(p)(v).


Here Df(p) : R^{n} → R is the derivative of f at p. For each v_{p}, w_{p} ∈ T_{p}R^{n} and a, b ∈ R, by
linearity of Df(p), we obtain that

df_{p}(av_{p} + bw_{p}) = df_{p}((av + bw)_{p})
= Df(p)(av + bw)
= aDf(p)(v) + bDf(p)(w)
= a df_{p}(v_{p}) + b df_{p}(w_{p}).

This shows that df_{p} : T_{p}R^{n} → R is a linear map.
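A quick numerical sanity check of the identity df_{p}(v_{p}) = Df(p)(v) and of the linearity above (the function f below is a hypothetical test function, and the finite-difference step is an assumption of the sketch):

```python
# Central-difference approximation of df_p(v_p) = d/dt f(p +_0 tv)|_{t=0},
# compared against Df(p)(v) computed from hand-derived partials.
def f(x, y):
    return x**2 * y + 3.0 * x   # hypothetical test function

def df_p(p, v, h=1e-6):
    fp = f(p[0] + h * v[0], p[1] + h * v[1])
    fm = f(p[0] - h * v[0], p[1] - h * v[1])
    return (fp - fm) / (2.0 * h)

p, v, w = (1.0, 2.0), (0.5, -1.0), (2.0, 1.0)
# Df(p)(v) = f_x(p) v_1 + f_y(p) v_2 with f_x = 2xy + 3 and f_y = x^2:
Df = lambda p, v: (2 * p[0] * p[1] + 3) * v[0] + p[0]**2 * v[1]
assert abs(df_p(p, v) - Df(p, v)) < 1e-5
# Linearity in the vector argument: df_p(2v + 3w) = 2 df_p(v) + 3 df_p(w).
av_bw = (2 * v[0] + 3 * w[0], 2 * v[1] + 3 * w[1])
assert abs(df_p(p, av_bw) - (2 * df_p(p, v) + 3 * df_p(p, w))) < 1e-4
```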

Definition 1.2. Let V be a finite dimensional vector space. A linear map ψ : V → R is
called a linear functional on V. The set of all linear functionals on V, denoted by V^{∗}, is
called the dual space to V.

By linear algebra, V^{∗} forms a vector space. The dual space to T_{p}R^{n} is denoted by T_{p}^{∗}R^{n}.
For each 1 ≤ i ≤ n, let x_{i} : R^{n}→ R be the function defined by

x_{i}(p) = p_{i} where p = (p_{1}, · · · , p_{n}).

Then the x_{i}, 1 ≤ i ≤ n, are smooth functions on R^{n}; they are called the rectangular
coordinate functions on R^{n}.

Example 1.1. Let p = (p_{1}, p_{2}, p_{3}) be an element of R^{3} and {e_{1}, e_{2}, e_{3}} be the standard
basis for R^{3}. Then {(e_{1})_{p}, (e_{2})_{p}, (e_{3})_{p}} forms a basis for T_{p}R^{3}. Let v_{p} = (v_{1}, v_{2}, v_{3})_{p} be any
vector in T_{p}R^{3}. For t ∈ R, p +_{0} tv = (p_{1} + tv_{1}, p_{2} + tv_{2}, p_{3} + tv_{3}) and hence

x_{1}(p +_{0} tv) = x_{1}(p_{1} + tv_{1}, p_{2} + tv_{2}, p_{3} + tv_{3}) = p_{1} + tv_{1}.

By definition,

(dx_{1})_{p}(v_{p}) = d/dt x_{1}(p +_{0} tv)|_{t=0} = d/dt (p_{1} + tv_{1})|_{t=0} = v_{1}.

In general, one can show that (dx_{i})_{p}(v_{p}) = v_{i} for 1 ≤ i ≤ 3. One can easily see that

(dx_{1})_{p}((e_{1})_{p}) = 1, (dx_{1})_{p}((e_{2})_{p}) = 0, (dx_{1})_{p}((e_{3})_{p}) = 0,
(dx_{2})_{p}((e_{1})_{p}) = 0, (dx_{2})_{p}((e_{2})_{p}) = 1, (dx_{2})_{p}((e_{3})_{p}) = 0,
(dx_{3})_{p}((e_{1})_{p}) = 0, (dx_{3})_{p}((e_{2})_{p}) = 0, (dx_{3})_{p}((e_{3})_{p}) = 1.

This example motivates the following definition:

Definition 1.3. Let V be an n-dimensional real vector space and {v_{1}, · · · , v_{n}} be a basis
for V. A set of linear functionals {ϕ_{1}, · · · , ϕ_{n}} on V is called the dual basis to {v_{1}, · · · , v_{n}}
provided that

ϕ_{i}(v_{j}) = δ_{ij}, 1 ≤ i, j ≤ n.

Lemma 1.2. Let V be an n-dimensional real vector space and {v_{1}, · · · , v_{n}} be a basis for
V. Suppose that {ϕ_{1}, · · · , ϕ_{n}} is the dual basis to {v_{1}, · · · , v_{n}}. Then {ϕ_{1}, · · · , ϕ_{n}} forms a
basis for V^{∗}.

Proof. Let us first prove that the set is linearly independent. Suppose a_{1}ϕ_{1} + · · · + a_{n}ϕ_{n} = 0.
By ϕ_{i}(v_{j}) = δ_{ij}, one has

0 = (a_{1}ϕ_{1} + · · · + a_{n}ϕ_{n})(v_{j}) = a_{1}ϕ_{1}(v_{j}) + · · · + a_{n}ϕ_{n}(v_{j}) = a_{j}

for 1 ≤ j ≤ n.

Let ϕ : V → R be a linear functional on V. Define ψ = ∑_{i=1}^{n} ϕ(v_{i})ϕ_{i} ∈ V^{∗}. Then
ψ(v_{i}) = ϕ(v_{i}) for all 1 ≤ i ≤ n. By linearity, ψ(v) = ϕ(v) for any v ∈ V, i.e. ψ = ϕ.
Thus ϕ is a linear combination of {ϕ_{1}, · · · , ϕ_{n}}. We find that {ϕ_{1}, · · · , ϕ_{n}} spans V^{∗}.
The above lemma also implies that for any basis {v_{1}, · · · , v_{n}} for V and its dual basis
{ϕ_{1}, · · · , ϕ_{n}}, any linear functional ϕ : V → R on V has the following expression:

(1.1) ϕ = ∑_{i=1}^{n} ϕ(v_{i})ϕ_{i}.
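Numerically, the dual basis can be obtained by inverting the matrix whose columns are the basis vectors. The following sketch (our own illustration, for V = R^{3} with a sample non-standard basis) also checks the expansion (1.1):

```python
import numpy as np

# If the basis vectors v_1, v_2, v_3 are the columns of B, the dual basis
# functionals phi_i are the rows of B^{-1}: row i applied to column j
# gives delta_ij.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])     # columns v_1, v_2, v_3 (sample basis)
Binv = np.linalg.inv(B)             # rows phi_1, phi_2, phi_3
assert np.allclose(Binv @ B, np.eye(3))   # phi_i(v_j) = delta_ij

# Equation (1.1): a functional phi (here phi(v) = c . v, with c a sample
# coefficient vector) equals sum_i phi(v_i) phi_i.
c = np.array([2.0, -1.0, 3.0])
coeffs = np.array([c @ B[:, i] for i in range(3)])  # the numbers phi(v_i)
assert np.allclose(coeffs @ Binv, c)                # sum_i phi(v_i) phi_i = phi
```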

It follows from the definition that the set {(dx_{1})_{p}, (dx_{2})_{p}, (dx_{3})_{p}} is the dual basis to
{(e_{1})_{p}, (e_{2})_{p}, (e_{3})_{p}} and hence is a basis for T_{p}^{∗}R^{3}.

Let U be an open subset of R^{n} and f : U → R be a smooth function. Let p ∈ U. Since
df_{p} : T_{p}R^{n} → R is a linear functional, by equation (1.1) and Corollary 1.1, we have

df_{p} = ∑_{i=1}^{n} df_{p}((e_{i})_{p})(dx_{i})_{p}.
Let us now compute df_{p}((e_{i})_{p}). By definition,

df_{p}((e_{i})_{p}) = d/dt f(p +_{0} te_{i})|_{t=0} = ∂f/∂x_{i}(p).

From here, we obtain that

(dx_{i})_{p}((e_{j})_{p}) = ∂x_{i}/∂x_{j}(p) = δ_{ij}, 1 ≤ i, j ≤ n.

Lemma 1.2 implies the following results:

Corollary 1.1. The set {(dx_{1})_{p}, · · · , (dx_{n})_{p}} forms a basis for T_{p}^{∗}R^{n}; it is the dual basis to
{(e_{1})_{p}, · · · , (e_{n})_{p}}.

Since {(dx_{1})_{p}, · · · , (dx_{n})_{p}} is the dual basis to {(e_{1})_{p}, · · · , (e_{n})_{p}}, by Lemma 1.2 and its
remark (1.1), we conclude that

(1.2) df_{p} = ∑_{i=1}^{n} ∂f/∂x_{i}(p)(dx_{i})_{p}.
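Formula (1.2) can be verified symbolically; the sketch below (with a hypothetical test function, sympy assumed available) checks that the t-derivative of f(p +_{0} tv) at t = 0 equals ∑_{i} f_{x_{i}} v_{i}:

```python
import sympy as sp

# Symbolic check of (1.2) applied to v_p: df_p(v_p) = sum_i f_{x_i}(p) v_i.
x1, x2, t = sp.symbols('x1 x2 t')
v1, v2 = sp.symbols('v1 v2')
f = x1**3 * x2 + sp.sin(x2)          # hypothetical test function

# Left side: d/dt f(p + t v) at t = 0, with (x1, x2) playing the role of p.
lhs = sp.diff(f.subs({x1: x1 + t * v1, x2: x2 + t * v2}, simultaneous=True), t)
lhs = lhs.subs(t, 0)
# Right side: sum_i (df/dx_i) v_i.
rhs = sp.diff(f, x1) * v1 + sp.diff(f, x2) * v2
assert sp.simplify(lhs - rhs) == 0
```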

Example 1.2. Let f : (a, b) → R be a smooth function and p ∈ (a, b). Then

df_{p} = f′(p)dx_{p}.

Theorem 1.1. (Riesz representation theorem) Let V be a finite dimensional inner product space. For any linear functional ϕ : V → R, there exists a unique ξ ∈ V such that ϕ(v) = ⟨v, ξ⟩ for all v ∈ V.

Proof. Let us prove the uniqueness first. Suppose ξ and η are two vectors in V such that ϕ(v) = ⟨v, ξ⟩ = ⟨v, η⟩ for all v ∈ V. Let α = ξ − η. Then ⟨v, α⟩ = 0 for all v ∈ V. We choose v = α. Then ⟨α, α⟩ = 0. By the property of inner product, α = 0 and hence ξ = η.

Let us prove the existence. Assume that dim V = n. Choose an orthonormal basis
{e_{1}, · · · , e_{n}} for V. We set ξ = ∑_{i=1}^{n} ϕ(e_{i})e_{i} and define ψ : V → R by ψ(v) = ⟨v, ξ⟩. Then ψ is a linear functional on V. Furthermore, for 1 ≤ j ≤ n,

ψ(e_{j}) = ⟨e_{j}, ξ⟩ = ∑_{i=1}^{n} ϕ(e_{i})⟨e_{j}, e_{i}⟩ = ϕ(e_{j}).

This implies that ψ(v) = ϕ(v) for all v ∈ V and thus ψ = ϕ.

Remark. Let V be an n-dimensional inner product space and {e_{i} : 1 ≤ i ≤ n} be an orthonormal basis for V. If ϕ : V → R is a linear functional on V, the unique vector ξ so that ϕ(v) = ⟨v, ξ⟩ has the following representation:

ξ = ∑_{i=1}^{n} ϕ(e_{i})e_{i}.
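The Riesz construction can be illustrated numerically (our own sketch, with a randomly generated orthonormal basis and a sample functional; nothing here is specific to the theorem's proof):

```python
import numpy as np

# With an orthonormal basis {e_i}, the vector xi = sum_i phi(e_i) e_i
# represents the functional phi, i.e. phi(v) = <v, xi> for every v.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # columns e_1..e_4 are orthonormal
c = rng.standard_normal(4)                          # sample functional phi(v) = c . v
xi = sum((c @ Q[:, i]) * Q[:, i] for i in range(4))  # xi = sum_i phi(e_i) e_i
v = rng.standard_normal(4)
assert np.isclose(c @ v, v @ xi)    # phi(v) = <v, xi>
```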

Now we are ready to introduce the notion of gradient of a smooth function at a point p.

Since df_{p} : T_{p}R^{n} → R is a linear functional and T_{p}R^{n} is a finite dimensional inner product
space, there is a unique vector in T_{p}R^{n}, denoted by ∇f(p), such that

df_{p}(v_{p}) = ⟨v_{p}, ∇f(p)⟩_{p}.

The vector ∇f(p) in T_{p}R^{n} is called the gradient vector of f at p.

Definition 1.4. If f : U ⊂ R^{n}→ R is smooth, ∇f defines a function

∇f : U → T U, p 7→ ∇f (p).

This vector field is called the gradient vector field of f on U.

By Theorem 1.1 and its remark, we see that

∇f(p) = ∑_{i=1}^{n} df_{p}((e_{i})_{p})(e_{i})_{p}.
Since df_{p}((e_{i})_{p}) = f_{x_{i}}(p), we find that

∇f(p) = ∑_{i=1}^{n} ∂f/∂x_{i}(p)(e_{i})_{p} = (∑_{i=1}^{n} ∂f/∂x_{i}(p)e_{i})_{p}.

Example 1.3. Let f : R^{2} → R be a smooth function and p ∈ R^{2}. Then

∇f (p) = (f_{x}(p), f_{y}(p))_{p}.
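Example 1.3 can be checked symbolically for a sample function (a hypothetical test function; sympy assumed available):

```python
import sympy as sp

# grad f(p) = (f_x(p), f_y(p))_p, computed via symbolic partial derivatives.
x, y = sp.symbols('x y')
f = x**2 * sp.exp(y)                     # hypothetical test function
grad = (sp.diff(f, x), sp.diff(f, y))    # (2x e^y, x^2 e^y)
p = {x: 1, y: 0}
assert [g.subs(p) for g in grad] == [2, 1]   # grad f(1, 0) = (2, 1)_p
```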

If f : U ⊂ R^{n} → R is smooth, df_{p} ∈ T_{p}^{∗}R^{n} for any p ∈ U. We define the notion of
cotangent bundle over an open subset U of R^{n}.

Definition 1.5. The cotangent bundle T^{∗}U of an open subset U of R^{n} is the union
⋃_{p∈U} T_{p}^{∗}R^{n}.

A one-form on U is a function ω : U → T^{∗}U such that ω(p) ∈ T_{p}^{∗}R^{n} for each p ∈ U. Let ω
be a one-form on U. By equation (1.1), if we set a_{i}(p) = ω(p)((e_{i})_{p}), then

ω(p) = a_{1}(p)(dx_{1})_{p} + · · · + a_{n}(p)(dx_{n})_{p}.

We obtain functions a_{1}, · · · , a_{n} : U → R. We write

ω = a_{1}(x)dx_{1} + · · · + a_{n}(x)dx_{n}.

We say that ω is a continuous/differentiable/C^{k}/smooth one-form if a_{1}, · · · , a_{n} : U → R are
continuous/differentiable/C^{k}/smooth functions on U. For example, let f : U ⊆ R^{n} → R
be a C^{k} function. Then df defines a one-form df : U → T^{∗}U by sending p to df_{p}. Since
df = f_{x_{1}}dx_{1} + · · · + f_{x_{n}}dx_{n} and the f_{x_{i}} are C^{k−1} functions on U, df is a C^{k−1} one-form.

Remark. The tangent bundle T U = U × R^{n} over an open subset U of R^{n} and the cotangent
bundle T^{∗}U = U × (R^{n})^{∗} over U are open subsets of T R^{n} = R^{n} × R^{n} and T^{∗}R^{n} = R^{n} × (R^{n})^{∗}.
We can define the continuity/differentiability/smoothness of a vector field V : U → T U
over U or a one-form ω : U → T^{∗}U in the usual sense.

Example 1.4. Let f(x, y) = e^{x}cos y for (x, y) ∈ R^{2}. Then

df = f_{x}dx + f_{y}dy = e^{x}cos y dx − e^{x}sin y dy.
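The coefficients in Example 1.4 can be verified symbolically (sympy assumed available):

```python
import sympy as sp

# Check of Example 1.4: df = e^x cos y dx - e^x sin y dy.
x, y = sp.symbols('x y')
f = sp.exp(x) * sp.cos(y)
assert sp.simplify(sp.diff(f, x) - sp.exp(x) * sp.cos(y)) == 0   # f_x
assert sp.simplify(sp.diff(f, y) + sp.exp(x) * sp.sin(y)) == 0   # f_y
```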

Example 1.5. Let y = f(x) = sin x for x ∈ R. Then

dy = f′(x)dx = cos x dx.

Example 1.6. Let f(x, y) = x^{3}y + 2xy^{2} for (x, y) ∈ R^{2}. Then

df = f_{x}dx + f_{y}dy = (3x^{2}y + 2y^{2})dx + (x^{3} + 4xy)dy.

Example 1.7. Let U = R^{2} \ {(0, 0)}. Then U is an open subset of R^{2}. Define

ω = (−y dx + x dy)/(x^{2} + y^{2}) for (x, y) ∈ U.

Then ω is a smooth one-form on U.

A one-form on an open subset U of R^{n} is of the form ω = a_{1}(x)dx_{1} + · · · + a_{n}(x)dx_{n}.
It is natural to ask when ω = df for some f ∈ C^{∞}(U). This is equivalent to the following
family of differential equations:

a_{i}(x) = ∂f/∂x_{i}(x), 1 ≤ i ≤ n.

We will begin with the case when n = 2. Consider a smooth one-form on an open set U ⊂ R^{2}
of the form

ω = M(x, y)dx + N(x, y)dy

and solve this problem when U is an open ball. If ω = df holds, then M = f_{x} and N = f_{y}. By smoothness of f,

M_{y} = f_{xy} = f_{yx} = N_{x}.

We find that if ω = df, then M_{y} = N_{x}. In fact, we have the following result:
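The necessary condition M_{y} = N_{x} can be checked symbolically. The sketch below applies it to the form of Example 1.7 (sympy assumed available); that form passes the test on U = R^{2} \ {(0, 0)}, although, as a classical fact not proved in these notes, it has no antiderivative on all of U — the proposition below only covers open balls:

```python
import sympy as sp

# omega = M dx + N dy from Example 1.7; verify M_y = N_x away from the origin.
x, y = sp.symbols('x y')
M = -y / (x**2 + y**2)
N = x / (x**2 + y**2)
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0
```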

Proposition 1.1. Let B be an open ball in R^{2} and ω = M(x, y)dx + N(x, y)dy be a smooth
one-form on B. Then ω = df for some smooth function f : B → R if and only if M_{y} = N_{x}.

Proof. We have proved one direction. Let us prove the reverse direction. We may assume
that B is the open unit ball B(0, 1) centered at 0 of radius 1. We define a function f : B → R by

f(x, y) = c + x ∫_{0}^{1} M(tx, ty)dt + y ∫_{0}^{1} N(tx, ty)dt.

Here c is any real number. By taking the partial derivative of f with respect to x, we obtain

f_{x}(x, y) = ∫_{0}^{1} M(tx, ty)dt + x ∫_{0}^{1} M_{x}(tx, ty)t dt + y ∫_{0}^{1} N_{x}(tx, ty)t dt.

By the relation M_{y} = N_{x}, we see that

∫_{0}^{1} N_{x}(tx, ty)t dt = ∫_{0}^{1} M_{y}(tx, ty)t dt.

Therefore

f_{x}(x, y) = ∫_{0}^{1} M(tx, ty)dt + ∫_{0}^{1} t(M_{x}(tx, ty)x + M_{y}(tx, ty)y)dt.

By the chain rule,

d/dt M(tx, ty) = M_{x}(tx, ty)x + M_{y}(tx, ty)y.

Therefore f_{x} can be rewritten as

f_{x}(x, y) = ∫_{0}^{1} M(tx, ty)dt + ∫_{0}^{1} t d/dt M(tx, ty) dt

= ∫_{0}^{1} M(tx, ty)dt + tM(tx, ty)|_{t=0}^{t=1} − ∫_{0}^{1} M(tx, ty)dt (using integration by parts)

= M(x, y).

Similarly, one can show that f_{y}(x, y) = N(x, y). This proves that ω = df if M_{y} = N_{x}.
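The explicit formula in the proof can be exercised symbolically (our own sketch; M and N below are hypothetical test data chosen so that M_{y} = N_{x}):

```python
import sympy as sp

# Build f(x, y) = x int_0^1 M(tx, ty) dt + y int_0^1 N(tx, ty) dt (with c = 0)
# and verify f_x = M and f_y = N.
x, y, t = sp.symbols('x y t')
M = 2 * x * y            # sample closed form: M_y = N_x = 2x
N = x**2 + 3 * y**2
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

f = (x * sp.integrate(M.subs({x: t * x, y: t * y}, simultaneous=True), (t, 0, 1))
     + y * sp.integrate(N.subs({x: t * x, y: t * y}, simultaneous=True), (t, 0, 1)))
assert sp.simplify(sp.diff(f, x) - M) == 0
assert sp.simplify(sp.diff(f, y) - N) == 0
```

For this sample the formula produces f = x^{2}y + y^{3}, an antiderivative of ω = M dx + N dy.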
In calculus, we have learned the notion of antiderivative of a continuous function on a closed
interval [a, b]. Let f : [a, b] → R be a continuous function. A C^{1}-function F : [a, b] → R is
said to be an antiderivative of f provided that F′(x) = f(x). If we denote ω = f(x)dx, then
the condition F′(x) = f(x) is equivalent to the condition ω = dF. We now can define the
notion of antiderivative of a smooth one-form.

Definition 1.6. Let ω be a smooth one-form on an open subset U of R^{n}. If there exists a
smooth function f : U → R such that ω = df, then ω is said to have an antiderivative on
U. In this case, f is said to be an antiderivative of ω.

Proposition 1.1 says that a smooth one-form ω = M(x, y)dx + N(x, y)dy on an open ball
B in R^{2} has an antiderivative on B if and only if M_{y} = N_{x}. It is natural for us to ask when
a smooth one-form on an open set possesses an antiderivative on that open set. What are
the necessary and sufficient conditions for a smooth one-form to have an antiderivative?

Example 1.8. Let ω = f(x)dx be a smooth one-form on R. We define

F(x) = ∫_{0}^{x} f(t)dt, x ∈ R.

By the fundamental theorem of calculus, F′(x) = f(x) for any x ∈ R and hence ω = dF.
In other words, any smooth one-form on R always possesses an antiderivative on R by the fundamental theorem of calculus.
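Example 1.8 can be reproduced symbolically for a sample integrand (a hypothetical test function; sympy assumed available):

```python
import sympy as sp

# F(x) = int_0^x f(t) dt satisfies F' = f, so omega = f(x) dx = dF.
x, t = sp.symbols('x t')
f = sp.exp(-t**2)                     # sample integrand f(t)
F = sp.integrate(f, (t, 0, x))        # antiderivative with F(0) = 0
assert sp.simplify(sp.diff(F, x) - f.subs(t, x)) == 0
```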

Now we will express the condition M_{y} = N_{x} in terms of the “derivative of differential
forms”. Let us introduce the notion of r-forms for any r ≥ 0.

2. Differential r-Forms

Suppose v_{p} = (a, b)_{p} and w_{p} = (c, d)_{p} are two tangent vectors to R^{2} at p. The area of the
parallelogram spanned by {v_{p}, w_{p}} is the absolute value of the determinant

| a  c |
| b  d |.

Since dx_{p}(v_{p}) = a, dy_{p}(v_{p}) = b, dx_{p}(w_{p}) = c and dy_{p}(w_{p}) = d, the above determinant
can be expressed as

| dx_{p}(v_{p})  dx_{p}(w_{p}) |
| dy_{p}(v_{p})  dy_{p}(w_{p}) |.

This motivates the following definition.

Definition 2.1. Let V be a finite dimensional vector space and ϕ, ψ be two linear functionals
on V, i.e. ϕ, ψ ∈ V^{∗}. We define the wedge product ϕ ∧ ψ of ϕ and ψ to be the map
ϕ ∧ ψ : V × V → R sending (v, w) to the determinant

| ϕ(v)  ϕ(w) |
| ψ(v)  ψ(w) |.

Lemma 2.1. Let ϕ, ψ be linear functionals on a finite dimensional vector space V. Their wedge product ϕ ∧ ψ : V × V → R is a skew symmetric bilinear form on V, i.e.

(1) ϕ ∧ ψ : V × V → R is bilinear;
(2) (ϕ ∧ ψ)(w, v) = −(ϕ ∧ ψ)(v, w).

In particular, taking ϕ = ψ in (2) gives ϕ ∧ ϕ = 0.

Proof. The proof is left to the reader as an exercise.
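The determinant definition and the properties in Lemma 2.1 can be spot-checked in code (the functionals below are our own samples on R^{2}):

```python
# The wedge of two functionals as the 2x2 determinant of Definition 2.1.
def wedge2(phi, psi):
    return lambda v, w: phi(v) * psi(w) - phi(w) * psi(v)

# Sample linear functionals on R^2 (tuples stand in for vectors):
phi = lambda v: 3 * v[0] + v[1]
psi = lambda v: v[0] - 2 * v[1]
form = wedge2(phi, psi)
v, w = (1, 2), (0, 1)
assert form(v, w) == -form(w, v)       # skew symmetry, property (2)
assert wedge2(phi, phi)(v, w) == 0     # phi ^ phi = 0
```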

Definition 2.2. Let ψ_{1}, · · · , ψ_{r} : V → R be linear functionals on a finite dimensional vector
space V. We define the wedge product ψ_{1} ∧ · · · ∧ ψ_{r} : V^{r} → R by

(ψ_{1} ∧ · · · ∧ ψ_{r})(v_{1}, · · · , v_{r}) = det[ψ_{i}(v_{j})]_{i,j=1}^{r}.

Here (v_{1}, · · · , v_{r}) ∈ V^{r}.

Example 2.1. Let ψ_{1}, ψ_{2}, ψ_{3} : V → R be linear functionals on V. For v_{1}, v_{2}, v_{3} ∈ V, one has

(ψ_{1} ∧ ψ_{2} ∧ ψ_{3})(v_{1}, v_{2}, v_{3}) = | ψ_{1}(v_{1})  ψ_{1}(v_{2})  ψ_{1}(v_{3}) |
                                | ψ_{2}(v_{1})  ψ_{2}(v_{2})  ψ_{2}(v_{3}) |
                                | ψ_{3}(v_{1})  ψ_{3}(v_{2})  ψ_{3}(v_{3}) |.

Lemma 2.2. Let ψ_{1}, · · · , ψ_{r} : V → R and ψ_{1} ∧ · · · ∧ ψ_{r} : V^{r} → R be as above. Then
ψ_{1} ∧ · · · ∧ ψ_{r} : V^{r} → R is an alternating r-linear form on V.

Proof. This is left to the reader.

Example 2.2. Let u_{p} = (a_{1}, b_{1}, c_{1})_{p}, v_{p} = (a_{2}, b_{2}, c_{2})_{p}, and w_{p} = (a_{3}, b_{3}, c_{3})_{p} be vectors
in T_{p}R^{3} for p ∈ R^{3}. In high school, we have learned that the volume of the parallelepiped
spanned by u_{p}, v_{p}, w_{p} is the absolute value of the determinant

(dx_{p} ∧ dy_{p} ∧ dz_{p})(u_{p}, v_{p}, w_{p}) = | a_{1}  b_{1}  c_{1} |
                                 | a_{2}  b_{2}  c_{2} |
                                 | a_{3}  b_{3}  c_{3} |.
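Example 2.2 can be checked numerically (the vectors below are our own samples; numpy assumed available):

```python
import numpy as np

# (dx_p ^ dy_p ^ dz_p)(u_p, v_p, w_p) = det[psi_i(v_j)], i.e. the determinant
# of the matrix whose columns are u, v, w; its absolute value is the volume
# of the parallelepiped they span.
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([1.0, 1.0, 1.0])
d = np.linalg.det(np.column_stack([u, v, w]))
assert np.isclose(d, -4.0)          # signed volume; the volume is |d| = 4
# Alternating: swapping two arguments flips the sign.
assert np.isclose(np.linalg.det(np.column_stack([v, u, w])), -d)
```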

Definition 2.3. Let V be a finite dimensional vector space. We set T^{0}(V^{∗}) = R and
T^{1}(V^{∗}) = V^{∗}, and let T^{r}(V^{∗}) be the space of all r-linear forms f on V, i.e. f : V^{r} → R is
r-linear. Elements of T^{r}(V^{∗}) are called r-tensors on V.

Let T_{1}, T_{2}, T : V^{r} → R be r-tensors and a ∈ R. We define the sum T_{1} + T_{2} and the scalar
product aT by

(T_{1} + T_{2})(v_{1}, · · · , v_{r}) = T_{1}(v_{1}, · · · , v_{r}) + T_{2}(v_{1}, · · · , v_{r}),
(aT)(v_{1}, · · · , v_{r}) = aT(v_{1}, · · · , v_{r}),

for (v_{1}, · · · , v_{r}) ∈ V^{r}.

Lemma 2.3. The set T^{r}(V^{∗}) forms a vector space over R. (In fact, it is a vector subspace
of the space of functions F (V^{r}, R) from V^{r} into R.)

For any linear functionals ψ_{1}, · · · , ψ_{r} : V → R, we define the tensor product ψ_{1} ⊗ · · · ⊗ ψ_{r} :
V^{r} → R by

(ψ_{1} ⊗ · · · ⊗ ψ_{r})(v_{1}, · · · , v_{r}) = ψ_{1}(v_{1})ψ_{2}(v_{2}) · · · ψ_{r}(v_{r})

for any (v_{1}, · · · , v_{r}) ∈ V^{r}. We leave it to the reader to verify that

ϕ ∧ ψ = ϕ ⊗ ψ − ψ ⊗ ϕ, see quiz 11.

Proposition 2.1. Let β = {v_{1}, · · · , v_{n}} be a basis for an n-dimensional vector space V and
{ϕ_{1}, · · · , ϕ_{n}} be the dual basis to β. Then {ϕ_{i_{1}} ⊗ · · · ⊗ ϕ_{i_{r}} : 1 ≤ i_{1}, · · · , i_{r} ≤ n} forms a
basis for T^{r}(V^{∗}); the dimension of T^{r}(V^{∗}) is n^{r}.

Proof. See quiz 11.

An r-tensor T ∈ T^{r}(V^{∗}) is alternating if

T(v_{σ(1)}, · · · , v_{σ(r)}) = (sgn σ)T(v_{1}, · · · , v_{r}), (v_{1}, · · · , v_{r}) ∈ V^{r}

for any permutation σ : {1, · · · , r} → {1, · · · , r},^{1} where sgn σ = 1 if σ is an even
permutation and sgn σ = −1 if σ is an odd permutation.

Lemma 2.4. Let Λ^{r}(V^{∗}) be the subset of T^{r}(V^{∗}) consisting of all alternating r-tensors on V.

Then Λ^{r}(V^{∗}) forms a vector subspace of T^{r}(V^{∗}).

Proof. This is left to the reader as an exercise.

Proposition 2.2. Let β = {v_{1}, · · · , v_{n}} be a basis for an n-dimensional vector space V and
{ϕ_{1}, · · · , ϕ_{n}} be the dual basis to β. Then {ϕ_{i_{1}} ∧ · · · ∧ ϕ_{i_{r}} : 1 ≤ i_{1} < · · · < i_{r} ≤ n} forms a
basis for Λ^{r}(V^{∗}). Hence the dimension of Λ^{r}(V^{∗}) is the binomial coefficient n!/(r!(n − r)!).

Proof. We will prove the case when r = 2. For the general case, see quiz 11.
Definition 2.4. Let U be an open subset of R^{n}. We denote Λ^{r}(T^{∗}U) = ⋃_{p∈U} Λ^{r}(T_{p}^{∗}R^{n}).
An r-form on U is a function

η : U → Λ^{r}(T^{∗}U)

such that η(p) ∈ Λ^{r}(T_{p}^{∗}R^{n}) for any p ∈ U.

Since {(dx_{i})_{p} : 1 ≤ i ≤ n} forms a basis of T_{p}^{∗}R^{n} (dual to the standard basis {(e_{i})_{p} : 1 ≤
i ≤ n} of T_{p}R^{n}), the set {(dx_{i_{1}})_{p} ∧ · · · ∧ (dx_{i_{r}})_{p} : 1 ≤ i_{1} < · · · < i_{r} ≤ n} forms a basis for
the vector space Λ^{r}(T_{p}^{∗}R^{n}) by Proposition 2.2. An element of Λ^{r}(T_{p}^{∗}R^{n}) is of the form

∑_{1≤i_{1}<···<i_{r}≤n} a_{i_{1}···i_{r}}(dx_{i_{1}})_{p} ∧ · · · ∧ (dx_{i_{r}})_{p}.

If U is an open subset of R^{n}, a smooth r-form on U is of the form

η = ∑_{1≤i_{1}<···<i_{r}≤n} η_{i_{1}···i_{r}}(x)dx_{i_{1}} ∧ · · · ∧ dx_{i_{r}},

where η_{i_{1}···i_{r}} : U → R are smooth functions on U.

Example 2.3. A two-form on an open subset U of R^{2} is of the form

η = f(x, y)dx ∧ dy.

Here f : U → R is a function.

Example 2.4. A two-form on an open subset U of R^{3} is of the form

η = P(x, y, z)dx ∧ dy + Q(x, y, z)dx ∧ dz + R(x, y, z)dy ∧ dz;

a three-form on U is of the form

α = g(x, y, z)dx ∧ dy ∧ dz.

Here P, Q, R, g : U → R are functions.

^{1} A permutation on a nonempty set X is a bijection on X.

Let I_{r,n} be the set of all r-tuples of integers (i_{1}, · · · , i_{r}) such that 1 ≤ i_{1} < · · · < i_{r} ≤ n.
Let η be an r-form on R^{n}. Then we denote η by

η = ∑_{I∈I_{r,n}} η_{I}(x)dx_{I}

where dx_{I} = dx_{i_{1}} ∧ · · · ∧ dx_{i_{r}} for any I = (i_{1}, · · · , i_{r}) ∈ I_{r,n}. The set of all smooth r-forms
on an open subset U of R^{n} is denoted by Ω^{r}(U).

Let η = ∑_{I} η_{I}dx_{I} and ω = ∑_{I} ω_{I}dx_{I} be r-forms on U and f ∈ C^{∞}(U). We define η + ω
and fη by

η + ω = ∑_{I} (ω_{I}(x) + η_{I}(x))dx_{I}, fη = ∑_{I} f(x)η_{I}(x)dx_{I}.

Proposition 2.3. The set Ω^{r}(U) forms a vector space over R such that for f, g ∈ C^{∞}(U)
and ω, η ∈ Ω^{r}(U),

(1) f(η + ω) = fη + fω;
(2) (f + g)η = fη + gη;
(3) (fg)η = f(gη);
(4) 1η = η.

Proof. This is left to the reader as an exercise.

Remark. In algebra, the above properties tell us that Ω^{r}(U) is a C^{∞}(U)-module.

Let η = ∑_{I} η_{I}(x)dx_{I} be an r-form and ω = ∑_{J} ω_{J}(x)dx_{J} be an s-form. We define their wedge product by

η ∧ ω = ∑_{I,J} η_{I}(x)ω_{J}(x)dx_{I} ∧ dx_{J}.

One can easily check that

η ∧ ω = (−1)^{rs}ω ∧ η.

Definition 2.5. Let η = ∑_{I} η_{I}(x)dx_{I} be a smooth r-form on U ⊆ R^{n}. We define the
derivative d^{r}η of η by

d^{r}η = ∑_{I} dη_{I} ∧ dx_{I}.

Here df denotes the differential of a smooth function f on U.

Example 2.5. Let ω = M(x, y)dx + N(x, y)dy be a smooth one-form on an open set
U ⊆ R^{2}. The derivative of ω is given by

d^{1}ω = (N_{x} − M_{y})dx ∧ dy.

We see that the condition M_{y} = N_{x} is equivalent to the condition d^{1}ω = 0.

Example 2.6. Let η = P dy ∧ dz − Q dx ∧ dz + R dx ∧ dy be a smooth two-form on an open
subset of R^{3}. Then

d^{2}η = (P_{x} + Q_{y} + R_{z})dx ∧ dy ∧ dz.

Definition 2.6. Let η be an r-form on an open subset U of R^{n}. If d^{r}η = 0, η is called a
closed r-form. The set of all closed smooth r-forms on U is denoted by Z_{dR}^{r}(U).

Lemma 2.5. The derivative d^{r} of r-forms on an open subset of R^{n} defines a linear map
d^{r}: Ω^{r}(U ) → Ω^{r+1}(U ). Hence Z_{dR}^{r} (U ) = ker d^{r} is a vector subspace of Ω^{r}(U ).

Proof. The proof follows from the definition.

We will set Ω^{0}(U) = C^{∞}(U); i.e. a smooth zero-form on U is a smooth function on U.
We set d^{0} = d.

Example 2.7. Let f : U → R be a smooth function on an open subset U of R^{2}. Then
df = f_{x}dx + f_{y}dy. The previous example tells us that

d^{1}(d^{0}f) = (f_{yx} − f_{xy})dx ∧ dy.

Since f is smooth, f_{yx} = f_{xy} on U. Hence d^{1}(d^{0}f) = 0. Since f is arbitrary, we find
d^{1} ◦ d^{0} = 0.
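Example 2.7 can be checked symbolically for a sample f (a hypothetical test function; sympy assumed available):

```python
import sympy as sp

# d^0 f = f_x dx + f_y dy; the coefficient of dx ^ dy in d^1(d^0 f) is
# f_yx - f_xy, which vanishes for smooth f, illustrating d^1 o d^0 = 0.
x, y = sp.symbols('x y')
f = sp.sin(x * y) + x**3 * y**2
M, N = sp.diff(f, x), sp.diff(f, y)
assert sp.simplify(sp.diff(N, x) - sp.diff(M, y)) == 0
```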

In fact, we have a general result:

Lemma 2.6. The linear map d^{r} : Ω^{r}(U ) → Ω^{r+1}(U ) satisfies the following properties:

(1) d^{r+1}◦ d^{r}= 0.

(2) (graded Leibniz rule) d^{r+s}(η ∧ ω) = d^{r}η ∧ ω + (−1)^{r}η ∧ d^{s}ω for any r-form η and
any s-form ω.

Let us go back to define the notion of antiderivatives of differential forms.

Definition 2.7. Let ω be a smooth r-form on an open set U ⊆ R^{n}. We say that ω has an
antiderivative on U if there exists a smooth (r − 1)-form η on U such that ω = d^{r−1}η. If ω
has an antiderivative on U, we say that ω is an exact r-form. The set of all exact r-forms
on U is denoted by B_{dR}^{r}(U).

Proposition 1.1 can be reformulated as follows.

Proposition 2.4. Let B be an open ball in R^{2} and ω be a smooth one-form on B. Then ω
is an exact one-form on B if and only if it is a closed one-form on B.

Since B_{dR}^{r}(U) consists of the r-forms of the form d^{r−1}η for η ∈ Ω^{r−1}(U), by definition,
B_{dR}^{r}(U) = Im d^{r−1}. Since d^{r−1} is linear, B_{dR}^{r}(U) is a vector subspace of Ω^{r}(U). Furthermore,
since d^{r} ◦ d^{r−1} = 0, d^{r}(d^{r−1}η) = 0 for any η ∈ Ω^{r−1}(U). This implies that Im d^{r−1} ⊆ ker d^{r},
i.e. B_{dR}^{r}(U) is a vector subspace of Z_{dR}^{r}(U).

Now let us state the fundamental theorem of calculus for smooth differential forms on open sets in a Euclidean space.

Definition 2.8. Let U be an open subset of R^{n}. We say that the fundamental theorem of
calculus holds for r-forms on U if Z_{dR}^{r}(U) = B_{dR}^{r}(U).

Proposition 2.4 can be reformulated as follows:

Proposition 2.5. The fundamental theorem of calculus for one-forms holds on any open
ball in R^{2}.

To know whether the fundamental theorem of calculus holds or not, we need to study
the “difference” between Z_{dR}^{r}(U) and B_{dR}^{r}(U). Thus we consider the quotient space

H_{dR}^{r}(U) = Z_{dR}^{r}(U)/B_{dR}^{r}(U).

The quotient space H_{dR}^{r}(U) is called the r-th de Rham cohomology of U. Thus we can
reformulate the fundamental theorem of calculus in terms of the quotient space H_{dR}^{r}(U):

Proposition 2.6. Let r ≥ 1. The fundamental theorem of calculus holds for smooth
r-forms on U ⊆ R^{n} if and only if H_{dR}^{r}(U) = {0}.

Proposition 2.5 can be rewritten as:

Proposition 2.7. For any open ball B in R^{2}, H_{dR}^{1}(B) = {0}.