Academic year: 2022

HODGE DECOMPOSITION

1. Some Linear Algebra

In this note, all bases mentioned are ordered bases and all vector spaces are real and finite dimensional.

1.1. Inner Product Space. Let V be an n-dimensional real inner product space. Given any two p-vectors v1∧ · · · ∧ vp and w1∧ · · · ∧ wp in ΛpV, we set

⟨v1∧ · · · ∧ vp, w1∧ · · · ∧ wp⟩_{ΛpV} = det(⟨vi, wj⟩_V)_{i,j=1}^p.

This function can be extended bilinearly to an inner product on ΛpV. In other words, if η = ∑_{i1,···,ip} a_{i1···ip} v_{i1}∧ · · · ∧ v_{ip} and ω = ∑_{j1,···,jp} b_{j1···jp} w_{j1}∧ · · · ∧ w_{jp}, we define

⟨η, ω⟩_{ΛpV} = ∑_{i,j} a_{i1···ip} b_{j1···jp} ⟨v_{i1}∧ · · · ∧ v_{ip}, w_{j1}∧ · · · ∧ w_{jp}⟩_{ΛpV}.

Lemma 1.1. The bilinear form ⟨·, ·⟩_{ΛpV} on ΛpV is an inner product.

Proof. The proof is left to the reader. □

The inner product on ΛpV defined in Lemma 1.1 is called the inner product induced from V.
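As a numerical sanity check (a sketch, not part of the notes; the helper names are mine), the induced inner product of decomposable 2-vectors in R³ can be computed directly from this Gram determinant; in R³ it recovers the squared area |u × v|² of the parallelogram spanned by u and v:

```python
from itertools import permutations
from math import prod

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def det(m):
    # Leibniz formula; adequate for the small matrices used here.
    n = len(m)
    def sign(p):
        return (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
    return sum(sign(p) * prod(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def wedge_inner(us, ws):
    # <u1 ^ ... ^ up, w1 ^ ... ^ wp> = det(<ui, wj>)
    return det([[dot(u, w) for w in ws] for u in us])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(wedge_inner([e1, e2], [e1, e2]))   # orthonormal pair: 1
print(wedge_inner([e2, e1], [e1, e2]))   # antisymmetry in the arguments: -1
# In R^3, <u ^ v, u ^ v> = |u|^2 |v|^2 - <u, v>^2 = |u x v|^2:
u, v = (1, 1, 0), (0, 1, 1)
print(wedge_inner([u, v], [u, v]))       # det [[2, 1], [1, 2]] = 3
```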

1.2. Oriented Vector Space. Let V be an n-dimensional real vector space. Any two ordered bases β = {v1, · · · , vn} and γ = {w1, · · · , wn} for V are related by an n × n real invertible matrix A = (aij)_{i,j=1}^n via

wj = ∑_{i=1}^n aij vi.

We denote A by [1]γβ. Two ordered bases β, γ are said to be equivalent, denoted by β ∼ γ, if det[1]γβ > 0.

Lemma 1.2. On the set of all bases for V, the relation ∼ is an equivalence relation.

Proof. The proof is left to the reader as an exercise. □

An orientation of V is a choice of an equivalence class; a finite dimensional real vector space together with an orientation is called an oriented vector space. A basis is said to be a positive basis for V if it belongs to the given orientation. In other words, V is oriented once a specific ordered basis is given, and a basis is positive if the matrix relating it to the given one has positive determinant.
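Concretely (a small sketch; the function names are mine), orientation equivalence in R² is just the sign of a 2 × 2 determinant:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def same_orientation(A):
    # beta ~ gamma iff the change-of-basis matrix [1]_gamma^beta has det > 0
    return det2(A) > 0

swap = [[0, 1], [1, 0]]         # sends {e1, e2} to {e2, e1}
rot90 = [[0, -1], [1, 0]]       # rotation by 90 degrees
print(same_orientation(swap))   # False: swapping two basis vectors reverses orientation
print(same_orientation(rot90))  # True: rotations preserve orientation
# ~ is transitive because det is multiplicative: det(AB) = det(A) * det(B).
```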

Definition 1.1. An oriented inner product space V is an inner product space together with an orientation.


Let V be an n-dimensional inner product space and β = {e1, · · · , en} be an orthonormal basis for V, i.e. ⟨ei, ej⟩ = δij for 1 ≤ i, j ≤ n. Let V be oriented by β. Denote the dual basis to β by {ψ1, · · · , ψn}. In other words, {ψ1, · · · , ψn} forms a basis for V∗, the dual space¹ of V, and ψi(ej) = δij.

Theorem 1.1. (Riesz representation theorem) Let ψ be a linear functional on V. (The vector space V need not be oriented.) Then there is a unique vector ξψ in V such that

ψ(v) = ⟨v, ξψ⟩ for all v ∈ V.

Proof. Let us first prove the uniqueness. If ξ, ξ′ are two vectors such that ψ(v) = ⟨v, ξ⟩ = ⟨v, ξ′⟩ for all v ∈ V, then ⟨v, ξ − ξ′⟩ = 0 for all v ∈ V. Taking v = ξ − ξ′, we find ⟨ξ − ξ′, ξ − ξ′⟩ = 0. By the positive definiteness of the inner product, ξ − ξ′ = 0. Hence ξ = ξ′.

Now, let us construct such a vector. Notice that if such a vector ξ exists, then ψ(ei) = ⟨ei, ξ⟩. We know that any vector v ∈ V has the Fourier expansion v = ∑_{i=1}^n ⟨v, ei⟩ ei, so ξ must be of the form ξ = ∑_{i=1}^n ψ(ei) ei. In fact, if we take ξ = ∑_{i=1}^n ψ(ei) ei, we can verify that ξ is the required vector. □
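The construction in the proof is easy to test numerically. Here is a sketch in R³ with the standard inner product (the function names are mine): the representing vector is ξ = ∑ ψ(ei) ei.

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def riesz_vector(psi, n):
    # xi = sum_i psi(e_i) e_i, where e_i is the standard basis of R^n
    basis = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    return tuple(psi(e) for e in basis)

psi = lambda v: 2 * v[0] - v[1] + 5 * v[2]    # a linear functional on R^3
xi = riesz_vector(psi, 3)
print(xi)                                      # (2, -1, 5)
for v in [(1, 2, 3), (-1, 0, 4), (0, 0, 0)]:
    assert psi(v) == dot(v, xi)                # psi(v) = <v, xi> for every v
```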

This theorem also allows us to introduce a natural inner product on V∗ by setting

⟨ψ, ϕ⟩_{V∗} = ⟨ξψ, ξϕ⟩_V,

where ξψ, ξϕ are the unique vectors in V representing ψ and ϕ. Let us define a map T : ΛnV → R by

T(v1∧ · · · ∧ vn) = det(ψi(vj))

and extend it linearly to ΛnV, i.e.

T(a v1∧ · · · ∧ vn + b w1∧ · · · ∧ wn) = a T(v1∧ · · · ∧ vn) + b T(w1∧ · · · ∧ wn)

for any a, b ∈ R and v1∧ · · · ∧ vn, w1∧ · · · ∧ wn in ΛnV.

Lemma 1.3. Let V be as above. T is a linear isomorphism of vector spaces.

Proof. As long as we can show that T is surjective, we know that T must be an isomorphism, since dim ΛnV = 1. Note that

T(e1∧ · · · ∧ en) = det(ψi(ej))_{i,j=1}^n = det In = 1.

Hence T(a e1∧ · · · ∧ en) = a for all a ∈ R and thus T is surjective. □

From the definition, we can also easily see that

v1∧ · · · ∧ vn = T(v1∧ · · · ∧ vn)(e1∧ · · · ∧ en)

for any v1∧ · · · ∧ vn ∈ ΛnV. By definition,

T = ψ1∧ · · · ∧ ψn.

Let η be a p-vector in ΛpV. We define an operator Tη : Λn−pV → R by Tη(ω) = T(η ∧ ω). By the Riesz representation theorem, we can choose a unique (n − p)-vector, denoted ∗η, in Λn−pV so that

Tη(ω) = ⟨ω, ∗η⟩_{Λn−pV}.

Equivalently, ∗η is the unique (n − p)-vector in Λn−pV such that

η ∧ ω = ⟨ω, ∗η⟩_{Λn−pV} (e1∧ · · · ∧ en)

for all ω ∈ Λn−pV.

¹The dual space of a vector space V is the space of all linear functionals on V.

Lemma 1.4. The function ∗ : ΛpV → Λn−pV is a linear operator.

Proof. The proof is a corollary of the Riesz representation theorem. □

Now, let us compute ∗1. By definition, T1(e1∧ · · · ∧ en) = ⟨e1∧ · · · ∧ en, ∗1⟩ = 1. Since ⟨e1∧ · · · ∧ en, e1∧ · · · ∧ en⟩ = 1, by uniqueness we obtain ∗1 = e1∧ · · · ∧ en. Similarly,

∗(e1∧ · · · ∧ en) = 1.

Lemma 1.5. Let A be an n × n real matrix and wj = ∑_{i=1}^n aij ei for 1 ≤ j ≤ n. Then

w1∧ · · · ∧ wn = det(A)(e1∧ · · · ∧ en).

Proof. We know w1∧ · · · ∧ wn = a(e1∧ · · · ∧ en) for a = T(w1∧ · · · ∧ wn). By definition,

T(w1∧ · · · ∧ wn) = (ψ1∧ · · · ∧ ψn)(w1, · · · , wn) = det(ψi(wj))_{i,j=1}^n = det A. □

Since ∗ is a linear operator from ΛpV to Λn−pV, we only need to compute ∗(ei1∧ · · · ∧ eip) for all 1 ≤ i1 < · · · < ip ≤ n.

Given an ordered set of indices I = {1 ≤ i1 < · · · < ip ≤ n}, denote eI = ei1∧ · · · ∧ eip. Let I = {1 ≤ i1 < · · · < ip ≤ n} and J = {1 ≤ j1 < · · · < jq ≤ n} be two sets of indices. If I ∩ J ≠ ∅, then eI∧ eJ = 0. If I ∩ J = ∅ and I ∪ J = {1, 2, · · · , n}, then eI∧ eJ = ±e1∧ · · · ∧ en; in other words, ∗eI = ±eJ and ∗eJ = ±eI. In fact,

∗eI = eJ if the ordered tuple (i1, · · · , ip, j1, · · · , jq) is an even permutation of (1, · · · , n), and ∗eI = −eJ if it is an odd permutation.
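The sign in this rule is the parity of the permutation (i1, · · · , ip, j1, · · · , jq), which can be computed by counting inversions. A sketch (the helper names are mine; indices are 1-based as in the text) that also checks the double-star sign ∗∗ = (−1)^{p(n−p)} of Proposition 1.1 below on every basis p-vector for n = 4:

```python
from itertools import combinations

def star_basis(I, n):
    # *e_I = s * e_J, where J is the increasing complement of I in {1, ..., n}
    # and s is the sign of the permutation (i1, ..., ip, j1, ..., jq),
    # computed by counting inversions.
    J = tuple(k for k in range(1, n + 1) if k not in I)
    perm = list(I) + list(J)
    inversions = sum(perm[a] > perm[b]
                     for a in range(len(perm)) for b in range(a + 1, len(perm)))
    return (-1) ** inversions, J

print(star_basis((1,), 3))    # (1, (2, 3)):  *e1 = e2 ^ e3
print(star_basis((2,), 3))    # (-1, (1, 3)): *e2 = -e1 ^ e3 = e3 ^ e1
print(star_basis((1, 3), 3))  # (-1, (2,)):   *(e1 ^ e3) = -e2

# Check ** = (-1)^{p(n-p)} on every basis p-vector for n = 4:
n = 4
for p in range(n + 1):
    for I in combinations(range(1, n + 1), p):
        s1, J = star_basis(I, n)
        s2, I2 = star_basis(J, n)
        assert I2 == I and s1 * s2 == (-1) ** (p * (n - p))
```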

As it turns out, we can prove

Proposition 1.1. ∗∗ = (−1)^{p(n−p)} : ΛpV → ΛpV.

The proof is based on the following identity:

eI∧ eJ = (−1)^{p(n−p)} eJ∧ eI.

Proposition 1.2. For any ω, η ∈ ΛpV, we have

η ∧ ∗ω = ⟨ω, η⟩(e1∧ · · · ∧ en) = ω ∧ ∗η.

Proof. If we can show that η ∧ ∗ω = ⟨ω, η⟩(e1∧ · · · ∧ en), then by the symmetry of the inner product, ω ∧ ∗η = η ∧ ∗ω. So we only need to verify this identity on a basis.

We know that {eI} forms an orthonormal basis for ΛpV. If I ≠ I′ with #I = #I′ = p, then ⟨eI, eI′⟩ = 0. In this case, I ∩ ({1, · · · , n} \ I′) is nonempty, so eI∧ ∗eI′ = 0. Hence the statement holds when ω = eI and η = eI′ with I ≠ I′. Now let us compute eI∧ ∗eI. Suppose ∗eI = eJ, i.e. I ∪ J is an even permutation of {1, · · · , n}; then eI∧ eJ = e1∧ · · · ∧ en (in the odd case both signs flip, giving the same product). On the other hand, ⟨eI, eI⟩ = 1. We find

eI∧ ∗eI = e1∧ · · · ∧ en = ⟨eI, eI⟩(e1∧ · · · ∧ en). □

Lemma 1.6. Let {v1, · · · , vn} be a positive basis for V. Then

∗1 = (det(⟨vi, vj⟩)_{i,j=1}^n)^{−1/2} v1∧ · · · ∧ vn.


Proof. Let A = (aij)_{i,j=1}^n be the matrix relating {e1, · · · , en} and {v1, · · · , vn}, i.e. vj = ∑_{i=1}^n aij ei. Then we know

v1∧ · · · ∧ vn = (det A)(e1∧ · · · ∧ en).

By the orthonormality of {e1, · · · , en}, we obtain

⟨vj, vk⟩ = ∑_{i,l} aij alk ⟨ei, el⟩ = ∑_{i,l} aij alk δil = ∑_{i=1}^n aij aik.

If we denote gjk = ⟨vj, vk⟩ and G = (gjk)_{j,k=1}^n, then G = AᵀA. By the multiplicativity of the determinant, det G = (det A)². Since {v1, · · · , vn} is a positive basis, det A > 0, so det A = (det G)^{1/2}. Then we find

v1∧ · · · ∧ vn = (det G)^{1/2} e1∧ · · · ∧ en.

We finish the proof using the fact ∗1 = e1∧ · · · ∧ en. □

Let V1, · · · , Vm be inner product spaces and V = ⊕_{i=1}^m Vi be the vector space direct sum of V1, · · · , Vm. On V, we introduce a form

(1.1) ⟨(v1, · · · , vm), (w1, · · · , wm)⟩_V = ∑_{i=1}^m ⟨vi, wi⟩_{Vi}.

One can easily verify that (1.1) defines an inner product on V. The vector space V together with the inner product (1.1) is called the direct sum of the inner product spaces V1, · · · , Vm.

Using this construction, we can introduce a natural inner product on ΛV = ⊕_{i≥0} ΛiV, where Λ0V = R, induced from the inner product on V.
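As a quick numerical check of the identity det G = (det A)² (with G = AᵀA the Gram matrix) used in the proof of Lemma 1.6, here is a pure-Python sketch on a concrete basis of R³ (the helper names and the sample matrix are mine):

```python
from itertools import permutations
from math import prod

def det(m):
    # Leibniz formula; adequate for 3 x 3 matrices.
    n = len(m)
    def sign(p):
        return (-1) ** sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n))
    return sum(sign(p) * prod(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Columns of A are the coordinates of v1, v2, v3 in an orthonormal basis.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 2]]
vs = [tuple(A[i][j] for i in range(3)) for j in range(3)]          # v_j = sum_i a_ij e_i
G = [[sum(a * b for a, b in zip(v, w)) for w in vs] for v in vs]   # Gram matrix G = A^T A
print(det(G), det(A) ** 2)    # equal: det G = (det A)^2
```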

2. On Riemannian Manifolds

Let (M, g) be a compact oriented connected smooth Riemannian manifold of dimension n. Then for each x ∈ M, the inner product g(x) on TxM induces an inner product on the cotangent space T∗xM. Locally, if g has the expression

g(x) = ∑_{i,j=1}^n gij(x) dxi ⊗ dxj,

then

g⁻¹(x) = ∑_{i,j=1}^n g^{ij}(x) ∂/∂xi ⊗ ∂/∂xj

is the inner product on T∗xM induced from g. Here (g^{ij}(x)) is the inverse matrix of (gij(x)).

Hence for each x, we obtain the Hodge ∗ operator ∗x : ΛpT∗xM → Λn−pT∗xM. Since M is oriented, we obtain a global operator

∗ : Ωp(M) → Ωn−p(M).

We call dµ = ∗1 the volume form on M. Using Lemma 1.6, locally

dµ = √(det(gij(x))) dx1∧ · · · ∧ dxn.

For each α, β ∈ Ωp(M), we can introduce an inner product:

⟨α, β⟩p = ∫_M ⟨α(x), β(x)⟩_{ΛpT∗xM} dµ = ∫_M α ∧ ∗β.
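As a concrete illustration of the local formula dµ = √(det(gij)) dx1∧ · · · ∧ dxn (a sketch, not part of the notes; names are mine): for the round metric g = dθ² + sin²θ dφ² on the unit 2-sphere, g = diag(1, sin²θ) in spherical coordinates, so √(det g) = sin θ, and integrating dµ recovers the area 4π.

```python
from math import sin, pi

def sphere_area(n_theta=400, n_phi=100):
    # Midpoint-rule integral of sqrt(det g) = sin(theta) over [0, pi] x [0, 2*pi],
    # for the round metric g = diag(1, sin(theta)^2) on the unit sphere.
    d_theta, d_phi = pi / n_theta, 2 * pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            total += sin(theta) * d_theta * d_phi
    return total

print(sphere_area())   # approximately 4*pi
```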


The L²-completion of Ωp(M) is denoted by L²(ΩpM). Similarly, we can introduce an inner product ⟨·, ·⟩ on Ω(M) = ⊕_{p≥0} Ωp(M), whose L²-completion is denoted by L²(ΩM).

Lemma 2.1. Let d∗ : Ωp(M) → Ωp−1(M) be the operator defined by d∗ = (−1)^{n(p+1)+1} ∗d∗, where d : Ωp−1(M) → Ωp(M) is the de Rham differential operator. Then

(2.1) ⟨dα, β⟩ = ⟨α, d∗β⟩

for any α ∈ Ωp−1(M) and β ∈ Ωp(M), i.e. d∗ is the (formal) adjoint of d.

One can check that (d∗)² = 0.

Definition 2.1. The Laplace operator on Ωp(M) is defined by

∆ = (d + d∗)² = dd∗ + d∗d.

A p-form ω ∈ Ωp(M ) is said to be harmonic if ∆ω = 0. The space of harmonic p-forms is denoted by Hp.

Corollary 2.1. ∆ is (formally) self-adjoint:

⟨∆α, β⟩ = ⟨α, ∆β⟩

for any α, β ∈ Ωp(M).

One can easily verify, using (2.1), that

⟨∆α, α⟩ = ⟨dd∗α + d∗dα, α⟩ = ⟨d∗α, d∗α⟩ + ⟨dα, dα⟩ = ‖dα‖² + ‖d∗α‖².

It follows from this identity that

Corollary 2.2. A p-form α is harmonic if and only if dα = 0 and d∗α = 0.

A p-form α is said to be co-closed if d∗α = 0. This corollary tells us that a p-form is harmonic if and only if it is closed and co-closed.

If f ∈ C∞(M) = Ω0(M) is a harmonic function, then df = 0. In this case, f is a locally constant function on M. Since M is connected, f is a constant. We conclude:

Corollary 2.3. Every harmonic function on M is a constant.

One can verify that ∆ : Ωp(M) → Ωp(M) is an elliptic operator. By the theory of elliptic P.D.E., ∆ is a Fredholm operator, i.e. ker ∆ and coker ∆ are finite dimensional vector spaces. Since ∆ is self-adjoint, coker ∆ ∼= ker ∆. This implies that we have an orthogonal direct sum decomposition:

Ωp(M) = Hp ⊕ Im ∆.

This gives us the following orthogonal direct sum decomposition (the Hodge decomposition):

Ωp(M) = Hp ⊕ dΩp−1(M) ⊕ d∗Ωp+1(M).

As a result,

ker d|Ωp(M) = Hp ⊕ dΩp−1(M).

The de Rham cohomology group is the quotient space H^p_dR(M) = ker d|Ωp(M) / dΩp−1(M). We obtain the following isometric isomorphism:

H^p_dR(M) ∼= Hp.

Theorem 2.1. Let (M, g) be a compact oriented connected Riemannian manifold. Then dim_R H^p_dR(M) < ∞.


Let M be as above. We define a bilinear map

B : H^p_dR(M) × H^{n−p}_dR(M) → R

by B([ω], [η]) = ∫_M ω ∧ η, where ω and η are representatives of [ω] and [η], respectively.

Then B is a well-defined map. Notice that for a nonzero harmonic p-form α, the form ∗α is closed (since α is co-closed), and

⟨α, α⟩ = ∫_M α ∧ ∗α > 0.

This shows that B([α], [∗α]) > 0. Since every nonzero class in H^p_dR(M) has a nonzero harmonic representative, B is a nondegenerate² bilinear form.

Theorem 2.2. Let M be as above. The vector space H^p_dR(M) is isomorphic to the dual space of H^{n−p}_dR(M) via B.

Proof. The theorem follows from the following lemma. □

Lemma 2.2. Let V, W be finite dimensional vector spaces and B : V × W → R be a bilinear form. If B is nondegenerate, then V can be identified with W∗ and W can be identified with V∗.

Proof. Let Φ : V → W∗ be the linear map defined by Φ(v) = B(v, ·). Since B is bilinear, Φ(v) ∈ W∗. Since B is nondegenerate, ker Φ = {0}. Similarly, Ψ : W → V∗ defined by w ↦ B(·, w) is linear and ker Ψ = {0}. By the rank-nullity theorem and dim V = dim V∗, dim W = dim W∗, we find dim V = dim W. Again, by the rank-nullity theorem, Φ : V → W∗ and Ψ : W → V∗ are linear isomorphisms. □

This implies that dim H^p_dR(M) = dim H^{n−p}_dR(M).
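Lemma 2.2 can be illustrated in coordinates (a minimal sketch; the names and sample matrix are mine): for B(v, w) = vᵀMw on R² × R², nondegeneracy is exactly invertibility of M, and Φ(v) = B(v, ·) lands in the dual space.

```python
def B(v, w, M):
    # The pairing B(v, w) = v^T M w on R^2 x R^2.
    return sum(v[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

M = [[1, 2],
     [3, 4]]            # det M = -2 != 0, so B is nondegenerate
e1, e2 = (1, 0), (0, 1)

# Phi(v) = B(v, .), written out in the basis dual to {e1, e2}:
Phi = lambda v: (B(v, e1, M), B(v, e2, M))
print(Phi(e1))          # (1, 2): the first row of M
print(Phi(e2))          # (3, 4): the second row of M
# Since the rows of M are linearly independent, ker Phi = {0} and Phi is an
# isomorphism onto the dual space, as in the proof of Lemma 2.2.
```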

Corollary 2.4. Let M be a compact connected oriented smooth manifold of dimension n and bi(M) be its i-th Betti number, i.e. bi(M) = dim H^i_dR(M). Then

(1) bn(M) = 1,

(2) bp(M) = bn−p(M) for 0 ≤ p ≤ n.

²Let V, W be vector spaces. A bilinear form B : V × W → R is nondegenerate if for each v ≠ 0 in V, there is w ∈ W so that B(v, w) ≠ 0.
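For a concrete instance of (2) (quoting a standard fact not proved in these notes): the Betti numbers of the n-torus are bp(Tⁿ) = C(n, p), so Poincaré duality becomes the binomial symmetry C(n, p) = C(n, n − p).

```python
from math import comb

def betti_torus(n):
    # b_p(T^n) = C(n, p): a standard computation, quoted here as an illustration.
    return [comb(n, p) for p in range(n + 1)]

b = betti_torus(3)
print(b)                                          # [1, 3, 3, 1]
assert b[3] == 1                                  # b_n = 1, Corollary 2.4 (1)
assert all(b[p] == b[3 - p] for p in range(4))    # b_p = b_{n-p}, Corollary 2.4 (2)
```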
