
JIA-MING (FRANK) LIOU

Let K be a field. All the vector spaces in this note are over K.

1. Some Linear Algebra

Let V be an n-dimensional vector space over K. A linear functional on V is a linear map ϕ : V → K. The set of all linear functionals, denoted by V∗, forms a vector space. The addition and scalar multiplication are given as follows. Let ϕ, ψ ∈ V∗ and a ∈ K. Define ϕ + ψ and aϕ by

(ϕ + ψ)(v) = ϕ(v) + ψ(v), (aϕ)(v) = aϕ(v).

Theorem 1.1. V∗ has dimension n.

Proof. Let {v1, · · · , vn} be a basis for V. Let ψi : V → K be the linear map such that ψi(vj) = δij. We claim that the set β = {ψi : 1 ≤ i ≤ n} forms a basis for V∗.

For any ϕ ∈ V∗, one can check that

ϕ = Σ_{i=1}^n ϕ(vi)ψi.

Then ϕ is a linear combination of elements of β.

Assume that Σ_{i=1}^n aiψi = 0 for some a1, · · · , an ∈ K. Evaluating the sum at vk, we see that ak = 0. This shows that β is linearly independent. □

Remark. The basis {ψi} for V∗ in Theorem 1.1 is called the dual basis of {vi}.

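Example. For instance, take V = K2 with basis v1 = (1, 0), v2 = (0, 1). The dual basis consists of the coordinate functionals ψ1(x1, x2) = x1 and ψ2(x1, x2) = x2, and any ϕ ∈ V∗ satisfies ϕ = ϕ(v1)ψ1 + ϕ(v2)ψ2.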
Let ϕ1, ϕ2 ∈ V∗. We define a map ϕ1 ∧ ϕ2 : V × V → K by

(ϕ1 ∧ ϕ2)(w1, w2) = det [ϕi(wj)]_{i,j=1}^2.

We can check that ϕ1 ∧ ϕ2 : V × V → K is bilinear and skew-symmetric, i.e. ϕ2 ∧ ϕ1 = −ϕ1 ∧ ϕ2. We call ϕ1 ∧ ϕ2 the wedge product of ϕ1 and ϕ2. Let Λ2V∗ be the set of all skew-symmetric bilinear maps from V × V to K.
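Example. If {ψ1, ψ2} is the dual basis of a basis {v1, v2} of V, then (ψ1 ∧ ψ2)(v1, v2) = ψ1(v1)ψ2(v2) − ψ1(v2)ψ2(v1) = 1, while (ψ1 ∧ ψ2)(v2, v1) = −1, illustrating the skew-symmetry.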

Theorem 1.2. Λ2V∗ is a vector space of dimension \binom{n}{2}.

Proof. Let {vi} be a basis for V and {ψi} be its dual basis. We claim that {ψi ∧ ψj : 1 ≤ i < j ≤ n} forms a basis for Λ2V∗.

Suppose that

Σ_{i<j} aij ψi ∧ ψj = 0.

Evaluating the equation at (vk, vl) with k < l, we obtain akl = 0. Hence the set is linearly independent.

Let B be a skew-symmetric bilinear map. Denote bij = B(vi, vj). Then bij = −bji. For any x = Σ_{i=1}^n xivi and y = Σ_i yivi, we see that

B(x, y) = Σ_{i,j} bij xiyj = Σ_{i<j} bij xiyj + Σ_{i>j} bij xiyj = Σ_{i<j} bij (xiyj − xjyi).

Note that (ψi ∧ ψj)(x, y) = xiyj − xjyi. Hence we find

B(x, y) = (Σ_{i<j} bij ψi ∧ ψj)(x, y).

Therefore B = Σ_{i<j} bij ψi ∧ ψj. □

Definition 1.1. Let p be a natural number and Sp be the symmetric group on p letters. A map f : ×_{i=1}^p V → K is alternating if

f(vσ(1), · · · , vσ(p)) = (sgn σ)f(v1, · · · , vp)

for any σ ∈ Sp and any v1, · · · , vp ∈ V. f is called p-linear if for each 1 ≤ k ≤ p,

f(v1, · · · , avk + bwk, · · · , vp) = af(v1, · · · , vk, · · · , vp) + bf(v1, · · · , wk, · · · , vp)

for any a, b ∈ K and v1, · · · , vp, wk ∈ V.

Let 1 ≤ p ≤ n. Given ϕ1, · · · , ϕp ∈ V∗, we define ϕ1 ∧ · · · ∧ ϕp : ×_{i=1}^p V → K by

(ϕ1 ∧ · · · ∧ ϕp)(w1, · · · , wp) = det [ϕi(wj)]_{i,j=1}^p

for all w1, · · · , wp ∈ V. Then we can check that ϕ1 ∧ · · · ∧ ϕp is p-linear and alternating.

Let ΛpV∗ be the set of all p-linear alternating maps from ×_{i=1}^p V to K, and note that Λ1V∗ = V∗ (by convention, Λ0V∗ = K).

Theorem 1.3. For 0 ≤ p ≤ n, the set ΛpV∗ forms a vector space of dimension \binom{n}{p}.

Proof. Let {vi} be a basis for V and {ψi} be its dual basis. We only need to show that the set {ψi1 ∧ · · · ∧ ψip : 1 ≤ i1 < · · · < ip ≤ n} forms a basis for ΛpV∗; the argument is the same as in Theorem 1.2. □

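Example. For n = 3 and p = 2, Theorem 1.3 gives dim Λ2V∗ = \binom{3}{2} = 3, with basis {ψ1 ∧ ψ2, ψ1 ∧ ψ3, ψ2 ∧ ψ3}.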
2. Tangent Vectors

Let p be a point in Rn. We denote (v)p = p + v, where + is the addition in Rn. Let TpRn be the set of all (v)p with v ∈ Rn. We define the addition and scalar multiplication on TpRn as follows. Let (v)p, (w)p ∈ TpRn, and a ∈ R. We define

(v)p + (w)p = (v + w)p,  a · (v)p = (av)p.

Note that the addition and scalar multiplication here are different from those of Rn.

Proposition 2.1. The set TpRn forms an n-dimensional real vector space.

Proof. The proof is obvious. Let {ei : 1 ≤ i ≤ n} be the standard basis for Rn. Then {(ei)p : 1 ≤ i ≤ n} forms a basis for TpRn. □

Remark. We will use the notation vp for (v)p.


We say that TpRn is the tangent space to Rn at p and elements of TpRn are tangent vectors at p. The dual space of TpRn is denoted by T∗pRn and called the cotangent space of Rn at p.

Let U be an open set in Rn. The algebra of (real-valued) smooth functions on U is denoted by C∞(U). Let p ∈ U and f ∈ C∞(U). We define the directional derivative of f at p along a vector v ∈ Rn by

vp[f] = (d/dt) f(p + tv)|_{t=0}.

Let {ei : 1 ≤ i ≤ n} be the standard basis for Rn. Assume that v = Σ_{i=1}^n viei. From calculus, we know

vp[f] = Σ_{i=1}^n vi (∂f/∂xi)(p).

Then we know that
(1) vp : C∞(U) → R is a linear functional, i.e. vp(af + bg) = avp(f) + bvp(g) for all a, b ∈ R and f, g ∈ C∞(U), and
(2) vp(fg) = vp(f)g(p) + f(p)vp(g).
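Example. For instance, take f(x1, x2) = x1²x2 on R2, p = (1, 1), and v = e1 + 2e2. Then (∂f/∂x1)(p) = 2 and (∂f/∂x2)(p) = 1, so vp[f] = 1 · 2 + 2 · 1 = 4.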

Note that if the tangent vector is given, the directional derivative only depends on the partial derivatives of the function at the point p. Suppose f, g are two smooth functions defined on open sets containing p (not necessarily the same). If f and g agree on an open set containing p, then vp(f) = vp(g). This leads to another definition of tangent vectors of Rn. Let V, W be open sets containing p in Rn. Let f be a smooth function on V and g a smooth function on W. We say that the pair (f, V) is equivalent to the pair (g, W) if there exists an open set Z ⊂ V ∩ W containing p such that f = g on Z. The set of all equivalence classes [(f, V)] is denoted by C∞p. For simplicity, we will simply denote [(f, V)] by [f]. An equivalence class [f] is called a germ at p.

Proposition 2.2. The set C∞p forms an algebra over R.

Let [f] ∈ C∞p. We define [f](p) = f(p), where f is a representative of [f]. Then [f](p) is well-defined. Now, if f1, f2 belong to the same germ [f] at p, we can check that vp(f1) = vp(f2). Hence we can define vp([f]) = vp(f), where f is a representative of [f].

Definition 2.1. A point derivation δp at p is a linear functional on C∞p such that

δp([f][g]) = δp([f])[g](p) + [f](p)δp([g]).

The set of all point derivations at p is denoted by T′pRn.

Remark. By definition, all tangent vectors are point derivations at p.

Given [f] ∈ C∞p, we define

(∂/∂xi)(p)[f] = (∂f/∂xi)(p)

for a representative f of [f]. Then we know that (∂/∂xi)(p) is a point derivation.

Theorem 2.1. The set {(∂/∂xi)(p) : 1 ≤ i ≤ n} forms a basis for T′pRn; hence T′pRn is an n-dimensional vector space.


Proof. It is easy to verify that the set is linearly independent.

Let δp ∈ T′pRn. Denote δp[xi] = vi. Let [f] ∈ C∞p. Choose a representative f of [f].

Consider the Taylor expansion

f(x) = f(p) + Σ_i (∂f/∂xi)(p)(xi − pi) + Σ_{i,j} (xi − pi)(xj − pj) ∫_0^1 (1 − t) (∂²f/∂xi∂xj)(p + t(x − p)) dt.

Using the properties of point derivations, we find

δp[f] = Σ_i vi (∂/∂xi)(p)[f].

In other words, we find

δp = Σ_i vi (∂/∂xi)(p). □

We know that TpRn is a vector subspace of T′pRn. Since they both have the same dimension, they must be equal. In fact, by elementary calculus,

(ei)p[f] = (∂f/∂xi)(p).

We find that (ei)p = (∂/∂xi)(p) by definition. Hence we conclude that the notion of tangent vectors is equivalent to the notion of point derivations.

Remark. Since we always choose a representative of a germ at p, from now on, we will simply use the notation f for [f ].

Let f be a germ at p. We define a linear functional dfp : TpRn → R by dfp(vp) = vp(f).

Then dfp ∈ T∗pRn. Notice that for any vp = (v1, · · · , vn)p ∈ TpRn, we have (dxi)p(vp) = vi. This implies that (dxi)p((ej)p) = δij for all i, j. Hence {(dxi)p : 1 ≤ i ≤ n} is the dual basis to {(ei)p : 1 ≤ i ≤ n}. We conclude that:

Theorem 2.2. The set {(dxi)p : 1 ≤ i ≤ n} forms a basis for T∗pRn.
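In particular, for any germ f at p we can write dfp = Σ_{i=1}^n (∂f/∂xi)(p)(dxi)p, since both sides take the same value on each basis vector (ei)p. For example, if f(x1, x2) = x1x2 on R2, then dfp = p2(dx1)p + p1(dx2)p at p = (p1, p2).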

3. Tangent Maps

Let U be an open subset of Rn. Suppose F : U → Rm is a smooth map, where F = (F1, · · · , Fm). For each p ∈ U, we can define a linear map, called the tangent map,

dFp : TpRn → TF(p)Rm, as follows.

Let f be a germ at F(p). Then f ◦ F is a germ at p. Given any vp ∈ TpRn, we set (dFp(vp))(f) = vp(f ◦ F).

Write y1, · · · , ym for the standard coordinates on Rm, so that f = f(y1, · · · , ym). By the chain rule,

(∂/∂xi)(p)(f ◦ F) = Σ_{j=1}^m (∂f/∂yj)(F(p)) (∂Fj/∂xi)(p).

By definition,

dFp((∂/∂xi)(p))(f) = Σ_{j=1}^m (∂Fj/∂xi)(p) (∂f/∂yj)(F(p)).

Hence we see that

dFp((∂/∂xi)(p)) = Σ_{j=1}^m (∂Fj/∂xi)(p) (∂/∂yj)(F(p)).


If we choose the standard bases for TpRn and TF(p)Rm, the tangent map dFp is represented by the Jacobian matrix of F at p.
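Example. For instance, let F : R2 → R2 be F(x1, x2) = (x1², x1x2). The Jacobian matrix of F at p = (p1, p2) has entries ∂Fj/∂xi(p), namely the rows (2p1, 0) and (p2, p1), so dFp((∂/∂x1)(p)) = 2p1 (∂/∂y1)(F(p)) + p2 (∂/∂y2)(F(p)).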

Let U be an open set in Rn and F : U → Rm be a smooth map. Assume that V is an open set in Rm containing F(U) and G : V → Rk is a smooth map. For any x ∈ U, we have the chain rule

(3.1) d(G ◦ F)x = dGF(x) ◦ dFx : TxRn → TG(F(x))Rk,

where dFx : TxRn → TF(x)Rm and dGF(x) : TF(x)Rm → TG(F(x))Rk.

4. Differential Forms on Rn

Let U be an open set in Rn. We denote

TU = ∪_{p∈U} TpRn,  T∗U = ∪_{p∈U} T∗pRn.

Then we find that TU can be identified with U × Rn and T∗U can be identified with U × (Rn)∗. We have a natural projection π : T∗U → U.

Let f be a smooth function on an open set U ⊂ Rn. The total derivative of f has the formal expression

df = Σ_i (∂f/∂xi) dxi.

Since f is smooth, all the partial derivatives of f are smooth. We shall use this idea to define the notion of a one-form. A smooth one-form ω on U is a formal expression

ω = Σ_i ωi dxi

with ωi ∈ C∞(U). Now we can think of ω as a map from U to T∗U. In fact, for each p ∈ U, ω(p) = Σ_i ωi(p)(dxi)p ∈ T∗pRn. Moreover, (π ◦ ω)(p) = p for all p ∈ U. Therefore a smooth one-form ω is a smooth map ω : U → T∗U such that π ◦ ω = 1U.

Let ΛkT∗U = ∪_{p∈U} ΛkT∗pRn. We can also consider the natural projection¹ π : ΛkT∗U → U. We can check that ΛkT∗U can be identified with U × Λk(Rn)∗.

Definition 4.1. A smooth k-form on U is a smooth map η : U → ΛkT∗U such that π ◦ η = 1U. The set of all smooth k-forms on U is denoted by Ωk(U). We denote Ω(U) = ⊕_{k≥0} Ωk(U).

By definition, a smooth k-form has the formal expression

η = Σ_{1≤i1<···<ik≤n} ηi1···ik dxi1 ∧ · · · ∧ dxik

with ηi1···ik ∈ C∞(U).
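Example. For instance, a smooth 2-form on an open set U ⊂ R3 has the expression η = η12 dx1 ∧ dx2 + η13 dx1 ∧ dx3 + η23 dx2 ∧ dx3 with η12, η13, η23 ∈ C∞(U).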

Let I = (i1, · · · , ik) be a k-tuple with 1 ≤ i1 < · · · < ik ≤ n. We write

dxI = dxi1 ∧ · · · ∧ dxik.

A k-form η is also denoted by η = Σ_I ηI dxI. The sum of two k-forms ω = Σ_I ωI dxI and η = Σ_I ηI dxI on U is defined to be

ω + η = Σ_I (ωI + ηI) dxI.

¹We use the notation π again for this projection, but note that it is different from the projection π : T∗U → U above.


For any smooth function f on U, we define

fω = Σ_I (fωI) dxI.

One can check that:

Proposition 4.1. The set Ωk(U) is a free C∞(U)-module of rank \binom{n}{k}.

On Ω(U), we define the exterior (wedge) product as follows. Given an s-form ω = Σ_I ωI dxI and a k-form η = Σ_J ηJ dxJ, we define an (s + k)-form by

ω ∧ η = Σ_{I,J} ωI ηJ dxI ∧ dxJ.
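Example. On R2, for the one-forms ω = x2 dx1 and η = x1 dx2 we have ω ∧ η = x1x2 dx1 ∧ dx2, while η ∧ ω = x1x2 dx2 ∧ dx1 = −x1x2 dx1 ∧ dx2, consistent with property (2) of the theorem below.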

The exterior product of forms on U has the following properties.

Theorem 4.1. Let ω, η, and θ be k-, s-, and r-forms respectively. Then
(1) (ω ∧ η) ∧ θ = ω ∧ (η ∧ θ),
(2) ω ∧ η = (−1)^{ks} η ∧ ω,
(3) if r = s, then ω ∧ (η + θ) = ω ∧ η + ω ∧ θ.

Proof. Exercise. □

Let ω = Σ_I ωI dxI. We define the exterior derivative dω of ω by

dω = Σ_I dωI ∧ dxI.
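Example. On R2, let ω = −x2 dx1 + x1 dx2. Then dω = −dx2 ∧ dx1 + dx1 ∧ dx2 = 2 dx1 ∧ dx2. Also, for f = x1x2 we have df = x2 dx1 + x1 dx2 and d(df) = dx2 ∧ dx1 + dx1 ∧ dx2 = 0, in line with property (3) of the proposition below.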

Proposition 4.2. d : Ω(U) → Ω(U) is a linear map such that
(1) d : Ωk(U) → Ωk+1(U),
(2) d(ω ∧ η) = dω ∧ η + (−1)^k ω ∧ dη for any k-form ω and any form η,
(3) d² = 0.

Proof. Exercise. □

Let dk = d|Ωk(U). Then d² = 0 implies that dk+1 ◦ dk = 0. In other words, Im dk−1 ⊂ ker dk.

Definition 4.2. The k-th de Rham cohomology of U is the quotient space defined by

Hk(U) = ker dk / Im dk−1.
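Example. For k = 0, ker d0 consists of the smooth functions f on U with df = 0, i.e. the locally constant functions, and Im d−1 = 0 by convention. So if U is connected, H0(U) ≅ R.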

Later, we will study more about the de Rham cohomology of a smooth manifold.

4.1. Pullback of Differential Forms. Let F : U ⊂ Rn → Rm be a smooth map. Then F induces a linear map F∗ from Ωk(V) to Ωk(U), where V is an open set containing F(U). Given a k-form ω on V, with k ≥ 1, we define a k-form F∗ω on U by

(F∗ω)(p)(v1, · · · , vk) = ω(F(p))(dFp(v1), · · · , dFp(vk)), ∀p ∈ U,

where v1, · · · , vk ∈ TpRn. For k = 0 and g ∈ C∞(V), we set F∗g = g ◦ F.

Proposition 4.3. Let F : U ⊂ Rn → Rm be a smooth map and g ∈ C∞(V). Suppose ω, η are k-forms on V. Then
(1) F∗(ω + η) = F∗ω + F∗η,
(2) F∗(gω) = (F∗g)(F∗ω),
(3) if ϕ1, · · · , ϕk are one-forms on V, then F∗(ϕ1 ∧ · · · ∧ ϕk) = F∗ϕ1 ∧ · · · ∧ F∗ϕk.


Let us assume F = (F1, · · · , Fm), i.e. Fi = yi ◦ F. Suppose that ω = Σ_I ωI dyI. Then F∗ω = Σ_I (F∗ωI)(F∗dyI). Using the properties above, we have F∗dyi1 ∧ · · · ∧ F∗dyik = dFi1 ∧ · · · ∧ dFik. Then

F∗ω = Σ_I ωI(F1, · · · , Fm) dFi1 ∧ · · · ∧ dFik.

Using this identity, it is easy to prove the following corollary.

Corollary 4.1. Let ω, η be forms on V. Then F∗(ω ∧ η) = F∗ω ∧ F∗η.
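Example. Let F : R → R2 be F(t) = (cos t, sin t) and ω = y1 dy2 − y2 dy1, where y1, y2 are the coordinates on R2. Then F∗ω = cos t d(sin t) − sin t d(cos t) = (cos²t + sin²t) dt = dt.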

Proposition 4.4. Let F : U ⊂ Rn → Rm and G : V ⊂ Rm → Rk be smooth maps, where V is an open set containing F(U). Then

(G ◦ F)∗ω = F∗(G∗ω)

for all forms ω.

Proposition 4.5. Let F : U → Rm be a smooth map and ω be a k-form. Then d(F∗ω) = F∗dω.
