
The Hyperplane-Constrained Continuation Method

In the introduction, we noted that the ε2 term can be absorbed by a change of variables; after this rescaling, the two equations are essentially the same. In this section, we give a summarized description of the Hyperplane-Constrained Continuation Method, and we show some figures exhibiting specially shaped solutions arising at some bifurcations.

Suppose that the function g(x) is difficult to solve for, but that there exists a function f(x) which is similar to g(x) and can be obtained more easily. We define a new function H(f, g, τ), with a new variable τ connecting f(x) and g(x), as H(f, g, τ) = (1 − τ)f(x) + τ g(x). When τ = 0, H(f, g, τ) equals f(x), i.e. H(f, g, 0) = f(x). If we then increase τ from 0 to 1, H(f, g, τ) becomes g(x) at τ = 1, i.e. H(f, g, 1) = g(x). In the end, we obtain the solution of g(x).
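To make this concrete, the following minimal Python sketch (our illustration, not part of the method itself) tracks a root of h(x, τ) = (1 − τ)f(x) + τ g(x) as τ increases from 0 to 1, correcting with Newton's method at each step; the example functions f and g are hypothetical.

```python
import numpy as np

def homotopy_root(f, g, df, dg, x0, n_steps=20, newton_iters=20, tol=1e-12):
    """Track a root of h(x, tau) = (1 - tau)*f(x) + tau*g(x) from tau = 0 to tau = 1."""
    x = x0
    for tau in np.linspace(0.0, 1.0, n_steps + 1):
        for _ in range(newton_iters):
            h = (1.0 - tau) * f(x) + tau * g(x)
            if abs(h) < tol:
                break
            dh = (1.0 - tau) * df(x) + tau * dg(x)
            x -= h / dh                      # Newton correction at fixed tau
    return x

# Hypothetical example: start from the easy equation x - 0.5 = 0 and deform it
# into cos(x) - x = 0, whose root is x ~ 0.739085.
f, df = (lambda x: x - 0.5), (lambda x: 1.0)
g, dg = (lambda x: np.cos(x) - x), (lambda x: -np.sin(x) - 1.0)
print(homotopy_root(f, g, df, dg, x0=0.5))
```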

Now we apply the above idea to solve our equation. We rename the variables u and v to u1 and u2. The difficult part of this problem is the β term, i.e. the coupling term, because it contains both u1 and u2: in its presence, the solution u1 is affected by the value of u2, and likewise u2 is affected by u1. We therefore attach the new variable τ to the β term as follows.

H(u(s), \tau(s)) =
\begin{cases}
L u_1 - \lambda_1 u_1 + \mu_1 u_1^{\circ 3} + \beta_{21}\,\tau\, u_1 \circ u_2^{\circ 2} = 0 & \text{in } \Omega,\\
L u_2 - \lambda_2 u_2 + \mu_2 u_2^{\circ 3} + \beta_{12}\,\tau\, u_1^{\circ 2} \circ u_2 = 0 & \text{in } \Omega,\\
u_1 = u_2 = 0 & \text{on } \partial\Omega.
\end{cases}

We assume the solutions move along a curve C. Let a virtual variable s be arc length along the curve, so that u = (u1, u2) and τ are functions of s.

Since the Laplacian ∆ is discretized into a matrix L, the whole system above can be expressed in matrix form.

Here u1 ∘ u2 denotes the Hadamard product of u1 and u2 as described in Section 2, and u1^{∘r} = u1 ∘ ··· ∘ u1 denotes the r-fold Hadamard power of u1.
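For concreteness, here is a sketch of how H(u, τ) can be evaluated with Hadamard arithmetic. For brevity it uses a one-dimensional discrete Laplacian with zero Dirichlet boundary rather than the two-dimensional operator of our experiments, and all names are ours.

```python
import numpy as np

def laplacian_1d(n, h):
    """Second-difference matrix with zero Dirichlet boundary: a 1D stand-in for
    the matrix L that discretizes the Laplacian in the text."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

def H(u1, u2, tau, L, lam1, lam2, mu1, mu2, b21, b12):
    """Evaluate H(u, tau). Elementwise '*' on NumPy vectors is exactly the
    Hadamard product, so u1**3 is the Hadamard power u1^{o3}."""
    H1 = L @ u1 - lam1 * u1 + mu1 * u1**3 + b21 * tau * u1 * u2**2
    H2 = L @ u2 - lam2 * u2 + mu2 * u2**3 + b12 * tau * u1**2 * u2
    return np.concatenate([H1, H2])
```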

First, we compute the initial solution u0 (which holds at τ = 0), written (u0, τ0)^T = (u10, u20, τ0)^T. Second, we introduce a tangent vector t0, which comes from differentiating u and τ with respect to s and then suitably normalizing. Third, we consider a hyperplane P1 whose normal vector is t0 and which contains the point (ū1, τ̄1)^T = (u0, τ0)^T + δ t0, where δ is a small step length. The hyperplane P1 is then intersected by the solution curve C at a point, which we call (u1, τ1)^T. Fourth, we find (u1, τ1)^T using Newton's method, iterating from (ū1, τ̄1)^T until convergence. Once we find (u1, τ1)^T, we continue with the same steps to find (u2, τ2)^T and so on.
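The four steps can be summarized in code. The sketch below is an illustration under the notation above, not the author's implementation; H_full and J_full stand for hypothetical routines returning H and the block matrix [H_u, H_τ]. It performs one predictor-corrector step: step δ along the tangent, then correct with Newton's method on H together with the hyperplane constraint P1.

```python
import numpy as np

def continuation_step(x0, t0, delta, H_full, J_full, newton_iters=20, tol=1e-10):
    """One predictor-corrector step along the curve C. Here x = (u, tau) is one
    stacked vector, t0 is the unit tangent, H_full(x) returns H(u, tau), and
    J_full(x) returns the 2n x (2n+1) matrix [H_u, H_tau]."""
    xbar = x0 + delta * t0                                  # predictor: (u1bar, tau1bar)
    x = xbar.copy()
    for _ in range(newton_iters):
        F = np.concatenate([H_full(x), [t0 @ (x - xbar)]])  # H = 0 and P1 = 0
        if np.linalg.norm(F) < tol:
            break
        J = np.vstack([J_full(x), t0])                      # bordered square Jacobian
        x -= np.linalg.solve(J, F)                          # corrector: Newton step
    return x
```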

The main idea of the Continuation Method is contained in the first through fourth steps; in practice, some problems and computing difficulties arise in going further. We will save an explanation of such details for later in this paper.

We may ask how we get t0 in the first place. In fact, the variable s does not exist explicitly, but we can find t0 another way. We note that the derivative of H(u(s), τ(s)) with respect to s is zero at every point, since H(u(s), τ(s)) = 0 for all (u(s), τ(s)) on the curve C. By the chain rule, Hu us + Hτ τs = 0, so the tangent can be obtained by solving a homogeneous linear system of the form Ax = 0 with A = [Hu, Hτ]. This presents a new problem: a computer may return the zero solution for t̄0, so we must force a nonzero component in t̄0. Without loss of generality, we set the last component of t̄0 to 1. After we solve for a nonzero t̄0, the tangent vector t0 is quickly obtained by normalizing.
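In code, pinning the last component to 1 reduces the homogeneous system to a square solve; a minimal sketch under the notation above:

```python
import numpy as np

def tangent(Hu, Htau):
    """Solve [H_u, H_tau] t = 0 with the last component of t pinned to 1:
    H_u y = -H_tau gives t = (y, 1), which is then normalized."""
    y = np.linalg.solve(Hu, -Htau)
    t = np.append(y, 1.0)
    return t / np.linalg.norm(t)
```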

Eventually, we must give explicit forms for Hu and Hτ. H is derived from two equations, so we write H1 = L u1 − λ1 u1 + μ1 u1^{∘3} + β21 τ u1 ∘ u2^{∘2} and H2 = L u2 − λ2 u2 + μ2 u2^{∘3} + β12 τ u1^{∘2} ∘ u2, where u and τ are the coordinates of the hyperplane P1. Since we want to find (u1, τ1), we substitute u1 and τ1 into P1 and H(u(s), τ(s)) to obtain the system that we solve with Newton's method.
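For reference, differentiating H1 and H2 as defined yields the following forms of Hu and Hτ (a derivation from the stated definitions, not quoted from the original; diag(v) denotes the diagonal matrix whose diagonal is the vector v):

\[
H_u=\begin{bmatrix}
L-\lambda_1 I+3\mu_1\,\mathrm{diag}(u_1^{\circ 2})+\beta_{21}\tau\,\mathrm{diag}(u_2^{\circ 2}) & 2\beta_{21}\tau\,\mathrm{diag}(u_1\circ u_2)\\
2\beta_{12}\tau\,\mathrm{diag}(u_1\circ u_2) & L-\lambda_2 I+3\mu_2\,\mathrm{diag}(u_2^{\circ 2})+\beta_{12}\tau\,\mathrm{diag}(u_1^{\circ 2})
\end{bmatrix},
\qquad
H_\tau=\begin{bmatrix}\beta_{21}\,u_1\circ u_2^{\circ 2}\\[2pt] \beta_{12}\,u_1^{\circ 2}\circ u_2\end{bmatrix}.
\]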

We now solve for (u1, τ1) using Newton's method. In R^1, suppose there is a differentiable function f : R → R for which there exists an x such that f(x) = 0; we want to find x. First, we choose an x0 and compute both f(x0) and f′(x0). We use these values to find x1 as

x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}, \qquad \text{i.e.}\quad x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \quad \text{for } k = 0, 1, 2, \cdots.

A similar mechanism holds for Newton's method in R^n for n ≥ 2.
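As a small worked example (ours, not from the original), the iteration applied to f(x) = x² − 2 from x0 = 1 converges quickly to √2:

```python
def newton_1d(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

print(newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))  # ~1.414213562
```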

Figure 3.1. The image of Newton's method in R^1 taking x0 to x1.

In R^n for n ≥ 2, suppose there is a differentiable function f : R^n → R^n and an x such that f(x) = 0; we want to find x ∈ R^n. Then

x_{k+1} = x_k - J^{-1}(x_k)\, f(x_k) \qquad \text{for } k = 0, 1, 2, \cdots,

where J(x_k) is the Jacobian of f at x_k. In practice, we do not form J^{-1}(x_k), because it is computationally expensive. Let y(x_k) = J^{-1}(x_k) f(x_k); then J(x_k) y(x_k) = f(x_k). Therefore, we solve J(x_k) y(x_k) = f(x_k) and iteratively compute x_{k+1} = x_k − y(x_k). Once ‖f(x_{k+1})‖ is less than a tolerance given by the user, we stop the method.
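A minimal sketch of this procedure (our illustration; the circle-line example is hypothetical):

```python
import numpy as np

def newton_nd(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method in R^n: solve J(x_k) y = f(x_k) rather than forming J^{-1}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:       # stop once ||f(x_k)|| is below tolerance
            break
        y = np.linalg.solve(jac(x), fx)
        x = x - y
    return x

# Hypothetical example: intersect the circle x^2 + y^2 = 4 with the line x = y.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_nd(f, jac, [1.0, 2.0]))       # -> [sqrt(2), sqrt(2)]
```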

In our case the variable x is already in use, so we write a instead of x. Our equation thus becomes

f(a_1) = f(u_1(s), \tau_1(s)) =
\begin{cases}
H(u_1(s), \tau_1(s)) = 0,\\
P_1(u_1(s), \tau_1(s)) = 0.
\end{cases}

Thus, the next step is finding J(a_0) = J(u_0(s), \tau_0(s)):

J(a_0) = J(u_0(s), \tau_0(s)) =
\begin{bmatrix}
H_u(a_0) & H_\tau(a_0)\\
(u_s(a_0))^T & \tau_s(a_0)
\end{bmatrix}.

The lower row of the Jacobian J(a0) equals t̄0^T. Matching this with the forms of Hu and Hτ given earlier and inserting a0, we obtain the Jacobian J(a0). The process then proceeds iteratively to find a1 and so on, but there are inevitable problems associated with this process.

Figure 3.2. The image of the Hyperplane-Constrained Continuation Method on the curve C. The pink curve represents the progress of Newton's method.

From the arguments of Section 2, we should be able to detect ground state solutions almost everywhere in this system. If the computation carries a small error, then solutions will occur a small distance from the original curve C; however, these solutions may not be directed along the curve C. Therefore, we add another two hyperplanes to force the direction of the solutions to lie along the original curve C.

Let the two hyperplanes be P2 and P3 and their normal vectors n2 and n3, respectively. For the iteration that takes x0 to x1, we write the normal vectors as n20 and n30. We require that n20 is perpendicular to n30, and that both n20 and n30 are perpendicular to t0. What are n20 and n30? Like the tangent, they come from setting the derivative of H with respect to s to zero. In an earlier paragraph we solved the homogeneous linear system Ax = 0 by fixing the last component of t0 to 1. We now loosen this restriction and require that n20 and n30 satisfy the following three conditions:

(1)\quad H_u(x_0)\, n_{20} = 0 \quad\text{and}\quad H_u(x_0)\, n_{30} = 0,

(2)\quad t_0^{\,T}\begin{bmatrix} n_{20}\\ 0 \end{bmatrix} = 0 \quad\text{and}\quad t_0^{\,T}\begin{bmatrix} n_{30}\\ 0 \end{bmatrix} = 0,

(3)\quad [\, n_{20}^{T},\ 0\,]\begin{bmatrix} n_{30}\\ 0 \end{bmatrix} = 0.
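The text does not spell out how n20 and n30 are constructed; one plausible sketch (ours, not necessarily the author's construction) is to take the two right singular vectors of Hu(x0) with the smallest singular values, which nearly satisfy condition (1), and orthonormalize them against the u-part of t0 and against each other to enforce conditions (2) and (3):

```python
import numpy as np

def normal_vectors(Hu, t0):
    """Candidate n20, n30: the two right singular vectors of H_u with the smallest
    singular values (so H_u n ~ 0, condition (1)), orthonormalized against the
    u-part of the tangent t0 (condition (2)) and against each other (condition (3))."""
    _, _, Vh = np.linalg.svd(Hu)
    n2, n3 = Vh[-1].copy(), Vh[-2].copy()    # smallest-sigma directions
    tu = t0[:-1] / np.linalg.norm(t0[:-1])   # drop the tau slot of the tangent
    n2 -= (n2 @ tu) * tu
    n2 /= np.linalg.norm(n2)
    n3 -= (n3 @ tu) * tu + (n3 @ n2) * n2
    n3 /= np.linalg.norm(n3)
    return n2, n3
```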

We can make a table showing, for each of the three hyperplanes, the normal direction and the points passed through when x0 goes to x1.

         normal direction     passes through
    P1   t0                   (ū1, τ̄1) and (u1, τ1)
    P2   [n20^T, 0]^T         (ū1, τ̄1) and (u1, τ1)
    P3   [n30^T, 0]^T         (ū1, τ̄1) and (u1, τ1)

The equations used to develop this solution are as follows: H(u1(s), τ1(s)) = 0 together with P1, P2, and P3 in Newton's method. Since we add the two hyperplanes P2 and P3, the Jacobian matrix changes, and we iterate until the norm of the right-hand side is less than the tolerance. If we write the system in the form Ay = b, then the dimension of A is (2n + 3) × (2n + 1), the dimension of y is (2n + 1) × 1, and the dimension of b is (2n + 3) × 1; that is, the system is overdetermined. We can therefore add extra elements to the equations so that they are no longer overdetermined.

We expand the matrix A and the vector y accordingly. The techniques characterized above are in common use, and one can then solve for (u1, τ1)^T.

We expect that once we have (u1, τ1)^T, we can perform the same steps to find (u2, τ2), (u3, τ3), etc., iteratively. In fact, there are some hidden problems in this process. The main problem occurs when a singular value equal to zero appears while solving the linear system Ax = b: if there exists y ≠ 0 such that Ay = 0, then for every α ∈ R we have A(x + αy) = b, so the solution is not unique. Such points are exactly where bifurcations are expected to occur along the curve C.

Assume we have a matrix B ∈ R^{m×n}. Using the Singular Value Decomposition (SVD), we have B = UΣV^T, where U ∈ R^{m×m}, Σ ∈ R^{m×n}, and V ∈ R^{n×n}. The matrices U and V satisfy U^T U = I and V^T V = I. If an element of Σ is written σ_{ij}, then the σ_{kk} are the singular values of B for k = 1, 2, ···, min(m, n), and σ_{ij} = 0 for i ≠ j. We consider the following computation:

B^T B = (U\Sigma V^T)^T (U\Sigma V^T) = V\,\Sigma^T U^T U\,\Sigma V^T = V(\Sigma^T\Sigma)V^T

\Rightarrow\; (B^T B)\,V = V(\Sigma^T\Sigma)V^T V = V(\Sigma^T\Sigma)

\Rightarrow\; (B^T B)\,v_i = \sigma_i^2\, v_i,

where the v_i are eigenvectors of B^T B, the σ_i^2 are the corresponding eigenvalues, and i = 1, 2, ···, n. We call (σ_i^2, v_i) "eigenpairs" for all i. We can use the above relation to find the eigenvalues of B^T B; taking their positive square roots gives the singular values of B.
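A quick numerical check of this relation (the matrix B here is an arbitrary example of ours):

```python
import numpy as np

B = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 1.0]])                   # an arbitrary m x n example
U, S, Vh = np.linalg.svd(B)                  # NumPy returns sigma in descending order
eigvals = np.linalg.eigvalsh(B.T @ B)        # eigenvalues of B^T B, ascending order
print(np.sqrt(eigvals[::-1]))                # positive square roots ...
print(S)                                     # ... match the singular values of B
```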

What kinds of situations lead to such bifurcations? Suppose that there are three consecutive points ua, ub, and uc produced by our method on the curve C; without loss of generality, let the first point be ua, the second ub, and the last uc. Let σ denote the smallest singular value of the matrix A at a point u on the curve C, and let the numbers 1, 2, and 3 denote the rankings of σa, σb, and σc, where 1 denotes the largest value, 3 the smallest, and 2 an intermediate value. In our case, the matrix A, whether overdetermined or expanded, can be used to find the smallest singular values. The possible orderings are as follows.

    σa   σb   σc
    1    2    3
    1    3    2    *
    2    1    3
    2    3    1    *
    3    1    2
    3    2    1

There may be bifurcations in the two starred orderings, i.e. (1, 3, 2) and (2, 3, 1), corresponding to the possibility that some σ_i = 0. Connecting the three points, we note a V shape, as in Figure 3.3. The zero singular value may be located at the red star or the green star. The left picture of Figure 3.4 shows that the distribution of points is roughly linear; we suppose that the zero singular value lies near a new point ud or ue, as seen more clearly in the right-hand picture of Figure 3.4.

Figure 3.4. We want to find the point which has a zero singular value. The left picture shows that the zero singular value may occur where the distribution of points is linear; the right picture shows the ratio described in the text. (The coordinate values themselves are not meaningful.)

When we find ud or ue, we can calculate the associated smallest singular value, called σd or σe, and decide on our next step according to this value. We divide up the area and explain the various cases in Figure 3.5.

First, we divide the space in two at x = σb. According to σa and σb, we can further divide the domain into three areas; let us observe the singular values. The interval [0, σb] is called the F area, the interval [σb, σa] the E area, and the D area contains singular values larger than σa. The L, M, and N areas are divided up similarly. When σd falls in the D area, we call it d1; when σd falls in the E area, we call it d2; and when σd is located in the F area, we call it d3. The points e1, e2, and e3 are named similarly.

When we connect the points σa and d1, and also d1 and σb (i.e. the segments σa d1 and d1 σb), we find that the resulting shape is not a V shape. We find the same result with σa d2 and d2 σb, but the segments σa d3 and d3 σb do form a V shape. Therefore, we cannot judge whether or not a bifurcation exists when σd falls into the D or E areas; the case of σe falling into the L or M areas is similar. We thus continue to iterate while σd or σe lie in the F or N areas. There is also another special case in which we must continue iterating: when σd falls into the E area, we should continue iterating with ue, since if σe then falls into the M area, the segments d2 σb and σb e2 would form a V shape. This is another case where continued iterations are necessary.


Figure 3.5. Some distributions of σd and σe; the horizontal axis is the x value and the vertical axis the σ value. All labels are defined as in the above paragraph.

In numerical computations, the user should always specify a tolerance, here called ε1. When three successive points have smallest singular values less than ε1, we need only further check whether a V shape is formed. If the solution evolves to a V shape, the user must give another tolerance, called ε2. If the solution further evolves to having a smallest singular value less than ε2, that solution is defined as a bifurcation point.
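A crude sketch of this test (our reading of the procedure; ε1 and ε2 are the user-supplied tolerances):

```python
def classify(sigma_a, sigma_b, sigma_c, eps1, eps2):
    """Decide what to do with three consecutive smallest singular values.
    eps1 and eps2 (eps2 < eps1) are the user-supplied tolerances from the text."""
    if max(sigma_a, sigma_b, sigma_c) >= eps1:
        return "continue"                                # not yet suspicious
    is_v = sigma_b < sigma_a and sigma_b < sigma_c       # the middle value dips: a V shape
    if is_v and sigma_b < eps2:
        return "bifurcation"                             # declare a bifurcation point
    return "refine"                                      # bisect further with u_d / u_e
```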

After identifying a bifurcation point, we need to find its associated directions. If ui is a bifurcation point for some i, then we take the penultimate smallest singular value of the matrix A corresponding to ui. If we call this value σps, then one direction of the bifurcation at ui is the singular vector corresponding to σps; another direction is its antiparallel singular vector. There are therefore three directions in which a bifurcation point may advance. In fact, σps for A is the positive square root of the penultimate smallest eigenvalue of A^T A, and the corresponding singular vector of A is the associated eigenvector of A^T A.
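In code, the penultimate smallest singular value and its vector come directly from the SVD; a small sketch (ours):

```python
import numpy as np

def bifurcation_directions(A):
    """Return the two directions at a bifurcation point: the right singular vector
    belonging to the penultimate smallest singular value sigma_ps of A, and its
    antiparallel twin."""
    _, S, Vh = np.linalg.svd(A)     # S is descending, so S[-2] is sigma_ps
    v = Vh[-2]
    return v, -v
```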

We will now name these bifurcations according to specific numerical experiments.

We identify them in order along τ starting at zero, so the branch from zero to the first bifurcation point is named 1. Suppose the first bifurcation point is called b1; then there are three directions available at b1 from the above discussion. The three directions are named 1_1, 1_2, and 1_3, and we can see that 1_1 is the original direction of 1. If we let the solutions walk along 1_2 and then find a new bifurcation, we name its directions 1_21, 1_22, and 1_23, and so forth. The original direction may be called the main shaft, i.e. the path following i_1 for all i = 1, 2, 3, ....

We note that the solutions' shapes are symmetric in i_2 and i_3 for all i, and the resulting shapes can be very interesting.

We must specify a set of information, for example the domain's range, the step length along t, the stopping tolerances of Newton's method, the V-shape tolerances, etc. We set λ1 = λ2 = 1, μ1 = μ2 = 1, and β12 = β21 = −1; the domain is a square whose corners are (−5, −5), (−5, 5), (5, 5), and (5, −5), and each edge is divided at 51 points (we need an odd number). We then obtain several results. Since there are dozens of related figures, we present them in full in the appendix; here we present crude results from the vertical views, showing only typical shapes and bifurcations, as follows.

We first consider the energy variation. Figure 3.6 shows the relation between the τ value and the energy. We find that the energy of a bifurcation branch is less than that on the main shaft. Because the bifurcations happen so frequently, we focus only on the beginning curve.

Figure 3.6. The relation between τ and energy for the branches 1, 1_1, 1_2, 1_11, 1_12, 1_21, 1_22, 1_121, and 1_122 on the beginning curve.

In order to see the details clearly, we also magnify two parts: one goes from 1_2 to 1_21 and 1_22, and the other goes from 1_12 to 1_121 and 1_122. Whether τ increases or decreases from the main shaft to the bifurcation, the energy of the bifurcation is less than the energy on the main shaft.

Figure 3.7. The relation of τ and energy. We magnify two parts from Figure 3.6 in order to see the details clearly.

We describe this behavior in the following paragraphs. The solutions are bell-shaped, so we can portray them by their numbers of bells. From the main picture of the 1_1 bifurcation, we see that the solutions u1 and u2 separate into a one-and-two-bell or a one-and-four-bell configuration. It is interesting that the process that eventually separates into four bells happens as a bell becomes a ring in the one-and-four case. At this stage, one solution still owns only a single simple bell in every bifurcation.

The bifurcations 2_1 and 3_1 are both two-and-two cases. In 2_1, one of the solutions separates fore and aft, and the other separates right and left; because we do not fix a rotation plane, the solutions may rotate freely. In 3_1, the bells tend to align at four points.

The behaviors of 5_1 and 9_1 are alike. 5_1 is a three-and-three case, and 9_1 is a four-and-five case. Early in the process they do not separate noticeably, so they behave, respectively, like two Y shapes and two + shapes. Although their processes are alike, their final behaviors are not the same: in 9_1, one solution separates into five bells and the other into only four.

The behaviors of 6_1 and 7_1 are also alike. They are both four-and-four cases, and they become two-line shapes or spherical-cross shapes in their final states. The behavior of 11_1 is like them, but it is a six-and-six case. Some parts of 12_1 are like 11_1, but in 12_1 the two-line shape is deformed into an X. 22_1 also has a two-line shape and a spherical-cross shape, but we are unsure of the number of bells present.

Most of the shapes we find are cross shapes. Some contain many bells in the center of the cross, as in 22_1. They also typically become "skew" in their bifurcations; for example, see 11_1, 16_1, 19_1, 22_1, 24_1, and 29_1.
