奈米元件之微觀熱傳分析(3/3) (Microscopic Heat Transfer Analysis of Nano Devices, Part 3/3)


National Science Council (Executive Yuan) Research Project Final Report

奈米元件之微觀熱傳分析(3/3) (Microscopic Heat Transfer Analysis of Nano Devices, Part 3/3)

Project type: individual project. Project number: NSC91-2212-E-002-072-. Period of execution: August 1, 2002 to July 31, 2003. Executing institution: Institute of Applied Mechanics, National Taiwan University. Principal investigator: 張建成. Report type: complete report. Appendices: reports on attendance at international conferences and published papers. Availability: this project report is open to public inquiry.

October 31, 2003


I. Preface

Following the length scale hierarchy, microstructure modeling can be grouped into the nanoscopic (Raabe, 1998), microscopic, mesoscopic, and macroscopic regimes. The term nanoscopic refers to the atomic level, microscopic to lattice defect ensembles, mesoscopic to lattice defect ensembles at the grain scale, and macroscopic to the sample geometry. Hence, in different ranges of scale, we have to choose different numerical simulations. For example, at the meso-macro level of material simulation, finite element, finite difference, and boundary element methods are often used to calculate averaged solutions of differential equations, such as those of electromagnetic fields and hydrodynamics. At the micro-meso level, spring models and dislocation dynamics are applied to fracture mechanics and crystal plasticity. In this study, we pay more attention to simulations at the nano-micro level. Ab initio theory is the most fundamental way to simulate behavior at the atomic scale, e.g. the Hartree-Fock method and density functional theory (DFT) (Parr et al, 1989). Additionally, the solution methods can be separated into probabilistic and deterministic approaches. The first class is the Monte Carlo (MC) method (Thijssen, 1999), a useful tool for computing problems of thermodynamics or diffusion; the second is molecular dynamics simulation (MDS).

II. Academic purpose

First, we illustrate some characteristics of MC and MDS. The Monte Carlo technique mimics a canonical ensemble by executing a large number of successive stochastic computer experiments using uncorrelated random numbers which form a Markov chain. This method allows one to explore, within reasonable computational time, a large number of states in phase space. A major assumption of MDS is ergodicity: instead of ensemble averages, time averages are taken over hundreds of thousands or millions of time steps, or block means are averaged to reduce correlation errors in time, and thermodynamic quantities can then be calculated. However, MDS has its limitations in statistics. It is known that Monte Carlo integration allows one to visit a large number of states along a stochastic trajectory, whereas MDS follows a single deterministic trajectory. This means that MDS passes through fewer phase states than the Monte Carlo method, which explores all 6N degrees of freedom. This motivates us to introduce a methodology into MDS to make its ensemble average more reasonable: the iterative nonlinear Gaussianization algorithm. In statistical mechanics, the ensemble average is computed over many representative points in phase space and is slightly different from the time average computed by MDS through ergodicity. Instead of taking the average from one MDS, we want to create as many representative points as possible once the system approaches equilibrium, and calculate an average that is closer to the concept of the ensemble average than that from a single simulation.

III. Review paper

From the history of MDS, Alder and Wainwright (Alder et al, 1959, 1960) proposed a general method to calculate the behavior of several hundred classical particles and then used the hard-sphere model to simulate the first-order transition. Rahman (Rahman, 1964) simulated a system with the Lennard-Jones potential and discussed the pair-correlation function and the self-diffusion constant. Nosé (Nosé, 1984) proposed an approach that introduces a single degree of freedom, i.e., the heat bath, into the system to control the temperature and pressure instead of simple control of temperature or energy. Ciccotti and Ryckaert (Ciccotti et al, 1980) included Brownian motion to simulate the Langevin equation; they used a random-force model to eliminate the high degrees of freedom of rigid molecules and thereby simplify MDS. More recently, Car and Parrinello (Car et al, 1985) combined MDS with ab initio calculation, i.e., density functional theory. This approach does not use a pair-potential approximation to the system; instead, density functional theory is applied to predict the interaction between the particles, and the motion is then simulated to account for quantum effects. These methods share one common feature: they take only one simulated trajectory to calculate the macroscale properties. This is fundamentally different from the concept of ensemble averages in statistical mechanics. Our study focuses on this problem and aims to propose a methodology that makes the averaging in MDS more reasonable.

The concept of the iterative nonlinear Gaussianization algorithm (INGA) is to transform a set of correlated random variables to the standard multivariate Gaussian N(0, I_d) (Lin, 2000; Lin et al, 2002), so that the statistical dependence among the transformed coordinates is minimized. INGA consists of forward and backward parts: the forward process brings correlated random variables to independent ones with decoupled Gaussian (multi-normal) distributions; sampling an independent Gaussian sample and proceeding with the backward algorithm gives an independent replica of the original sample. One application of INGA lies in image analysis and simulation from a given picture whose probability distribution is unknown. In particular, we cannot obtain a reliable estimate of probability density functions from a finite number of samples because of the limited information. INGA can overcome these difficulties.

IV. Methodology

In molecular dynamics simulations, we solve the equations of motion:

$$\ddot{\vec r}_i = \vec F_i = -\nabla_i \Phi \,, \qquad (1)$$

where $\vec r_i$, $\vec F_i$ and $\Phi$ denote the position of the $i$th particle, the force on it, and the potential, respectively. In many studies, the Lennard-Jones model is used for the potential (Haile, 1992),

$$\Phi(r) = 4\left(\frac{1}{r^{12}} - \frac{1}{r^{6}}\right). \qquad (2)$$

The above equations are written in a dimensionless form with the reference length σ, reference energy ε, and reference mass m, where σ and ε are, respectively, the length and energy scales of the standard model.

Table 1: Parameters of the Lennard-Jones potential for the inert gases (Kittel, 1996).

        ε (10^-16 erg)    σ (Å)
  He         14           2.56
  Ne         50           2.74
  Ar        167           3.40
  Kr        225           3.65
  Xe        320           3.98

The Lennard-Jones potential contains two parts: the attractive term $1/r^6$ and the repulsive term $1/r^{12}$. The former describes the interaction between two molecules at long range, especially for closed-shell molecules. As two uncharged molecules approach one another, their electron clouds undergo a deformation, producing induced dipole moments. These last only a short time, but during this time the dipoles interact with each other; this is called the London or van der Waals force, whose behavior is $1/r^6$ (Brehm, 1989). Hence, the attractive term comes from a real quantum effect for non-polar, neutral atoms; nevertheless, it is often used as an approximation for other atoms as well. Unlike the attraction, the repulsive interaction is not well characterized. It can be approximated by an exponential or by an inverse power term; the inverse twelfth power is easily computed, since $1/r^{12}$ is the square of $1/r^6$. Finally, the parameters σ and ε are extracted from experimental data, such as crystal structure, thermodynamic properties, and kinetic coefficients; they are hard to obtain from first principles. Table 1 lists the values fitted for the inert gases (Kittel, 1996).

In order to avoid large repulsive forces between particles, we use the fcc structure as the initial distribution of the particle locations. Each initial velocity component is assigned from the Maxwell distribution

$$f(v) = \frac{1}{\sqrt{2\pi T}}\exp\left(-\frac{v^2}{2T}\right). \qquad (3)$$
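A minimal sketch of this initialization (Python, illustrative; the helper names are ours, with `ncells` fcc cells per edge so that N = 4·ncells³, e.g. N = 256 for ncells = 4):

```python
import random

def init_fcc(ncells, density):
    """Place 4*ncells**3 atoms on an fcc lattice filling a cubic box (reduced units)."""
    n = 4 * ncells ** 3
    box = (n / density) ** (1.0 / 3.0)      # box edge length from N and density
    a = box / ncells                        # fcc unit-cell edge
    basis = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
    return [((i + bx) * a, (j + by) * a, (k + bz) * a)
            for i in range(ncells) for j in range(ncells) for k in range(ncells)
            for bx, by, bz in basis], box

def init_velocities(n, temp, seed=0):
    """Draw each component from the Maxwell distribution (3), a Gaussian of
    variance T in reduced units, then remove the net drift of each component."""
    rng = random.Random(seed)
    vel = [[rng.gauss(0.0, temp ** 0.5) for _ in range(3)] for _ in range(n)]
    for d in range(3):
        drift = sum(v[d] for v in vel) / n
        for v in vel:
            v[d] -= drift
    return vel
```

Removing the drift ensures zero total momentum, so the sample does not translate as a whole.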

In this study, we consider (i) the canonical NTV ensemble and (ii) the micro-canonical ENV ensemble for computing different physical properties. To make sure that the system relaxes from the initial state to equilibrium, we use a simple temperature control for the NTV system and a simple energy control for the ENV system.

(i) NTV system. The physical properties to be computed from this ensemble include (Haile, 1992):

$$\frac{\bar P}{\rho T} = 1 - \frac{\rho}{6T}\int_0^\infty r\,\frac{d\Phi(r)}{dr}\,g(r)\,4\pi r^2\,dr = \frac{\langle P\rangle}{\rho T} - \frac{16\pi\rho}{3T r_c^3}\,, \qquad (4)$$

$$\frac{\bar U}{N} = \frac{\rho}{2}\int_0^\infty \Phi(r)\,g(r)\,4\pi r^2\,dr = \frac{\langle U\rangle}{N} - \frac{8\pi\rho}{3 r_c^3}\,, \qquad (5)$$

$$\frac{\bar E}{N} = \frac{\bar U}{N} + \frac{3T}{2}\,. \qquad (6)$$

In these formulas, $r_c$, ρ, T, $\bar P$, $\bar U$ and $\bar E$ are, respectively, the cutoff length, density, temperature, pressure, potential energy and total energy computed by the NTV ensemble, and g(r) is the radial distribution function. Because the Lennard-Jones model is a short-range pair potential, pair interactions beyond the cutoff length are usually neglected to save computational time in MDS. Equations (4) and (5) therefore contain long-range correction terms (McQuarrie, 1976):

$$U_{LR} = 2\pi\rho\int_{r_c}^{\infty}\Phi(r)\,g(r)\,r^2\,dr \approx -\frac{8\pi\rho}{3r_c^3}\,, \qquad (7)$$

$$\frac{P_{LR}}{\rho T} = -\frac{2\pi\rho}{3T}\int_{r_c}^{\infty} r\,\frac{d\Phi(r)}{dr}\,g(r)\,r^2\,dr \approx -\frac{16\pi\rho}{3T r_c^3}\,, \qquad (8)$$
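The tail estimates in (7)-(8) follow from setting g(r) ≈ 1 beyond $r_c$ and keeping the attractive term; they are easy to verify numerically. A sketch (Python, illustrative; trapezoidal quadrature with a finite upper limit standing in for infinity):

```python
import math

def u_lr_numeric(rho, rc, rmax=200.0, n=200000):
    """Trapezoidal evaluation of U_LR = 2*pi*rho * int_{rc}^{rmax} phi(r) g(r) r^2 dr
    with g(r) = 1, to compare with the estimate -8*pi*rho/(3*rc**3) of eq. (7)."""
    phi = lambda r: 4.0 * (r ** -12 - r ** -6)
    h = (rmax - rc) / n
    s = 0.5 * (phi(rc) * rc ** 2 + phi(rmax) * rmax ** 2)
    for i in range(1, n):
        r = rc + i * h
        s += phi(r) * r ** 2
    return 2.0 * math.pi * rho * s * h

rho, rc = 0.7, 2.5                                # values used later in this report
estimate = -8.0 * math.pi * rho / (3.0 * rc ** 3)  # eq. (7) tail estimate
```

For ρ = 0.7 and $r_c$ = 2.5 the two agree to about three decimal places; the small residual comes from the neglected $1/r^{12}$ term.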


where $U_{LR}$ and $P_{LR}$ are the long-range corrections of the potential energy per atom and of the pressure, respectively. A quantity $\langle X\rangle$ denotes the statistical average of the microscopic X from MDS, i.e.,

$$\langle U\rangle = \frac{1}{M}\sum_{k=1}^{M}\sum_{\alpha}\sum_{i<j}\Phi\big[\,r_{ij}(k\Delta t) - \alpha L\,\big]\,, \qquad (9)$$

$$\frac{\langle P\rangle}{\rho T} = 1 - \frac{1}{3MNT}\sum_{k=1}^{M}\sum_{\alpha}\sum_{i<j}\big|\,r_{ij}(k\Delta t) - \alpha L\,\big|\;\frac{du\big[\,|r_{ij}(k\Delta t) - \alpha L|\,\big]}{dr_{ij}}\,, \qquad (10)$$

where N is the number of atoms, M is the total number of sampled time steps with time step Δt, α is the cell translation vector and L is the edge length of the primary cell.

(ii) ENV system. The physical properties to be computed include the temperature and pressure,

$$\bar T = \frac{2\langle E_k\rangle}{3N} = \frac{2(E - \bar U)}{3N}\,, \qquad (11)$$

$$\bar P = \rho\bar T\left(\frac{\langle P\rangle}{\rho\bar T} - \frac{16\pi\rho}{3\bar T r_c^3}\right), \qquad (12)$$

$$\frac{\bar U}{N} = \frac{\langle U\rangle}{N} - \frac{8\pi\rho}{3r_c^3}\,, \qquad (13)$$

as well as (Pearson, 1985)

$$\bar C_v = \left[\,N - N\bar T\left(\frac{3N}{2} - 1\right)\langle E_k^{-1}\rangle\,\right]^{-1}, \qquad (14)$$

$$\bar\kappa_s = \left[\,7\bar P - \frac{16\rho\bar T}{3} - 8\rho\bar U - \frac{N}{\rho\bar T}\langle(\delta P)^2\rangle\,\right]^{-1}, \qquad (15)$$

$$\bar\gamma_v = \frac{2}{3}\bar C_v\left[\,\rho - \frac{1}{\bar T^2}\langle\delta E_k\,\delta P\rangle\,\right], \qquad (16)$$

where $E_k$ is the kinetic energy of the microstate, δP = P − $\bar P$, and $\delta E_k = E_k - 1.5N\bar T$. $\bar C_v$, $\bar\kappa_s$ and $\bar\gamma_v$ are, respectively, the constant-volume heat capacity, adiabatic compressibility and thermal pressure coefficient computed by the ENV ensemble. Notice that the formula (13) for the potential energy per atom is identical to that in (5).
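Formula (14) admits a simple sanity check: in the ideal-gas limit, where U = 0 and the kinetic energy of the microstate does not fluctuate, it must return $\bar C_v = 3/2$ per atom. A sketch (Python, illustrative; the helper name is ours):

```python
def cv_microcanonical(ek_samples, n_atoms):
    """Heat capacity per atom from eq. (14),
    Cv = [N - N*T*(3N/2 - 1)*<Ek**-1>]**-1, with T = 2*<Ek>/(3N) as in eq. (11)."""
    m = len(ek_samples)
    mean_ek = sum(ek_samples) / m
    mean_inv = sum(1.0 / e for e in ek_samples) / m
    temp = 2.0 * mean_ek / (3.0 * n_atoms)
    return 1.0 / (n_atoms - n_atoms * temp * (1.5 * n_atoms - 1.0) * mean_inv)
```

With constant $E_k$ the bracket reduces to $N - N(3N-2)/(3N) = 2/3$, so $\bar C_v = 3/2$ exactly, independent of N.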


Boltzmann H theorem. An initial time step for molecular dynamics simulations is often taken to be 0.003 (in reduced units) for easy relaxation of the system to equilibrium. Once the condition of the Boltzmann H theorem (Liboff, 1998) is satisfied,

$$\frac{dH}{dt} \le 0\,, \qquad (17)$$

where $H = \int d^3v\, f(\vec v)\ln f(\vec v)$, the system is considered to have reached an equilibrium microstate for the first time. This usually takes about 2,000 time steps. Then, we increase the time step to 0.009 so that the system passes through as much of phase space as possible, and run the MDS until the final step, which is 12,000 for the NTV system or 22,000 for the ENV system in the later numerical tests. By this criterion, we can assert that the system has approached equilibrium in its velocity distribution. In addition to the velocity, we also check whether the position distribution approaches equilibrium. Here, we compute the radial distribution function g(r), which measures the static structure of the system, i.e., the probability of finding two atoms separated by a distance r ± Δr (Haile, 1992):

$$\rho g(r) = \frac{1}{N}\left\langle\sum_{i}^{N}\sum_{j\neq i}^{N}\delta\big[\vec r - \vec r_{ij}\big]\right\rangle, \qquad (18)$$

where $\vec r_{ij}$ is the vector between the centers of atoms i and j.
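Returning to the H-theorem check: H can be estimated directly from the sampled velocities by a histogram. A sketch (Python, illustrative; names are ours): for one velocity component of a Maxwellian at temperature T, i.e., a Gaussian of variance T, the exact value is $H = -\tfrac{1}{2}\ln(2\pi e T)$, which the estimator should approach.

```python
import math, random

def h_functional(vx, nbins=100):
    """Histogram estimate of H = int f(v) ln f(v) dv for one velocity component
    (empty bins contribute zero, since f*ln f -> 0 as f -> 0)."""
    lo, hi = min(vx), max(vx)
    dv = (hi - lo) / nbins
    counts = [0] * nbins
    for v in vx:
        counts[min(int((v - lo) / dv), nbins - 1)] += 1
    n = len(vx)
    h = 0.0
    for c in counts:
        if c:
            f = c / (n * dv)          # density estimate in this bin
            h += f * math.log(f) * dv
    return h

T = 1.191
rng = random.Random(1)
samples = [rng.gauss(0.0, T ** 0.5) for _ in range(400000)]
```

In an actual run one would evaluate this every few steps and watch it decrease toward the Maxwellian value, as in (17).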

In the above, we emphasized that for an arbitrary sample, the INGA forward process can determine the original distribution it comes from and record the transformations. Then, by sampling independent Gaussian variables and inverting the recorded process, we can generate as many independent samples from this original distribution as we wish. But how independent are they? We patch the new samples into a time series and use time correlation functions to show that the new series constructed by INGA is less correlated than that from direct MDS. We compute the correlation of a variable A per atom, i.e., (Thijssen, 1999)

$$C_A(k) = \langle A_n A_{n+k}\rangle - \langle A_n\rangle^2\,, \qquad (19)$$

and then normalize it to unity by $C_A(k)/C_A(0)$.
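A direct estimator of (19), normalized by its k = 0 value, can be sketched as follows (Python, illustrative; the function name is ours):

```python
def autocorrelation(series, kmax):
    """Normalized autocorrelation, eq. (19): C(k) = <A_n A_{n+k}> - <A_n>**2,
    returned as C(k)/C(0) for k = 0, ..., kmax."""
    n = len(series)
    mean = sum(series) / n
    c = []
    for k in range(kmax + 1):
        m = n - k
        c.append(sum(series[i] * series[i + k] for i in range(m)) / m - mean * mean)
    return [ck / c[0] for ck in c]
```

For uncorrelated samples the normalized values drop to approximately zero for all k ≥ 1, which is the behavior we expect of the INGA-patched series.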

Numerical Algorithms for Molecular Dynamics Simulations

(i) Gear's predictor-corrector algorithm (Gear, 1971). This method contains three steps: predict, evaluate, and correct. We illustrate the concept by considering the Taylor expansion of the molecular positions $\vec r_i$ and their derivatives at time $t_n$. This is a one-step method, using the information at the present time $t_n$ to calculate that at the next time $t_{n+1}$. We write

$$\tilde y_{n+1} = B \cdot y_n\,, \qquad (20)$$

where

$$y_n = \left(\vec r_i,\; h\vec r_i^{\,\prime},\; h^2\vec r_i^{\,\prime\prime}/2,\; h^3\vec r_i^{\,\prime\prime\prime}/6,\; \ldots\right)^t, \qquad (21)$$

$h = t_{n+1} - t_n$, and $\tilde y_{n+1}$ is the predicted value of y at the next time $t_{n+1}$, written as

$$\tilde y_{n+1} = \left(\vec r_i^{\,p},\; h\vec r_i^{\,\prime p},\; h^2\vec r_i^{\,\prime\prime p}/2,\; h^3\vec r_i^{\,\prime\prime\prime p}/6,\; \ldots\right)^t. \qquad (22)$$

We then write the actual approximation to $y_{n+1}$ by adding a correction to $\tilde y_{n+1}$, i.e.,

$$y_{n+1} = \tilde y_{n+1} + \Delta\,\vec\alpha\,, \qquad (23)$$

where

(a) Δ is a scalar correction. In MDS, we often choose it from the difference between the predicted acceleration $\vec r_i^{\,\prime\prime p}$ obtained in (20) and the new acceleration calculated from the predicted positions $\vec r_i^{\,p}$,

$$\vec r_i^{\,\prime\prime} = \vec F_i = -\sum_{j\neq i}\frac{\partial u(r_{ij}^p)}{\partial r_{ij}^p}\,\hat r_{ij}^{\,p}\,, \qquad (24)$$

so that

$$\Delta = \Delta R_2 = \left(\vec r_i^{\,\prime\prime} - \vec r_i^{\,\prime\prime p}\right)h^2/2\,; \qquad (25)$$


(b) $\vec\alpha$ is a vector of coefficients determined by various methods. Gear determined the values by applying each algorithm to linear differential equations and analyzing the resulting stability matrices. For example, for the fifth-order predictor the value of $\vec\alpha$ is

$$\vec\alpha = (\alpha_0, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5) = \left(\frac{3}{16},\; \frac{251}{360},\; 1,\; \frac{11}{18},\; \frac{1}{6},\; \frac{1}{60}\right). \qquad (26)$$
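One predict-evaluate-correct cycle can be sketched for a single coordinate (Python, illustrative). We follow the usual scaled-derivative formulation of (20)-(26): y[j] stores $h^j r^{(j)}/j!$, the predictor matrix B becomes the Pascal triangle of binomial coefficients, and the correction (23) applies the α of (26):

```python
import math

ALPHA = (3 / 16, 251 / 360, 1.0, 11 / 18, 1 / 6, 1 / 60)   # eq. (26)

def gear5_step(y, h, acc):
    """One predict-evaluate-correct cycle for r'' = acc(r), eqs. (20)-(25)."""
    # predict: Taylor shift of the scaled derivatives, y_pred[i] = sum_j C(j,i)*y[j]
    yp = [sum(math.comb(j, i) * y[j] for j in range(i, 6)) for i in range(6)]
    # evaluate: acceleration at the predicted position, eq. (24)
    a_new = acc(yp[0])
    # correct: eq. (25), Delta = (a_new - a_pred)*h**2/2, applied through alpha
    delta = a_new * h * h / 2.0 - yp[2]
    return [yp[i] + ALPHA[i] * delta for i in range(6)]
```

For the harmonic oscillator `acc = lambda x: -x`, starting from the exact scaled derivatives of cos(t), a thousand steps of h = 0.001 reproduce cos(1) to high accuracy.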

(ii) Symplectic integrators (Yoshida, 1990). Symplectic integrators are numerical integration algorithms for Hamiltonian systems that conserve the symplectic geometry of the phase space. For a common integration algorithm, such as Gear's 5th-order algorithm, we can take a higher-order approximation to ensure the accuracy of each step during the simulation; however, over a long run we cannot ensure that the error stays bounded. For example, the energy of the system, which is conserved along the exact trajectory, may deviate from its initial value with a steady increase or decrease along the numerical trajectory. Hence, if the integration algorithm is not symplectic, the energy error grows, which is most undesirable. Ruth (Ruth, 1983) and Feng (Feng, 1985) developed methodologies for solving the Hamiltonian equations of motion while preserving the geometrical structure of the phase space, and Yoshida (Yoshida, 1990) proposed a procedure for producing higher-order symplectic integrators. Here, we use the 4th-order symplectic integrator to calculate the time evolution of the system in the phase transition.

First of all, we shall explain why the symplectic geometry of a Hamiltonian system is conserved, so that the map from (p(0), q(0)) to (p(t), q(t)) is a canonical transformation. For simplicity, we consider a two-dimensional phase space; the algorithm extends easily to higher dimensions. Let z = (p, q); then the equation of motion is (Thijssen, 1999)

$$\dot z = J\nabla H(z)\,, \qquad (27)$$

where

$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad (28)$$

and

$$\nabla H(z) = \begin{pmatrix} \partial H(z)/\partial p \\ \partial H(z)/\partial q \end{pmatrix}. \qquad (29)$$

The exact solution of (27) can be formally written as

$$z(t) = \exp(tJ\nabla H)\,[z(0)]\,, \qquad (30)$$

and a first-order approximate solution is

$$z(t+h) = z(t) + hJ\nabla H[z(t)]\,, \qquad (31)$$

where we regard $tJ\nabla H$ as an operator. Now consider the small area δA spanned by the infinitesimal vectors $\delta z_a$ and $\delta z_b$ located at z; δA can be written as

$$\delta A = \delta z_a \times \delta z_b = -\delta z_a \cdot (J\,\delta z_b)\,. \qquad (32)$$

Now we take the derivative of the area at t = 0 and show that it is zero; the analysis at later times can be translated to this case. We write

$$\left.\frac{d\,\delta A}{dt}\right|_{t=0} = -\frac{d}{dt}\Big(e^{tJ\nabla H}(\delta z_a)\cdot J e^{tJ\nabla H}(\delta z_b)\Big)\Big|_{t=0} = -[J\nabla H(\delta z_a)]\cdot(J\,\delta z_b) - (\delta z_a)\cdot[JJ\nabla H(\delta z_b)]\,. \qquad (33)$$

Using the first-order Taylor expansion, H(δz) becomes

$$H(\delta z) = H(z + \delta z) - H(z) = \delta z\cdot\nabla H(z)\,. \qquad (34)$$

Defining

$$L_{ij} = J_{ik}\,\frac{\partial^2 H(z)}{\partial z_k\,\partial z_j}\,, \qquad (35)$$


we find that (33) becomes

$$\left.\frac{d\,\delta A}{dt}\right|_{t=0} = -(L\,\delta z_a)\cdot(J\,\delta z_b) - (\delta z_a)\cdot(JL\,\delta z_b)\,. \qquad (36)$$

It is easy to see that the matrix L satisfies

$$L^T J + JL = 0\,, \qquad (37)$$

so that (36) is zero and the area δA is indeed conserved.

Consider a Hamiltonian of the simple form

$$H = T(p) + U(q)\,. \qquad (38)$$

Then (27) becomes

$$\dot z = J\nabla H(z) = \tilde T(z) + \tilde U(z)\,, \qquad (39)$$

where

$$\tilde T(z) = \begin{pmatrix} 0 \\ \partial T(p)/\partial p \end{pmatrix}, \qquad (40)$$

and

$$\tilde U(z) = \begin{pmatrix} -\,\partial U(q)/\partial q \\ 0 \end{pmatrix}. \qquad (41)$$

The solution (30) then becomes

$$z(t) = \exp(tJ\nabla H)[z(0)] = \exp[t(\tilde T + \tilde U)]\,[z(0)]\,. \qquad (42)$$

Now we want to find sets of real numbers $(c_1, c_2, \ldots, c_k)$ and $(d_1, d_2, \ldots, d_k)$ such that the following equality holds,

$$\exp[t(\tilde T + \tilde U)] = \prod_{i=1}^{k}\exp(c_i t\tilde T)\exp(d_i t\tilde U) + o(t^{n+1})\,, \qquad (43)$$

and then (42) can be written as

$$z(t) = \left(\prod_{i=1}^{k}\exp(c_i t\tilde T)\exp(d_i t\tilde U) + o(t^{n+1})\right)[z(0)] \qquad (44)$$

up to order n. Now,

(a) when n = 1, a trivial solution is $c_1 = d_1 = 1$, so that

$$\exp[t(\tilde T + \tilde U)] = \exp(t\tilde T)\exp(t\tilde U) + o(t^2)\,. \qquad (45)$$

(b) When n = 2, $c_1 = c_2 = 1/2$, $d_1 = 1$, $d_2 = 0$, thus

$$\exp[t(\tilde T + \tilde U)] = \exp(t\tilde T/2)\exp(t\tilde U)\exp(t\tilde T/2) + o(t^3)\,. \qquad (46)$$

If we define

$$S(t) = \prod_{i=1}^{k}\exp(c_i t\tilde T)\exp(d_i t\tilde U) = \exp\big[t(\tilde T + \tilde U) + o(t^{n+1})\big]\,, \qquad (47)$$

the mapping (47) is symplectic because it is a product of symplectic operators. In practice, the map is evaluated by evolving the system successively through

$$q_i = q_{i-1} + t\,c_i\,\frac{\partial T}{\partial p}(p_{i-1})\,, \qquad (48)$$

$$p_i = p_{i-1} - t\,d_i\,\frac{\partial U}{\partial q}(q_i)\,, \qquad (49)$$

for i = 1 to i = k, with $(q_0, p_0) = z(0)$ and $(q_k, p_k) = z(t)$. In this way an nth-order symplectic integrator is established. However, by comparing the coefficients in (43), it is hard to obtain higher-order integrators directly. Yoshida proposed a method to find the coefficients of higher-order integrators systematically. First, we recall the Baker-Campbell-Hausdorff (BCH) formula (Merzbacher, 1998): if non-commutative operators X and Y form the product exp(X) exp(Y), it can be expressed by a single exponential function as

$$\exp(X)\exp(Y) = \exp(Z)\,, \qquad (50)$$

where $Z = X + Y + [X,Y]/2 + \big([X,[X,Y]] + [Y,[Y,X]]\big)/12 + \cdots$. By repeated application of the BCH formula, we obtain

$$\exp(X)\exp(Y)\exp(X) = \exp(W)\,, \qquad (51)$$

where $W = 2X + Y + [Y,[Y,X]]/6 - [X,[X,Y]]/6 + \cdots$. Thus the 2nd-order symplectic integrator (46) can be rewritten as

$$S_{2nd}(t) = \exp(t\tilde T/2)\exp(t\tilde U)\exp(t\tilde T/2) = \exp(t a_1 + t^3 a_3 + t^5 a_5 + \cdots)\,, \qquad (52)$$

where $S_{2nd}$ is the 2nd-order symplectic integrator, $a_1 = \tilde T + \tilde U$, and $a_3 = [\tilde U,[\tilde U,\tilde T]]/12 - [\tilde T,[\tilde T,\tilde U]]/24$, etc. From (52), we find that there are no even powers of t, i.e., $a_2 = a_4 = a_6 = \cdots = 0$. Indeed, Yoshida proved a lemma: if S(t) is an operator of the form (47) which has time reversibility,

$$S(t)S(-t) = S(-t)S(t) = \text{identity}\,, \qquad (53)$$

then S(t) can be expanded as

$$S(t) = \exp(t a_1 + t^3 a_3 + t^5 a_5 + \cdots)\,, \qquad (54)$$

with $a_2 = a_4 = a_6 = \cdots = 0$. To see this, we can calculate from the lower orders of t,

$$S(t)S(-t) = \exp\big(2t^2 a_2 + o(t^3)\big)\,. \qquad (55)$$

Because S(t)S(−t) = 1, the exponent $2t^2 a_2 + o(t^3)$ must vanish, and hence $a_2 = 0$. In the same way we obtain $a_4 = a_6 = \cdots = 0$ at higher orders.

For the 4th-order symplectic integrator, Yoshida takes the symmetric form of (52),

$$S_{4th}(t) = S_{2nd}(x_1 t)\,S_{2nd}(x_0 t)\,S_{2nd}(x_1 t)\,, \qquad (56)$$

where $x_0$ and $x_1$ are undetermined coefficients. Because

$$S_{2nd}(x_1 t) = \exp(t x_1 a_1 + t^3 x_1^3 a_3 + t^5 x_1^5 a_5 + \cdots)\,, \qquad (57)$$

and

$$S_{2nd}(x_0 t) = \exp(t x_0 a_1 + t^3 x_0^3 a_3 + t^5 x_0^5 a_5 + \cdots)\,, \qquad (58)$$

we have

$$S_{4th}(t) = \exp\big[t(x_0 + 2x_1)a_1 + t^3(x_0^3 + 2x_1^3)a_3 + o(t^5)\big]\,. \qquad (59)$$

In order to ensure that $S_{4th}(t) = \exp[t(\tilde T + \tilde U) + o(t^5)]$, as in (47), we need two conditions, i.e.,

$$x_0 + 2x_1 = 1\,, \qquad (60)$$

$$x_0^3 + 2x_1^3 = 0\,, \qquad (61)$$

whose solution is

$$x_0 = -\frac{2^{1/3}}{2 - 2^{1/3}}\,, \qquad (62)$$

$$x_1 = \frac{1}{2 - 2^{1/3}}\,. \qquad (63)$$
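With $x_0$ and $x_1$ determined, the fourth-order integrator can already be used by composing second-order steps as in (56), without writing out the c's and d's explicitly. A sketch (Python, illustrative; names are ours) for a separable Hamiltonian H = T(p) + U(q):

```python
CBRT2 = 2.0 ** (1.0 / 3.0)
X1 = 1.0 / (2.0 - CBRT2)        # eq. (63)
X0 = -CBRT2 / (2.0 - CBRT2)     # eq. (62)

def s2nd(q, p, h, dTdp, dUdq):
    """S2nd(h) = exp(hT/2) exp(hU) exp(hT/2) of eq. (46): drift-kick-drift."""
    q += 0.5 * h * dTdp(p)
    p -= h * dUdq(q)
    q += 0.5 * h * dTdp(p)
    return q, p

def s4th(q, p, h, dTdp, dUdq):
    """S4th(h) = S2nd(x1 h) S2nd(x0 h) S2nd(x1 h), eq. (56)."""
    for x in (X1, X0, X1):
        q, p = s2nd(q, p, x * h, dTdp, dUdq)
    return q, p
```

For the harmonic oscillator $H = (p^2 + q^2)/2$ the energy error of this scheme stays bounded and small, as expected of a symplectic integrator.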

Finally, we compare the coefficients between (56) and (47) and calculate the c's and d's in terms of $x_0$ and $x_1$. We get

$$d_1 = d_3 = x_1\,, \qquad (64)$$
$$d_2 = x_0\,, \qquad (65)$$
$$d_4 = 0\,, \qquad (66)$$
$$c_1 = c_4 = x_1/2\,, \qquad (67)$$
$$c_2 = c_3 = (x_0 + x_1)/2\,, \qquad (68)$$

and the 4th-order symplectic integrator is complete.

Iterative Nonlinear Gaussianization Algorithm


Hence, given an arbitrary sample, we regard it as generated from the Gaussian distribution, even though we do not know a priori how to trace it back. The INGA forward process determines the necessary transformations and records them. These records embody a mechanism, a "creation law," by which the sample arises from the underlying Gaussian distribution. Once we know how samples are generated from this Gaussian, we can create new independent Gaussian samples and, following the recorded law, produce new samples that represent other realizations of the original one. This anticipation is important for improving MDS with INGA with respect to computing ensemble averages in statistical mechanics. In computing an ensemble average, we need as many representative points in phase space as possible; however, it is impossible to capture all the information about the microstates. Fortunately, these representative points have a useful property, which we may call the "mental copy": under one macrostate there are many microstates, each with its own probability weight. If one MDS yields some representative points in equilibrium, we can search for more mental copies with INGA, owing to its capability of finding replicas of the original sample. This is the motivation for applying INGA to MDS in this research.

The above observations suggest a possible resolution of the difficulty of MDS mentioned before. First, each of the physical quantities can be regarded as a random variable over the microstates. Second, molecular dynamics simulations are run only for a relatively short period after the Boltzmann H criterion for equilibrium is satisfied. Based on the physical values sampled over this short period, independent replicas can be generated by the iterative nonlinear Gaussianization algorithm. Such replicas are generated at relatively low cost; several or many replicas constitute an ensemble of molecular systems, and they can be patched together to form a "pseudo time series" for the correlated random variables (the physical quantities). Statistical averages over these pseudo time series then yield the macroscopic properties.

Algorithm INGA

The algorithm nonlinearly transforms a set of correlated random variables to the standard multivariate Gaussian N(0, I_d) in an attempt to minimize the statistical dependence among the transformed variables. The forward process is an iterative algorithm, consisting of transforming a given set of dependent random variables $X_p$ by:

1) normalize each random variable, $\tilde X_p = (X_p - \mu_p)/\sigma_p$, where $\mu_p$ and $\sigma_p$ are the mean and standard deviation of $X_p$, and p = 1, 2, ..., N;

2) use PCA (Johnson et al, 1982) to de-correlate the random components, i.e., $Y = B\tilde X$ with a decorrelation matrix B, where $Y = \{Y_1, Y_2, \ldots, Y_N\}$ and $\tilde X = \{\tilde X_1, \tilde X_2, \ldots, \tilde X_N\}$;

3) make the transformation $U_p = F_p(Y_p)$ in each random component, where $F_p$ is the cumulative distribution function of $Y_p$;

4) make the transformation $Z_p = \Phi^{-1}(U_p)$ (Press et al, 2002) in each random component, where Φ denotes the standard Gaussian distribution N(0, 1);

5) check the convergence of $Z = \{Z_p\}$ to disjoint multi-normality, i.e., independent normality of the components $Z_p$. If it has not converged, go back to 2).
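One forward iteration of steps 1)-4) can be sketched for a two-dimensional sample with only standard-library tools (Python, illustrative: the PCA step reduces to a plane rotation because both standardized components have unit variance, and $F_p$ is taken as the empirical CDF):

```python
import math
from statistics import NormalDist

def inga_forward_iteration(x1, x2):
    """One iteration: 1) standardize, 2) PCA rotation, 3) empirical CDF to U(0,1),
    4) inverse Gaussian CDF to marginal N(0,1)."""
    n = len(x1)

    def standardize(x):
        mu = sum(x) / n
        sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
        return [(v - mu) / sd for v in x]

    x1, x2 = standardize(x1), standardize(x2)
    # 2) for unit-variance components the 2x2 eigenvectors form a rotation by theta
    c12 = sum(a * b for a, b in zip(x1, x2)) / n
    theta = 0.5 * math.atan2(2.0 * c12, 0.0)
    ct, st = math.cos(theta), math.sin(theta)
    y1 = [ct * a + st * b for a, b in zip(x1, x2)]
    y2 = [-st * a + ct * b for a, b in zip(x1, x2)]

    # 3) + 4) rank-based empirical CDF, then the inverse standard-normal CDF
    nd = NormalDist()
    def gaussianize(y):
        z = [0.0] * n
        for rank, i in enumerate(sorted(range(n), key=lambda i: y[i])):
            z[i] = nd.inv_cdf((rank + 0.5) / n)
        return z

    return gaussianize(y1), gaussianize(y2)
```

Repeating this iteration, with the convergence check of step 5) between passes, gives the full forward process.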

Step 2) uses the PCA method to find uncorrelated linear combinations $\{Y_1, Y_2, \ldots, Y_N\}$ whose variances are as large as possible and whose covariances $\mathrm{Cov}(Y_i, Y_j)$ are as small as possible; thus we need to compute the associated eigen-system to obtain B. Step 3) ensures that each $U_p$ has the uniform distribution U(0, 1), and step 4) makes each $Z_p$ a marginal Gaussian variable. With the records of the INGA forward process, the backward process generates new samples by

1) creating new independent Gaussian components, usually by the Box-Muller method;

2) inverting all the iterations of the INGA forward process to obtain an independent new sample. Flow charts illustrating these processes are shown in Figs. 1 and 2.
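Step 1) of the backward process, the Box-Muller method, turns pairs of uniform variates into pairs of independent standard Gaussians. A sketch (Python, illustrative; the function name is ours):

```python
import math, random

def box_muller_pairs(n, seed=0):
    """Generate n independent pairs of standard Gaussian variates by Box-Muller."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(1.0 - u1))   # 1 - u1 avoids log(0)
        out.append((r * math.cos(2.0 * math.pi * u2),
                    r * math.sin(2.0 * math.pi * u2)))
    return out
```

Each pair is exactly standard normal and mutually independent, which is what the backward process then maps through the recorded inverse transformations.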

In brief, the forward INGA is designed to bring arbitrary random components to independent Gaussian ones by interlacing the procedures of de-correlation and marginal Gaussianization. For an arbitrary sample, we may not know which distribution it comes from; however, following INGA, we can generate new independent samples from the original one. We take a simple example to illustrate the procedure and concept. Fig. 3 is a cigar-like picture made up of 200 data points drawn from some unknown distribution. After 6 forward iterations, each consisting of normalization, the PCA process, the uniform transformation, and the Gaussian transformation, we obtain the Gaussian distribution N(0, 1) shown in Fig. 4. We then go through the backward process: first we create an independent Gaussian sample, and then we reverse the forward process recorded before. Finally, we obtain the resample shown in Fig. 5, which is slightly different from the original picture but captures its chief characteristics, for example, the four edges. There is some similarity and some difference, yet each sample comes from the same original distribution. This is the main idea for us to

Figure 1: Flow chart of one 2-D IIDUG iteration (Iteration of Independent-Identical-Distribution Uniformization/Gaussianization): raw data (X1, X2) → (1) decorrelation Y = A·X → (2) uniformization Up = Fp(Yp) → (3) Gaussianization Zp = Φ⁻¹(Up) → (4) check of multi-uniformity or multi-normality; if not satisfied, iterate.

Figure 2: Flow chart of the INGA forward and backward processes: original sample → forward PCA process → marginal normalization → multi-normal? (if no, iterate; if yes, proceed) → backward: sampling i.i.d. Gaussians → inverse marginal normalization → inverse PCA process → re-sampled data.

Figure 3: Original sample of cigar.

Figure 5: Resample of cigar.

apply INGA to MDS, because we want to use this property to simulate the mental copies of statistical mechanics.

The algorithm INGA has now been described and explained, together with how to supplement molecular dynamics simulations with it for the NTV and ENV systems. Numerical examples are carried out and the results are compared with earlier empirical results, some of which were also obtained from MDS. The most significant features of the present methodology of MDS supplemented by INGA are:

1) its accuracy, i.e., the reduction of statistical error;
2) substantially lower computational cost;
3) correctly reproduced physical correlations.

V. Results and discussion

Figure 6: Test of the approach of the system to equilibrium with the Lennard-Jones potential.

All quantities are nondimensionalized, i.e.,

$$x^* = \frac{x}{\sigma}\,,\quad r^* = \frac{r}{\sigma}\,,\quad v^* = \sqrt{\frac{m}{\varepsilon}}\,v\,,\quad F^* = \frac{\sigma}{\varepsilon}\,F\,,\quad t^* = \sqrt{\frac{\varepsilon}{m\sigma^2}}\,t\,,\quad T^* = \frac{k_B T}{\varepsilon}\,,\ \ldots\ \text{etc.}, \qquad (69)$$

where $k_B$, ε, σ and m are the Boltzmann constant, the inter-atomic potential depth, the reference length of the Lennard-Jones potential, and the particle mass; x, r, v, F, t and T are the basic quantities of position, distance, velocity, force, time and temperature, respectively. For simplicity, we omit the star on the nondimensional quantities in what follows. We choose ρ = 0.7, $r_c$ = 2.5, and particle number N = 256 to simulate all conditions at temperatures from T = 0.964 to 2.23. Before discussing the numerical results, let us observe some phenomena during the simulation.
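As a concrete illustration of (69) together with Table 1 (Python; the numbers are from the table and the CGS value of the Boltzmann constant is an assumption of this sketch): for argon, ε = 167×10⁻¹⁶ erg gives ε/kB ≈ 121 K, so a laboratory temperature of 300 K corresponds to a reduced temperature T* ≈ 2.48, close to the range simulated here.

```python
KB = 1.380649e-16   # Boltzmann constant in erg/K

def reduced_temperature(t_kelvin, eps_erg):
    """T* = kB * T / eps, from eq. (69)."""
    return KB * t_kelvin / eps_erg

t_star_argon = reduced_temperature(300.0, 167e-16)   # about 2.48
```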

The system is considered to have reached an equilibrium microstate by checking the Boltzmann H theorem; this usually takes about 2,000 time steps. In Fig. 6, we plot the value of H every 10 steps from step 2001 to step 4000 at T = 1.191. We observe that H runs parallel and close to the Maxwellian value, with a small offset between them. This is because the internal potential of the system is the Lennard-Jones potential, not really that of an ideal gas, and only 256 atoms are used to simulate the whole system. Hence, at each step, the distribution of velocity is nearly, but not exactly, Maxwellian. To make sure that the system has really approached equilibrium, we compute the average velocity distribution from the 2001st to the 12000th step. Compared with the Maxwell distribution at T = 1.191, the velocity distribution of the simulation is almost Maxwellian, as shown in Fig. 7. So we can assert that the system has approached equilibrium in velocity. In addition to the velocity, we also check whether the position distribution approaches equilibrium. We take the average of the radial distribution function from the 2001st to the 12000th step at T = 1.191. Fig. 8 shows that the first peak occurs at 1.075 and the second at 2.125, both very close to those obtained in a previous study (McQuarrie, 1976). Hence, we can say that the system is very close to equilibrium in both the velocity and position distributions, and we can use the data simulated by MDS as the basic sample because it already carries the information of the equilibrium state.

Figure 7: Test of the velocity distribution against the Maxwell distribution.

Figure 8: The radial distribution function of the system obeying a Lennard-Jones potential, from direct MDS.

Figure 9: Test of the probability of Gaussianization by the INGA forward procedure in the NTV ensemble.

Figure 10: Pressure $\bar P$ versus temperature T.

Applying INGA to MDS

The main idea of this study is that each of the physical variables T, P, U and E can be regarded as a random variable; they are of course correlated with one another. For NTV, the microscopic P and U are sampled every 20 time steps from the 2,001st to the 12,000th step. These constitute a two-dimensional sample $(U_i, P_i)$ of size 500, drawn from the (unknown) distribution of X = (U, P). For ENV, the microscopic P and T are sampled every time step from the 2,001st to the 22,000th step. These constitute a two-dimensional sample $X_i = (T_i, P_i)$ of size 20,000, drawn also from the (unknown) distribution of X = (T, P). The reason why we sample less frequently in NTV is that the sampled time sequence $(U_i, P_i)$ is then less correlated. For ENV, however, the computed transport coefficients depend strongly on the fluctuations of the microscopic physical quantities: for example, the RHS of (14) involves the fluctuation of $\langle E_k\rangle\langle E_k^{-1}\rangle$ about unity, while the RHS of (16) depends on the correlation $\langle\delta E_k\,\delta P\rangle$.

Next, we consider how to apply INGA to MDS. For the NTV system, we have already obtained an original sample of size 500. The INGA algorithm is now employed to generate 350 new independent samples, which are patched together to form a "pseudo time series" of length 175,000. For the ENV system, we also take $(X_1, X_2) =$

Figure 11: Potential per molecule $\bar U/N$ versus temperature T.

(T, P) and have already obtained an original sample of size 20,000. This time the INGA algorithm is employed to generate 10 new independent samples, which are patched together to form a "pseudo time series" of length 200,000. The macroscopic quantities are obtained by taking averages over these microscopic states.

Numerical results for the NTV system

Consider the NTV system. First of all, we check the convergence of the Gaussianization by the INGA forward procedure. We construct 225 mesh points, 15 in each coordinate over the range from −6 to 6, and count how many of the 500 data points (computed by the INGA forward process) fall in each mesh cell. After determining the probability in each cell, we redistribute it to the four corner mesh points in the ratio of the opposite sub-area to the total cell area. Fig. 9 shows the probability at the final step of the INGA forward procedure after 10 iterations at T = 1.191. We check the convergence by estimating the error between N(0, I2) and the computed distribution, defined by the mesh sum

Err = Σi |f(xi, yi) − h(xi, yi)| ∆Ai,

where f = f(x, y) is the 2-D normal distribution (Johnson et al, 1982),

$$f(x, y) = \frac{1}{2\pi\sqrt{\sigma_{11}\sigma_{22}(1-\rho_{12}^{2})}}\exp\left\{-\frac{1}{2(1-\rho_{12}^{2})}\left[\left(\frac{x-\mu_{1}}{\sqrt{\sigma_{11}}}\right)^{2}+\left(\frac{y-\mu_{2}}{\sqrt{\sigma_{22}}}\right)^{2}-2\rho_{12}\left(\frac{x-\mu_{1}}{\sqrt{\sigma_{11}}}\right)\left(\frac{y-\mu_{2}}{\sqrt{\sigma_{22}}}\right)\right]\right\}, \quad (71)$$

where µ1, µ2 are the means and σ11, σ22 the variances of x and y, respectively, and ρ12 is the correlation coefficient. The function h = h(x, y) is the computed probability, (xi, yi) are the coordinates of the mesh points, and ∆Ai is the mesh area occupied by point (xi, yi). The error between the two distributions decreases from 0.526 to 0.025, which indicates good agreement. Figs. 10 and 11 show, respectively, the computed ¯P and ¯U versus temperature, with comparisons to the results of Ree (Ree, 1980), Nicolas (Nicolas et al, 1979) and Haile (Haile, 1992). The temperatures considered range from 0.964 to 2.23. Ree's and Nicolas' results are analytic expressions fitted to data over a wide range of density and temperature. Haile ran a single MDS for as long as 500,000 time steps. The present results lie basically between Ree's and Nicolas'. For ¯P, the maximum difference from Ree is 0.072, from Nicolas 0.073, and from Haile 0.016. For ¯U, the maximum difference from Ree is 0.059, from Nicolas 0.028, and from Haile 0.003. Next, we fix T = 1.191 and compare the statistical errors of the present result with those of an MDS carried out for 175,000 time steps.
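The mesh check described above (cloud-in-cell deposition onto corner points, then an L1-type comparison against N(0, I2)) can be sketched as follows. This is an illustrative reading of the procedure, not the report's code; the exact error definition used in the report may differ slightly.

```python
import numpy as np

def gaussian_check(data, lo=-6.0, hi=6.0, n=15):
    """Cloud-in-cell density estimate on an n-by-n mesh, compared
    against the standard bivariate normal N(0, I2).

    Each point deposits its probability onto the four corner points
    of the mesh cell containing it, weighted by the opposite sub-area
    over the total cell area.  Returns sum_i |f_i - h_i| * dA.
    """
    grid = np.linspace(lo, hi, n)
    dx = grid[1] - grid[0]
    h = np.zeros((n, n))
    for x, y in data:
        x = min(max(x, lo), hi)              # clip rare outliers to the mesh
        y = min(max(y, lo), hi)
        i = min(int((x - lo) / dx), n - 2)   # lower-corner cell indices
        j = min(int((y - lo) / dx), n - 2)
        fx = (x - grid[i]) / dx              # fractional position inside cell
        fy = (y - grid[j]) / dx
        h[i, j]         += (1 - fx) * (1 - fy)
        h[i + 1, j]     += fx * (1 - fy)
        h[i, j + 1]     += (1 - fx) * fy
        h[i + 1, j + 1] += fx * fy
    h /= len(data) * dx * dx                 # counts -> probability density
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    f = np.exp(-0.5 * (X ** 2 + Y ** 2)) / (2.0 * np.pi)   # N(0, I2) density
    return float(np.sum(np.abs(f - h)) * dx * dx)

rng = np.random.default_rng(2)
err = gaussian_check(rng.standard_normal((500, 2)))
print(err)   # shrinks toward zero as the sample grows larger and more Gaussian
```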

Fig. 12 shows a substantial reduction of the statistical error of ¯U versus time steps when the present algorithm is used. This is to be expected, because the pseudo time series, by its construction in the present study, is much less correlated in time. We have emphasized that an arbitrary sample can be transformed to independent Gaussian variables by the INGA forward process, and that as many independent samples as desired can then be generated from the original distribution. From Fig. 13, we find that the correlation of the direct MDS decays to zero at about 50 time steps, but that of the INGA MDS decays more quickly; the series constructed by INGA MDS is almost uncorrelated.

Figure 12: Statistical error of potential per atom ¯U/N versus time step.

As for the computational cost, one direct MDS needs 18,658 CPU-seconds to run 175,000 time steps on a PC Celeron. The present method, with 4 iterations of the forward INGA and 1 direct inversion, needs only 1,305 CPU-seconds. The relative computational cost is 1,305/18,658 = 0.0699, i.e., about 7 percent of the cost of the full MDS. For an MDS of 500,000 time steps as in Haile (Haile, 1992), the relative cost would be only 2.45 percent.
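The correlation comparison of Fig. 13 amounts to computing a normalized autocorrelation function for each series. A minimal sketch, using an AR(1) process as a stand-in for the correlated direct-MDS series and white noise as a stand-in for the nearly uncorrelated pseudo series:

```python
import numpy as np

def autocorr(series, max_lag):
    """Normalized autocorrelation C(k) = <dU(t) dU(t+k)> / <dU^2>."""
    d = series - series.mean()
    var = np.dot(d, d) / len(d)
    return np.array([np.dot(d[:len(d) - k], d[k:]) / ((len(d) - k) * var)
                     for k in range(max_lag)])

rng = np.random.default_rng(3)
n = 175_000
ar = np.empty(n)                      # correlated stand-in (AR(1), phi = 0.95)
ar[0] = 0.0
for t in range(1, n):
    ar[t] = 0.95 * ar[t - 1] + rng.standard_normal()
iid = rng.standard_normal(n)          # uncorrelated stand-in

print(autocorr(ar, 3)[1] > 0.9)       # slow decay at lag 1: True
print(abs(autocorr(iid, 3)[1]) < 0.05)  # near zero at lag 1: True
```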


Figure 13: Correlation of potential per atom ¯U/N versus time step.


Figure 14: Test of the probability of Gaussianization by the INGA forward procedure in the ENV ensemble.

Figure 15: Residual of constant volume heat capacity ¯Cv − 1.5 versus Temperature T.

Table 2: Comparison of thermal pressure coefficient ¯γv between two different resample methods.

Temperature   (X1, X2) = (T, P)   X1 = T, X2 = P   Haile
0.951         3.426               1.027            3.517
1.187         3.184               0.998            3.268
1.288         3.084               0.969            3.176
1.636         2.915               0.959            2.920
2.025         2.794               0.947            2.752
2.231         2.718               0.955            2.712

Next, consider the ENV system. We first check the convergence of the Gaussianization by the INGA forward procedure. We construct 10,201 mesh points, 101 in each coordinate over the range from −5 to 5, and count how many of the 20,000 data points (computed by the INGA forward process) fall in each mesh cell. Fig. 14 shows the probability at the final step of the INGA forward procedure after 10 iterations at T = 1.191. The error between the two distributions decreases from 0.431 to 0.012, which indicates good agreement. Figs. 15 - 17 show, respectively, the computed residual ¯Cv − 1.5, ¯κs, and ¯γv versus temperature, with comparisons to the results of Ree, Nicolas, and Haile. The monotone behaviors of the computed coefficients agree with those obtained by the previous authors. The mutual discrepancies are large at lower temperatures but decrease with increasing temperature. The present results basically lie between Ree's and Nicolas'. A very challenging and critical comparison for the present methodology is ¯γv, which, from (16), contains a correlation term on the RHS. That means that, if MDS supplemented by INGA is to generate meaningful replicas of the ENV microstates, it must correctly reproduce the physical correlation ⟨δEk δP⟩. Fig. 17 indicates that this is well achieved.


Table 3: Comparison of residual of constant volume heat capacity ¯Cv − 1.5 between NVT and ENV ensembles.

Temperature   NVT ensemble   ENV ensemble   Haile
0.951         0.669          0.696          0.673
1.187         0.637          0.631          0.633
1.288         0.624          0.605          0.617
1.636         0.577          0.558          0.570
2.025         0.525          0.541          0.530
2.231         0.497          0.526          0.516

We mentioned that the correlated fluctuation mean ⟨δEk δP⟩ in ¯γv is the important characteristic that reveals the mechanism of INGA MDS. We use two different INGA methods to construct the resamples; the results, shown in Table 2, illustrate the power of INGA MDS. The first method is the one introduced above; its results are listed in column 2, in comparison with the results of a previous study. In the second method, we separate the random variables and resample each by itself; the results are shown in column 3. We find that the second method cannot reproduce the real correlation between the temperature, i.e., the kinetic energy, and the pressure, whereas the first method can. The second method must miss something: we cannot omit or rearrange any relation arbitrarily, since doing so alters physics of the original sample that is recorded in the INGA procedure. Hence, MDS with INGA indeed retains the fundamental physical relations captured in the original sample.
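The contrast between the two resampling methods can be demonstrated on a toy sample. Here a plain bootstrap replaces the real INGA machinery, and the correlated (T, P) pairs are synthetic; the point is only that resampling the variables jointly preserves their correlation while resampling each marginal independently destroys it.

```python
import numpy as np

rng = np.random.default_rng(4)

# correlated toy sample standing in for the ENV pairs (T_i, P_i)
n = 20_000
t = rng.standard_normal(n)
p = 0.8 * t + 0.6 * rng.standard_normal(n)   # corr(t, p) = 0.8 by construction
sample = np.column_stack([t, p])

# method 1: resample whole rows -> the T-P correlation survives
joint = sample[rng.integers(0, n, size=n)]

# method 2: resample each variable independently -> correlation destroyed
marg = np.column_stack([rng.choice(sample[:, 0], n),
                        rng.choice(sample[:, 1], n)])

print(np.corrcoef(sample.T)[0, 1])   # about 0.8
print(np.corrcoef(joint.T)[0, 1])    # about 0.8, preserved
print(np.corrcoef(marg.T)[0, 1])     # about 0.0, lost
```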

Finally, we show an example that demonstrates convergence of the coefficient ¯γv to its statistical average. Fig. 18 shows faster convergence of the presently computed result, compared to the single direct MDS result. The relative cost compared to direct


Table 4: Comparison of thermal pressure coefficient ¯γv between NVT and ENV ensembles.

Temperature   NVT ensemble   ENV ensemble   Haile
0.951         3.271          3.426          3.517
1.187         3.188          3.184          3.268
1.288         3.145          3.084          3.176
1.636         2.968          2.915          2.920
2.025         2.715          2.794          2.752
2.231         2.556          2.718          2.712

MDS of 200,000 steps in the ENV case is 6,229/21,044 = 0.296, i.e., about 30 percent, because INGA now handles a sample of size 20,000 rather than 500. It must be noted, however, that properties like Cv = (∂ ¯E/∂T)N,V and γv = (∂ ¯P/∂T)N,V can be computed less expensively by MDS plus INGA with the NTV ensemble at different temperatures. Tables 3 and 4 illustrate this. In the NVT ensemble, we construct approximating functions relating ¯E to T and ¯P to T from the results of MDS with INGA.

Then the derivatives with respect to temperature give ¯Cv and ¯γv, shown in column 2. Column 3 shows the results computed directly by INGA MDS in the ENV ensemble as described above. Haile likewise used empirical fitting functions and then took the temperature derivative; his results are in column 4. The three are in reasonable agreement with one another. The ENV illustration mainly shows the power of the present sampling methodology in producing correct physical correlations.
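The fit-then-differentiate route for γv = (∂ ¯P/∂T)N,V can be sketched as follows. The ¯P(T) values here are a hypothetical linear stand-in (so the exact derivative is known); the real values would come from MDS with INGA at each temperature.

```python
import numpy as np

# temperatures as in Table 4; Pbar(T) is hypothetical, chosen linear so
# that the exact answer gamma_v = 2 is known in advance
T = np.array([0.951, 1.187, 1.288, 1.636, 2.025, 2.231])
Pbar = 1.0 + 2.0 * T

coef = np.polyfit(T, Pbar, 2)               # approximating function Pbar(T)
gamma_v = np.polyval(np.polyder(coef), T)   # gamma_v = (dPbar/dT)_{N,V}
print(gamma_v.round(4))                      # recovers 2.0 at every temperature
```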


Figure 16: Adiabatic compressibility ¯κs versus Temperature T.

Figure 17: Thermal pressure coefficient ¯γv versus Temperature T.

VI. Concluding remarks

Fluctuations are very important in statistical mechanics. Sometimes a fluctuation represents statistical error; on the other hand, it can represent real physics, i.e., it is related to physical properties of the macro-scale world (the fluctuation-dissipation theorem). MDS with INGA successfully simulates both of these aspects of fluctuation, and is presented here as a powerful methodology for computing physical phenomena at the nano-micro scales.

Figure 18: Convergence of thermal pressure coefficient ¯γv.

In future work this method can be used to find the correlation between arbitrary random variables, i.e., physical properties, which is hard to achieve in traditional MDS with finite samples. MDS with INGA thus offers a new standpoint for computational physics. In addition, MDS with INGA can be applied to quantum molecular dynamics simulations, where the overlap of electrons must be considered, in order to understand the underlying mechanism. The same difficulty arises there, namely that a single simulation can hardly describe the real phenomena of the micro-scale world. Using MDS with INGA, a more complete statistical picture may be obtained in the course of the simulations.


Bibliography

[1] B.J. Alder and T.E. Wainwright, Studies in Molecular Dynamics. I. General Method, J. Chem. Phys., 31, 459, 1959.

[2] B.J. Alder and T.E. Wainwright, Studies in Molecular Dynamics. II. Behavior of a Small Number of Elastic Spheres, J. Chem. Phys., 33, 1439, 1960.

[3] B.J. Alder and T.E. Wainwright, Phase Transition in Elastic Disks, Phys. Rev., 127, 359, 1962.

[4] J.A. Barker, D. Henderson and F.F. Abraham, Orientational Order at the Two-Dimensional Melting Transition, Physica (Utrecht), 106A, 226, 1981.

[5] J.J. Brehm, Introduction to The Structure of Matter, John Wiley & Sons, Inc., USA, 1989.

[6] F. Buda, R. Car and M. Parrinello, Thermal Expansion of c-Si via ab initio Molecular Dynamics, Phys. Rev. B, 41, No.3, 1680, 1990.

[7] R. Car and M. Parrinello, Unified Approach for Molecular Dynamics and Density-Functional Theory, Phys. Rev. Lett., 55, 2471, 1985.

[8] K. Chen, T. Kaplan and M. Mostoller, Melting in Two-Dimensional Lennard-Jones Systems: Observation of a Metastable Hexatic Phase, Phys. Rev. Lett.,

[9] C.C. Chang, Numerical Solution of Stochastic Differential Equations with Constant Diffusion Coefficients, Math. Comput., 49, 523, 1987.

[10] A.J. Chorin, Hermite Expansions in Monte-Carlo Computation, J. Comput. Phys., 8, 472, 1971.

[11] G. Ciccotti and J.P. Ryckaert, Computer simulation of the generalized Brownian motion. I. The scalar case, Mol. Phys., 40, 141, 1980.

[12] P. Comon, Independent Component Analysis, a New Concept?, Signal Processing, 36, 287, 1994.

[13] K. Feng, On difference schemes and symplectic geometry, in Beijing Symposium on Differential Geometry and Differential Equations -Computation of Partial Differential Equations (K. Feng, ed.), 45, Science Press, Beijing, 1985.

[14] D. Frenkel and B. Smit, Understanding Molecular Simulation: From Algorithms to Applications, Academic Press, San Diego, 1996.

[15] C.W. Gear, Numerical Initial Value Problems in Ordinary Differential Equation, Prentice-Hall, Englewood Cliffs, NJ, 1971.

[16] J.M. Haile, Molecular dynamics simulation: elementary methods, John Wiley & Sons, Inc., New York, 1992.

[17] B.I. Halperin and D.R. Nelson, Theory of Two-Dimensional Melting, Phys. Rev. Lett., 41, 121, 1978.

[18] J.P. Hansen and L. Verlet, Phase Transitions of the Lennard-Jones System, Phys. Rev., 184, 151, 1969.

[19] W.G. Hoover and F.H. Ree, Melting Transition and Communal Entropy for Hard Spheres, J. Chem. Phys., 49, 3609, 1968.


[21] J. Jeese and A.E. Lord, Elastic Coefficients of Single Crystal Iron from Room Temperature to 500°C, J. Appl. Phys., 39, No.8, 3968, 1968.

[22] R.A. Johnson and D.W. Wichern, Applied Multivariate Statistical Analysis, Prentice-Hall, Inc., New Jersey, 1982.

[23] C. Kittel, Introduction to Solid State Physics, John Wiley & Sons, Inc., USA, 1996.

[24] S. Kotake, Molecular Thermofluid, Maruzen, 1990.

[25] L.D. Landau, On the theory of phase transitions, Phys. Z. Sowjetunion, 11, 26, 1937.

[26] R.L. Liboff, Kinetic Theory, John Wiley & Sons, Inc., New York, 1998.

[27] J.J. Lin, Simulation and Synthesis of High-Dimensional Data and Related Issues, Ph.D. dissertation, University of California, Davis, 2000.

[28] J.J. Lin, N. Saito and R.A. Levine, An Iterative Nonlinear Gaussianization Algorithm for Image Simulation and Synthesis, submitted to Journal of Computational and Graphical Statistics, 2002.

[29] Y.H. Liu, Molecular Dynamics Simulation and Calculation of Ensemble Average, Ph.D. Thesis, Institute of Applied Mechanics, National Taiwan University, Taipei, 2003.

[30] F.H. Maltz and D.L. Hitzl, Variance Reduction in Monte Carlo Computations Using Multi-Dimensional Hermite Polynomials, J. Comput. Phys., 32, 345, 1979.

[31] D.A. McQuarrie, Statistical Mechanics, Harper & Row, New York, 1976.

[32] N.D. Mermin, Crystalline Order in Two Dimensions, Phys. Rev., 176, 250, 1968.

[33] E. Merzbacher, Quantum Mechanics, third edition, John Wiley & Sons, Inc.,

[34] N. Miyazaki and Y. Shiozaki, Calculation of Mechanical Properties of Solids Using Molecular Dynamics Method, JSME Int. J., Ser. A, 39, No.4, 606, 1996.

[35] J.J. Nicolas, K.E. Gubbins, W.B. Streett and D.J. Tildesley, Equation of State for the Lennard-Jones Fluid, Mol. Phys., 37, 1429, 1979.

[36] S. Nosé, A Unified Formulation of the Constant Temperature Molecular Dynamics Methods, J. Chem. Phys., 81, 511, 1984.

[37] S. Nosé, A Molecular Dynamics Method for Simulations in the Canonical Ensemble, Mol. Phys., 52, 255, 1984.

[38] R.G. Parr and W. Yang, Density-Functional Theory of Atoms and Molecules, Oxford University Press, Inc., New York, 1989.

[39] M. Parrinello and A. Rahman, Crystal Structure and Pair Potentials: A Molecular-Dynamics Study, Phys. Rev. Lett., 45, No.14, 1196, 1980.

[40] M. Parrinello and A. Rahman, Polymorphic transitions in single crystals: A new molecular dynamics method, J. Appl. Phys., 52, 7182, 1981.

[41] M. Parrinello and A. Rahman, Strain Fluctuations and elastic constants, J. Chem. Phys., 76, No.5, 2662, 1982.

[42] E.M. Pearson, T. Halicioglu and W.A. Tiller, Laplace-Transform Technique for Deriving Thermodynamic Equations from the Classical Microcanonical Ensemble, Phys. Rev. A, 32, 3030, 1985.

[43] W.H. Press, S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, Numerical Recipes in C++, Cambridge University Press, New York, 2002.

[44] D. Raabe, Computational Materials Science, Wiley-VCH, New York, 1998.

[45] A. Rahman, Correlations in the motion of atoms in liquid argon, Phys. Rev.,

[46] J.R. Ray and A. Rahman, Statistical ensembles and molecular dynamics studies of anisotropic solids, J. Chem. Phys., 80, No.9, 4423, 1984.

[47] F.H. Ree, Analytic Representation of Thermodynamic Data for the Lennard-Jones Fluid, J. Chem. Phys., 73, 5401, 1980.

[48] R.D. Ruth, A canonical integration technique, IEEE Trans. Nucl. Sci., 30, 2669, 1983.

[49] D. Srolovitz, K. Maeda, V. Vitek and T. Egami, Structural Defects in Amorphous Solids: Statistical Analysis of a Computer Model, Philosophical Mag. A, 44, No.4, 847, 1981.

[50] J.M. Thijssen, Computational Physics, Cambridge university press, New York, 1999.

[51] S. Toxvaerd, Computer Simulation of Melting in a Two-Dimensional Lennard-Jones System, Phys. Rev. A, 24, No.5, 2735, 1981.

[52] H. Yoshida, Construction of higher order symplectic integrators, Phys. Lett. A, 150, 262, 1990.

[53] A.P. Young, Melting and vector Coulomb gas in two dimensions, Phys. Rev. B, 19, 1855, 1979.

