
Computer Physics Communications 183 (2012) 1596–1608


Simulations of subsonic vortex-shedding flow past a 2D vertical plate in the near-continuum regime by the parallelized DSMC code

K.C. Tseng a, T.C. Kuo a, S.C. Lin b, C.C. Su b, J.S. Wu b,c,∗

a National Space Organization, National Applied Research Laboratories, Hsinchu Science Park, Hsinchu, Taiwan
b Department of Mechanical Engineering, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan
c National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu Science Park, Hsinchu, Taiwan

Article history: Received 30 April 2011; received in revised form 8 February 2012; accepted 27 February 2012; available online 1 March 2012.

Keywords: Direct simulation Monte Carlo; DSMC; Vortex shedding; Subsonic flow; Computational fluid dynamics; CFD

Abstract

A general-purpose Parallel Direct Simulation Monte Carlo Code, named PDSC, is used to simulate near-continuum subsonic flow past a 2D vertical plate for studying the vortex-shedding phenomena. An unsteady time-averaging sampling method and a post-processing procedure called DREAM (DSMC Rapid Ensemble Averaging Method) have also been implemented, reducing the overall computational expense and improving the sampling quality of time-dependent flow problems in the rarefied flow regime. Parametric studies, including the temporal variable time step (TVTS) factor, the number of particles per cell, the domain size, and the Reynolds number, have been conducted, obtaining the Strouhal number and various aerodynamic coefficients of the flow. Results are compared to experimental data in the continuum regime available in the literature, demonstrating the capacity of PDSC and DREAM to simulate near-continuum vortex-shedding problems within acceptable computational time.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Under certain conditions, vortices are generated and then shed periodically when fluid flows past an object. This is a classical unsteady problem in fluid mechanics and can commonly be observed in the natural world, for example when vehicles move through air or water. Studying this kind of problem can help us understand the physics of flows, prevent structural damage, and increase engineering performance. A variety of aspects related to the vortex-shedding problem have been investigated experimentally for continuum fluids. For example, Roshko [1] studied the development of vortex streets at Reynolds numbers ranging from 40 to 10,000, using a hot-wire technique to detect the velocity fluctuations at two downstream positions. In another example, Perry et al. [2] conducted a flow experiment to investigate the vortex-shedding process behind a circular cylinder using a variety of flow-visualization techniques. Time-exposure photography was used to obtain sequential frames of instantaneous streamline patterns of aluminium particles entrained in the flow. Dye experiments were also conducted in a water tunnel: colorful dyes were introduced near the top and bottom surfaces of the circular cylinder, respectively, allowing the two vortices to be distinguished.

∗ Corresponding author at: Department of Mechanical Engineering, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan. E-mail address: chongsin@faculty.nctu.edu.tw (J.S. Wu).

Despite the advantages of experimental studies into vortex shedding, these are often difficult and costly exercises. Additionally, the experimental investigation of vortex shedding in the rarefied regime is virtually impossible or very difficult, giving rise to the use of numerical simulation to aid in the prediction of vortex-shedding behavior. Meiburg [3] investigated the Rayleigh–Stokes flow and the vortex-shedding problem past a two-dimensional 45°-inclined flat plate numerically using molecular dynamics (MD) and Direct Simulation Monte Carlo (DSMC) methods. The author mentioned that both methods yield equally good results for the Rayleigh–Stokes flow, but only the MD simulation obtained the vortex structure for the inclined flat plate, while DSMC failed, possibly because the collision cells were far too large. This problem was again investigated by Bird [4] and Koura [5], who reviewed the simulation settings of Meiburg's DSMC simulation and found that the cell size was about three mean free paths (one order of magnitude larger than the recommended value), which may have led to the wake structure being smeared out during the collision process. Bird studied a 2D forced vortex in a square box and confirmed that DSMC can also generate vortices if a sub-cell module is included. The results show that the sub-cell function can increase the accuracy of DSMC simulations by reducing the spacing of collision partners. Koura [5], upon investigation of Meiburg's simulation using the null-collision DSMC method, reached similar conclusions to Bird. Two different Reynolds numbers (Re = 69.2 and 231) were studied, and it was found that the vortex structure was clearer than in previous efforts; however, the Strouhal number was found to be approximately 0.2 and insensitive to the Reynolds number.


Fig. 1. Sampling methods in DSMC including (a) steady sampling; (b) unsteady ensemble averaging; (c) unsteady time averaging with temporal variable time step (TVTS); (d) DREAMs.

Fig. 2. Schematic diagram of the transient adaptive sub-cell module in a two-dimensional cell.

As a result of this conclusion, Bird [6] and Talbot-Stern and Auld [7] parametrically studied the effects of their simulation conditions, such as the Knudsen number, Mach number, flow-field domain size, cell size, plate position, and surface model, using DSMC. All simulations obtained vortex structures and found a significant deviation in the shedding frequency and drag coefficient from that observed experimentally by Roshko [1] in the continuum limit.

Fig. 3. Sketch of the vortex-shedding flow after a 2D vertical plate (10L × 5L), including the sampling points for the u-velocity (S1–S3) and v-velocity (S4–S6) components.


Fig. 4. Contour of u-velocity component at t = 8527.2, 8676.8, 8826.4, and 8976 μs with different TVTS factors (100, 150, and 300).

In these two studies, 1,000,000 and 4,500,000 simulated particles were used for simulating 30 flow transit times, which took about one week and 20 days on 166 MHz and 200 MHz Pentium CPUs, respectively.

The previous research of Bird [4,6], Koura [5], and Talbot-Stern and Auld [7] has demonstrated that the DSMC method can be used to simulate the flow instability of vortex-shedding problems. However, a major drawback of the DSMC scheme is its enormous computational expense for unsteady flow simulation. Therefore, an efficient DSMC code that does not sacrifice simulation accuracy is required. In this manuscript, we present the application of the DSMC method with DREAM (DSMC Rapid Ensemble Averaging Method) [8] with the goal of increasing the computational efficiency of the DSMC simulation. Some important modules in PDSC used to study the vortex-shedding phenomenon, such as parallelization, the unsteady sampling method, and transient adaptive sub-cells, are then reviewed. Systematic simulations, covering the temporal variable time step (TVTS) factor, the number of simulated particles per cell, and the domain size, are conducted and provided as a baseline for future simulations. Finally, flows with four different Reynolds numbers (Re = 73, 126, 287, and 412) are simulated and compared with other references wherever they are available.

2. Numerical method

2.1. The Direct Simulation Monte Carlo (DSMC) method

The Direct Simulation Monte Carlo (DSMC) method is a particle-based method for the simulation of rarefied gas flows and was originally developed by Bird during the 1960s.


Fig. 5. Contour of v-velocity component at t = 8527.2, 8676.8, 8826.4, and 8976 μs with different TVTS factors (100, 150, and 300).

The details of the DSMC procedures and the consequences of the computational approximations are documented in detail in [9], so only a brief introduction of the method is presented here.

In a DSMC simulation, the gas is represented at the microscopic level by "simulation particles", each of which represents a much larger number of real particles (each simulation particle generally represents 10¹⁰–10²⁰ real particles, depending on computing resources and flow conditions). The computational domain for the simulation is divided spatially into computational "cells", used primarily for the sampling of macroscopic flow properties. Particle motion is computed deterministically over a small time step, chosen as a small fraction of the collision time. This allows the reasonable decoupling of particle motions from collisions, which are treated separately during the collision phase. During the collision phase, particles within any single computational cell undergo simulated collisions with each other. Mass, momentum, and energy are conserved at the particle level during collisions by using collision models designed to reproduce real fluid behavior when the flow is examined at the macroscopic level. Several molecular models, such as the variable hard sphere (VHS) [10] and the variable soft sphere (VSS) [11] models, are designed to reproduce real fluid behavior and have already been implemented in DSMC codes. To validate the DSMC method, Nanbu [12] and Wagner [13] have proved mathematically that the DSMC method provides the same solution as the Boltzmann equation when the number of simulated particles becomes large. In the current study, we have adapted the previously developed Parallel DSMC Code (PDSC), which has been described in detail in the papers by Wu et al. [14–19].
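To make the decoupled move/collide structure described above concrete, the following is a minimal sketch of one DSMC time step. It is illustrative only: the function name, the constant collision probability, and the hard-sphere-like collision rule are simplifying assumptions, not the VHS/VSS models or the PDSC implementation.

```python
import numpy as np

def dsmc_step(pos, vel, dt, cell_size, n_cells, coll_prob, rng):
    """One decoupled move/collide cycle (1D cell indexing for brevity)."""
    # 1) Move phase: free flight over dt (boundary handling omitted).
    pos += vel * dt

    # 2) Sort particles into cells by position.
    cell_id = np.clip((pos[:, 0] // cell_size).astype(int), 0, n_cells - 1)

    # 3) Collision phase: random pairs within each cell, accepted
    #    probabilistically; momentum and energy conserved per pair.
    for c in range(n_cells):
        members = np.flatnonzero(cell_id == c)
        rng.shuffle(members)
        for i, j in zip(members[0::2], members[1::2]):
            if rng.random() < coll_prob:
                # Isotropic post-collision scattering in the centre-of-mass
                # frame for equal-mass particles (hard-sphere-like model).
                g = vel[i] - vel[j]
                cm = 0.5 * (vel[i] + vel[j])
                n = rng.normal(size=3)
                n /= np.linalg.norm(n)
                vel[i] = cm + 0.5 * np.linalg.norm(g) * n
                vel[j] = cm - 0.5 * np.linalg.norm(g) * n
    return pos, vel
```

In a full DSMC code the acceptance probability would depend on the pair's relative speed and cross section (e.g. via the NTC scheme), and macroscopic properties would be sampled from the particles in each cell after the collision phase.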


Fig. 6. Streamlines at t = 8527.2, 8676.8, 8826.4, and 8976 μs with different TVTS factors (100, 150, and 300).

2.2. Parallel computing with dynamic domain decomposition

The portable parallelization in PDSC is achieved by partitioning the computational domain into sub-domains of cells. Each sub-domain is distributed to a processor, and the DSMC algorithm is executed in serial for all particles and cells in the sub-domain. Parallel communication between processors is only required when particles cross processor boundaries and during dynamic load balancing. This is achieved using the Message Passing Interface (MPI) package. The communication overhead and any imbalance of computational load between processors are re-balanced by using dynamic domain decomposition (DDD) to improve the parallel efficiency during the simulation. The key points in implementing DDD are to determine when to re-balance the computational load, how to repartition the domain, and how to reorganize transferred data. A stop-at-rise (SAR) algorithm is used to calculate a degradation function, which compares the computational cost of repartitioning to the idle time of each processor, to determine when the domain requires repartitioning [20]. The multi-level graph-partitioning tool ParMeTis [21], which is available as free software from the Karypis Laboratory at the University of Minnesota, is used to decompose the computational domain based on the instantaneous particle distribution. Finally, the transferred particles, cells, and sampling-property data must be communicated between processors and re-organized before the simulation process resumes. More details are described in Ref. [15].
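A hedged sketch of the stop-at-rise criterion [20] follows: accumulate the per-step load imbalance since the last repartition and trigger a new decomposition when the average degradation (accumulated imbalance plus amortized repartition cost) first rises. All names are illustrative; the actual PDSC implementation may differ in detail.

```python
def sar_should_repartition(step_times, repartition_cost, state):
    """step_times: per-processor wall times for the current step.
    state: mutable dict carrying the SAR accumulators between calls."""
    # Idle time this step: slowest processor minus the average load.
    imbalance = max(step_times) - sum(step_times) / len(step_times)
    state["cum_imbalance"] = state.get("cum_imbalance", 0.0) + imbalance
    state["steps"] = state.get("steps", 0) + 1

    # Degradation: amortized cost per step if we repartitioned right now.
    degradation = (state["cum_imbalance"] + repartition_cost) / state["steps"]
    rising = degradation > state.get("prev", float("inf"))
    state["prev"] = degradation

    if rising:  # first rise: repartition (e.g. call ParMeTis) and reset.
        state["cum_imbalance"], state["steps"] = 0.0, 0
        state["prev"] = float("inf")
    return rising
```

The design intent is that repartitioning is deferred while its amortized cost still exceeds the idle time it would recover, and triggered as soon as further delay stops paying off.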

2.3. Unsteady sampling method

The DSMC method has been widely used for simulating steady flows, for which the sampling procedure is shown in Fig. 1(a). However, the simulation of unsteady flows still represents a challenge due to the large amount of sampling required to eliminate statistical scatter. Limited unsteady DSMC simulations have been reported in the literature due to the expensive computational and memory requirements [9]. Two unsteady sampling methods can be used for DSMC simulation, as shown in Fig. 1(b) and (c). The first method, termed "ensemble averaging", requires multiple simulation runs. During each run, the flowfield is sampled at the appropriate sampling times, and the samples from each run are averaged over the runs to provide the flowfield output. The results are very accurate; however, this method is very time-consuming because a large number of runs are required to reduce the statistical scatter to an acceptably low level, and a large amount of memory is required to record the sampling data for each simulation. The second method, termed "time averaging", has been implemented in PDSC [8].


Fig. 7. Time traces of u-velocity components for different TVTS factors at points (0.03, 0.01) (left), (0.06, 0.01) (centre), and (0.09, 0.01) (right), respectively.

Here a number of time steps are averaged over an interval just before the sampling time. This method only requires one simulation run, but it suffers a potential disadvantage in that the results will be "smeared" over the time during which samples are taken. Hence the sample time must be sufficiently short to minimize time "smearing" and yet long enough to obtain a good statistical sample. In order to increase the simulation efficiency of the unsteady time-averaging method, the temporal variable time step (TVTS) method has been developed. The TVTS scheme enables faster processing by enlarging the time step value between the sampling periods. It has been proven that the TVTS can increase the computational efficiency without compromising simulation accuracy.
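The TVTS schedule can be sketched as a simple function of the global step index: the reference time step is enlarged by the TVTS factor between sampling windows and restored for the steps that are actually sampled. The defaults below mirror the settings used in Section 3.1.1 (10,000 steps per temporal node, the last 100 sampled); the function name and interface are assumptions, not PDSC's.

```python
def tvts_dt(step, dt_ref, tvts_factor, steps_per_node=10_000,
            sample_steps=100):
    """Return the time step to use at a given global step number."""
    phase = step % steps_per_node
    if phase >= steps_per_node - sample_steps:
        return dt_ref                 # fine step inside the sampling window
    return dt_ref * tvts_factor      # coarse step between sampling windows
```

For example, with dt_ref = 7.48e-9 s and tvts_factor = 100, steps 0–9899 of each temporal node advance with the coarse step and steps 9900–9999 with the fine sampling step.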

Because significantly reducing the statistical scatter in time-averaged data necessitates a very large number of simulation particles, with consequent large computational times, a post-processor named the DSMC Rapid Ensemble Averaging Method I & II (shortened to DREAM-I & II) was also proposed to obtain more sampling data without filtering or smoothing the data artificially. DREAM repeatedly re-runs the DSMC algorithm over a short period prior to the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The approaches used in DREAM-I and -II are shown in Fig. 1(c) and (d), respectively. DREAM restarts the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or instantaneous particle data obtained from the original unsteady sampling in PDSC for strongly non-equilibrium flows (DREAM-II). The user can choose either DREAM-I or -II based on their simulation requirements. Validations of these unsteady flow techniques have been conducted by simulating shock tube flow. The simulation results were compared with Bird's DS2V code and showed that the unsteady module and DREAM in PDSC obtain excellent agreement. More details are described in Ref. [8].


Fig. 8. Time traces of v-velocity components for different TVTS factors at points (0.03, 0) (left), (0.06, 0) (centre), and (0.09, 0) (right), respectively.

2.4. Transient Adaptive Sub-cell module (TAS)

As mentioned previously, the collision cells in DSMC should be smaller than the mean free path to maintain good collision quality during the simulation. Running simulations with under-resolved sampling cells which employ sub-cells results in a reduction in the computational and memory requirements of the simulation. Previous versions of Bird's code [9] employ a fixed number of sub-cells per sampling cell; however, more recent versions of the DS2V code generate a transient grid in each sampling cell in turn during the collision routine such that there is approximately one particle per sub-cell [22]. The use of virtual sub-cells, whereby the distances between the particle selected for collision and all other particles in the cell are simply calculated and the nearest particle chosen, was adopted in Bird's latest DS2V code [22], NASA's DAC code [23], and Boyd's MONACO code [24]. Gallis et al. [25] conducted parametric studies of cell size and time step using the DSMC94 and DSMC07 codes. The DSMC94 code is the traditional algorithm using constant time steps and fixed sub-cells in the sampling cell. The DSMC07 code is a new algorithm in which the time step in each cell is calculated based on the local mean collision time; the virtual sub-cell and transient adaptive sub-cell modules are also implemented. Their studies of Fourier and Couette flows found that the error was significantly reduced when using the DSMC07 code with the virtual sub-cell module.

Fig. 2 shows the transient adaptive sub-cell module for a two-dimensional cell in PDSC [25], and it can easily be extended to three-dimensional sampling cells. The sampling cells are divided into sub-cells only during the collision routine; hence they can be considered "transient sub-cells", which have negligible computer-memory overhead. In every case, these sub-cells are strictly quadrilateral, which reduces the complexity of sub-dividing the sampling cell and greatly facilitates particle indexing. The size of the sub-cells is indirectly controlled by the user, who inputs the desired number of particles per sub-cell, P. The program then determines the dimensions of the sub-cell array based on the number of particles within the cell, Nparts.


Fig. 9. Contours of u-velocity components at different instant times with different Reynolds numbers (Re = 73, 287, 412).

For example, for a two-dimensional grid, the background cell is divided into an orthogonal int(√(Nparts/P)) × int(√(Nparts/P)) sub-cell array. During the collision routine, a particle is chosen at random from some point within the whole sampling cell. The sub-cell in which the particle lies is then determined, and the second collision partner is selected from the particles which are in the same sub-cell. If the first particle is alone within the sub-cell, then adjacent sub-cells are scanned for a possible collision partner. These sub-cell routines ensure nearest-neighbor collisions, even when under-resolved sampling cells are used, with minimal computational and memory overhead. A two-dimensional square driven-cavity flow and a benchmark two-dimensional hypersonic cylinder flow were simulated to validate the TAS module. Results showed that the TAS module enables replication of the benchmark results with significantly reduced computational cost [26].
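The selection logic above can be sketched as follows, under the simplifying assumption of a rectangular sampling cell. The sub-cell count per direction follows the int(√(Nparts/P)) rule quoted above; where the text scans adjacent sub-cells for a lone particle, this sketch falls back to the remaining particles in the cell, a deliberate simplification. Function and variable names are illustrative, not PDSC's.

```python
import numpy as np

def pick_collision_pair(xy, cell_min, cell_max, particles_per_subcell, rng):
    """xy: (N, 2) particle positions inside one rectangular sampling cell."""
    n_parts = len(xy)
    # Transient n x n sub-cell array, built only for this collision pass.
    n = max(int(np.sqrt(n_parts / particles_per_subcell)), 1)
    frac = (xy - cell_min) / (cell_max - cell_min)
    idx = np.minimum((frac * n).astype(int), n - 1)
    sub_id = idx[:, 0] * n + idx[:, 1]

    i = rng.integers(n_parts)              # first particle, chosen at random
    same = np.flatnonzero(sub_id == sub_id[i])
    same = same[same != i]                 # candidates in the same sub-cell
    if same.size == 0:
        # Alone in its sub-cell: widen the search (the paper scans adjacent
        # sub-cells; we simplify to the rest of the sampling cell).
        same = np.flatnonzero(np.arange(n_parts) != i)
    j = rng.choice(same)
    return i, j
```

Because the sub-cell indices are recomputed on the fly and discarded after the collision pass, the memory overhead is essentially the two integer arrays above, which is the "transient" property the section emphasizes.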

3. Results and discussions

3.1. Systematic study of unsteady sampling of DSMC

In the current study, a number of simulations of the flow past a 2D vertical plate were conducted to study the vortex-shedding phenomena. A systematic study, including the effects of the TVTS factor in the unsteady sampling module, particle number, and domain size, was first undertaken. The flow conditions for this test case are: VHS air, a free-stream Mach number of 0.77 (267.19 m/s), a free-stream temperature of 300 K, and Reynolds and Knudsen numbers of 126 and 0.01, respectively. The length of the vertical plate is 0.02 m, which is 50 free-stream mean free paths, and the vertical plate is simulated as a fully diffusive wall at 300 K. The initial conditions at t = 0 are assigned the same as the free-stream boundary conditions.


Fig. 10. Contours of v-velocity components at different instant times with different Reynolds numbers (Re = 73, 287, 412).

An ARA PC cluster system (12 nodes, dual-core/dual-processor per node, AMD 2.2 GHz, 16 GB RAM per node, InfiniBand networking) is used for all cases. All simulations were conducted with only 12 processors. In these simulations, the mesh is uniform and fixed due to the low variance of the mean free path over the subsonic flow field.

3.1.1. Effect of different TVTS factors

The TVTS module was developed to save the computational time associated with unsteady flow sampling by increasing the time step value between the sampling periods [8]. A sampling time step of 7.48 × 10⁻⁹ s was used, and the total number of quadrilateral cells is 125,000 (500 × 250, Δx = Δy = 2λ). Although the cell size is larger than the free-stream mean free path, we have proved that applying the transient adaptive sub-cell module (TAS) ensures adequate collision quality even when the cell size does not satisfy the requirement [26]. There are initially 100 simulation particles per cell. Each temporal node has 10,000 time steps, and the last 100 time steps are sampled, which implies the flow properties are obtained by sampling about 10,000 simulated particles. In the current simulation, three TVTS factors were used, allowing the time step to increase by a factor of 100, 150, and 300 outside the sampling region, respectively. The sketch of the vortex-shedding flow, including the sampling points for the u-velocity (S1–S3) and v-velocity (S4–S6) components, is shown in Fig. 3; the length and width of the domain are 10L and 5L, respectively. Figs. 4–6 respectively show the contours of an approximate cycle of the u- and v-velocity components and the streamlines at t = 8527.2, 8676.8, 8826.4, and 8976 μs with different TVTS factors (100, 150, and 300). Fig. 7 shows the time traces of the u-velocity component at points (0.03, 0.01), (0.06, 0.01), and (0.09, 0.01), and Fig. 8 shows similar data for the v-velocity component at points (0.03, 0), (0.06, 0), and (0.09, 0). The following conclusions can be made: (a) The domain size has a significant impact on the outcome of the simulation; this is discussed in Section 3.1.3.


(b) The computational times for TVTS = 100, 150, and 300 were about 21.6, 19.2, and 16.8 hours, respectively, indicating that a larger TVTS factor gives better computational efficiency due to less particle tracking and particle/sub-cell identification processing. However, no vortex shedding occurs with TVTS = 300 because the time step is too large to obtain reasonable collision behavior. (c) Both TVTS = 100 and 150 exhibit the oscillation phenomenon; however, the vortex shedding is more clearly shown in the results for TVTS = 100. (d) From Figs. 7 and 8, the ranges of the u- and v-velocity components are smaller when a larger value of the TVTS factor is applied. This is because the time step with the larger TVTS factor is too large, and the flow is smeared out due to improper collisions.

Fig. 11. Time traces of u-velocity components for different Reynolds numbers at points (0.03, 0.01) (left), (0.06, 0.01) (centre), and (0.09, 0.01) (right), respectively.

In addition, two dimensionless parameters are examined in this study. The Strouhal number (St) can be important when analyzing unsteady, oscillating flow problems. It is defined as St = fL/U, where f, L, and U represent the oscillation frequency, characteristic length, and flow velocity, respectively. The drag coefficient (Cd), which is used to express the drag on an object in a moving flow, is defined as Cd = 2Fd/(ρU²A), where Fd, ρ, U, and A are the drag force, flow density, flow velocity, and characteristic frontal area, respectively. For the cases of TVTS = 100 and 150 the Strouhal number is 0.167, but the Strouhal number cannot be evaluated for the TVTS = 300 case because the oscillation frequency is indistinguishable in Figs. 7 and 8. The averaged drag coefficients can be determined when the flow reaches a quasi-steady state, and the values are 1.14, 1.11, and 1.06 for the TVTS = 100, 150, and 300 cases, respectively. The results for TVTS = 100 are closer to the experimental data from [1] for a continuum fluid (St = 0.165, Cd = 1.46), although this comparison may be problematic since the working fluid of the experiment was water.
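For concreteness, the two definitions above can be evaluated directly from simulation output: the Strouhal number from the dominant peak of the spectrum of a sampled velocity trace, and the drag coefficient from the time-averaged drag force over the quasi-steady interval. This is a hedged sketch of one reasonable post-processing route, not the paper's own extraction procedure; all argument names are illustrative.

```python
import numpy as np

def strouhal(v_trace, dt_sample, L, U):
    """St = f L / U, with f taken as the dominant oscillation frequency
    of the sampled velocity trace (uniform sample spacing dt_sample)."""
    v = np.asarray(v_trace) - np.mean(v_trace)   # remove the mean flow
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=dt_sample)
    f = freqs[1:][np.argmax(spectrum[1:])]       # skip the zero-frequency bin
    return f * L / U

def drag_coefficient(Fd_trace, rho, U, A):
    """Cd = 2 Fd / (rho U^2 A), time-averaged over the quasi-steady state."""
    return 2.0 * np.mean(Fd_trace) / (rho * U**2 * A)
```

For the vertical plate, A reduces to the plate length L per unit span, and U and ρ are the free-stream velocity and density.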

3.1.2. Effect of different particle numbers per cell

Another parametric study conducted here is the influence of the number of simulated particles per cell on computational expense and simulation outcome. Simulations using an initial (average) number of 50, 100, and 200 simulated particles per cell were conducted.


Fig. 12. Time traces of v-velocity components for different Reynolds numbers at points (0.03, 0) (left), (0.06, 0) (centre), and (0.09, 0) (right), respectively.

The computational time in DSMC is proportional to the total number of simulated particles, and each simulation took about 21.6, 40.8, and 81.6 hours, respectively. As mentioned previously, a TVTS factor of 100 is used, and all simulations demonstrate vortex shedding. The increased number of simulation particles leads to decreased statistical scatter, as shown in the contours of velocity components, streamlines, and time traces of velocity components at different locations. The Strouhal numbers are computed to be 0.17, 0.167, and 0.173, respectively, while the averaged drag coefficients are 0.79, 1.14, and 1.14, respectively. As can be seen, variation of the average number of simulation particles per cell has negligible influence on the Strouhal number. However, the average drag coefficient computed using fewer simulated particles is much smaller than the value obtained using more simulated particles.

3.1.3. Effect of different domain sizes

The effect of the domain size on the simulation outcome is also investigated in this study. The simulation domains have dimensions of 10L × 3L, 10L × 5L, and 10L × 7L (using the same cell size), requiring 31.2, 50.4, and 69.6 hours to complete, respectively. All of these simulations exhibit clear vortex shedding; however, the Strouhal number (0.186, 0.164, and 0.164) and the averaged drag coefficient (0.93, 1.13, and 1.2) show that the 10L × 3L domain is too small to obtain similar results. The reason is that the upper and bottom boundaries of the small domain are too close for the assumption of thermal equilibrium at the domain boundaries to be correct, and there is also possible wave reflection from the boundary. Although the average drag coefficients for the 10L × 5L and 10L × 7L domains are not exactly the same, the differences are insignificant, and the domain size 10L × 5L is selected as the best compromise between computational cost and accuracy for the current test conditions.


Fig. 13. The stagnation point for flow past a vertical plate as a function of the normalized time.

3.2. Effect of different Reynolds numbers

After a series of parametric studies described above, simulations with four different Reynolds numbers were conducted. The average number of simulation particles per cell was set to 100, and a 10L × 5L simulation domain was employed. The length of the plate (0.02 m) and the free-stream Mach number (0.77) were identical to those employed in the previous simulations. Four different Reynolds numbers, Re = 73, 126, 287, and 412, with corresponding Knudsen numbers of 0.017, 0.01, 0.0044, and 0.0031, were simulated. Due to limited computational resources, the cell sizes are set to one, two, two, and three mean free paths for the cases of Re = 73, 126, 287, and 412, respectively. The computational time is approximately proportional to the total number of simulation particles, with each simulation taking 53.5, 129.5, 360.5, and 504.5 hours, respectively. Figs. 9 and 10 illustrate the contours of the u- and v-velocity components for different Reynolds numbers at different simulation times. The results from the simulations using Re = 126 have been shown in Figs. 4 and 5, with results for Re = 73, 287, and 412 shown in Figs. 9 and 10 for reference. Each case demonstrates different vortex patterns, while the case with Re = 73 exhibits no sign of vortex shedding for the domain size employed.

Figs. 11 and 12 present time-varying quantities of the u- and v-velocity components at different points with varying Reynolds numbers. The following observations can be made: (a) The profile of the Re = 73 case is too scattered to measure the flow oscillation frequency, making computation of the Strouhal number impossible. (b) For the other three cases with higher Re numbers, the flow velocity behind the vertical plate is initially low, then increases to a quasi-steady value; the developing times for the periodic vortex shedding are reduced with increasing Reynolds number. (c) The magnitudes of the v-velocity component in Fig. 12 increase with increasing Reynolds number. (d) Both the Strouhal numbers (N/A, 0.174, 0.188, and 0.21) and the averaged drag coefficients (1.05, 1.14, 1.35, and 1.4) increase for the Re = 73, 126, 287, and 412 cases, respectively.

Fig. 13 presents the evolution of the stagnation point for different Reynolds numbers, with Taneda's experimental data for continuum flow [27] included for comparison. The stagnation distance and time instant are normalized by the length of the plate (L) and the flow transit time (L/U), respectively. The stagnation point moves further downstream with time. This comparison is cautiously introduced with the understanding that the physics of rarefied and continuum flows are not identical. The comparison between these simulations and experiments may not be appropriate because of the continuum nature of the experiments, which cannot reproduce the rarefied behavior demonstrated across the various Knudsen numbers. However, the basic trends demonstrated in the experiments can nevertheless be seen in the simulations of rarefied flows.

4. Conclusions

The unsteady simulation of vortex shedding behind a flat vertical plate has been performed for various degrees of rarefaction and simulation conditions. A parallel DSMC code (PDSC), combined with a novel unsteady sampling method and the transient adaptive sub-cell technique, was used to predict vortex shedding and drag coefficients. A systematic study of the effects of the TVTS factor, the number of particles per cell, and the domain size was presented. From these simulations, a TVTS factor of 100 is proposed to save computational time, and a domain of 10L × 5L with one hundred simulated particles per cell on average was found to be sufficient to obtain accurate results in subsequent simulations. A study of the influence of varying Reynolds numbers was conducted, showing that the flow at Re = 73 exhibits a steady vortex structure while cases with higher Reynolds numbers have periodic vortex shedding. The Strouhal number and the averaged drag coefficient were calculated and compared with experimental data in the continuum regime.

Acknowledgements

The authors would like to extend their sincere thanks to Dr. Matthew Smith at the National Center for High-Performance Computing (NCHC) for additional assistance with the manuscript preparation. The computing resources provided by NCHC are also highly appreciated.

References

[1] A. Roshko, Technical Note 3169, Washington, 1954.
[2] A.E. Perry, M.S. Chong, T.T. Lim, J. Fluid Mech. 116 (1982) 77.
[3] E. Meiburg, Phys. Fluids 29 (10) (1986) 3107.
[4] G.A. Bird, Phys. Fluids 30 (2) (1987) 364.
[5] K. Koura, Phys. Fluids 2 (2) (1990) 209.
[6] G.A. Bird, in: Proceedings of the AIAA/ASME Joint Thermophysics and Heat Transfer Conference, Albuquerque, NM, USA, 1998.
[7] J. Talbot-Stern, D.J. Auld, in: Proceedings of the AIAA/ASME Joint Thermophysics and Heat Transfer Conference, Albuquerque, NM, USA, 1998.
[8] H.M. Cave, K.C. Tseng, J.S. Wu, M.C. Jermy, J.C. Huang, S.P. Krumdieck, J. Comput. Phys. 227 (12) (2008) 6249.
[9] G.A. Bird, Molecular Gas Dynamics and the Direct Simulation of Gas Flows, Clarendon Press, Oxford, 1994.
[10] G.A. Bird, in: Proceedings of the 12th International Symposium on Rarefied Gas Dynamics, Charlottesville, VA, USA, 1980, p. 239.
[11] K. Koura, H. Matsumoto, Phys. Fluids 3 (1991) 2459.
[12] K. Nanbu, in: Proceedings of the 15th International Symposium on Rarefied Gas Dynamics, Grado, Italy, 1986, p. 369.
[13] W. Wagner, J. Stat. Phys. 66 (1992) 1011.
[14] J.S. Wu, K.C. Tseng, F.Y. Wu, Comput. Phys. Comm. 162 (3) (2004) 166.
[15] J.S. Wu, K.C. Tseng, Internat. J. Numer. Methods Engrg. 63 (1) (2005) 37.
[16] J.S. Wu, S.Y. Chou, U.M. Lee, Y.L. Shao, Y.Y. Lian, J. Fluid Eng. 127 (6) (2005) 1161.
[17] J.S. Wu, Y.Y. Lian, G. Cheng, R.P. Koomullil, K.C. Tseng, J. Comput. Phys. 219 (2) (2006) 579.
[18] J.S. Wu, W.J. Hsiao, Y.Y. Lian, K.C. Tseng, Internat. J. Numer. Methods 43 (2003) 93.
[19] K.C. Tseng, J.S. Wu, I. Boyd, in: Proceedings of the 14th AIAA/AHI Space Planes and Hypersonic Systems and Technologies Conference, Canberra, Australia, 2006.
[20] D.M. Nicol, J.H. Saltz, IEEE Trans. Comput. 37 (9) (1988) 1073.
[21] G. Karypis, et al., ParMeTis, Parallel graph partitioning and sparse matrix ordering library, Version 2.0, University of Minnesota, Department of Computer Science/Army HPC Research Center, Minneapolis, MN 55455, http://glaros.dtc.umn.edu/gkhome/metis/parmetis/changes.
[22] G.A. Bird, in: Proceedings of the 25th International Symposium on Rarefied Gas Dynamics, St. Petersburg, Russia, 2006.
[23] G.J. LeBeau, Comput. Methods Appl. Mech. Engrg. 174 (3–4) (1999) 319.
[24] S. Dietrich, I. Boyd, J. Comput. Phys. 126 (2) (1996) 328.
[25] M.A. Gallis, J.R. Torczynski, D.J. Rader, G.A. Bird, in: Proceedings of the 40th Thermophysics Conference, Seattle, WA, USA, 2008.
[26] C.C. Su, K.C. Tseng, H.M. Cave, T.C. Kuo, M.C. Jermy, S.P. Krumdieck, J.S. Wu, Comput. & Fluids 39 (7) (2010) 1136.
