
Chapter 3 Benchmark tests and verifications

3.1 Supersonic Nitrogen Flow over a Two-Dimensional Wedge

3.1.1 Flow and Simulation Conditions

Flow Conditions

A supersonic flow past a 2-D wedge with a length of 60.69 mm, the same as that in Wang et al. [2002], is chosen as the test case to validate the present coupled DSMC-NS method. An equivalent quasi-2-D DSMC simulation is performed with the PDSC code by imposing Neumann boundary conditions in the span-wise direction (z-coordinate), which is normal to the 2-D wedge. Tests show that 3-4 cells in the z-direction are generally enough to mimic the 2-D flow. The numerical results of the 2-D DSMC simulation are taken as the benchmark values for the validation of the proposed DSMC-NS method. In addition, the quasi-2-D simulation requires fewer computational resources than a full 3-D simulation. Extension to 3-D flows is straightforward, since both PDSC and HYB3D are three-dimensional codes.

A sketch of the current benchmark test is shown in Fig. 3.1. The free-stream conditions for this test case are: gaseous nitrogen as the working fluid, a Mach number (M) of 4, a velocity (U) of 1111.1 m/s, a density (ρ) of 6.545E-4 kg/m³ and a temperature (T) of 185.6 K. The wedge has a wall temperature (Tw) of 293.3 K and a length of 60.69 mm. The Knudsen number based on the length of the wedge and the free-stream conditions is 0.0017.
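As a rough consistency check, the quoted Knudsen number can be reproduced from the free-stream conditions with a VHS mean free path. The nitrogen reference constants below are standard textbook values (assumptions on our part, not taken from the thesis), so the result is only expected to match the quoted 0.0017 in order of magnitude.

```python
import math

# Assumed VHS parameters for N2 (standard Bird-style values, NOT from the thesis)
D_REF = 4.17e-10          # VHS reference diameter at T_ref [m]
T_REF = 273.0             # reference temperature [K]
OMEGA = 0.74              # VHS viscosity-temperature exponent for N2
M_N2 = 28.0 * 1.6605e-27  # molecular mass of N2 [kg]

def vhs_mean_free_path(rho, T):
    """VHS mean free path based on the reference diameter and temperature."""
    n = rho / M_N2  # number density [1/m^3]
    return 1.0 / (math.sqrt(2.0) * math.pi * D_REF**2 * n
                  * (T_REF / T) ** (OMEGA - 0.5))

# Free-stream conditions and wedge length of the benchmark case
rho_inf, T_inf, L_wedge = 6.545e-4, 185.6, 60.69e-3
lam = vhs_mean_free_path(rho_inf, T_inf)
Kn = lam / L_wedge
print(f"lambda = {lam:.3e} m, Kn = {Kn:.4f}")
```

With these assumed constants the sketch gives Kn on the order of 1E-3, consistent with the quoted value of 0.0017 given the uncertainty in the reference diameter.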

Simulation Conditions

In the pure DSMC simulation and the DSMC part of the coupled method, the variable hard sphere (VHS) model [Bird, 1994] is used to simulate molecular collisions. A constant rotational collision number of 5 [Bird, 1994] is used in the Borgnakke-Larsen model [Borgnakke and Larsen, 1970] for simulating energy exchange between the translational and rotational degrees of freedom. In the NS simulation (pure CFD or the NS part of the coupled method), a CFL number of 100 and a threshold parameter of 0.5 in the limiter function proposed by Venkatakrishnan [1995] are used throughout this study, unless otherwise specified. The same mesh (180,000 hexahedra) is used for the three numerical approaches (pure DSMC, pure CFD, and the coupled method) detailed in the current study. The pure DSMC simulation using the PDSC code is taken as the benchmark result for comparison hereafter in the current test case, since it intrinsically solves the Boltzmann equation, which governs gas flows in all regimes.

Four sets of simulation conditions are tested in the current study, as shown in Table 3.1. Among these, Set 1 is taken as the baseline case for subsequent discussion, unless otherwise specified. The parametric details of the different numerical approaches used in the current study are presented below.
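The effect of the constant rotational collision number can be sketched with the Jeans-type relaxation it implies: on average, the rotational temperature closes 1/Z_rot of its gap to equilibrium per collision. This is a minimal illustration of the role of Z_rot = 5, not the actual per-collision energy redistribution performed inside PDSC; the temperature values are purely illustrative.

```python
# Minimal sketch of rotational relaxation implied by a constant rotational
# collision number Z_rot = 5. Per the Jeans relaxation picture, each collision
# closes a fraction 1/Z_rot of the gap between rotational and equilibrium
# temperature. Values are illustrative, not taken from the thesis.
Z_ROT = 5.0

def relax(T_rot, T_eq, n_collisions):
    """Advance the rotational temperature over n_collisions collisions."""
    for _ in range(n_collisions):
        T_rot += (T_eq - T_rot) / Z_ROT
    return T_rot

# After ~25 collisions per molecule, less than 0.5% of the initial gap remains
T_final = relax(100.0, 300.0, 25)
print(round(T_final, 1))
```

The geometric decay (1 - 1/Z_rot)^N of the temperature gap shows why a moderate Z_rot still yields fast rotational equilibration over the many collisions each molecule experiences.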

The simulations are performed on a PC-cluster system, termed “Cahaba”, at the University of Alabama at Birmingham. This system is configured in a master-slave network architecture with the following features: 64 dual-processor nodes with 2.4 GHz Xeon processors, at least 2 GB of RAM per node, and Gigabit Ethernet networking. The proposed coupled DSMC-NS method is expected to be highly portable across parallel machines, provided they are distributed-memory systems using MPI as the communication protocol. A total of 32 processors is used in this benchmark test unless otherwise specified.

Pure DSMC simulation and DSMC simulation in the coupled method

Note that the numbers appearing in parentheses in the following description represent the values corresponding to the PDSC simulation in the coupled method. Approximately 3.1 million (0.7 million) particles are used for the pure DSMC simulation. The number of computational cells used by PDSC in the coupled method is in the range of 64,000~85,000 (Table 3.2), which is about 1/3~1/2 of the 180,000 cells used for the pure DSMC method. A number of sampling time steps of 35,000 (10,000) and a corresponding number of transient time steps of 30,000 (15,000) are employed to make sure the transient period does not affect the sampled result. The total number of sampling time steps for the pure DSMC simulation is much larger than that of the DSMC simulation in the coupled method at each iteration, to ensure low statistical uncertainty in the pure DSMC simulation. A reference (smallest) time step of 8.71E-9 seconds is used for both DSMC simulations with the variable time-step approach. The number of particles per cell is generally kept greater than 10 throughout the DSMC simulation domain. The resulting total computational times for the pure DSMC method and the DSMC simulation in the coupled method are approximately 16.3 hours and 12.2 hours (for 10 coupled iterations), respectively. The related timing data are also shown in Table 3.3 for reference.
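The rationale for the longer sampling period of the pure DSMC run can be sketched with a crude scatter estimate. Assuming (optimistically) independent samples, the fractional statistical scatter of a sampled mean falls as the inverse square root of the total sample count; correlation between successive time steps makes the true scatter somewhat larger, so this is an illustration rather than an error bound.

```python
import math

# Crude statistical-scatter estimate, ASSUMING independent samples:
# fractional scatter ~ 1 / sqrt(particles_per_cell * sampling_steps).
def fractional_scatter(particles_per_cell, sampling_steps):
    return 1.0 / math.sqrt(particles_per_cell * sampling_steps)

# Values from the text: >10 particles per cell; 35,000 sampling steps for
# the pure DSMC run vs. 10,000 per coupled iteration.
pure = fractional_scatter(10, 35_000)
coupled = fractional_scatter(10, 10_000)
print(f"pure DSMC: {pure:.2e}, coupled-method DSMC: {coupled:.2e}")
```

Under this assumption the pure DSMC benchmark carries noticeably lower scatter per cell than each coupled-iteration sample, which is the stated reason for its larger sampling-step count.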

Pure NS simulation and NS simulation in the coupled method

In both NS simulations, an implicit scheme with local time stepping and a CFL number of 100 is used for the time iterations. Iteration numbers of 7,000 and 2,000 are used for the pure NS simulation and the NS simulation in the coupled method, respectively. In addition, grid convergence of the NS code is demonstrated in Fig. 3.2, where the simulated data obtained with fewer cells (120,000) are essentially the same as those obtained with the 180,000 cells used in the verification of the coupled method with the quasi-2-D wedge flow. A mesh of 180,000 cells is therefore used throughout the study, unless otherwise specified. The number of computational cells for HYB3D in the coupled method is about 1/2~2/3 of the total number of cells used for the pure NS simulation. The total HYB3D computational time is approximately 2.8 hours and 9.2 hours (for 10 coupling iterations) for the pure NS simulation and the NS simulation in the coupled method, respectively. Furthermore, the total computational time of the coupled method is about 24.2 hours (Table 3.3). For the other sets of simulations, the total computational time of the coupled method is within ±20% of that of the Set 1 simulation.
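Local time stepping with a large CFL number can be sketched as follows: each cell advances with its own time step set by the CFL condition, which accelerates convergence to the steady state since small near-wall cells no longer throttle the whole domain. The cell sizes below are illustrative placeholders; the sound speed follows from the quoted free stream (γ = 1.4, R = 296.8 J/kg·K for N2 at 185.6 K gives roughly 278 m/s, consistent with M = 4 at 1111.1 m/s).

```python
# Sketch of CFL-based local time stepping for an implicit steady-state solver.
# Each cell gets its own dt; cell sizes here are illustrative placeholders.
CFL = 100.0

def local_dt(dx, u, c):
    """Local time step from the 1-D CFL condition: dt = CFL * dx / (|u| + c)."""
    return CFL * dx / (abs(u) + c)

U_INF = 1111.1   # free-stream velocity [m/s] (from the text)
C_INF = 277.7    # free-stream sound speed [m/s] (computed, gamma=1.4, R=296.8)

dt_wall = local_dt(1e-5, U_INF, C_INF)      # small boundary-layer cell
dt_far = local_dt(1e-3, U_INF, C_INF)       # larger far-field cell
print(f"near-wall dt = {dt_wall:.2e} s, far-field dt = {dt_far:.2e} s")
```

Because each cell marches at its own maximum stable rate, the pseudo-time iteration count to reach steady state is governed by the flow physics rather than the smallest cell in the mesh.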

Distribution of Breakdown Parameters

The distributions of the breakdown parameters for test case Set 1 (the initial values of Knmax, and Knmax and PTne at the end of the 15th coupled iteration) along the direction normal to the wedge surface at x = 0.5, 5 and 50 mm are illustrated in Figs. 3.3a-3.3c, respectively. As discussed before, only the DSMC method is able to produce PTne; hence, there is no corresponding distribution of the thermal non-equilibrium indicator after the initial HYB3D simulation. Two horizontal lines showing the threshold values (0.02 for Knmax and 0.03 for PTne) mark the borderlines beyond which the continuum assumption breaks down or thermal non-equilibrium exists, such that the NS solver cannot be used. The general trend of the Knmax distribution along the normal direction from the wedge surface shows that the value is rather large (up to 0.4 or more) near the surface (x = 0.5, 5 mm) due to the large property gradients in the viscous boundary layer; it then decreases to a much smaller value in the region between the boundary layer and the oblique shock, and finally becomes large again across the oblique shock (slightly larger than 0.2 or less). As for the PTne distribution, high values (up to 0.4) are found only across the oblique shock at all surface locations. Noticeably, a comparably broader region is flagged by PTne than by Knmax in Figs. 3.3a-3.3b, which justifies the use of the thermal non-equilibrium indicator, PTne, in the current study. In addition, the maximum value of Knmax across the oblique shock decreases slightly with increasing distance from the leading edge.

This is understandable, since the property gradients near the leading edge are very large. Another important finding is that the initial distribution of Knmax, computed from the initial HYB3D simulation, differs to a great extent from that after the final (15th) coupling iteration. This shows that a “one-shot” CFD simulation, which provides the Dirichlet-type boundary conditions for the DSMC simulation, is problematic for an accurate simulation. Also, the maximum values of Knmax predicted by the latest PDSC simulation are generally lower than those predicted by the initial HYB3D simulation, except in the leading-edge region. Based on the distribution of breakdown parameters calculated at each iteration step, the threshold values of the breakdown parameters, and the concept of overlapping regions mentioned earlier, we can thus properly determine the computational domains for the DSMC and NS solvers, respectively.
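The domain-assignment rule described above can be sketched as follows. A cell is handed to the DSMC solver whenever either breakdown parameter exceeds its threshold; otherwise the NS solver may handle it. The gradient-length Knudsen number follows the familiar λ|∇Q|/Q form over density, temperature and speed; the exact PTne definition is given elsewhere in the thesis, so the form below (normalized translational-rotational temperature difference) and all field values are assumptions for illustration only.

```python
# Hedged sketch of the breakdown-based domain assignment. Thresholds are the
# ones quoted in the text; the PTne form and the sample field values are
# illustrative assumptions, not the thesis's exact definitions.
KN_MAX_THRESHOLD = 0.02   # continuum-breakdown threshold (from the text)
P_TNE_THRESHOLD = 0.03    # thermal non-equilibrium threshold (from the text)

def kn_gl(lam, q, grad_q):
    """Gradient-length local Knudsen number: Kn_Q = lambda * |grad Q| / Q."""
    return lam * abs(grad_q) / abs(q)

def p_tne(T_trans, T_rot):
    """ASSUMED indicator form: normalized trans-rot temperature difference."""
    return abs(T_trans - T_rot) / T_trans

def assign_to_dsmc(lam, fields, T_trans, T_rot):
    """fields: iterable of (Q, grad_Q) pairs for density, temperature, speed."""
    kn_max = max(kn_gl(lam, q, g) for q, g in fields)
    return kn_max > KN_MAX_THRESHOLD or p_tne(T_trans, T_rot) > P_TNE_THRESHOLD

# Illustrative cell inside the oblique shock: steep density gradient and
# trans-rot temperature lag -> assigned to the DSMC solver.
flag = assign_to_dsmc(8.4e-5, [(6.5e-4, 2.0), (300.0, 5.0e4)], 400.0, 320.0)
print(flag)
```

In the coupled method this classification, together with the overlapping-region buffer, is what redraws the DSMC and NS subdomains at every coupling iteration.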