3.12. GASFLOW Parallelization

3.12.1. Approach

GASFLOW-MPI is the parallel version of GASFLOW, based on the Message Passing Interface (MPI) paradigm and domain decomposition. It employs the data structures, parallel linear solvers and preconditioners of the Portable, Extensible Toolkit for Scientific Computing (PETSc).

PETSc is one of the most widely used software libraries for high-performance computational science. It provides the numerical infrastructure for application codes that involve the implicit numerical solution of partial differential equations. PETSc offers distributed data structures, such as index sets, distributed vectors and distributed matrices in several sparse storage formats, as its fundamental objects. Krylov subspace methods, preconditioners and Newton-like nonlinear methods are implemented in a data-structure-neutral manner, which provides a uniform interface for application programmers. The portability of PETSc is achieved through MPI, while the detailed message passing required to coordinate the computations is handled inside the PETSc library.

The serial version of GASFLOW is written in Fortran 90 and comprises, in version 3.5, more than 120,000 lines of code and 634 subroutines. The ICE'd-ALE solution methodology incorporated in GASFLOW requires the solution of an elliptic pressure equation for the efficient calculation of flows at all speeds. The discretization of this elliptic equation yields a large, sparse, symmetric linear equation system. The preconditioning algorithm of the serial version relies on a recursive numerical methodology that makes heavy use of indirect addressing, which may reduce computational efficiency and is not well suited to parallelization. Therefore, all routines related to the linear solver and preconditioner in the serial version were replaced by the parallel linear solvers and preconditioners of the PETSc library. In GASFLOW-MPI, the sparse symmetric system derived from the discretization of the elliptic pressure equation is solved with PETSc; the combination of the conjugate gradient (CG) linear solver and the Block Jacobi (BJACOBI) preconditioner was selected as the default for the current version.
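
For readers unfamiliar with PETSc, the sketch below illustrates what selecting the CG solver with a Block Jacobi preconditioner looks like through PETSc's Fortran interface. It is not GASFLOW source code: the small tridiagonal test matrix merely stands in for the discretized pressure equation, and the program name and problem size are arbitrary.

program cg_bjacobi_sketch
#include <petsc/finclude/petscksp.h>
  use petscksp
  implicit none

  Mat            A
  Vec            x, b
  KSP            ksp
  PC             pc
  PetscErrorCode ierr
  PetscInt       i, n, istart, iend
  PetscScalar    v

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)

  ! Assemble a small symmetric, diagonally dominant test matrix
  ! (a stand-in for the discretized elliptic pressure equation).
  n = 100
  call MatCreate(PETSC_COMM_WORLD, A, ierr)
  call MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n, ierr)
  call MatSetFromOptions(A, ierr)
  call MatSetUp(A, ierr)
  call MatGetOwnershipRange(A, istart, iend, ierr)
  do i = istart, iend - 1
     v = 4.0
     call MatSetValue(A, i, i, v, INSERT_VALUES, ierr)
     if (i > 0) then
        v = -1.0
        call MatSetValue(A, i, i - 1, v, INSERT_VALUES, ierr)
     end if
     if (i < n - 1) then
        v = -1.0
        call MatSetValue(A, i, i + 1, v, INSERT_VALUES, ierr)
     end if
  end do
  call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
  call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)

  ! Right-hand side and solution vectors matching the matrix layout.
  call MatCreateVecs(A, x, b, ierr)
  v = 1.0
  call VecSet(b, v, ierr)

  ! Conjugate gradient (CG) Krylov solver with Block Jacobi (BJACOBI)
  ! preconditioner, the default combination reported for GASFLOW-MPI.
  call KSPCreate(PETSC_COMM_WORLD, ksp, ierr)
  call KSPSetOperators(ksp, A, A, ierr)
  call KSPSetType(ksp, KSPCG, ierr)
  call KSPGetPC(ksp, pc, ierr)
  call PCSetType(pc, PCBJACOBI, ierr)
  call KSPSetFromOptions(ksp, ierr)
  call KSPSolve(ksp, b, x, ierr)

  call KSPDestroy(ksp, ierr)
  call VecDestroy(x, ierr)
  call VecDestroy(b, ierr)
  call MatDestroy(A, ierr)
  call PetscFinalize(ierr)
end program cg_bjacobi_sketch

Because KSPSetFromOptions is called, the Krylov method and preconditioner can still be overridden at run time with the standard PETSc options -ksp_type and -pc_type.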

3.12.2. Domain decomposition

In principle, the user does not need to decompose the computational domain manually in the ingf file. By default, the computational domain is automatically decomposed in an optimized way by GASFLOW-MPI. Nevertheless, an option is available for advanced users to control the domain decomposition manually. Most users can simply use the same input deck as for the GASFLOW serial version.

The relevant input variable is autodecomp. By default, autodecomp = 1, which means the domain decomposition is controlled automatically by GASFLOW-MPI. Only when it is really necessary, for example for debugging purposes, should the user set autodecomp = 0 to control the domain decomposition manually. In that case nxprocs, nyprocs and nzprocs specify the number of processes along the x, y and z axes, respectively. Please note that nxprocs * nyprocs * nzprocs must be equal to the total number of processes allocated to the parallel computation.

Warning: autodecomp = 0 means that the user controls the domain decomposition manually. With this option, the best performance is not guaranteed. It is highly recommended to keep the default value autodecomp = 1.
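
As an illustration, a manual decomposition for a run on 32 processes might be specified with the variables introduced above. This is only a sketch: the exact placement and formatting of these variables within the ingf input deck should be taken from the input description in this guide.

  autodecomp = 0,
  nxprocs    = 4,
  nyprocs    = 4,
  nzprocs    = 2,

Here 4 * 4 * 2 = 32 sub-domains are created, which must match the total number of MPI processes requested (e.g. mpirun -np 32 xgfmpi).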

3.12.3. Obtaining decent parallel efficiency

GASFLOW-MPI can run on any kind of parallel system that supports MPI. In order to achieve the best parallel performance, the user needs:

  1. A fast, low-latency interconnect between computational nodes;

  2. High per-core memory performance. Each core needs its own memory bandwidth of roughly 2 Gigabytes/second or more, because the speed of sparse matrix computations is almost entirely determined by the speed of memory access, not the speed of the CPU. The number of floating-point instructions submitted to the CPU is significantly smaller than the number of memory references that must be resolved to obtain the data, which means that the matrix-vector multiply kernel is memory bound;

  3. A computational domain decomposed such that each sub-domain contains no fewer than approximately 10,000 to 20,000 cells, so that the computational workload of each process outweighs the communication overhead. For example, for a small problem with 640,000 cells, using 64 processes usually gives a good speed-up (roughly 10,000 cells per process, as illustrated in the sketch below); using more processes may actually decrease the performance because the communication effort increases.
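
A quick way to check this rule of thumb from the command line (a trivial sketch; the grid dimensions are made up to reproduce the 640,000-cell example above):

CELLS=$(( 100 * 80 * 80 ))                         # 640,000 cells in total
NPROCS=64
echo "cells per process: $(( CELLS / NPROCS ))"    # 10,000, the lower end of the recommended range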

3.12.4. Running GASFLOW-MPI

We will demonstrate how to run GASFLOW-MPI in parallel on distributed processors. Domain decomposition is used as the method of parallel computing: the geometry and the associated field variables are broken into sub-domains and allocated to separate processors for solution. Parallel runs use the open-source Open MPI implementation of the MPI standard. GASFLOW-MPI has been designed to be compatible with the input and output of the GASFLOW serial version: it reads the same input file, ingf, and exports the same calculation results in NetCDF format as the serial version. Therefore, users can run GASFLOW-MPI in the same way as the GASFLOW serial version without needing to know the details of the parallelization.

GASFLOW-MPI can be run on a local multiprocessor machine very simply, but when running on machines across a network, a file must be created that contains the host names of the machines. The file can be given any name and located at any path. In the following description we refer to such a file by the generic name, including its full path, <machines>.
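
With Open MPI, such a hostfile simply lists one machine per line, optionally with the number of processes (slots) each machine may run; the host names below are placeholders:

node01 slots=16
node02 slots=16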

An application is run in parallel using mpirun.

mpirun --hostfile <machines> -np <nProcs> xgfmpi

xgfmpi denotes the executable of GASFLOW-MPI, and -np specifies the number of processes to be used for the parallel computation. For example, if you have the hostfile hostgf and want to run xgfmpi with 32 processes:

mpirun --hostfile hostgf -np 32 xgfmpi
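
On a single local multiprocessor machine, the hostfile can be omitted and the processes are started on the local node, for instance (assuming the workstation has at least 8 cores):

mpirun -np 8 xgfmpi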
