
Commit

updating docs
asalmgren committed Nov 20, 2024
1 parent b0a3bd7 commit 1f012e9
Showing 7 changed files with 56 additions and 4 deletions.
28 changes: 28 additions & 0 deletions Docs/sphinx_doc/AMReX.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@

.. role:: cpp(code)
   :language: c++

.. _subsec:AMReX:

AMReX
==============

ERF is built on AMReX, a C++-based software framework that supports the development of structured mesh algorithms for solving systems of partial differential equations, with options for adaptive mesh refinement, on machines from laptops to exascale architectures. AMReX was developed in the U.S. Department of Energy’s Exascale Computing Project and is now a member project of the High Performance Software Foundation under the umbrella of the Linux Foundation.

AMReX uses an MPI+X model of hierarchical parallelism where blocks of data are distributed across MPI ranks (typically across multiple nodes). Fine-grained parallelism at the node level (X) is achieved using
OpenMP for CPU-only machines, or CUDA, HIP or SYCL for NVIDIA, AMD or Intel GPUs, respectively. AMReX provides extensive support for kernel launching on GPU accelerators (using ParallelFor looping constructs and C++ lambda functions) and the effective use of various memory types, including managed, device, and pinned. Architecture-specific aspects of the software for GPUs are highly localized within the code, and essentially hidden from the application developer or user.
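To illustrate the lambda-based kernel-launch pattern described above, here is a minimal plain-C++ stand-in. This is not the actual ``amrex::ParallelFor`` API (which dispatches to OpenMP, CUDA, HIP, or SYCL depending on the build); it only sketches the idea of passing a per-index kernel as a C++ lambda:

```cpp
#include <cstddef>
#include <vector>

// ParallelFor-style helper: apply a user-supplied lambda to each index.
// Serial stand-in; a GPU build would launch this body as a device kernel.
template <typename F>
void parallel_for(std::size_t n, F&& body) {
    for (std::size_t i = 0; i < n; ++i) {
        body(i);
    }
}

// Example kernel: axpy, y[i] += a * x[i], written as a lambda capture.
void axpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    parallel_for(y.size(), [&](std::size_t i) { y[i] += a * x[i]; });
}
```

Because the architecture-specific dispatch lives inside the looping construct, the kernel body itself stays identical across CPU and GPU builds, which is what keeps the architecture-specific code localized.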

In addition to portability across architectures, AMReX provides data structures and iterators that define, allocate and efficiently operate on distributed multi-dimensional arrays.
Data is defined on disjoint logically rectangular regions of the domain known as patches (or grids or boxes); we note that unlike WRF, AMReX (and therefore ERF) does not require one patch per MPI rank, thus allowing much more general domain decomposition. Common operations, such as parallel communication and reduction operations, as well as interpolation and averaging operators between levels of refinement, are provided by the AMReX framework.
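A toy sketch of the decomposition idea (illustrative only, not the AMReX data structures): chop an index range into fixed-size boxes and assign them round-robin to ranks, which shows why the number of patches need not equal the number of MPI ranks.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical 1D analogue of AMReX's box decomposition: split ncells
// into boxes of at most max_box_size cells and distribute them
// round-robin across nranks MPI ranks.
struct Box { int lo, hi, rank; };

std::vector<Box> decompose(int ncells, int max_box_size, int nranks) {
    std::vector<Box> boxes;
    for (int lo = 0; lo < ncells; lo += max_box_size) {
        int hi = std::min(lo + max_box_size, ncells) - 1;
        boxes.push_back({lo, hi, static_cast<int>(boxes.size()) % nranks});
    }
    return boxes;
}
```

With 10 cells, a maximum box size of 4, and 2 ranks, this yields three boxes on two ranks; real AMReX decompositions are 3D and support more sophisticated distribution strategies for load balancing.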

Finally, ERF currently leverages, or plans to leverage, AMReX capabilities for effective load balancing, adaptive mesh refinement, memory management, asynchronous I/O, Lagrangian particles with particle-mesh interactions, and linear solvers.

We note that ERF supports both CMake and GNU Make build systems.


Linear Solvers
==============

Evolving the anelastic equation set requires solution of a Poisson equation in which we solve for the update to the perturbational pressure at cell centers. AMReX provides several solver options: geometric multigrid, Fast Fourier Transforms (FFTs) and preconditioned GMRES. For simulations with no terrain or grid stretching, one of the FFT options is generally the fastest solver, followed by multigrid. We note that the multigrid solver has the option to ``ignore`` a coordinate direction if the domain is only one cell wide in that direction; this allows for efficient solution of effectively 2D problems. Multigrid can also be used when the union of grids at a level is not in itself rectangular; the FFT solvers do not work in that general case.
For simulations using grid stretching in the vertical but flat terrain, we must use the hybrid FFT solver in which we perform 2D transforms only in the lateral directions and couple the solution in the vertical direction with a tridiagonal solve. In both these cases we use a 7-point stencil. To solve the Poisson equation on terrain-fitted coordinates with general terrain, we rely on the preconditioned GMRES solver since the stencil effectively has variable coefficients and requires 19 points.
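The vertical coupling in the hybrid FFT solver amounts to one tridiagonal solve per lateral wavenumber. A minimal Thomas-algorithm sketch of such a solve (illustrative, under the assumption of a standard tridiagonal system; not ERF's actual implementation):

```cpp
#include <vector>

// Thomas algorithm: solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
// for a single column.  Stand-in for the per-column vertical solve used
// after the 2D lateral transforms; O(n) work per column.
std::vector<double> thomas(std::vector<double> a, std::vector<double> b,
                           std::vector<double> c, std::vector<double> d) {
    const int n = static_cast<int>(b.size());
    // Forward elimination: zero the sub-diagonal.
    for (int i = 1; i < n; ++i) {
        double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    // Back substitution.
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (int i = n - 2; i >= 0; --i)
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}
```

The linear cost of this solve is what keeps the hybrid approach competitive with the fully 3D FFT when grid stretching rules the latter out.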
4 changes: 4 additions & 0 deletions Docs/sphinx_doc/Checkpoint.rst
@@ -14,6 +14,10 @@ uses the native AMReX format for reading and writing checkpoints.
In the inputs file, the following options control the generation of
checkpoint files (which are really directories):

The computational cost associated with reading and writing checkpoint files is
typically negligible relative to the overall cost of the simulation; in a recent performance
study the cost of writing a checkpoint file was roughly one percent of the cost of a single timestep.

Writing the Checkpoint "Files"
==============================

2 changes: 1 addition & 1 deletion Docs/sphinx_doc/Inputs.rst
@@ -1165,7 +1165,7 @@ Users have the option to define a dry or moist background state.

The initialization strategy is determined at runtime by ``init_type``, which has six possible values.

For more details on the hydrostatic initialization, see the ref:`Initialization section<theory/Initialization>`.
For more details on the hydrostatic initialization, see :ref:`sec:Initialization`.

In addition, there is a run-time option to project the initial velocity field to make it divergence-free.

12 changes: 12 additions & 0 deletions Docs/sphinx_doc/LinearSolvers.rst
@@ -0,0 +1,12 @@

.. role:: cpp(code)
   :language: c++

.. _subsec:LinearSolvers:

Linear Solvers
==============

Evolving the anelastic equation set requires solution of a Poisson equation in which we solve for the update to the perturbational pressure at cell centers. AMReX provides several solver options: geometric multigrid, Fast Fourier Transforms (FFTs) and preconditioned GMRES. For simulations with no terrain or grid stretching, one of the FFT options is generally the fastest solver, followed by multigrid. We note that the multigrid solver has the option to ``ignore`` a coordinate direction if the domain is only one cell wide in that direction; this allows for efficient solution of effectively 2D problems. Multigrid can also be used when the union of grids at a level is not in itself rectangular; the FFT solvers do not work in that general case.
For simulations using grid stretching in the vertical but flat terrain, we must use the hybrid FFT solver in which we perform 2D transforms only in the lateral directions and couple the solution in the vertical direction with a tridiagonal solve. In both these cases we use a 7-point stencil. To solve the Poisson equation on terrain-fitted coordinates with general terrain, we rely on the preconditioned GMRES solver since the stencil effectively has variable coefficients and requires 19 points.
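For concreteness, the 7-point stencil mentioned above couples each cell to its six face neighbors. A minimal sketch of applying that stencil at unit grid spacing (illustrative only; the actual terrain-fitted stencils carry metric terms and variable coefficients):

```cpp
#include <vector>

// Discrete 7-point Laplacian at interior cell (i,j,k), unit spacing:
// sum of the six face neighbors minus 6 times the center value.
double laplacian7(const std::vector<std::vector<std::vector<double>>>& u,
                  int i, int j, int k) {
    return u[i - 1][j][k] + u[i + 1][j][k]
         + u[i][j - 1][k] + u[i][j + 1][k]
         + u[i][j][k - 1] + u[i][j][k + 1]
         - 6.0 * u[i][j][k];
}
```

Applied to the quadratic field :math:`u = x^2 + y^2 + z^2` this stencil recovers the exact Laplacian of 6; the 19-point terrain-fitted stencil adds the edge neighbors needed for the cross-derivative metric terms.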
10 changes: 8 additions & 2 deletions Docs/sphinx_doc/Plotfiles.rst
@@ -12,8 +12,14 @@ Plotfiles
Controlling PlotFile Generation
===============================

"Plotfiles" can be written very efficiently in parallel in a native AMReX format
or in HDF5. They can also be written in NetCDF.
Plotfiles can be written very efficiently in parallel in a native AMReX format.
They can also be written in NetCDF. It is possible to output plotfiles in the
same or separate formats at two distinct frequencies.

The computational cost associated with reading and writing plotfiles
in the AMReX native format is typically negligible relative to the overall cost of the simulation;
in a recent performance study the cost of writing a plotfile was roughly one to two percent
of the cost of a single timestep.

The following options in the inputs file control the generation of plotfiles.
Note that plotfiles can be written at two different frequencies; the names,
2 changes: 2 additions & 0 deletions Docs/sphinx_doc/index.rst
@@ -62,10 +62,12 @@ In addition to this documentation, there is API documentation for ERF generated
:caption: IMPLEMENTATION
:maxdepth: 1
AMReX.rst
ArakawaCGrid.rst
MapFactors.rst
TimeAdvance.rst
Discretizations.rst
LinearSolvers.rst
MeshRefinement.rst
BoundaryConditions.rst
MOST.rst
2 changes: 1 addition & 1 deletion Docs/sphinx_doc/theory/Initialization.rst
@@ -5,7 +5,7 @@
.. role:: f(code)
:language: fortran

.. _Initialization:
.. _sec:Initialization:

Initialization
==================
