Add codespell #275

Merged · 3 commits · Sep 29, 2023
3 changes: 3 additions & 0 deletions .codespell-ignore-words
@@ -0,0 +1,3 @@
+blocs
+hight
+commun
4 changes: 4 additions & 0 deletions .codespellrc
@@ -0,0 +1,4 @@
+[codespell]
+skip = .git,*.ipynb,*.bib,*.ps,*.patch,*/Submodules,*/Build,*/sphinx/landing
+#,*/build,*/tmp_build_dir,plt*,chk*,__pycache__,.ccls,.ccls-cache,tran.dat,GNUmakefile,*.pdf,Testing,sphinx_doc,*/html,*.cti
+ignore-words = .codespell-ignore-words
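
The two config files above work together: `skip` excludes paths by comma-separated glob pattern, and `ignore-words` points to a file of lowercase words codespell must never flag (here `blocs`, `hight` and `commun`, legitimate tokens in this repository). A minimal Python sketch of these semantics — illustrative only, not codespell's actual implementation; the `TYPOS` dictionary is a made-up stand-in for codespell's misspelling dictionary:

```python
import fnmatch

# `skip` glob patterns and `ignore-words` contents, copied from the config above
SKIP = ".git,*.ipynb,*.bib,*.ps,*.patch,*/Submodules,*/Build,*/sphinx/landing"
IGNORE_WORDS = {"blocs", "hight", "commun"}

# Hypothetical typo dictionary entry: misspelling -> suggested correction
TYPOS = {"accross": "across", "hight": "height"}

def is_skipped(path: str) -> bool:
    # A path matching any `skip` pattern is never scanned
    return any(fnmatch.fnmatch(path, pat) for pat in SKIP.split(","))

def check_line(path: str, line: str):
    # Report (misspelling, suggestion) pairs, suppressing ignored words
    if is_skipped(path):
        return []
    hits = []
    for word in line.lower().split():
        w = word.strip(".,;:!?")
        if w in TYPOS and w not in IGNORE_WORDS:
            hits.append((w, TYPOS[w]))
    return hits

# "hight" is a real misspelling of "height", but it is a deliberate token in
# this repo, so the ignore list suppresses it while "accross" is still caught:
print(check_line("Docs/file.rst", "distributed accross ranks at hight level"))
# -> [('accross', 'across')]
```

This mirrors why the PR adds an ignore file rather than "fixing" those words: they are intentional, and only genuine misspellings should fail CI.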
19 changes: 19 additions & 0 deletions .github/workflows/ci.yml
@@ -640,6 +640,25 @@ jobs:
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: sarif-results/cpp.sarif
+  Lint-codespell:
+    needs: Formatting
+    runs-on: ubuntu-latest
+    steps:
+      - name: Clone
+        uses: actions/checkout@v3
+        with:
+          submodules: false
+      - name: Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.10'
+      - name: Dependencies
+        run: |
+          # Install Python packages
+          python -m pip install --upgrade pip
+          pip install codespell
+      - name: Run codespell
+        run: codespell
  Save-PR-Number:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -35,7 +35,7 @@ You are now free to modify your own fork of *PeleLMeX*. To add a new feature to
git commit -m "Developed AmazingNewFeature"

3. Alongside your development, regularly merge changes from the main repo `development` branch into your `AmazingNewFeature` branch,
-fix any conficts, and push your changes to your GitHub fork :
+fix any conflicts, and push your changes to your GitHub fork :

git push -u origin AmazingNewFeature

32 changes: 16 additions & 16 deletions Docs/sphinx/manual/Implementation.rst
@@ -12,15 +12,15 @@ Overview of source code
-----------------------

*PeleLMeX* is based upon AMReX's `AmrCore <https://amrex-codes.github.io/amrex/docs_html/AmrCore.html>`_ from which it inherits
-an AMR hierarchy data structure and basic regridding functionnalities. The code is entirely written in C++, with low level
+an AMR hierarchy data structure and basic regridding functionalities. The code is entirely written in C++, with low level
compute-intensive kernels implemented as lambda functions to seamlessly run on CPU and various GPU backends through AMReX
high performance portability abstraction.

The core of the algorithm is implemented in the ``advance()`` function which acts on all the levels concurrently.
Projection operators and advection scheme functions are imported from the `AMReX-Hydro library <https://amrex-codes.github.io/AMReX-Hydro>`_
while the core of the thermo-chemistry functionalities comes from `PelePhysics <https://amrex-combustion.github.io/PelePhysics/>`_ .
Users are responsible for providing initial and boundary conditions in the local subfolder implementing their case, i.e. it is
-not possible to compile and run *PeleLMeX* without actually writting a few lines of codes. However, numerous example are provided
+not possible to compile and run *PeleLMeX* without actually writing a few lines of codes. However, numerous example are provided
in ``Exec/RegTests`` from which new users can pull for their new case.

The source code contains a few dozen files, organized around the pieces of the algorithm and major functionalities:
@@ -50,11 +50,11 @@ The basic AMReX's data structure is the `MultiFab <https://amrex-codes.github.io
(historically, multi Fortran Array Box (FAB)).
Within the block-structured AMR approach of AMReX, the domain is decomposed into non-overlapping rectangular `boxes`,
which can be assembled into a `boxArray`. Each AMR level has a `boxArray` providing the list of `boxes` of that level.
-The `boxes` are distributed accross the MPI ranks, the mapping of which is described by a `DistributionMap`. Given a
+The `boxes` are distributed across the MPI ranks, the mapping of which is described by a `DistributionMap`. Given a
`boxArray` and a `DistributionMap`, one can define an actual data container (`boxes` are only lightweight descriptor
of the geometrical rectangular object, containing bounds and centering information only), where each rank will
allocate a FAB for the boxes it owns in the `boxArray`, resulting in a collection of FABs or a MultiFab, distributed
-accross the MPI ranks.
+across the MPI ranks.

To access the data in a MultiFab, one uses a `MFIter <https://amrex-codes.github.io/amrex/docs_html/Basics.html#mfiter-and-tiling>`_
(or MultiFab iterator), which provides each MPI rank access to the FABs it owns within the MultiFab. Actual access to the data in
@@ -86,7 +86,7 @@ to get more familiar with AMReX data structures and environment.
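
The boxArray / DistributionMap / MFIter relationship described above can be sketched in a few lines of plain Python — a toy model, not AMReX code; the names mirror the concepts, not the real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    # Lightweight descriptor: bounds only, no data (like an AMReX `box`)
    lo: tuple
    hi: tuple
    def ncells(self):
        n = 1
        for lo, hi in zip(self.lo, self.hi):
            n *= hi - lo + 1
        return n

def make_distribution_map(box_array, nranks):
    # Simple round-robin assignment of boxes to ranks; AMReX's default uses
    # space-filling-curve ordering weighted by cell count instead.
    return [i % nranks for i in range(len(box_array))]

def local_boxes(box_array, dmap, rank):
    # MFIter analogue: each rank visits only the boxes it owns
    return [b for b, owner in zip(box_array, dmap) if owner == rank]

# Three non-overlapping 32x32 boxes distributed over two "MPI ranks"
box_array = [Box((0, 0), (31, 31)), Box((32, 0), (63, 31)), Box((0, 32), (31, 63))]
dmap = make_distribution_map(box_array, nranks=2)
print(dmap)                                        # -> [0, 1, 0]
print(len(local_boxes(box_array, dmap, rank=0)))   # -> 2
```

The point of the sketch: the `boxArray` is shared metadata known to every rank, while actual field data (the FABs of a MultiFab) is only allocated by the rank the `DistributionMap` assigns each box to.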

The state vector of *PeleLMeX* contains the 2 or 3 components of velocity, the mixture density, species density (rhoYs),
rhoH, temperature and the thermodynamic pressure. The state components are stored in a cell-centered MultiFab with
-`NVAR` components. Additionnally, the perturbational pressure stored at the nodes is contained in a separate MultiFab.
+`NVAR` components. Additionally, the perturbational pressure stored at the nodes is contained in a separate MultiFab.
Together with the cell-centered pressure gradient, the cell-centered divergence constraint and cell-centered
transport properties, these MultiFabs are assembled into a `LevelData` struct.

@@ -96,10 +96,10 @@ calling : ::

auto ldata_p = getLevelDataPtr(lev,AmrOldTime);

-with either `AmrOldTime` or `AmrNewTime` on level `lev`. Additionnally, calling this function with
+with either `AmrOldTime` or `AmrNewTime` on level `lev`. Additionally, calling this function with
`AmrHalfTime` will return a `LevelData` struct whose `state` is linearly interpolated between the old and new
states (but the other MultiFabs in `LevelData` are empty!).
-It is also often useful to have access to a vector of a state component accross the entire AMR hierarchy. To do so, *PeleLMeX*
+It is also often useful to have access to a vector of a state component across the entire AMR hierarchy. To do so, *PeleLMeX*
provides a set of functions returning a vector of MultiFab `std::unique_ptr` aliased into the `LevelData`
MultiFab on each level: ::

@@ -115,7 +115,7 @@ MultiFab on each level: ::

where ``time`` can either be `AmrOldTime` or `AmrNewTime`.
Also available at any point during the simulation is the `LevelDataReact` which contains the species
-chemical source terms. A single version of the container is avaible on each level and can be accessed
+chemical source terms. A single version of the container is available on each level and can be accessed
using: ::

auto ldataR_p = getLevelDataReactPtr(lev);
@@ -141,30 +141,30 @@ The reader is referred to `AMReX GPU documentation <https://amrex-codes.github.i
the thread parallelism.

As mentioned above, the top-level spatial decomposition arises from AMReX's block-structured approach. On each level, non-overlapping
-`boxes` are assembled into `boxArray` and distributed accross MPI rank with `DistributionMap` (or `DMap`).
+`boxes` are assembled into `boxArray` and distributed across MPI rank with `DistributionMap` (or `DMap`).
It is in our best interest to ensure that all the MultiFab in the code use the same `boxArray` and `DMap`,
-such that operation using `MFIter` can be performed and data copy accross MPI ranks is minimized.
+such that operation using `MFIter` can be performed and data copy across MPI ranks is minimized.
However, it is also important to maintain a good load balancing, i.e. ensure that each MPI rank has the same amount
-of work, to avoid wasting computational ressource. Reactive flow simulation are challenging, because the chemistry
+of work, to avoid wasting computational resource. Reactive flow simulation are challenging, because the chemistry
integration is very spatially heterogeneous, with stiff ODE integration required within the flame front and non-stiff
-integration of the linearized advection/diffusion required in the cold gases or burnt mixture. Additionnally, because
+integration of the linearized advection/diffusion required in the cold gases or burnt mixture. Additionally, because
a non-subcycling approach is used in *PeleLMeX*, the chemistry doesn't have to be integrated in fine-covered region.
Two `boxArray` and associated `DMap` are thus available in *PeleLMeX*:

-1. The first one is inherited from `AmrCore` and is availble as ``grid[lev]`` (`boxArray`) and ``dmap[lev]`` (`DMap`) throughout the code. Most
+1. The first one is inherited from `AmrCore` and is available as ``grid[lev]`` (`boxArray`) and ``dmap[lev]`` (`DMap`) throughout the code. Most
of *PeleLMeX* MultiFabs use these two, and the `boxes` sizes are dictated by the `amr.max_grid_size` and `amr.blocking_factor` from the input
file. These are employed for all the operations in the code except the chemistry integration. The default load balancing approach is to use
-space curve filling (SCF) with each box weighted by the number of cells in each box. Advanced users can try alternate appraoch using the
+space curve filling (SCF) with each box weighted by the number of cells in each box. Advanced users can try alternate approach using the
keys listed in :doc:`LMeXControls`.
2. A second one is created, masking fine-covered regions and updated during regrid operations. It is used to perform the chemistry integration,
and because this is a purely local integration (in contrast with implicit diffusion solve for instance, which require communications
to solve the linear problem using GMG), a Knapsack load balancing approach is used by default, where the weight of each box is based
on the total number of chemistry RHS calls in the box. The size of the `boxes` in the chemistry `boxArray` (accessible with ``m_baChem[lev]``)
-is controled by the `peleLM.max_grid_size_chem` in the input file. Once again, advanced users can try alternate approaches to load
+is controlled by the `peleLM.max_grid_size_chem` in the input file. Once again, advanced users can try alternate approaches to load
balancing the chemistry `DMap` using the keys described in :doc:`LMeXControls`.

After each regrid operation, even if the grids did not actually change, *PeleLMeX* will try to find a better load balancing for the
-`AmrCore` `DMap`. Because changing the load balancing requires copying data accross MPI ranks, we only want to change the `DMap`
+`AmrCore` `DMap`. Because changing the load balancing requires copying data across MPI ranks, we only want to change the `DMap`
only if a significantly better new `DMap` can be obtained, with the threshold for a better `DMap` defined based on the value of
`peleLM.load_balancing_efficiency_threshold`.
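
The knapsack balancing of the chemistry work described above can be sketched with the classic greedy longest-processing-time heuristic — illustrative Python, not AMReX's actual implementation; the weights stand in for per-box chemistry RHS-call counts:

```python
import heapq

def knapsack_dmap(weights, nranks):
    # Greedy knapsack: heaviest boxes first, each assigned to the rank
    # that currently carries the least load.
    heap = [(0.0, rank) for rank in range(nranks)]   # (load, rank)
    heapq.heapify(heap)
    dmap = [0] * len(weights)
    for ibox in sorted(range(len(weights)), key=lambda i: -weights[i]):
        load, rank = heapq.heappop(heap)
        dmap[ibox] = rank
        heapq.heappush(heap, (load + weights[ibox], rank))
    return dmap

def efficiency(weights, dmap, nranks):
    # Load-balancing efficiency: mean rank load over max rank load (1.0 = perfect)
    loads = [0.0] * nranks
    for w, rank in zip(weights, dmap):
        loads[rank] += w
    return (sum(loads) / nranks) / max(loads)

# Strongly heterogeneous chemistry cost, as in a flame-front simulation
weights = [100, 90, 10, 5, 3, 2]
dmap = knapsack_dmap(weights, nranks=2)
print(efficiency(weights, dmap, 2))   # -> 1.0
```

A new map computed this way would then only be adopted if its efficiency beats the current one by more than the `peleLM.load_balancing_efficiency_threshold` margin, since redistributing boxes itself costs MPI communication.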

18 changes: 9 additions & 9 deletions Docs/sphinx/manual/LMeXControls.rst
@@ -99,12 +99,12 @@ IO parameters
::

#--------------------------IO CONTROL--------------------------
-amr.plot_int = 20 # [OPT, DEF=-1] Frequency (as step #) for writting plot file
-amr.plot_per = 0.002 # [OPT, DEF=-1] Period (time in s) for writting plot file
+amr.plot_int = 20 # [OPT, DEF=-1] Frequency (as step #) for writing plot file
+amr.plot_per = 0.002 # [OPT, DEF=-1] Period (time in s) for writing plot file
amr.plot_per_exact = 1 # [OPT, DEF=0] Flag to enforce exactly plt_per by shortening dt
amr.plot_file = "plt_" # [OPT, DEF="plt_"] Plot file prefix
-amr.check_int = 100 # [OPT, DEF=-1] Frequency (as step #) for writting checkpoint file
-amr.check_per = 0.05 # [OPT, DEF=-1] Period (time in s) for writting checkpoint file
+amr.check_int = 100 # [OPT, DEF=-1] Frequency (as step #) for writing checkpoint file
+amr.check_per = 0.05 # [OPT, DEF=-1] Period (time in s) for writing checkpoint file
amr.check_file = "chk" # [OPT, DEF="chk"] Checkpoint file prefix
amr.file_stepDigits = 6 # [OPT, DEF=5] Number of digits when adding nsteps to plt and chk names
amr.derive_plot_vars = avg_pressure ...# [OPT, DEF=""] List of derived variable included in the plot files
@@ -269,7 +269,7 @@ Chemistry integrator
peleLM.use_typ_vals_chem = 1 # [OPT, DEF=1] Use Typical values to scale components in the reactors
peleLM.typical_values_reset_int = 5 # [OPT, DEF=10] Frequency at which the typical values are updated
ode.rtol = 1.0e-6 # [OPT, DEF=1e-10] Relative tolerance of the chem. reactor
-ode.atol = 1.0e-6 # [OPT, DEF=1e-10] Aboslute tolerance of the chem. reactor, or pre-factor of the typical values when used
+ode.atol = 1.0e-6 # [OPT, DEF=1e-10] Absolute tolerance of the chem. reactor, or pre-factor of the typical values when used
cvode.solve_type = denseAJ_direct # [OPT, DEF=GMRES] Linear solver employed for CVODE Newton direction
cvode.max_order = 4 # [OPT, DEF=2] Maximum order of the BDF method in CVODE

@@ -278,7 +278,7 @@ Note that the last four parameters belong to the Reactor class of PelePhysics bu
Embedded Geometry
-----------------

-`PeleLMeX` geomtry relies on AMReX implementation of the EB method. Simple geometrical objects
+`PeleLMeX` geometry relies on AMReX implementation of the EB method. Simple geometrical objects
can thus be constructed using `AMReX internal parser <https://amrex-codes.github.io/amrex/docs_html/EB.html>`_.
For instance, setting up a sphere of radius 5 mm can be achieved:

@@ -397,7 +397,7 @@ Run-time diagnostics

`PeleLMeX` provides a few diagnostics to check your simulation while it is running as well as adding basic analysis ingredients.

-It is often usefull to have an estimate of integrated quantities (kinetic energy, heat release rate, ,..), state extremas
+It is often useful to have an estimate of integrated quantities (kinetic energy, heat release rate, ,..), state extremas
or other overall balance information to get a sense of the status and sanity of the simulation. To this end, it is possible
to activate `temporal` diagnostics performing these reductions at given intervals:

@@ -412,7 +412,7 @@ to activate `temporal` diagnostics performing these reductions at given interval

The `do_temporal` flag will trigger the creation of a `temporals` folder in your run directory and the following entries
will be appended to an ASCII `temporals/tempState` file: step, time, dt, kin. energy integral, enstrophy integral, mean pressure
-, fuel consumption rate integral, heat release rate integral. Additionnally, if the `do_temporal` flag is activated, one can
+, fuel consumption rate integral, heat release rate integral. Additionally, if the `do_temporal` flag is activated, one can
turn on state extremas (stored in `temporals/tempExtremas` as min/max for each state entry), mass balance (stored in
`temporals/tempMass`) computing the total mass, dMdt and advective mass fluxes across the domain boundaries as well as the error in
the balance (dMdt - sum of fluxes), and species balance (stored in `temporals/tempSpec`) computing each species total mass, dM_Ydt,
@@ -468,7 +468,7 @@ fine-covered regions are masked. The following provide examples for each diagnos
peleLM.xnormP.center = 0.005 # Coordinate in the normal direction
peleLM.xnormP.int = 5 # Frequency (as step #) for performing the diagnostic
peleLM.xnormP.interpolation = Linear # [OPT, DEF=Linear] Interpolation type : Linear or Quadratic
-peleLM.xnormP.field_names = x_velocity mag_vort density # List of variables outputed to the 2D pltfile
+peleLM.xnormP.field_names = x_velocity mag_vort density # List of variables outputted to the 2D pltfile

peleLM.condT.type = DiagConditional # Diagnostic type
peleLM.condT.file = condTest # Output file prefix