diff --git a/doc/DEPENDENCIES.md b/doc/DEPENDENCIES.md
index 6c48b38f94..f48ddf41be 100644
--- a/doc/DEPENDENCIES.md
+++ b/doc/DEPENDENCIES.md
@@ -14,7 +14,7 @@
| [pybind11](#pybind11) | submodule | `v2.6.0` | Can be [excluded](#overriding-pybind11-dependency). |
| [dmod.subsetservice](#the-dmodsubsetservice-package) | external | `>= 0.3.0` | Only required to perform integrated [hydrofabric file subdividing](DISTRIBUTED_PROCESSING.md#subdivided-hydrofabric) for distributed processing . |
| [t-route](#t-route) | submodule | see below | Module required to enable channel-routing. Requires pybind11 to enable |
-| [NetCDF Libraries](#netcdf-libraries) | external | \>= `4.7.4` | Enables NetCDF I/O support |
+| [NetCDF Libraries](#netcdf-libraries) | external | NetCDF-C \>= `4.7.4`, CXX \>= `4.3.1` | Enables NetCDF I/O support |
| [SQLite3](https://www.sqlite.org/cintro.html) | external | \> `3.7.17` | Enables GeoPackage reading support |

# Details
diff --git a/doc/NextGen_ON_CONUS.md b/doc/NextGen_ON_CONUS.md
index d240ad8905..cdbdf5c307 100644
--- a/doc/NextGen_ON_CONUS.md
+++ b/doc/NextGen_ON_CONUS.md
@@ -10,35 +10,39 @@ This documentation provides instructions on all neccessary steps and components
* [Generate Partition For Parallel Computation](#generate-partition-for-parallel-computation)
* [Prepare the Input Data](#prepare-the-input-data)
* [Build the Realization Configurations](#build-the-realization-configurations)
-* [Run Computations with submodules](#run-computations-with-submodules)
+* [Run Computations with Submodules](#run-computations-with-submodules)
* [Resource Usage](#resource-usage)
* [Run Computation with Topmodel](#run-computation-with-topmodel)
+* [Run Computation with LASAM](#run-computation-with-lasam)
* [Run Computation with Routing](#run-computation-with-routing)

# Summary

-This is a tutorial-like documentation. We provide suficient details in the hope that by following this document step by step, you can run NextGen computations from simple to sophisticated realization that models simple test examples to realistic cases. Throughout this document, we assume a Linux operating system environment.
+This is a tutorial-like document. We provide sufficient detail in the hope that, by following this document step by step, you can run NextGen computations with realizations ranging from simple test examples to sophisticated, realistic cases. Throughout this document, we assume a Linux operating system environment.

# Download the Codes

To download the `ngen` source code, run the following commands:

`git clone https://github.com/NOAA-OWP/ngen.git`

-`cd ngen`
-Then we need all the submodule codes. So run the command below:
+then `cd ngen`
+
+We also need all the submodule code, so run the command below:

`git submodule update --init --recursive`

-# Setting up the Environment
+If necessary, you may also go to a submodule's GitHub repo to obtain the latest version.
+
+# Setting Up the Environment

-For setting up the build and computation environment, we refer the users to our documentation chapter [DEPENDENCIES.md](DEPENDENCIES.md) for details. Basically, you will need to have access to C/C++ compiler, MPI, Boost, NetCDF, Cmake, SQLite3. Some of them may already be on your system. Otherwise, you have to install your own version. There are also some required software packages that come with `ngen` as submodules, such as `Udunits libraries`, `pybind11`, and `iso_c_fortran_bmi`.
+For setting up the build and computation environment, we refer users to our documentation chapter [DEPENDENCIES.md](DEPENDENCIES.md) for details. Basically, you will need access to a C/C++ compiler, MPI, Boost, NetCDF, CMake, and SQLite3. Some of these may already be on your system; otherwise, you will have to install them yourself. There are also some required software packages that come with `ngen` as submodules, such as `pybind11` and `iso_c_fortran_bmi`.

You most likely need to use Python. For that we recommend setting up a virtual environment. For details, see [PYTHON_ROUTING.md](PYTHON_ROUTING.md). After setting up the Python virtual environment and activating it, you may need to install additional Python modules depending on what `ngen` submodules you want to run.

# Build the Executable

-After setting up the environment variables, we need to first build the necessary dynamically linked libraries. Although `ngen` has the capability for automated building of submodule libraries, we build them explicitly so that users have a better understanding. For simplicity, we display the content a script which we name it `build_libs`.
+After setting up the environment variables, we first need to build the necessary dynamically linked libraries. Although `ngen` has the capability for automated building of submodule libraries, we build them explicitly so that users have a better understanding. For simplicity, we display the content of a script which we name `build_libs`.

```
cmake -B extern/sloth/cmake_build -S extern/sloth && \
@@ -53,21 +57,21 @@ cmake -B extern/noah-owp-modular/cmake_build -S extern/noah-owp-modular -DNGEN_I
make -C extern/noah-owp-modular/cmake_build && \
cmake -B extern/evapotranspiration/evapotranspiration/cmake_build -S extern/evapotranspiration/evapotranspiration && \
make -C extern/evapotranspiration/evapotranspiration/cmake_build && \
-cmake -B extern/sloth/cmake_build -S extern/sloth && \
-make -C extern/sloth/cmake_build && \
cmake -B extern/SoilFreezeThaw/SoilFreezeThaw/cmake_build -S extern/SoilFreezeThaw/SoilFreezeThaw -DNGEN=ON && \
cmake --build extern/SoilFreezeThaw/SoilFreezeThaw/cmake_build --target sftbmi -- -j 2 && \
cmake -B extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build -S extern/SoilMoistureProfiles/SoilMoistureProfiles -DNGEN=ON && \
-cmake --build extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build --target smpbmi -- -j 2 &&
+cmake --build extern/SoilMoistureProfiles/SoilMoistureProfiles/cmake_build --target smpbmi -- -j 2 && \
+cmake -B extern/LGAR-C/cmake_build -S extern/LGAR-C/ -DNGEN=ON && \
+make -C extern/LGAR-C/cmake_build/
```
-Copy the content into the file named `build_libs` and run the command:
+Copy the content into a file named `build_libs` and run the command:

```
source build_libs
```

-This will build all libraries we need to run `ngen` at the time of this writing.
+This will build all the submodule libraries we need to run `ngen` at the time of this writing.

Then, with the Python virtual environment activated, we can build the MPI executable using the following script:

@@ -97,6 +101,8 @@ cmake --build cmake_build_mpi --target all -j 8

For the meaning of each option in the script, see `ngen/wiki` [build](https://github.com/NOAA-OWP/ngen/wiki/Building) page.

+Also note that in the above script we have set `NGEN_WITH_ROUTING:BOOL=OFF`. If you need to run `Routing`, set it to `ON`.
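+
+For illustration only, here is a sketch of how the routing-related options might appear in such a configure command when routing is desired (your actual `build_mpi` script will carry additional options; see the wiki build page referenced above):
+
+```
+# Sketch: configure an MPI build of ngen with Python and routing support enabled;
+# the other -D options used elsewhere in this documentation are omitted here.
+cmake -B cmake_build_mpi -S . \
+      -DNGEN_WITH_PYTHON:BOOL=ON \
+      -DNGEN_WITH_ROUTING:BOOL=ON
+cmake --build cmake_build_mpi --target all -j 8
+```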
+
Suppose the above script is named `build_mpi`, execute the following command to build:

`source build_mpi`
@@ -105,9 +111,9 @@ This will build an executable in the `cmake_build_mpi` directory named `ngen` an

# CONUS Hydrofabric

-The CONUS hydrofabric is downloaded from [here](https://www.lynker-spatial.com/#hydrofabric/v20.1/). The file name under the list is `conus.gpkg`. Note that since the data there is continually evolving, a newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale is necessary, which may be a non-trivial process due to the sheer size of the spatial scale.
+The CONUS hydrofabric can be downloaded from [here](https://www.lynker-spatial.com/data?path=hydrofabric%2Fv20.1%2F). The file name under the list is `conus.gpkg`. Note that since the data there is continually evolving, a newer version may be available in the future. When using a newer version, be mindful that the corresponding initial configuration file generation and validation for all submodules at CONUS scale is necessary, which may be a non-trivial process due to the sheer size of the spatial scale.

-As the file is fairly large, it is worth some consideration to store it in a proper place, then simply build a symbolic link in the `ngen` home directory, thus named `./hydrofabric/conus.gpkg`. Note the easiest way to create the symbolic link is to create a `hydrofabric` directory and then create a link to that directory.
+As the file is fairly large, it is worth storing it in a suitable place and then simply setting a symbolic link in the `ngen` home directory, so that it can be accessed as `./hydrofabric/conus.gpkg`. Note that a straightforward way to create the symbolic link is to create a `hydrofabric` directory and then create a link to `conus.gpkg` inside `./hydrofabric/`.

# Generate Partition For Parallel Computation

@@ -117,17 +123,19 @@ For parallel computation using MPI on hydrofabric, a [partition generate tool](D
./cmake-build_mpi/partitionGenerator ./hydrofabric/conus.gpkg ./hydrofabric/conus.gpkg ./partition_config_32.json 32 '' ''
```

-In the command above, `conus.gpkg` is the NextGen hydrofabric version 2.01 for CONUS, `partition_config_32.json` is the partition file that contains all features ids and their interconnected network information. The number `32` is intended number of processing cores for running parallel build `ngen` using MPI. The last two empty strings, as indicated by `''`, indicate there is no subsetting, i.e., we intend to run the whole CONUS hydrofabric.
+In the command above, `conus.gpkg` is the NextGen hydrofabric version 2.01 for CONUS, and `partition_config_32.json` is the partition file that contains all feature ids and their interconnected network information. The number `32` is the intended number of processing cores for running the parallel-build `ngen` executable with MPI. You can choose a different number appropriate for your system and computational needs. The last two empty strings, indicated by `''`, mean there is no subsetting, i.e., we intend to run the whole CONUS hydrofabric.

# Prepare the Input Data

-Input data includes the forcing data and initial parameter data for various submodules. These depend on what best suits the user's need. For our case, as of this documentation, beside forcing data, which can be accessed at `./forcing/NextGen_forcing_2016010100.nc` using the symbolic link scheme, we also generated initial input data for various submodules `noah-owp-modular`, `PET`, `CFE`, `SoilMoistureProfiles (SMP)`, `SoilFreezeThaw (SFT)`. The first three are located in `./conus_config/`, the SMP initial configs are located in `./conus_smp_configs/` and the SFT initial configs are located in `./conus_sft_configs/`.
+
+Input data includes the forcing data and initial parameter data for various submodules. These depend on what best suits the user's need. For our case, as of this documentation, besides the forcing data, which can be accessed at `./forcing/NextGen_forcing_2016010100.nc` using the symbolic link scheme, we also generated initial input data for various submodules: `noah-owp-modular`, `PET`, `CFE`, `SoilMoistureProfiles (SMP)`, `SoilFreezeThaw (SFT)`, and LASAM, also known as LGAR-C (lgc). The first three are located in `./conus_config/`, the SMP initial configs are located in `./conus_smp_configs/`, and the SFT initial configs are located in `./conus_sft_configs/`. For each hydrologic model (CFE, Topmodel, and LASAM) coupled with SMP, a different initial configuration is needed for SMP, named `config_conceptual.txt`, `config_topmodel.txt`, and `config_layered.txt` in the `config` directory of SMP, respectively. For testing purposes, we used the same initial configuration file for all catchments.
+
For code used to generate the initial config files for the various modules, the interested users are directed to this [web location](https://github.com/NOAA-OWP/ngen-cal/tree/master/python/ngen_config_gen).

-The users are warned that since the simulated region is large, some of the initial config parameters values for some catchments may be unsuitable and cause the `ngen` execution to stop due to errors. Usually, in such cases, either `ngen` or the submodule itself may provide some hint as to the catchment ids or the location of the code that caused the error. Users may follow these hints to figure out as to which initial input parameter or parameters are initialized with inappropriate values. In the case of SFT, an initial value of `smcmax=1.0` would be too large. In the case of SMP, an initial value of `b=0.01` would be too small, for example.
+Users are warned that, since the simulated region is large, some of the initial config parameter values for some catchments may be unsuitable and cause the `ngen` execution to crash due to errors. Usually, in such cases, either `ngen` or the submodule itself may provide some hint as to the catchment ids or the location of the code that caused the error. Users may follow these hints to figure out which initial input parameter or parameters are initialized with inappropriate values. For example, in the case of SFT, an initial value of `smcmax=1.0` would be too large. In the case of SMP, an initial value of `b=0.01` would be too small.

# Build the Realization Configurations

-The realization configuration file, in JSON format, contains high level information to run a `ngen` simulation, such as interconnected submodules, paths to forcing file, shared libraries, initialization parameters, duration of simulation, I/O variables, etc. We have built the realization configurations for several commonly used submodules which are located in `data/baseline/`. These are built by adding one submodule at a time, performing a test run for a 10 day simulation. The successive submodules used are:
+The realization configuration file, in JSON format, contains high-level information needed to run an `ngen` simulation, such as the inter-coupled submodules, paths to the forcing file, shared libraries, initialization parameters, duration of the simulation, I/O variables, etc. We have built realization configurations for several commonly used submodules, which are located in `data/baseline/realizations`. These are built by adding one submodule at a time (except the last entry, which uses an alternative ordering of the submodules for comparison), performing a test run of a 10 day simulation for each. The successive submodules used for the CFE model are:

```
sloth (conus_bmi_multi_realization_config_w_sloth.json)
@@ -137,36 +145,37 @@ sloth (conus_bmi_multi_realization_config_w_sloth.json)
sloth+noah-owp-modular (conus_bmi_multi_realization_config_w_sloth_noah.json)
sloth+noah-owp-modular+pet (conus_bmi_multi_realization_config_w_sloth_noah_pet.json)
sloth+noah-owp-modular+pet+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe.json)
sloth+noah-owp-modular+pet+smp (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp.json)
sloth+noah-owp-modular+pet+smp+sft (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft.json)
sloth+noah-owp-modular+pet+smp+sft+cfe (conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json)
+sloth+noah-owp-modular+pet+cfe+smp+sft (conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_smp_sft.json)
```

# Run Computations with Submodules

-With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application with 32 cores as an example. Users are free to choose whatever number cores they want, just make sure you will need to have the appropriate corresponding partition JSON file for the number of cores used. The command line for running a MPI job is as follows:
+With all preparation steps completed, we are now ready to run computations. We use MPI as our parallel processing application, with 32 cores as a testing example. Users are free to choose whatever number of CPU cores they want; just make sure you have the corresponding partition JSON file for the number of cores used. The command line for running an MPI job is as follows:

For a simple example run and quick turn around, you can run:

```
-mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth.json conus_partition_32.json
```

For a more substantial example simulation, you can run:

```
-mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah.json conus_partition_32.json
```

-For an example taking into account more realistic contributions, you can try:
+For an example taking into account more realistic conditions, you can try:

```
-mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json conus_partition_32.json
```

-where `ngen` is the executable we build in the [Building the Executable](#build-the-executable) section. All other terms have been discussed above in details. With the current existing realization config files, the above jobs run 10 days simulation time on CONUS scale.
+where `ngen` is the executable we built in the [Building the Executable](#build-the-executable) section. All other terms have been discussed above in detail. With the existing realization config files, the above jobs run 10 days of simulation time at CONUS scale.

-Be aware that the above commands will generate over a million output files associated with catchment and nexus ids. In the realization config files used above, we have specified a directory `./output_dir/` to store these files. If you `cd` to `./output_dir` and issue a `ls` command, it will be significantly slower than usual to list all the file names. You can choose a different output file directory name than `./output_dir/` by modifying the directory name in the realization configuration file if you prefer. Note that you need to create the output file directory before running the executable.
+Be aware that the above commands will generate over a million output files associated with catchment and nexus ids. In the realization config files used above, we have specified an `output_root` directory named `./output_dir/` to store these files. If you `cd` to `./output_dir` and issue an `ls` command, it will be significantly slower than usual to list all the file names. You can choose a different `output_root` directory name than `./output_dir/` by modifying the directory name in the realization configuration file if you prefer. Note that you can either create the output file directory before running the executable, or `ngen` will create one for you based on the `output_root` specification.

# Resource Usage

-The following table lists the CPU wall clock ime used for various realization configurations running 10 day simulation time. The timing values reported in the table are from single run, not from average. Note in particular that the `Initialization Time` may be significantly affected by system loads at the time of job start.
+The following table lists the CPU wall clock time used for various realization configurations running a 10 day simulation. The timing values reported in the table are from a single run, not an average. Note in particular that the `Initialization Time` may be significantly affected by system loads at the time of job start.

| Realization | Number of CPUs | Initialization Time (s) | Computation Time (s) | Total Time (s) |
| ------------- | :-----: | :--------: | :--------: | :--------: |
@@ -174,11 +183,12 @@ The following table lists the CPU wall clock ime used for various realization co
| conus_bmi_multi_realization_config_w_sloth_noah.json | 32 | 1360.1 | 2143.9 | 3504.0 |
| conus_bmi_multi_realization_config_w_sloth_noah_pet.json | 32 | 3204.0 | 2106.5 | 5310.5 |
| conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe.json | 32 | 1214.9 | 4069.2 | 5284.1 |
-| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp.json | 32 | 1453.4 | 3087.0 | 4540.4 |
-| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft.json | 32 | 3245.7 | 3808.1 | 7053.8 |
-| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json | 32 | 1354.7 | 5283.1 | 6637.8 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp.json | 32 | 4871.0 | 5988.4 | 10859.4 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft.json | 32 | 2242.6 | 6976.0 | 9218.6 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe.json | 32 | 6827.9 | 5022.0 | 11849.9 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_smp_sft.json | 32 | 2257.4 | 16000.5 | 18257.9 |

-The abreviation used for submodule names in the table:
+The abbreviations used for submodule names in the table:
* noah - noah-owp-modular
* pet - potential evapotranspiration
* cfe - Conceptual functional equivalence
@@ -187,13 +197,70 @@

# Run Computation with Topmodel

-To be added
+Running Topmodel is similar to running the CFE model. The build process is the same as described above in the [Build the Executable](#build-the-executable) section, using the same build script. In the realization configuration file, you will just need to replace the part for the CFE submodule with the one for the Topmodel submodule. Note that Topmodel uses different initial configuration data and parameters that need to be generated. For this, we refer users to the GitHub home page for [Topmodel](https://github.com/NOAA-OWP/topmodel) for a more in-depth discussion. In the following, we provide two examples for illustration. Note that in the examples we tested the realization configurations with the so-called `synthetic TWI (Topographic Wetness Index)`, i.e., the same initial configuration file for all catchments to make the computation simple. Also, for Topmodel, some of the realization configurations do not need the `sloth` submodule for initialization.
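+
+Since swapping CFE for Topmodel happens entirely inside the realization configuration file, it can help to double check which submodule libraries a given realization file actually couples before launching a long CONUS run. A minimal sketch, assuming the realization files follow the usual ngen layout in which each module entry carries a `library_file` path (adjust the key if your configs differ):
+
+```
+# List the shared libraries referenced by the Topmodel realization config
+grep -o '"library_file": *"[^"]*"' data/baseline/realizations/conus_bmi_multi_realization_config_w_noah_topm.json
+```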
+
+For a relatively simple example involving just two submodules, you can run the following command:
+
+```
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_noah_topm.json conus_partition_32.json
+```
+
+For a more complex example, you can run:
+
+```
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_topm.json conus_partition_32.json
+```
+
+The wall clock timing in our tests for various realization configurations running a 10 day simulation is tabulated as follows. Note that the timing values are from a single run; no averaging was attempted.
+
+| Realization | Number of CPUs | Initialization Time (s) | Computation Time (s) | Total Time (s) |
+| ------------- | :-----: | :--------: | :--------: | :--------: |
+| conus_bmi_multi_realization_config_w_noah_topm.json | 32 | 3211.9 | 4175.2 | 7387.1 |
+| conus_bmi_multi_realization_config_w_pet_topm.json | 32 | 3427.3 | 4658.9 | 8086.2 |
+| conus_bmi_multi_realization_config_w_noah_pet_topm.json | 32 | 4019.4 | 6565.8 | 10585.2 |
+| conus_bmi_multi_realization_config_w_pet_noah_topm.json | 32 | 4053.8 | 5449.9 | 9503.7 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_topm.json | 32 | 7329.4 | 7308.6 | 14638.0 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_topm_smp_sft.json | 32 | 3960.9 | 17410.0 | 21370.9 |
+
+* topm - abbreviation for Topmodel
+* For all other abbreviations, see [Resource Usage](#resource-usage)
+
+# Run Computation with LASAM
+
+Running the Lumped Arid/Semi-arid Model (LASAM) is similar to running the CFE and Topmodel models. The build process is the same as described above in the [Build the Executable](#build-the-executable) section, using the same build script. In the realization configuration file, you will need to replace the part for the CFE or Topmodel submodule with the one for the [LASAM](https://github.com/NOAA-OWP/LGAR-C) submodule. For the initial configuration and parameters, for the computation results reported here, we have used the same one for all catchments on CONUS for testing purposes. The initial configuration is a work in progress, and when a more realistic initial configuration becomes available, we will repeat the calculations and update the results. As noted above, a different initial condition file, `config_layered.txt`, is used for SMP. This is reflected in the realization configuration files.
+
+To run one of the example realization configs, you can execute the following command:
+
+```
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc.json conus_partition_32.json
+```
+
+To run a realization config with `Routing`, you can execute the following command:
+
+```
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc_trt.json conus_partition_32.json
+```
+
+The wall clock timing in our tests for various realization configurations running a 10 day simulation is tabulated as follows. Note that the timing values are from a single run, on different machines with similar architecture; no averaging was attempted. Note also that frequent `print` statements to the standard output in submodule/model code can significantly increase the `Computation Time`.
+
+| Realization | Number of CPUs | Initialization Time (s) | Computation Time (s) | Total Time (s) |
+| ------------- | :-----: | :--------: | :--------: | :--------: |
+| conus_bmi_multi_realization_config_w_sloth_lgc.json | 32 | 728.7 | 5670.2 | 6398.9 |
+| conus_bmi_multi_realization_config_w_sloth_noah_lgc.json | 32 | 1160.9 | 10055.6 | 11216.5 |
+| conus_bmi_multi_realization_config_w_sloth_pet_lgc.json | 32 | 1060.8 | 6018.8 | 7079.6 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc.json | 32 | 3512.1 | 7606.0 | 11118.1 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_lgc.json | 32 | 1209.0 | 7461.5 | 8670.5 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_lgc.json | 32 | 1092.1 | 9163.8 | 10255.9 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc_smp_sft.json | 32 | 1738.8 | 9153.6 | 10892.4 |
+
+* lgc - abbreviation for LGAR-C, which is the name of the submodule for LASAM
+* For all other abbreviations, see [Resource Usage](#resource-usage)

# Run Computation with Routing

To run computation on CONUS with routing, we need to build the executable with the routing option turned on. This can be done using the build script displayed in the [Build the Executable](#build-the-executable) section, and ensure both `-DNGEN_WITH_PYTHON:BOOL=ON` and `-DNGEN_WITH_ROUTING:BOOL=ON` are enabled. Then, you can run the script to build the `ngen` executable.

-You also need to build the `t-route` submodule. First, as the vendored `t-route` submodule is out of date, to get the latest version, you need to remove the old `t-route` and run `git clone https://github.com/NOAA-OWP/t-route` in the `extern` directory. For building `t-route` in general, we refer to the documentation [PYTHON_ROUTING.md](PYTHON_ROUTING.md) for essential details. We just want to add some additional discussion here to make the process easier. Note that there is more than one way to build `t-route`. Do `cd t-route`, then run:
+You also need to build the `t-route` submodule. First, as the vendored `t-route` submodule may be out of date, you may need to remove the old `t-route` and run `git clone https://github.com/NOAA-OWP/t-route` in the `extern` directory to get the latest version. For building `t-route` in general, we refer to the documentation [PYTHON_ROUTING.md](PYTHON_ROUTING.md) for essential details. We just want to add some additional discussion here to make the process easier. Note that there is more than one way to build `t-route`. Do `cd t-route`, then run:

```
FC=mpif90 NETCDFINC= ./compiler.sh
```
@@ -224,21 +291,41 @@ Then, run the command:
FC=mpif90 ./compiler.sh
```

-After successfully building `t-route`, you can run `ngen` with routing. Note that we have several realization configuration files and the `routing_config_CONUS.yaml` file for running `ngen` with routing. The realization configuration file and `routing_config_CONUS.yaml` specify where the input and output files are. For routing, we assume the existence of a `stream_output_dir` directory for writing output files. You need to do `mkdir stream_output_dir` before running `ngen`. With that, we can run an example with the command:
+> [!WARNING]
+> In the event that compilation does not complete and throws a Cython compile error, rerun with a non-editable flag:
+
+```
+FC=mpif90 ./compiler.sh no-e
+```
+
+Users are referred to the [t-route](https://github.com/NOAA-OWP/t-route) GitHub repo `readme.md` for details.
+
+After successfully building `t-route`, you can run `ngen` with routing. Note that we have several realization configuration files and the `routing_config_CONUS.yaml` file for running `ngen` with routing. The realization configuration file and `routing_config_CONUS.yaml` specify where the input and output files are. For routing, we assume the existence of a `stream_output_dir` directory in the project directory for writing output files. You need to do `mkdir stream_output_dir` before running `ngen`.
+
+Note that t-route is a continuously evolving software package. In the process of writing this documentation, we found that we also need to create a directory named `usgs_TimeSlice` in the project directory. An alternative is to comment out or remove the part after the colon in the line `usgs_timeslices_folder : ./usgs_TimeSlice/` in `routing_config_CONUS.yaml`, although the directory will not contain any data for the test computations described here.
+
+With that, we can run an example with the command:

```
-mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_trt.json conus_partition_32.json
+mpirun -n 32 ./cmake_build_mpi/ngen ./hydrofabric/conus.gpkg '' ./hydrofabric/conus.gpkg '' data/baseline/realizations/conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_trt.json conus_partition_32.json
```

If your run is successful, you should see the directory `stream_output_dir` populated with output files in NetCDF format with each file corresponding to each hour between 2016-01-01 to 2016-01-10.
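+
+As a quick sanity check of a finished routing run (a sketch; the exact file names depend on your realization and time period), you can count the generated files and inspect one header with `ncdump` from the NetCDF utilities:
+
+```
+# Expect one NetCDF file per simulated hour in stream_output_dir
+ls stream_output_dir | wc -l
+# Peek at the header of the first output file (file name is illustrative)
+ncdump -h "stream_output_dir/$(ls stream_output_dir | head -n 1)" | head -n 20
+```
+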
-In the following table, we display the CPU timing information for a few realizations that we tested:
+In the following table, we display the CPU timing information for a few representative realizations that we tested:

| Realization | Number of CPUs | Initialization Time (s) | Ngen Computation Time (s) | Routing Computation Time (s) | Total Time (s) |
| ------------- | :-----: | :--------: | :--------: | :--------: | :--------: |
| conus_bmi_multi_realization_config_w_sloth_noah_trt.json | 32 | 958.1 | 2288.4 | 3694.1 | 6940.6 |
| conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_trt.json | 32 | 1069.8 | 4606.3 | 4474.1 | 10150.2 |
-| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe_trt.json | 32 | 2142.0 | 5632.4 | 4510.3 | 12284.7 |
-
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_cfe_trt.json | 32 | 9257.9 | 4211.5 | 4012.4 | 17481.8 |
+| conus_bmi_multi_realization_config_w_noah_topm_trt.json | 32 | 5411.8 | 9186.5 | 3780.9 | 18299.2 |
+| conus_bmi_multi_realization_config_w_noah_pet_topm_trt.json | 32 | 6810.9 | 5760.9 | 4922.4 | 17494.2 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_topm_trt.json | 32 | 9165.2 | 5201.3 | 3739.1 | 18105.6 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc_trt.json | 32 | 8452.8 | 10284.4 | 3125.1 | 21862.3 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_smp_sft_lgc_trt.json | 32 | 7139.5 | 8416.9 | 3109.2 | 18665.6 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_cfe_smp_sft_trt.json | 32 | 10064.3 | 29039.3 | 4430.1 | 43533.7 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_topm_smp_sft_trt.json | 32 | 10808.9 | 5034.1 | 5723.6 | 21566.6 |
+| conus_bmi_multi_realization_config_w_sloth_noah_pet_lgc_smp_sft_trt.json | 32 | 5156.8 | 13211.8 | 4320.8 | 22689.4 |
* trt - abbreviation for t-route
-* For all other abbreviations, see [Resource Usage](#resource-usage).
+* For all other abbreviations, see [Resource Usage](#resource-usage)