The QA process for CODAR is designed to find and track bugs that would prevent the integration of the multiple independent software packages required for co-design studies. To this end, we propose the following:
- Each CODAR software package will maintain an integration branch, which contains new features that have passed the project's own unit tests and are thought to be ready to go into the next release, but have not yet been tested together with other CODAR software.
- For each non-CODAR-maintained package that we depend on (part of the Integration Platform), we will regularly agree on a fixed reference version (released or not) to use as the reference build target.
- Each week, a designated tester will build the software and run a test script on each of the target machine environments (see below). Results will be tabulated on this Wiki, and announced on the Slack #infrastructure channel.
- If issues are found, developer resources will be allocated to fix them.
We have three classes of packages for which regular build testing is to be performed.
### CODAR Products
- Cheetah
- Savanna
- SZ
- SOSFlow (?)
- Chimbuko (?)
### Integration Platform Components
- ADIOS (1.X or 2.X)
- Tau
- ZFP
- spack
### Integration Targets
- Heat_transfer
- Brusselator
We may also want to include additional integration tests beyond Heat_transfer; which ones is still to be decided.
A machine environment consists of a machine (Cori, Theta, Titan, Summitdev, Summit), an architecture (e.g., KNL vs. Haswell on Cori/Theta), and a compiler chain (GNU, PGI, Intel, IBM, etc.). The target machine environments are those on which we will perform regular testing; we still need to decide which these should be.
We will attempt to leverage Spack as much as possible to simplify building the software stack, and to avoid re-inventing the wheel. A top level build script will still be maintained to fill any gaps not handled by Spack. The CODAR fork of Spack will be used to maintain bleeding edge and customized package files without worrying about where they are in the upstream merge process.
Note that to maintain fully independent Spack installs, putting configuration in `~/.spack` should be avoided. Instead, `$SPACK_HOME/etc/spack` should be used, with a separate `SPACK_HOME` for each task (e.g., a specific co-design study, and each weekly QA run). See https://spack.readthedocs.io/en/latest/configuration.html#configuration-scopes.
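As a concrete illustration, a per-task Spack install might be set up as below. This is a minimal sketch: the fork URL, install path, configuration files, and package spec are assumptions for illustration, not decisions.

```bash
# Hypothetical setup of an independent Spack install for one weekly QA run.
# The fork URL, paths, and package spec below are placeholders.
export SPACK_HOME=/path/to/project/qa/$(date +%Y-%m-%d)/spack
git clone https://github.com/CODARcode/spack.git "$SPACK_HOME"   # assumed location of the CODAR fork

# Keep machine-specific configuration with the install itself, not in ~/.spack
cp compilers.yaml packages.yaml "$SPACK_HOME/etc/spack/"

source "$SPACK_HOME/share/spack/setup-env.sh"
spack install adios2    # illustrative package spec
```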
To share the fun and to get fresh eyes trying things, we will distribute the work of testing among multiple people. A weekly run on a given target machine environment looks roughly like this:
```
ssh {TARGET_MACHINE}
cd Software_Stack_QA
git checkout integration   # or perhaps a weekly tag?
git pull
cd titan/gnu
./build.sh /path/to/top/install/dir
./test.sh /path/to/top/install/dir
```
The build script should load the necessary modules, check out the appropriate Spack branch, copy the Spack configuration to `$SPACK_HOME/etc/spack`, and install the required software with Spack, plus anything that requires a custom install. The test script should create and submit a set of Cheetah campaigns. The test runner must monitor the campaigns and report the results. The campaigns should live in shared project space, so that if issues are encountered the whole CODAR team can examine the results. Periodic cleanup of old campaign results will be needed to avoid hitting the quota on shared project space. A sketch of what `build.sh` might look like is shown below.
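The following is a minimal sketch of a `build.sh` for the titan/gnu environment, just to make the above concrete. The module names, fork URL, package specs, and the custom-install step are placeholders, not a decided configuration.

```bash
#!/bin/bash
# build.sh -- illustrative sketch only; module names, URLs, and specs are placeholders
set -e
INSTALL_DIR=$1   # /path/to/top/install/dir

# Load the modules this machine environment needs (machine-specific placeholders)
module load gcc
module load git

# Set up a dedicated Spack install for this QA run (see the Spack notes above)
export SPACK_HOME="$INSTALL_DIR/spack"
if [ ! -d "$SPACK_HOME" ]; then
    git clone https://github.com/CODARcode/spack.git "$SPACK_HOME"   # assumed fork location
fi
cp etc/spack/*.yaml "$SPACK_HOME/etc/spack/"   # machine-specific Spack config kept in this repo
source "$SPACK_HOME/share/spack/setup-env.sh"

# Install the stack; package specs are illustrative, not a decided list
for spec in adios2 tau zfp sz; do
    spack install "$spec"
done

# Anything Spack cannot handle gets a custom install step here, e.g.:
# ./install_custom_component.sh "$INSTALL_DIR"
```

The corresponding `test.sh` would then use the installed stack to create and submit the Cheetah campaigns described above; its exact invocation of Cheetah is left out here, since the campaign specifications are still to be defined.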
I like the idea of keeping the reporting on GitHub. As a starting point for discussion:
- Every week that we have a run, we create a GitHub issue for each target machine environment. If no problems are found, the issue can be closed immediately; otherwise, detailed problems can be reported in the issue thread. Creating the issues could be automated with a script (see the sketch below).
Alternatively, we could track everything free-form or in a table on this wiki, but I prefer the discussion-thread structure provided by issues.
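For reference, automated issue creation could look something like the sketch below, using the GitHub REST API. The repository name, environment list, and issue text are assumptions for illustration only.

```bash
#!/bin/bash
# create_qa_issues.sh -- illustrative sketch; repo name and environment list are placeholders
REPO="CODARcode/Software_Stack_QA"      # assumed repository for QA tracking
WEEK=$(date +%Y-%m-%d)

# Open one issue per target machine environment for this week's run
for env in "titan/gnu" "cori/haswell/intel" "theta/knl/intel"; do
    curl -s -X POST \
         -H "Authorization: token $GITHUB_TOKEN" \
         -H "Accept: application/vnd.github+json" \
         "https://api.github.com/repos/$REPO/issues" \
         -d "{\"title\": \"Weekly QA $WEEK: $env\", \"body\": \"Build/test results for $env go here.\"}"
done
```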