MPIsuite - MPI / OpenMP Task Automation

License: MIT

Important Notice: This project is in early development. Features are sparse and bugs may arise.

The Suite

mpis-compile is responsible for compiling the supplied MPI, OpenMP or Hybrid source file.

It expects at least one argument, which should be the source file. Any additional arguments are grouped into (key, value) pairs; each pair represents a macro named key, with the value value, that is defined at compile time.
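
For instance, assuming a hypothetical source file solver.c and two hypothetical macros BLOCK_SIZE and USE_FAST_MATH, an invocation with two macro pairs could look like this:

# solver.c, BLOCK_SIZE and USE_FAST_MATH are made-up names, used only for illustration
mpis-compile solver.c BLOCK_SIZE 64 USE_FAST_MATH 1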

mpis-schedule is responsible for generating a job and submitting it to the PBS queue.

It expects exactly two arguments: the executable and the number of processes that should be created.

mpis-profile is responsible for compiling our program with various (key, value) macro pairs, running it with different numbers of processes and collecting our measurements.

It expects exactly two arguments: the source file and a profiling description.

Installation

chmod +x setup.sh

./setup.sh

Uninstalling MPIsuite can be achieved simply by running

./setup.sh --uninstall

Configuration

  • MPIS_OUTPUT_ROOT indicates the directory under which the files and directories generated by our scheduler and profiler are saved.
  • MPIS_USER_ID is used when querying the job queue to check whether a job has finished running.
  • MPIS_EDITOR and MPIS_EDITOR_ARGS are optional and are used by the profiler to open files in your favorite editor.

You are more than welcome to check out ~/.mpisrc for additional configuration settings.
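
As a rough sketch only, assuming ~/.mpisrc is a plain shell file that the suite sources (the actual file is generated by setup.sh and its exact contents may differ), the variables documented above could be set like this:

# A sketch only: the real ~/.mpisrc is generated by setup.sh and may differ.
export MPIS_OUTPUT_ROOT="$HOME/out"   # root directory for scheduler / profiler output
export MPIS_USER_ID="argo059"         # user id used when querying the job queue
export MPIS_EDITOR="vim"              # optional: editor the profiler opens files with
export MPIS_EDITOR_ARGS=""            # optional: extra arguments passed to the editor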

Profiling Descriptions

mpis-profile expects a profiling description, which must define:

  • a regular expression named TIME_PATTERN, which indicates the format of our timer's output.
  • a string named MACRO, containing the name of the macro that should be defined by mpis-compile.
  • an array named VALUES, containing the values that macro MACRO should receive in different runs.

You can optionally define MPIS_ENABLE_PROFILING and/or MPIS_LINK_OPENMP in order to bypass the confirmation prompt of the corresponding compilation step. These can also be defined globally in ~/.mpisrc.

Example

Let's assume we would like to estimate the integral of a function f(x) from a to b using the trapezoidal rule.

Let's also assume that we have already developed an MPI program, named mpi_trap1.c, to do so, and that this program uses a macro named nTraps, which corresponds to the number of trapezoids used in the calculation of the integral.

Compiling

We now need to compile our source file using mpis-compile as follows

mpis-compile mpi_trap1.c nTraps 512

[mpis-compile] enable profiling: y
[mpis-compile] link OpenMP: n

This results in the creation of an executable file named mpi_trap1.x, inside which the macro nTraps has been assigned the value 512.

ATTENTION: If your source code contains a #define statement corresponding to the macro provided, the supplied value will be overridden. Remove the #define statement or surround it with an #ifndef MACRO ... #endif preprocessor block to resolve this issue.

Executing mpis-compile with the --clean option deletes the executable.

Scheduling

Scheduling the executable can be achieved like so

mpis-schedule mpi_trap1.x 16

[mpis-schedule] name='mpi_trap1_16_argo059_job', id='14524.argo', ps=16, ns=2, ppn=8

Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
12507.argo        myJob            argo081                  0 Q workq
14524.argo        mpi_trap1_16_ar  argo059           00:00:00 R workq
  • ps stands for the number of processes
  • ns stands for the number of nodes
  • ppn stands for the number of processes per node

The following files and directories are generated

find ./out

./out/
./out/16
./out/16/21_11_2019
./out/16/21_11_2019/15_06_30
./out/16/21_11_2019/15_06_30/mpi_trap1_16_argo059_job.stderr
./out/16/21_11_2019/15_06_30/mpi_trap1_16_argo059_job.stdout

Executing mpis-schedule with the --clean option removes any mpiP and job-related files, and removes any jobs associated with MPIS_USER_ID from the queue.

Profiling

We first need to define a profiling description, like so

#!/bin/bash

export MPIS_ENABLE_PROFILING=true
export MPIS_LINK_OPENMP=false

TIME_PATTERN="Elapsed time: \K([0-9]+\.[0-9]+)"

MACRO="nTraps"

VALUES=()

for ((power = 20; power <= 28; power += 2))
do
    VALUES+=( "$(( 2 << ($power - 1) ))" )
done
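
Since 2 << (power - 1) equals 2^power, VALUES ends up containing 2^20, 2^22, 2^24, 2^26 and 2^28, i.e. 1048576 up to 268435456 trapezoids.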

We can compile the source file with different numbers of trapezoids defined and schedule it with different numbers of processes in a single command as follows

mpis-profile ./mpi_trap1.c ./description.sh

The following files and directories are generated

find ./out

./out/
./out/21_11_2019
./out/21_11_2019/15_13_08
./out/21_11_2019/15_13_08/67108864
./out/21_11_2019/15_13_08/67108864/4
./out/21_11_2019/15_13_08/67108864/4/mpi_trap1_4_argo059_job.stderr
./out/21_11_2019/15_13_08/67108864/4/mpi_trap1_4_argo059_job.stdout
...
./out/21_11_2019/15_13_08/268435456/32
./out/21_11_2019/15_13_08/268435456/32/mpi_trap1_32_argo059_job.stderr
./out/21_11_2019/15_13_08/268435456/32/mpi_trap1_32_argo059_job.stdout
./out/21_11_2019/15_13_08/268435456/16
./out/21_11_2019/15_13_08/268435456/16/mpi_trap1_16_argo059_job.stderr
./out/21_11_2019/15_13_08/268435456/16/mpi_trap1_16_argo059_job.stdout
./out/21_11_2019/15_13_08/results.csv

Here is what our measurements look like

head -5 ./out/21_11_2019/15_13_08/results.csv

nTraps   , Processes, Time    , Speed Up       , Efficiency
1048576  , 1        , 0.000925, 1.0            , 1.0
1048576  , 2        , 0.00082 , 1.12804878049  , 0.564024390245
1048576  , 4        , 0.000841, 1.09988109394  , 0.274970273485
1048576  , 8        , 0.001022, 0.905088062622 , 0.113136007828
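
For reference, these figures are consistent with the standard definitions Speed Up = T(1) / T(p) and Efficiency = Speed Up / p; for example, with 2 processes, 0.000925 / 0.00082 ≈ 1.128 and 1.128 / 2 ≈ 0.564.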

License

This project is licensed under the MIT License.