Running analysis on Piz Daint

Salvatore Di Girolamo edited this page Jun 14, 2019 · 14 revisions
  • Source the script to set up the environment:
source /apps/pr/pr04/SimFS/simfs_daint.env 

Make sure the simfs command is now available.

  • Start the DV server
simfs <ctx_name> start
  • Run the analysis application as:
simfs <ctx_name> run srun <analysis command and args>
  • Alternatively, the analysis can be profiled by running it with the profile_run command:
simfs <ctx_name> profile_run srun <analysis command and args>
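
The steps above might be combined in a single batch script along these lines. This is only a sketch: the context name myctx, the walltime, and the analysis binary are placeholders, and whether the DV server should be started inside the job script or beforehand depends on your setup.

```shell
#!/bin/bash -l
#SBATCH --job-name=simfs_analysis
#SBATCH --time=01:00:00
#SBATCH --nodes=1

# Set up the SimFS environment (path as given above).
source /apps/pr/pr04/SimFS/simfs_daint.env

# Start the DV server for this context (hypothetical name), then
# run the analysis application under SimFS via srun.
simfs myctx start
simfs myctx run srun ./my_analysis --input data.nc
```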

In this case, the analysis will be profiled using LibLSB (https://github.com/spcl/liblsb, already available with the SimFS installation on Piz Daint). The profiling data will be written to a file named "lsb.profile.r0", which reports:

  • the operation name (e.g., dvl_nc_open, dvl_nc_close)
  • the operation count: how many operations of this type have been executed up to that point
  • the operation ID: just a numerical ID that identifies the operation
  • the time from the start of the analysis (in microseconds)
  • the profiling overhead

An example profiling file is reported here:
➜  precip_tracking_3mP cat lsb.profile.r0
# Sysname : Linux
# Nodename: daint106
# Release : 4.4.162-94.72-default
# Version : #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f)
# Machine : x86_64
# Execution time and date (UTC): Fri Jun 14 09:24:24 2019
# Execution time and date (local): Fri Jun 14 11:24:24 2019
# Reported time measurements are in microseconds
# pretty output format
      op_name     op_count       id         time     overhead 
  dvl_nc_open             0          0 2288.115479            0 
 dvl_nc_close             0          4 9924.186523            0 
# Runtime: 0.010373 s (overhead: 0.000000 %) 2 records
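
The profile is plain text, so it is easy to post-process. The sketch below (the file content is embedded here as sample data, copied from the example above) skips comment and header lines and prints each operation name with its timestamp:

```shell
# Recreate the sample profile for illustration.
cat > lsb.profile.r0 <<'EOF'
# Reported time measurements are in microseconds
      op_name     op_count       id         time     overhead
  dvl_nc_open             0          0 2288.115479            0
 dvl_nc_close             0          4 9924.186523            0
EOF

# Skip '#' comment lines and the header row, then print
# op_name (field 1) and time in microseconds (field 4).
awk '!/^#/ && $1 != "op_name" { printf "%s %.3f us\n", $1, $4 }' lsb.profile.r0
```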
  • Note: with the current settings, the analysis job may be delayed by the duration of zero, one, or more simulation jobs, so please set the walltime of the analysis job to account for that. This is not ideal, and we are looking for solutions. ;)

  • Note: batch jobs seem not to run from /apps/. Still, it would be nice to keep the analysis applications there. To do so, create or copy the job script in $HOME or $SCRATCH and make sure the job script runs the applications from /apps/.
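
For example, a job script kept under $SCRATCH can still invoke a binary installed under /apps/. The paths, context name, and binary name below are illustrative placeholders:

```shell
#!/bin/bash -l
# File kept at $SCRATCH/run_analysis.sbatch and submitted from there,
# while the analysis application itself stays under /apps/.
#SBATCH --job-name=simfs_analysis
#SBATCH --time=01:00:00

source /apps/pr/pr04/SimFS/simfs_daint.env

# Run the application from its /apps/ location (hypothetical path).
simfs myctx run srun /apps/pr/pr04/my_project/bin/my_analysis
```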