Agent Based Model Development
There is a suite of benchmarks for the ABM that can be used to check its performance. The suite contains setups of different sizes. If you added a new feature (i.e., you didn't just fix a bug in an existing feature), make sure the feature is actually used by the benchmark. Add it to the benchmark if necessary, then run the benchmark to see if the cost of the new feature is acceptable and as expected. Most new features will add some overhead, but this needs to be limited and in proportion to the added value of the feature, so that runtime doesn't grow out of control. Optional features that can be disabled should only incur minimal overhead. If you did not add any new feature, just run the benchmark before and after your changes to make sure there are no performance regressions. This process will hopefully be automated soon by running benchmarks in the CI.
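Once the benchmark executable is built and run as described below, a simple way to do the before/after comparison is to write each run's results to a file and compare them. This is only a sketch, assuming the suite uses the Google Benchmark library (its standard --benchmark_out options); the file names are just examples:
# on the main branch: record a baseline (file name is an example)
./build/bin/abm_benchmark --benchmark_out=baseline.json --benchmark_out_format=json
# on your branch: record the timings after your change
./build/bin/abm_benchmark --benchmark_out=feature.json --benchmark_out_format=json
Google Benchmark also ships a tools/compare.py script that can report the relative difference between two such result files.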
Build the benchmarks by defining the CMake variable MEMILIO_BUILD_BENCHMARKS=ON in the build. Make sure to use a Release build to test performance.
cmake .. -DMEMILIO_BUILD_BENCHMARKS=ON -DCMAKE_BUILD_TYPE=Release
cmake --build .
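If you only changed ABM code, you can restrict the build to the benchmark target and build in parallel; the target name abm_benchmark below is an assumption based on the executable name, and -j sets the number of parallel build jobs:
cmake --build . --target abm_benchmark -j 8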
Run the benchmark executable:
./build/bin/abm_benchmark
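For a quick check of a single setup size, the standard Google Benchmark filter option can be used (assuming the suite is built on Google Benchmark; the argument is a regex matched against the benchmark names shown below):
./build/bin/abm_benchmark --benchmark_filter=50k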
Each benchmark is run for a number of iterations and the average time is reported.
Benchmark                             Time        CPU   Iterations
-------------------------------------------------------------------
abm_benchmark/abm_benchmark_50k    7583 ms    7583 ms            1
abm_benchmark/abm_benchmark_100k  18216 ms   18214 ms            1
abm_benchmark/abm_benchmark_200k  41492 ms   41489 ms            1
You may get a warning: ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
If possible, disable CPU scaling to improve the consistency of the results; see the Google Benchmark documentation on reducing variance. You also want to have as little other load on your system as possible while the benchmark is running. If it is not possible to disable frequency scaling, you can increase the runtime of the benchmark with the options described below, but a constant CPU frequency is necessary to get the most reliable results and to measure small differences.
REMINDER: Don't forget to re-enable CPU scaling after you have run the benchmarks, to save energy. A reboot may restore the settings as well.
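On Linux, one way to pin the CPU frequency is the cpupower tool. This is only a sketch; the tool may need to be installed separately and the available governors depend on your system:
# switch to a fixed-frequency governor before benchmarking
sudo cpupower frequency-set --governor performance
# switch back to a scaling governor afterwards (e.g. powersave or schedutil)
sudo cpupower frequency-set --governor powersave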
The benchmark executable has a number of command line arguments that customize the execution; use --help to see all of them. There are two important arguments that can be used to get more consistent/stable results:
- --benchmark_min_time=<T>: Iterate every benchmark so that the total runtime of its iterations is at least T seconds. By default, the minimum time is 1 second, which may not be enough on systems that aren't completely dedicated to the benchmark (60 seconds worked well in our initial tests, but you may need to experiment if timings vary a lot).
- --benchmark_repetitions=<N>: Repeat every benchmark N times and report the mean, median, and variance of the repetitions. (Repetitions are not the same as iterations, i.e., a benchmark may be repeated 10 times for 5 iterations each. Every repetition runs for at least the minimum time.)
benchmark_repetitions is useful to check the timing results, because it reports variance, but it can be expensive, because it also repeats the long-running benchmarks. benchmark_min_time only adds more iterations for short-running benchmarks, which are more prone to giving unstable results. One possible workflow when using the benchmark for the first time is to use e.g. 5-10 repetitions to check the variance and increase the minimum time until the variance is acceptable, then continue with only 1 repetition at that minimum time.