EvalMe is an open-source benchmarking tool built on top of Hyperfine that reports both the execution time and the memory usage of a command.
EvalMe requires the following software to be already installed on the system:

- Python 3
- [hyperfine](https://github.com/sharkdp/hyperfine)
The installation of EvalMe only requires:

- Cloning this repo:

```
git clone https://github.com/reverseame/evalme
```

- Installing the Python requirements:

```
pip3 install -r requirements.txt
```
EvalMe has several available options and is fully compatible with Hyperfine. Use the `-h` flag to print the help message:
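```
./evalme.py -h
```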
Available options:
| flag | effect | value type | example | default value |
|---|---|---|---|---|
| `-s`, `--slice` | Interval (in seconds) at which EvalMe polls the spawned process to sample its memory usage (see the polling sketch below this table). | float | `-s 0.001` | 0.1 |
| `-r`, `--runs` | Perform exactly RUNS runs for each command. | int | `-r 150` | 10 |
| `-w`, `--warmup` | Perform NUM warmup runs before the actual benchmark. This can be used to fill (disk) caches for I/O-heavy programs. | int | `-w 5` | |
| `-p`, `--prepare` | Execute a command before each timing run. This is useful for clearing disk caches, for example. `--prepare` can be specified once for all commands, or multiple times, once per benchmarked command; in the latter case, each preparation command runs before its corresponding benchmark command. | string | | |
| `-c`, `--cleanup` | Execute a command after the completion of all benchmarking runs for each individual benchmarked command. This is useful if the benchmarked commands produce artifacts that need to be cleaned up. | string | | |
| `-j`, `--json` | Print JSON-formatted output. CPU time is measured in seconds; memory in bytes. | | | |
| `-v`, `--verbose` | Print hyperfine's original output. | | | |
| `-h` | Print the help message. | | | |
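To make the `--slice` behaviour concrete, here is a minimal sketch of interval-based memory polling, assuming the `psutil` package; the function name and structure are illustrative, not EvalMe's actual implementation:

```python
import subprocess
import time

import psutil  # assumed helper library; EvalMe's real implementation may differ


def poll_peak_memory(cmd, slice_seconds=0.1):
    """Spawn cmd in a shell and sample its RSS every slice_seconds.

    Returns the peak resident set size observed, in bytes.
    """
    proc = subprocess.Popen(cmd, shell=True)
    ps = psutil.Process(proc.pid)
    peak = 0
    while proc.poll() is None:  # keep sampling until the process exits
        try:
            peak = max(peak, ps.memory_info().rss)
        except psutil.NoSuchProcess:
            break  # the process exited between poll() and memory_info()
        time.sleep(slice_seconds)
    return peak


# A smaller slice catches short-lived allocation spikes at the cost of
# more sampling overhead:
print(poll_peak_memory("sleep 1", slice_seconds=0.001))
```

A finer slice gives a more faithful peak for short-lived processes at the cost of polling overhead, which is the trade-off behind the 0.1 s default.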
Usage example:

```
./evalme.py 'i=0; while [ $i -le 1000 ]; do i=$((i+1)); aux=$((i*i)); done' -r 100 -s 0.001
```
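Since EvalMe accepts Hyperfine's `--prepare` and `--cleanup` flags, artifacts can be removed between runs. A hypothetical example (the archive and directory names are illustrative only):

```
./evalme.py 'tar -xzf data.tar.gz' --prepare 'rm -rf data' --cleanup 'rm -rf data' -r 20
```

Here `rm -rf data` runs before each timing run, and once more after the benchmark completes.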
JSON output:

```
./evalme.py 'i=0; while [ $i -le 1000 ]; do i=$((i+1)); aux=$((i*i)); done' -r 100 -s 0.001 --json
```
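Because `--json` emits machine-readable results, runs can be scripted. A minimal sketch (assuming `evalme.py` is in the current directory; no particular output schema is assumed beyond it being valid JSON):

```python
import json
import subprocess

# Run the same benchmark as above with --json and capture stdout.
cmd = [
    "./evalme.py",
    "i=0; while [ $i -le 1000 ]; do i=$((i+1)); aux=$((i*i)); done",
    "-r", "100", "-s", "0.001", "--json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Pretty-print whatever EvalMe returned; CPU figures are in seconds and
# memory figures in bytes, as noted in the options table.
data = json.loads(result.stdout)
print(json.dumps(data, indent=2))
```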
Licensed under the GNU GPLv3 license.