It seems like it would be a common use case to treat the execution of all parameter values as one benchmark, combining the results and reporting a single average/stddev across all of them.
For example, when benchmarking data transfers over a network: I could easily repeat the same command to transfer 1 data file N times, but to avoid caching effects (including caching on the remote data server, which can't be cleared with a local command) I instead want to transfer N different data files, once each. So I can do this:
./hyperfine --runs 1 --parameter-scan index 1 20 'transfer_data file{index}.dat'
This runs the intended commands, but presents the result as if it were 20 individual benchmarks, instead of 20 trials of the same benchmark. Is there any way to combine the results together into a single average and standard deviation?
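In the meantime, one workaround is to export the per-parameter results with hyperfine's `--export-json` option and pool the raw run times yourself. A minimal Python sketch, assuming the exported JSON's `results[*].times` layout (the `pooled_stats` helper name and the demo data are just illustrative):

```python
import statistics

def pooled_stats(data):
    """Pool every individual run time across all parametrized commands
    in a hyperfine --export-json document into one mean and stddev."""
    times = [t for result in data["results"] for t in result["times"]]
    return statistics.mean(times), statistics.stdev(times)

# With a real export (hyperfine --runs 1 --parameter-scan index 1 20
# --export-json results.json 'transfer_data file{index}.dat') you would
# load the file with json.load() and pass the parsed dict in.
demo = {"results": [{"times": [1.2]}, {"times": [1.5]}, {"times": [1.1]}]}
mean, stddev = pooled_stats(demo)
print(f"combined mean: {mean:.3f} s, stddev: {stddev:.3f} s")
```

Pooling the raw times (rather than averaging the per-file means) also gives a standard deviation over the full 20-sample set, which is what a single combined benchmark would report.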
A simpler alternative would be for hyperfine to expose the current run number to me as a variable.
Then a parameter scan would not be needed at all.
i.e., I should be able to do something like
./hyperfine --runs 10 'echo {run}'
And it would print 1 through 10 (with --show-output).