Combine results from all parameters into one benchmark #778

rptaylor opened this issue Nov 27, 2024 · 1 comment

It seems like it would be a common use case to treat the execution of all parameters as one benchmark, combining the results and reporting a single average/stddev across all of them.

For example, benchmarking data transfers over a network: I could easily repeat the same command to transfer one data file N times, but to avoid caching effects (including caching on the remote data server, which can't be cleared with a local command) I instead want to transfer N different data files, once each. So I can do this:

./hyperfine --runs 1 --parameter-scan index 1 20 'transfer_data file{index}.dat'

This runs the intended commands, but presents the results as if they were 20 individual benchmarks, instead of 20 trials of the same benchmark. Is there any way to combine the results into a single average and standard deviation?
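
The closest workaround I can think of is to export the raw timings and pool them externally. A rough sketch (assuming the results[].times layout of hyperfine's --export-json output; results.json is just a scratch filename I made up):

./hyperfine --runs 1 --export-json results.json --parameter-scan index 1 20 'transfer_data file{index}.dat'

# Pool every per-run time across all 20 parameterized results, then compute
# one mean and (sample) standard deviation over the pooled list.
jq -r '[.results[].times[]]
       | (add / length) as $mean
       | (map(($mean - .) * ($mean - .)) | add / (length - 1) | sqrt) as $stddev
       | "mean: \($mean) s   stddev: \($stddev) s"' results.json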

rptaylor commented Nov 28, 2024

A simpler alternative would be for hyperfine to expose the current run number to me as a variable.
Then a parameter scan would not be needed at all.

i.e. I should be able to do something like

./hyperfine --runs 10 'echo {run}'

and it would print 1 through 10 when run with --show-output.
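
In the meantime, a shell-level workaround is to keep the run index in a scratch file and increment it from inside the benchmarked command (sketch only; /tmp/run_counter is a made-up path, and any --warmup runs would also bump the counter since they execute the same command):

echo 0 > /tmp/run_counter
./hyperfine --runs 10 --show-output \
    'i=$(($(cat /tmp/run_counter) + 1)); echo "$i" > /tmp/run_counter; echo "run $i"'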
