Add a dataset write benchmark #123
Questions to self (not expecting a timely answer from Jon): Provocative point of view: knowing almost nothing, I feel like this is an 'extend feature coverage in a test suite' kind of idea, and not so much a benchmarking topic in the most traditional sense. Am I on the right track, or am I missing something? Or do we already care about performance aspects from the start, such as the wall time it takes for an operation to complete?
Does that imply 'to the file system'? I am asking because then it's interesting to see how much the run time (duration) would start to be affected by the I/O performance of the test environment, and especially the volatility of that I/O performance.
FYI, I am taking this on because Jon recommended this as a good first issue to get involved here 🎈.
Documenting preparation of environment:
I then tried running a specific benchmark:
Interesting. Thoughts:
Exploring cmdline options.
Did this again with 10 iterations. Got this:
Now, given all those aggregates, what is missing is the standard error of the mean, which we could use to plot meaningful error bars (at least that's a very canonical thing to do: plot the mean and then the standard error of the mean). And then maybe there could be a printable result along those lines. Of course, the assumption that things are normally distributed might be flawed, and maybe the minimal value plus the volatility are more interesting than the mean value. Anyway. Just exploring, I know some of this is super off topic from the purpose of this ticket.
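For illustration, a minimal sketch (not Conbench code, and the per-iteration timings are made up) of computing the mean and its standard error:

```python
import statistics

# Hypothetical per-iteration wall-clock timings in seconds.
samples = [1.92, 2.01, 1.97, 2.10, 1.95, 1.99, 2.03, 1.96, 2.05, 1.98]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)   # sample standard deviation (n - 1)
sem = stdev / len(samples) ** 0.5   # standard error of the mean

print(f"mean: {mean:.3f} s ± {sem:.3f} s (SEM, n={len(samples)})")
print(f"min:  {min(samples):.3f} s, stdev: {stdev:.3f} s")
```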
On topic. Looks like we want to use `pyarrow.dataset.write_dataset()` for this. Interestingly, there is no corresponding write method on the dataset/scanner objects themselves.
On Linux, a tmpfs mount (such as `/dev/shm`) can serve as the write target, which largely takes disk write I/O out of the picture.
Yesterday I spent more time investigating the method to choose, and investigated more about how the dataset/table/scanner abstractions in pyarrow actually behave. Confirmed empirically that the dataset and scanner abstractions can be used to build a read-dataset / filter / serialize / write pipeline (RDFSW) while retaining only tiny chunks in memory (a cool benchmark for this pipeline would confirm that memory usage stays tiny!); see the sketch at the end of this comment. I also found that this RDFSW pipeline is dominated by disk read I/O performance, given the kinds of datasets we use here. That means that RDFSW would be a boring, if not useless, benchmark, where the write phase is probably shorter than the fluctuations in the read phase. That is, towards the goal of benchmarking serialization and writing, I propose:
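A rough sketch of the streaming RDFSW pipeline described above (paths, the column name, and the filter expression are hypothetical placeholders):

```python
import pyarrow.dataset as ds

# Read: open the source dataset lazily (nothing is loaded into memory yet).
source = ds.dataset("/path/to/source-dataset", format="parquet")

# Filter: build a scanner that streams and filters batch by batch, so only
# small chunks are held in memory at any point in time.
scanner = source.scanner(filter=ds.field("total_amount") > 0)

# Serialize + write: re-encode the filtered stream as a new dataset.
ds.write_dataset(
    scanner.to_reader(),  # RecordBatchReader keeps the flow streaming
    base_dir="/path/to/destination-dataset",
    format="parquet",
)
```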
Tweaked the timing reporting of the Conbench CLI a bit and discovered a cool library called sigfig: conbench/conbench#538
I want to support that with data. For example, for a case
and the serialize/write phase took ~1/10th of that time:
My attempt to summarize. I want to start adding a benchmark that focuses on serialization and writing-to-tmpfs, starting with data being in memory. That is, the timing of reading-from-whatever-disk-and-then-filtering does not contribute to the benchmark duration. That simplifies reasoning and allows for drawing stronger conclusions (compared with a benchmark that exercises the entire information flow). Given that, on my machine, I see that writing the same kind of data
These are all default settings, and I find the differences quite remarkable. From here, it's interesting to see how csv writing and parquet writing could indeed be improved by changing parameters, as @joosthooz investigated elsewhere. It's also interesting to see more data being written, and of course it's interesting to see how this behaves in CI as opposed to on my machine. So, working towards completing the patch to have something to iterate on.
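A minimal sketch of that measurement idea: start from an in-memory table and time only serialization plus writing to tmpfs (the table contents and the `/dev/shm` target are assumptions for illustration, not the benchmark's actual parameters):

```python
import time

import pyarrow as pa
import pyarrow.csv as pacsv
import pyarrow.parquet as pq

# Hypothetical in-memory data; the real benchmark would use its own source tables.
table = pa.table({
    "x": list(range(1_000_000)),
    "y": [float(i) for i in range(1_000_000)],
})

writers = {
    "parquet": lambda t, path: pq.write_table(t, path),
    "csv": lambda t, path: pacsv.write_csv(t, path),
}

for name, write in writers.items():
    path = f"/dev/shm/write-bench.{name}"  # /dev/shm is a tmpfs on most Linux systems
    t0 = time.monotonic()
    write(table, path)
    print(f"{name}: {time.monotonic() - t0:.3f} s")
```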
Update: this ticket did, after all, motivate me to land a rather specific (focused) benchmark.
I'm super glad to see the new benchmark! Something alluded to above: I do think it might still be worth doing an end-to-end version that interacts with the disk (even if that is dominated by disk reads or writes!), since that represents a non-trivial, real-world chunk of work (e.g. I've got a raw dataset and I want to (re)partition it to fit a pattern that is useful for querying) that we want to make sure doesn't get slower.
The high-level motivation is absolutely reasonable! I have just spent a bit of time consolidating some thoughts around that. A little too deep for this ticket here, but I will write down my thoughts anyway. When working with a multi-stage benchmark, two challenges make such a benchmark difficult to extract value from:

1. Signal weakening: a performance change in one stage gets diluted in the overall duration.
2. Lack of insight: when the overall duration changes, it is hard to attribute the change to a specific stage.
A multi-stage benchmark's noise level is the sum of the noise levels of the individual stages, and the noise level of one stage may easily be larger than the expected duration of another, shorter stage (the toy simulation at the end of this comment illustrates this).

This is interesting to compare with testing. I like end-to-end functional/integration tests because the signal stays strong; that is, end-to-end tests do not suffer from signal weakening (1). With proper logging/debug information, end-to-end tests also often do not really suffer from (2), because there is insight into the complete flow. In contrast, the value of a benchmark quickly dilutes with the number of stages it covers. I think a good strategy is therefore to build benchmarks that are known to be dominated by a certain stage, and to call that out.

I want to ack: there is value in doing end-to-end (multi-stage) benchmarking. An end-to-end benchmark can certainly serve as a sanity check that can uncover drastic performance changes. But there is exponentially more value in covering individual stages via focused benchmarks. Based on these thoughts, a recommendable strategy for benchmarking a multi-stage information flow is:
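As an aside, a toy simulation (made-up numbers, not real measurements) of the signal-weakening point (1):

```python
import random
import statistics

random.seed(1)
N = 50  # hypothetical number of benchmark iterations

def total_duration(short_stage_mean):
    long_stage = random.gauss(10.0, 0.5)                # noisy, long stage (e.g. disk read)
    short_stage = random.gauss(short_stage_mean, 0.05)  # quiet, short stage (e.g. write)
    return long_stage + short_stage

before = [total_duration(1.0) for _ in range(N)]
after = [total_duration(1.1) for _ in range(N)]  # 10% regression in the short stage

print("end-to-end before:", round(statistics.mean(before), 2), "±", round(statistics.stdev(before), 2))
print("end-to-end after: ", round(statistics.mean(after), 2), "±", round(statistics.stdev(after), 2))

# The 0.1 s regression is well below the ~0.5 s noise of the long stage, so the
# end-to-end numbers barely move; a focused benchmark of the short stage alone
# would show a clean 1.0 s -> 1.1 s shift.
```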
With pyarrow, one can (re)write a dataset (partitioned or not) without reading the full thing into memory. We currently have a benchmark that runs a filter on datasets.
We should create a new benchmark that is similar to the filtering one, but that, on top of filtering, also writes the results out to a new dataset (instead of pulling them into a table like we do at
benchmarks/benchmarks/dataset_selectivity_benchmark.py, line 71, at 5ea34d7).
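A rough sketch of the proposed change in behavior (paths and the filter expression are placeholders, not actual benchmark parameters):

```python
import pyarrow.dataset as ds

source = ds.dataset("/path/to/source-dataset", format="parquet")
predicate = ds.field("passenger_count") > 1  # placeholder filter

# What the selectivity benchmark times today: pull the selection into memory.
table = source.to_table(filter=predicate)

# What the new benchmark would time instead: stream the selection out to a
# new (possibly partitioned) dataset without materializing it as a table.
ds.write_dataset(
    source.scanner(filter=predicate).to_reader(),
    base_dir="/path/to/new-dataset",
    format="parquet",
)
```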
We might parameterize this over: