diff --git a/README.md b/README.md
index 9128a32..229be61 100644
--- a/README.md
+++ b/README.md
@@ -19,24 +19,6 @@ Discussions around the performance of different structured generation methods te
 Different methods make different trade-offs, and it is important to know when a method is faster than another. We will highlight differences, ideally using minimum pathological examples.
 
-## Explanations
+## How benchmarks are run
 
 We do not use models to run the benchmarks, as it would lead to increased runtime, more complex code, and unpredictable generation lengths. We instead take a string in the language of the regular expressions / JSON Schemas, tokenize it and iterate over it pretending these were generated tokens.
-
-### Outlines
-
-If you look at the [benchmarking suite for Outlines](https://github.com/outlines-dev/benchmarks/blob/main/src/outlines.py) you will notice that we execute:
-
-``` python
-Regexguide("a", tokenizer)
-```
-
-in the initialization phase of the benchmark. This serves two purposes:
-
-1. JIT-compile the functions decorated with `@numba.njit`;
-2. Convert vocabulary strings to Numba types.
-
-This only ever needs to be done once, possibly while loading the model, and could be made to disappear using Ahead Of Time compilation. In this benchmarking suite we thus measure:
-
-1. The time it takes to compile the index corresponding to a regular expression;
-2. The time it takes to look for valid tokens when generating text.
 
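For illustration, here is a minimal sketch of the measurement loop described in the new "How benchmarks are run" section: a pre-written string that is valid for the regular expression / JSON Schema is tokenized, and its tokens are replayed as if a model had generated them, timing each valid-token lookup. The `guide` and `tokenizer` interfaces (`initial_state`, `allowed_tokens`, `advance`, `encode`) are placeholders standing in for whichever structured-generation library is being benchmarked, not the repository's actual code.

``` python
import time

def replay_benchmark(guide, tokenizer, valid_string):
    """Replay a known-valid string through a structured-generation guide,
    timing setup and each valid-token lookup. `guide` and `tokenizer` are
    placeholder interfaces, not objects from this repository."""
    token_ids = tokenizer.encode(valid_string)   # pretend these were generated tokens

    start = time.perf_counter()
    state = guide.initial_state()                # one-time setup, e.g. building an index
    setup_time = time.perf_counter() - start

    lookup_times = []
    for token_id in token_ids:
        start = time.perf_counter()
        guide.allowed_tokens(state)              # which tokens are valid here?
        lookup_times.append(time.perf_counter() - start)
        state = guide.advance(state, token_id)   # accept the "generated" token

    return setup_time, lookup_times
```

Because the string is fixed in advance, the number of iterations is deterministic and no model forward passes are involved, which keeps the benchmark fast and its runtime predictable.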