Unbatched queries to OpenAI are faster than nlp.pipe #224

Zatteliet asked this question in Q&A
Answered by rmitsch
Alright, I ran some tests. The problem with benchmarking OpenAI is that, in my experience, API performance fluctuates wildly: running the same test repeatedly can yield runtimes that differ by a factor of 5 or more.

Because of this, I ran the prompts in several variations a couple of times with different values of n. The average times were quite comparable. If you haven't done so already, I recommend running your script multiple times as well and averaging the runtimes for your comparison, along the lines of the sketch below.
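
A minimal sketch of such a benchmark, assuming a working spacy-llm pipeline assembled from a config file. The config path `config.cfg`, the sample texts, and `N_RUNS` are placeholders; substitute your own:

```python
import statistics
import time

from spacy_llm.util import assemble

# Placeholder config path; point this at your own spacy-llm config.
nlp = assemble("config.cfg")
texts = ["Example text one.", "Example text two."] * 10
N_RUNS = 5  # average over several runs to smooth out API variance


def bench(fn) -> float:
    """Return the mean wall-clock time of fn over N_RUNS runs."""
    runs = []
    for _ in range(N_RUNS):
        start = time.perf_counter()
        fn()
        runs.append(time.perf_counter() - start)
    return statistics.mean(runs)


# Compare batched processing via nlp.pipe against individual calls.
piped = bench(lambda: list(nlp.pipe(texts)))
single = bench(lambda: [nlp(t) for t in texts])
print(f"nlp.pipe: {piped:.2f}s, individual calls: {single:.2f}s")
```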

My conclusion is that using .pipe() is about as fast as running documents individually, which is the expected behavior.
