I would like to do that, but the endpoint is configurable, so if another API is used (I know that shiv recommended others) the return code can differ. As far as I remember, the OpenAI API's return code doesn't even indicate that the comment is too long, just a generic failure, and parsing the error message feels unreliable.
Currently, if the LLM call fails, we throw and abort the run completely, which cancels all the rewards and results and requires a new run to be triggered afterwards. While some errors are not recoverable, others, such as sending more tokens than the model can handle, can be avoided by splitting the prompt and retrying with a smaller one until it fits the given model.
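The prompt-splitting idea above could be sketched roughly as follows. This is a minimal illustration, not the plugin's actual code: `countTokens` is a crude stand-in (about 4 characters per token) for a real tokenizer, and `shrinkToFit` is a hypothetical helper name.

```typescript
// Rough token estimate: ~4 characters per token. A real implementation
// would use the model's tokenizer (e.g. tiktoken) instead.
function countTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Halve the prompt until it fits the model's token budget.
function shrinkToFit(prompt: string, maxTokens: number): string {
  let p = prompt;
  while (countTokens(p) > maxTokens && p.length > 0) {
    p = p.slice(0, Math.floor(p.length / 2)); // keep the first half only
  }
  return p;
}
```

A smarter version would split on comment boundaries rather than blindly halving, so the retried prompt stays coherent.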
Needed changes
We should implement:
- a retry mechanism which, on non-fatal errors, gives the LLM call another try
- coverage of cases such as the token count being too large, responses containing truncated JSON, and network failures
- posting a message saying the results are being retried
- a failure limit
- throwing at the end if every try is unsuccessful
- related tests
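The retry mechanism described above might look something like this sketch. All names here (`LlmError`, `isRetryable`, `withRetries`, the `onRetry` hook) are illustrative assumptions, not the plugin's real API; the error classification is a guess based on the cases listed in this issue.

```typescript
// Hypothetical error shape for a failed LLM call.
type LlmError = { status?: number; message: string };

// Non-fatal errors worth retrying: rate limits (429), transient server
// errors, truncated JSON responses, and network failures.
function isRetryable(err: LlmError): boolean {
  if (err.status === 429) return true;
  if (err.status !== undefined && err.status >= 500) return true;
  return /truncated|network|ECONNRESET/i.test(err.message);
}

// Retry `attempt` up to `maxTries` times; `onRetry` is where the plugin
// could post a "results are being retried" message.
async function withRetries<T>(
  attempt: () => Promise<T>,
  maxTries = 3,
  onRetry: (tryNo: number, err: LlmError) => void = () => {}
): Promise<T> {
  let lastErr: LlmError = { message: "no attempts made" };
  for (let tryNo = 1; tryNo <= maxTries; tryNo++) {
    try {
      return await attempt();
    } catch (e) {
      lastErr = e as LlmError;
      if (!isRetryable(lastErr)) throw lastErr; // fatal: abort immediately
      onRetry(tryNo, lastErr);
    }
  }
  throw lastErr; // every try was unsuccessful
}
```

The fatal path preserves the current behavior (abort the run), while non-fatal errors get another try up to the failure limit.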
See error code 429: https://platform.openai.com/docs/guides/error-codes
Originally posted by @gentlementlegen in #225 (comment)