Interim Anthropic Prompt Caching Workaround #952
Draft
"Enables" prompt cache control through the system prompt for Anthropic models.
It's basically just a hack that -very roughly- checks if your system prompt string looks like a list, assumes that its been
json.dump
ed,json.load
s it, and converts everything to Anthropic's format.Also adds the returned cache usage values to pydantic-ai's Usage.details, so you can see if it's working or not.
The way I'm using this now is by:
This shouldn't be merged, but I thought I'd put it up in case others find it useful. I'm really happy with `pydantic-ai`, and this was one thing that would otherwise have forced me to write/wrap a bunch of other stuff just to get around it.
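For reference, the cache-usage bookkeeping mentioned above amounts to pulling Anthropic's cache counters out of the response usage and stashing them somewhere visible, e.g. in `Usage.details`. A minimal sketch (the helper name is illustrative; the field names `cache_creation_input_tokens` and `cache_read_input_tokens` are the ones Anthropic's API reports):

```python
def cache_usage_details(usage: dict) -> dict[str, int]:
    """Collect Anthropic's cache-related usage counters into a plain
    dict, suitable for merging into pydantic-ai's Usage.details."""
    details = {}
    for key in ("cache_creation_input_tokens", "cache_read_input_tokens"):
        value = usage.get(key)
        if value is not None:
            details[key] = value
    return details
```

A nonzero `cache_read_input_tokens` on a follow-up request is the signal that caching is actually kicking in.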