
Disabling cache not working #1857

Open
lestan opened this issue Nov 25, 2024 · 0 comments
lestan commented Nov 25, 2024

I tried setting the cache flag to False, as per the documentation, to force requests not to use the cache. I'm not getting the results I expect; that is, I get the same output on every run.

From the documentation in the migration guide:

gpt_4o_mini = dspy.LM('openai/gpt-4o-mini', temperature=0.9, max_tokens=3000, stop=None, cache=False)
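For context, here is a minimal sketch in plain Python (hypothetical names, not dspy internals) of the kind of response cache a `cache=False` flag is meant to bypass:

```python
import hashlib
import itertools
import json

class CachedLM:
    """Toy stand-in for an LM client with an optional response cache."""

    def __init__(self, generate, cache=True):
        self.generate = generate  # the underlying model call
        self.cache = cache        # mirrors the cache=True/False flag
        self._store = {}

    def __call__(self, prompt):
        key = hashlib.sha256(json.dumps(prompt).encode()).hexdigest()
        if self.cache and key in self._store:
            return self._store[key]  # cache hit: identical output every time
        result = self.generate(prompt)
        if self.cache:
            self._store[key] = result
        return result

# With cache=False every call reaches the model, so any remaining
# repetition comes from the model itself (e.g. a fixed seed).
counter = itertools.count()
lm = CachedLM(lambda p: f"{p}-{next(counter)}", cache=False)
print(lm("hi"), lm("hi"))  # two distinct outputs: hi-0 hi-1
```

The point of the sketch: if outputs are still identical with the cache disabled, the determinism is coming from the sampling parameters (a fixed seed, or temperature 0) rather than from caching.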

My code uses Ollama:

lm = dspy.LM("ollama_chat/granite3-moe:1b", provider="ollama", api_base="http://localhost:11434/", cache=False, launch_kwargs={'seed': 42})
dspy.configure(lm=lm)

When I call it to generate a random name, I get the same result from run to run with the following code:

lm("generate a random human person name. no descriptions. no famous people. not fantasy or science fiction based.")

Am I missing something in the configuration?

Appreciate any help!

Les
