I tried setting the cache flag to False, as per the documentation, to force requests not to use the cache. I'm not getting the results I expect - i.e. I'm getting the same results each time.
From the documentation in the migration guide:
gpt_4o_mini = dspy.LM('openai/gpt-4o-mini', temperature=0.9, max_tokens=3000, stop=None, cache=False)
My code uses Ollama:
lm = dspy.LM("ollama_chat/granite3-moe:1b", provider="ollama", api_base="http://localhost:11434/", cache=False, launch_kwargs={'seed', 42}) dspy.configure(lm=llm)
When I call it to generate a random name with the following code, I get the same result from run to run:
llm("generate a random human person name. no descriptions. no famous people. not fantasy or science fiction based.")
Am I missing something in the configuration?
Appreciate any help!
Les