Using llama3.2 as local llm for deepeval #1203
Replies: 2 comments 1 reply
-
It's not clear how you are running the code; there are two approaches:

1. Set the local model once through the CLI (`deepeval set-local-model ...`) and let DeepEval use it as the default evaluation model.
2. Write a custom model class (a subclass of `DeepEvalBaseLLM`) and pass an instance of it to the metric.

When you set a local LLM like this, I think you should use the first approach. Personally I suggest the second approach for higher flexibility, however it involves writing a custom model class (the next comment shows one way to do that).
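A minimal sketch of the first approach, assuming the model has already been registered with `deepeval set-local-model`; the metric choice and test-case values are placeholders for illustration, not anything from this thread:

```python
# Approach 1: no `model` argument is passed, so DeepEval falls back to the
# local model configured via `deepeval set-local-model`.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
)

metric = AnswerRelevancyMetric(threshold=0.7)
evaluate(test_cases=[test_case], metrics=[metric])
```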
-
Hey Lorenzo! I found I needed to set it up in a similar way to Google VertexAI on the Metrics Introduction page (with a few adjustments). Apologies for the formatting, but this is what worked for me:

```python
from deepeval.metrics import GEval

# Modules for model implementation
from deepeval.models.base_model import DeepEvalBaseLLM

# Create a class to implement the local Ollama Llama model for DeepEval
class OllamaLlama(DeepEvalBaseLLM):
    ...  # class body not included in the original comment

# Config for the local Llama model
custom_model_llama = ChatOllama(...)

# Instantiate the model
llama_model = OllamaLlama(custom_model_llama)

# Define your test case
test_case = LLMTestCase(...)

correctness_metric = GEval(...)

# Print completeness score and reason
correctness_metric.measure(test_case)
```
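For anyone who needs the elided parts spelled out, here is a fuller sketch along the same lines, modeled on the custom-model pattern in the DeepEval docs. The `ChatOllama` import location, its parameters, the G-Eval criteria, and the test-case values are assumptions for illustration, not the exact code from the comment above:

```python
from langchain_ollama import ChatOllama  # or langchain_community.chat_models on older LangChain
from deepeval.metrics import GEval
from deepeval.models.base_model import DeepEvalBaseLLM
from deepeval.test_case import LLMTestCase, LLMTestCaseParams


class OllamaLlama(DeepEvalBaseLLM):
    """Wraps a LangChain ChatOllama model so DeepEval metrics can call it."""

    def __init__(self, model):
        self.model = model

    def load_model(self):
        return self.model

    def generate(self, prompt: str) -> str:
        return self.load_model().invoke(prompt).content

    async def a_generate(self, prompt: str) -> str:
        res = await self.load_model().ainvoke(prompt)
        return res.content

    def get_model_name(self):
        return "Ollama Llama 3.2"


# Config for the local Llama model (placeholder values)
custom_model_llama = ChatOllama(model="llama3.2:latest", temperature=0)

# Instantiate the wrapper and hand it to the metric
llama_model = OllamaLlama(custom_model_llama)

# Define your test case (placeholder content)
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
)

correctness_metric = GEval(
    name="Correctness",
    criteria="Determine whether the actual output answers the input correctly.",
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT],
    model=llama_model,
)

# Print completeness score and reason
correctness_metric.measure(test_case)
print(correctness_metric.score, correctness_metric.reason)
```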
-
Hi, I wanted to test deepeval. I set llama3.2:latest as my local model with the command `deepeval set-local-model --model-name=llama3.2:latest --base-url="http://localhost:11434/v1/" --api-key="ollama"`.
When I run the example code from the deepeval website and specify the model I want to use (llama3.2:latest), it doesn't find it. How can I solve this? The code:
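The snippet itself is not shown above; presumably it resembled the quickstart example with the model name passed to the metric as a string. The following is a guess for illustration, not the original code, and the test-case values are placeholders:

```python
# Hypothetical illustration of the scenario described above: the metric is given
# the model name as a plain string even though `deepeval set-local-model` was run.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
)

# Passing "llama3.2:latest" as a string may make DeepEval look it up as a hosted
# provider model rather than the configured local one, which could explain the error.
metric = AnswerRelevancyMetric(model="llama3.2:latest")
evaluate(test_cases=[test_case], metrics=[metric])
```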