From 00e9aea0f7de03241bcd56025042395d21a3f7cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Cihat=20SAL=C4=B0K?= <57585087+cihat@users.noreply.github.com>
Date: Tue, 19 Mar 2024 11:13:14 +0300
Subject: [PATCH] Fix readme typo

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 9fe6c33..dc0f308 100644
--- a/README.md
+++ b/README.md
@@ -109,7 +109,7 @@ save_chat = 's'
 
 ## Chatgpt
 
-To use `chatgpt` as the backemd, you'll need to provide an API key for OpenAI. There are two ways to do this:
+To use `chatgpt` as the backend, you'll need to provide an API key for OpenAI. There are two ways to do this:
 
 Set an environment variable with your API key:
 
@@ -132,7 +132,7 @@ The default model is set to `gpt-3.5-turbo`. Check out the [OpenAI documentation
 
 ## llama.cpp
 
-To use `llama.cpp` as the backemd, you'll need to provide the url that points to the server :
+To use `llama.cpp` as the backend, you'll need to provide the url that points to the server :
 
 ```toml
 [llamacpp]
@@ -161,7 +161,7 @@ More infos about llama.cpp api [here](https://github.com/ggerganov/llama.cpp/blo
 
 ## Ollama
 
-To use `ollama` as the backemd, you'll need to provide the url that points to the server with the model name :
+To use `ollama` as the backend, you'll need to provide the url that points to the server with the model name :
 
 ```toml
 [ollama]
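
The hunks above touch the README's backend configuration sections. As a rough illustration of the kind of TOML config those sections describe, here is a minimal sketch; apart from the `[llamacpp]` and `[ollama]` table headers visible in the diff context, every key name and value below (the `[chatgpt]` table, `url`, `model`, and the placeholder addresses) is an assumption, not the project's documented configuration.

```toml
# Hypothetical config sketch; only the [llamacpp] and [ollama] table
# headers appear in the patch context, everything else is assumed.

[chatgpt]
# The README also allows supplying the OpenAI key via an environment variable.
openai_api_key = "sk-..."
model = "gpt-3.5-turbo"

[llamacpp]
# URL of a running llama.cpp server (assumed host and port).
url = "http://localhost:8080/completion"

[ollama]
# URL of a running ollama server plus the model name to use (assumed values).
url = "http://localhost:11434/api/chat"
model = "llama2"
```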