Config docs #331
uriafranko asked this question in Q&A (unanswered)
Hey, I'm trying to configure a different LLM engine but can't find any docs on how to do this.
The current config I'm using is:

```yaml
chat_engine:
  params:
    system_prompt: >
      Use the following pieces of context to answer the user question at the next messages.
      This context retrieved from a knowledge database and you should use only the facts
      from the context to answer. Always remember to include the source to the documents
      you used from their 'source' field in the format 'Source: $SOURCE_HERE'. If you
      don't know the answer, just say that you don't know, don't try to make up an answer,
      use the context. Don't address the context directly, but use it to answer the user
      question like it's your own knowledge. My job is dependent on the user satisfaction,
      so make sure to provide the best answer you can.
    max_generated_tokens: 2000
    max_prompt_tokens: 8000
    max_context_tokens: 6000
```
But I can't seem to find a way to adjust the model or the provider itself.
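For context, what I was hoping for is something along these lines. The `llm` section and the `type` / `model_name` keys below are just my guess at the schema; I couldn't confirm them anywhere in the docs:

```yaml
chat_engine:
  llm:
    # Guessed keys - not documented anywhere I could find.
    type: SomeOtherLLM        # placeholder for a different provider class
    params:
      model_name: my-model    # the specific model I'd like to use
  params:
    max_generated_tokens: 2000
    max_prompt_tokens: 8000
    max_context_tokens: 6000
```

Any pointer to where the available engines/providers and their config parameters are documented would be appreciated.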