ContextEngine is responsible for providing context to the LLM, given a set of search queries.

Once called with a set of queries, the ContextEngine will go through the following steps:
1. Query the knowledge base for relevant documents
2. Build a context from the documents retrieved that can be injected into the LLM prompt

The context engine considers token budgeting when building the context, and tries to maximize the amount of relevant information that can be provided to the LLM within the token budget.
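To make the token-budgeting behaviour concrete, here is a minimal, illustrative sketch of packing retrieved documents into a budget. The `Document` shape, the relevance `score`, and the `count_tokens` helper are assumptions made for the example; this is not the engine's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    score: float  # relevance score returned by the knowledge base query


def count_tokens(text: str) -> int:
    # Placeholder tokenizer; a real engine would use the LLM's own tokenizer.
    return len(text.split())


def build_context(documents: list[Document], max_context_tokens: int) -> str:
    """Greedily pack the most relevant documents into the token budget."""
    parts: list[str] = []
    used = 0
    for doc in sorted(documents, key=lambda d: d.score, reverse=True):
        cost = count_tokens(doc.text)
        if used + cost > max_context_tokens:
            continue  # skip documents that would overflow the budget
        parts.append(doc.text)
        used += cost
    return "\n\n".join(parts)
```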
Love it! 😍
LGTM.
See two minor suggestions.
To create a context engine, you must provide a knowledge base and optionally a context builder.
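A minimal usage sketch of what that might look like. The import paths, the `Query` model, the `connect()` call, and the `max_context_tokens` parameter are assumptions based on this docstring and may not match the actual package.

```python
from canopy.knowledge_base import KnowledgeBase       # assumed import path
from canopy.context_engine import ContextEngine       # assumed import path
from canopy.models.data_models import Query           # assumed import path

# A knowledge base backed by an existing index (name is hypothetical).
kb = KnowledgeBase(index_name="my-index")
kb.connect()

# The context builder is optional, per the docstring above.
context_engine = ContextEngine(knowledge_base=kb)

# Query with a token budget; the engine retrieves documents and builds a context.
context = context_engine.query(
    [Query(text="What is a context engine?")],
    max_context_tokens=512,
)

# The review comment below notes the returned Context can be dumped as text.
print(context.to_text())
```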
Maybe we should also mention that the returned Context object could be either structured or unstructured, but it always supports a .to_text() method that dumps it as formatted text?
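A sketch of the distinction being suggested, assuming hypothetical `StructuredContext` and `UnstructuredContext` shapes; only the always-available `.to_text()` behaviour comes from the comment above.

```python
import json
from typing import Protocol


class Context(Protocol):
    """Whatever its internal shape, a context can always be dumped as text."""
    def to_text(self) -> str: ...


class UnstructuredContext:
    def __init__(self, text: str) -> None:
        self.text = text

    def to_text(self) -> str:
        return self.text


class StructuredContext:
    def __init__(self, snippets: list[dict]) -> None:
        self.snippets = snippets

    def to_text(self) -> str:
        # Render the structured snippets as formatted text for the LLM prompt.
        return json.dumps(self.snippets, indent=2)
```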
IMO it's too deep for this layer of docstrings; it's something to look at in the context builder, I think, because it's only relevant once you want to customize it.
Co-authored-by: igiloh-pinecone <[email protected]>
context engine docstring