This repository has been archived by the owner on Nov 13, 2024. It is now read-only.
Apply Byron's and Nathan's suggestions from code review
Co-authored-by: byronnlandry <[email protected]>
igiloh-pinecone and byronnlandry authored Nov 5, 2023
1 parent 08a7f23 commit 3d16600
Showing 4 changed files with 12 additions and 12 deletions.
src/canopy/models/data_models.py (4 additions & 4 deletions)

@@ -14,11 +14,11 @@ class Query(BaseModel):
     text: str = Field(description="The query text.")
     namespace: str = Field(
         default="",
-        description="The namespace of the query, to learn more about namespaces, see https://docs.pinecone.io/docs/namespaces", # noqa: E501
+        description="The namespace of the query. To learn more about namespaces, see https://docs.pinecone.io/docs/namespaces", # noqa: E501
     )
     metadata_filter: Optional[dict] = Field(
         default=None,
-        description="A pinecone metadata filter, to learn more about metadata filters, see https://docs.pinecone.io/docs/metadata-filtering", # noqa: E501
+        description="A Pinecone metadata filter, to learn more about metadata filters, see https://docs.pinecone.io/docs/metadata-filtering", # noqa: E501
     )
     top_k: Optional[int] = Field(
         default=None,

@@ -39,7 +39,7 @@ class Document(BaseModel):
     )
     metadata: Metadata = Field(
         default_factory=dict,
-        description="The document metadata, to learn more about metadata, see https://docs.pinecone.io/docs/manage-data", # noqa: E501
+        description="The document metadata. To learn more about metadata, see https://docs.pinecone.io/docs/manage-data", # noqa: E501
     )

     class Config:

@@ -89,7 +89,7 @@ class Role(Enum):


 class MessageBase(BaseModel):
-    role: Role = Field(description="The role of the messages author.")
+    role: Role = Field(description="The role of the message's author. Can be one of ['User', 'Assistant', 'System']")
     content: str = Field(description="The contents of the message.")

     def dict(self, *args, **kwargs):
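For context, a minimal sketch of how these models are constructed. The Query field names come straight from the hunks above; Document's id and text fields fall outside this diff and are assumed:

from canopy.models.data_models import Document, Query

# A query scoped to a namespace, with a Pinecone metadata filter
# (https://docs.pinecone.io/docs/metadata-filtering).
query = Query(
    text="How do I configure the chat engine?",
    namespace="my-namespace",
    metadata_filter={"source": {"$eq": "user-guide"}},
    top_k=5,
)

# metadata defaults to an empty dict (default_factory=dict above).
doc = Document(
    id="doc-1",
    text="Canopy is a RAG framework built on the Pinecone vector database.",
    metadata={"source": "user-guide"},
)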
src/canopy_server/__init__.py (1 addition & 1 deletion)

@@ -1,7 +1,7 @@
 description = """
 Canopy is an open-source Retrieval Augmented Generation (RAG) framework and context engine built on top of the Pinecone vector database. Canopy enables you to quickly and easily experiment with and build applications using RAG. Start chatting with your documents or text data with a few simple commands.
-Canopy provides a configurable built-in server so you can effortlessly deploy a RAG-powered chat application to your existing chat UI or interface. Or you can build your own, custom RAG application using the Canopy lirbary.
+Canopy provides a configurable built-in server, so you can effortlessly deploy a RAG-powered chat application to your existing chat UI or interface. Or you can build your own custom RAG application using the Canopy library.

 ## Prerequisites
src/canopy_server/api_models.py (2 additions & 2 deletions)

@@ -8,14 +8,14 @@
 class ChatRequest(BaseModel):
     model: str = Field(
         default="",
-        description="ID of the model to use. Currecntly this field is ignored and this should be configured on Canopy config.", # noqa: E501
+        description="The ID of the model to use. This field is ignored; instead, configure this field in the Canopy config.", # noqa: E501
     )
     messages: Messages = Field(
         description="A list of messages comprising the conversation so far."
     )
     stream: bool = Field(
         default=False,
-        description="""Whether or not to stream the chatbot's response. If set, the response will be server-sent events containing [chat.completion.chunk](https://platform.openai.com/docs/api-reference/chat/streaming) objects""", # noqa: E501
+        description="""Whether or not to stream the chatbot's response. If set, the response is server-sent events containing [chat.completion.chunk](https://platform.openai.com/docs/api-reference/chat/streaming) objects""", # noqa: E501
     )
     user: Optional[str] = Field(
         default=None,
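Read as a request body, the model above maps to JSON like the following sketch. The lowercase role string follows the OpenAI convention that the stream description links to; it is an assumption, not something this diff shows:

chat_request = {
    "model": "",  # ignored; the model is configured in the Canopy config
    "messages": [
        {"role": "user", "content": "What is Canopy?"},
    ],
    "stream": False,  # True yields server-sent events of chat.completion.chunk objects
}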
src/canopy_server/app.py (5 additions & 5 deletions)

@@ -77,7 +77,7 @@ async def chat(
     """
     Chat with Canopy, using the LLM and context engine, and return a response.

-    The request schema is following OpenAI's chat completion API schema: https://platform.openai.com/docs/api-reference/chat/create.
+    The request schema follows OpenAI's chat completion API schema: https://platform.openai.com/docs/api-reference/chat/create.
     Note that all fields other than `messages` and `stream` are currently ignored. The Canopy server uses the model parameters defined in the `ChatEngine` config for all underlying LLM calls.
     """ # noqa: E501
@@ -122,9 +122,9 @@ async def query(
 ) -> ContextContent:
     """
     Query the knowledge base for relevant context.
-    The returned text might be structured or unstructured, depending on the ContextEngine's configuration.
-    Query allows limiting the context length (in tokens), to control LLM costs.
-    This method does not pass through the LLM and uses only retieval and construction from Pinecone DB.
+    The returned text may be structured or unstructured, depending on the Canopy configuration.
+    Query allows limiting the context length in tokens to control LLM costs.
+    This method does not pass through the LLM and uses only retrieval and construction from Pinecone DB.
     """ # noqa: E501
     try:
         context: Context = await run_in_threadpool(
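A matching sketch for querying the knowledge base directly, with the token cap the docstring mentions. The route path and the queries/max_tokens body shape are assumptions based on the Query model edited earlier in this commit:

import requests

resp = requests.post(
    "http://localhost:8000/v1/context/query",  # assumed route and port
    json={
        "queries": [{"text": "chat engine configuration"}],
        "max_tokens": 512,  # cap the returned context length to control LLM costs
    },
)
resp.raise_for_status()
print(resp.json())  # structured or unstructured text, depending on the Canopy configuration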
@@ -152,7 +152,7 @@ async def upsert(
     Upsert documents into the knowledge base. Upserting is a way to add new documents or update existing ones.
     Each document has a unique ID. If a document with the same ID already exists, it is updated.
-    The documents will be chunked and encoded, then the resulting encoded chunks will be sent to the Pinecone index in batches
+    The documents are chunked and encoded, then the resulting encoded chunks are sent to the Pinecone index in batches.
     """ # noqa: E501
     try:
         logger.info(f"Upserting {len(request.documents)} documents")
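And a sketch of an upsert call mirroring the docstring: documents are chunked and encoded, and the encoded chunks are written to the Pinecone index in batches. The route path is an assumption, and the document shape reuses the Document model fields from data_models.py:

import requests

resp = requests.post(
    "http://localhost:8000/v1/context/upsert",  # assumed route and port
    json={
        "documents": [
            {
                "id": "doc-1",  # upserting the same id again updates this document
                "text": "Canopy is a RAG framework built on the Pinecone vector database.",
                "metadata": {"source": "user-guide"},
            }
        ]
    },
)
resp.raise_for_status()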

