This repository has been archived by the owner on Nov 13, 2024. It is now read-only.

Commit: Edit README.md
Changes for project name and correctness. I did not update the word 'resin' where it appears in code, since I'm not sure what the updated keywords will be.
byronnlandry authored Oct 20, 2023
1 parent 3fa0194 commit a4bcf30
Showing 1 changed file with 41 additions and 38 deletions.
# Canopy

**Resin** is a tool that allows you to build AI applications using your own data. It is built on top of Pinecone, the world's most powerfull vector database. **Resin** is desinged to be well packaged and easy to use. It can be used as a library or as a service. It is also designed to be modular, so you can use only the parts that you need. **Resin** is built with the following principles in mind:
**Canopy** is a tool that allows you to build AI applications using your own data. **Canopy** is built with the following principles in mind:

* **Easy to use** - **Canopy** is designed to be easy to use. It is well packaged and can be installed with a single command. It is also designed to be modular, so you can use only the parts that you need.

* **Production ready** - **Canopy** is built on top of Pinecone, the world's most powerful vector database. It is designed to be production ready: 100% tested, well documented, maintained, and supported.

* **Open source** - **Canopy** is open source and free to use. It is also designed to be open and extensible, so you can easily add your own components and extend the functionality.

* **Operative AI** - **Canopy** is designed to be used in production. It allows developers to set an operating point to better control token consumption in the prompt or in generation. **Canopy** can maximize context quality and relevance while controlling the cost of the AI system.

## Concept

**Canopy** can be used as a library and as a service. The following diagram illustrates the conceptual model:

![conceptual model](https://github.com/pinecone-io/context-engine/blob/dev/.readme-content/sketch.png)

In the diagram above, entities have the following meanings:

* **ChatEngine** _`/context/chat/completions`_ - is a complete RAG unit. This performs the RAG pipeline and returns the answer from the LLM. This API follows the OpenAI API specification and can be used as a drop-in replacement for the OpenAI API.

* **ContextEngine** _`/context/query`_ - is a proxy between your application and Pinecone. It handles the retrieval step of the RAG pipeline and returns snippets of context along with their respective sources. Internally, ContextEngine performs a process called ContextBuilding; that is, it finds the documents most relevant to your query and returns them as a single context object.

* **KnowledgeBase** _`/context/upsert`_ - is the interface to upload your data into Pinecone. It creates a new index and configures it for you. It also handles the processing of the data, namely chunking and encoding (embedding).

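As a sketch, the service endpoints above are plain HTTP routes. The helper below builds an illustrative request for the ContextEngine query endpoint; the base URL matches the quickstart's default port, but the payload field names (`queries`, `text`, `max_tokens`) are assumptions for this sketch, not the documented schema:

```python
import json

# Base URL where the service listens (the quickstart starts it on port 8000).
BASE_URL = "http://localhost:8000/context"

def build_query_request(query: str, max_tokens: int = 512) -> dict:
    """Build an illustrative ContextEngine query request.

    The body field names here are assumptions for this sketch,
    not the documented API schema.
    """
    return {
        "url": f"{BASE_URL}/query",
        "body": {"queries": [{"text": query}], "max_tokens": max_tokens},
    }

request = build_query_request("What is a vector database?")
print(json.dumps(request, indent=2))
```

You could then send `request["body"]` to `request["url"]` with any HTTP client once the service is running.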
## How to install Canopy

Follow the steps below to install Canopy.

1. Set up the environment variables.

```bash
export PINECONE_API_KEY="<PINECONE_API_KEY>"
export OPENAI_API_KEY="<OPENAI_API_KEY>"
export INDEX_NAME="test-index-1"
```

2. Install the package.

```bash
pip install pinecone-resin
```

Now you are good to go! See the quickstart guide below on how to run a basic demo.
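The environment check in step 1 can be sketched in Python before running any commands. The list below covers the variables shown above (the full export list is truncated in this view, so there may be more):

```python
import os

# Variables shown in the install step above; the full list may be longer.
REQUIRED_VARS = ["PINECONE_API_KEY", "OPENAI_API_KEY", "INDEX_NAME"]

def missing_vars(env: dict) -> list:
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_vars(dict(os.environ))
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
else:
    print("All required environment variables are set.")
```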

## Quickstart

In this quickstart, we show you how to use **Canopy** to build a simple question-answering system using RAG (retrieval-augmented generation).

### 1. Create a new **Canopy** index.

**Canopy** creates and configures a new Pinecone index on your behalf. Just run the following command:

```bash
resin new
```

Next, follow the CLI instructions. The new index has the prefix `resin--<INDEX_NAME>`.

> Note: Perform this step only once per index.
![](https://github.com/pinecone-io/context-engine/blob/change-readme-cli-names/.readme-content/resin-new.gif)

### 2. Upload data.

You can load data into your **Canopy** Index using the CLI. Run the following command:

```bash
resin upsert <PATH_TO_DATA>
```

The data should be in Parquet format, where each row is a document. The documents should have the following schema:

```
+----------+--------------+--------------+---------------+
```

Follow the instructions in the CLI to upload your data.

![](https://github.com/pinecone-io/context-engine/blob/change-readme-cli-names/.readme-content/resin-upsert.gif)

### 3. Start the **Canopy** service.

The **Canopy** service serves as a proxy between your application and Pinecone. It also handles the RAG part of the application. To start the service, run the following command:

```bash
resin start
```

Now you should see standard Uvicorn logs, as in the following example shell output:

```
Starting Canopy service on 0.0.0.0:8000
INFO: Started server process [24431]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
![](https://github.com/pinecone-io/context-engine/blob/change-readme-cli-names/.readme-content/resin-start.gif)


### 4. Chat with your data.

Now that you have data in your index, you can chat with it using the CLI. Run the following command:

```bash
resin chat
```

This opens a chat interface in your terminal. Ask questions, and **Canopy** tries to answer them using the data you uploaded.

![](https://github.com/pinecone-io/context-engine/blob/change-readme-cli-names/.readme-content/resin-chat.gif)

To compare the chat response with and without RAG, use the `--no-rag` flag, as in the following command:

```bash
resin chat --no-rag
```

This opens a similar chat interface window, but sends your question directly to the LLM without the RAG pipeline.

![](https://github.com/pinecone-io/context-engine/blob/change-readme-cli-names/.readme-content/resin-chat-no-rag.gif)


### 5. Stop the **Canopy** service.

To stop the service, press `CTRL+C` in the terminal where you started it.

If you have started the service in the background, you can stop it by running the following command:

```bash
resin stop
```

## Advanced usage

### 1. Migrating existing OpenAI applications to **Canopy**

If you already have an application that uses the OpenAI API, you can migrate it to **Canopy** by changing the API endpoint to `http://host:port/context` as follows:

```python
import openai
openai.api_base = "http://host:port/context"
# now you can use the OpenAI API as usual
```

To perform the same task without global state change, use code like the following:

```python
import openai

# A minimal sketch, assuming the pre-1.0 `openai` Python client, which
# accepts per-request overrides such as `api_base`. The model name and
# message are illustrative.
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="http://host:port/context",
)
```
