Commit 655fc25: Grammar fixes
ChrisJar committed Nov 22, 2024
1 parent e53f34c
Showing 3 changed files with 44 additions and 27 deletions.
61 changes: 39 additions & 22 deletions examples/langchain_multimodal_rag.ipynb
@@ -13,23 +13,23 @@
"id": "91ece9e3-155a-44f4-81e5-2f9492c62a2f",
"metadata": {},
"source": [
"This notebook shows how to perform RAG on the table, chart, and text extraction results of nv-ingest's pdf extraction tools using LangChain"
"This notebook shows how to perform RAG on the table, chart, and text extraction results of NV-Ingest's pdf extraction tools using LangChain"
]
},
{
"cell_type": "markdown",
"id": "c6905d11-0ec3-43c8-961b-24cb52e36bfe",
"metadata": {},
"source": [
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
]
},
{
"cell_type": "markdown",
"id": "81014734-f765-48fc-8fc2-4c19f5f28eae",
"metadata": {},
"source": [
"To start, make sure Langchain and pymilvus are installed and up to date"
"To start, make sure LangChain and pymilvus are installed and up to date"
]
},
{
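For reference, the install step this cell describes would look something like the following notebook cell. The exact package list is an assumption pieced together from the imports visible elsewhere in this diff (langchain_core, langchain_nvidia_ai_endpoints) plus the Milvus integration the notebook text mentions:

# Install or upgrade LangChain, the NVIDIA endpoints integration, the Milvus
# integration, and pymilvus (package list assumed from the imports in this diff)
%pip install -U langchain langchain-core langchain-nvidia-ai-endpoints langchain-milvus pymilvus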
@@ -47,7 +47,7 @@
"id": "d888ba26-04cf-4577-81a3-5bcd537fc2f6",
"metadata": {},
"source": [
"Then, we'll use NV-Ingest's Ingestor interface extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
"Then, we'll use NV-Ingest's Ingestor interface to extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
]
},
{
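A minimal sketch of the extraction step described above, based on the Ingestor interface in the NV-Ingest client documentation; the hostname, port, file path, and exact method chain here are assumptions, not the notebook's elided cell:

from nv_ingest_client.client import Ingestor, NvIngestClient

# Connect to the running NV-Ingest microservice (default docker-compose host/port assumed)
client = NvIngestClient(message_client_hostname="localhost", message_client_port=7670)

# Extract text, tables, and charts from a test pdf, embed the results,
# and upload the embeddings to the Milvus VDB
ingestor = (
    Ingestor(client=client)
    .files("../data/multimodal_test.pdf")  # hypothetical path
    .extract(extract_text=True, extract_tables=True, extract_charts=True)
    .embed()
    .vdb_upload()
)
results = ingestor.ingest()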
@@ -80,7 +80,7 @@
"id": "02131711-31bf-4536-81b7-8c464c7473e3",
"metadata": {},
"source": [
"Now, the text, table, and chart content is extracted and stored in the Milvus VDB along with the embeddings. Next we'll connect LlamaIndex to Milvus and create a vector store so that we can query our extraction results"
"Now, the text, table, and chart content is extracted and stored in the Milvus VDB along with the embeddings. Next we'll connect LangChain to Milvus and create a vector store so that we can query our extraction results"
]
},
{
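The connection step described above might look like this sketch; the embedding model, collection name, and Milvus URI are assumptions and must match whatever the ingestion step actually wrote:

from langchain_milvus import Milvus
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# The model and collection name below are placeholders, not values from the notebook
embeddings = NVIDIAEmbeddings(model="nvidia/nv-embedqa-e5-v5")
vectorstore = Milvus(
    embedding_function=embeddings,
    collection_name="nv_ingest_collection",  # hypothetical collection name
    connection_args={"uri": "http://localhost:19530"},
)
retriever = vectorstore.as_retriever()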
@@ -111,7 +111,7 @@
"id": "b87111b5-e5a8-45a0-9663-2ae6d9ea2ab6",
"metadata": {},
"source": [
"Finally, we'll create an RAG chain using [llama-3.1-405b-instruct](https://build.nvidia.com/meta/llama-3_1-405b-instruct) that we can use to query our pdf in natural language"
"Then, we'll create an RAG chain using [llama-3.1-405b-instruct](https://build.nvidia.com/meta/llama-3_1-405b-instruct) that we can use to query our pdf in natural language"
]
},
{
@@ -125,28 +125,17 @@
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# TODO: Add your NVIDIA API key\n",
"os.environ[\"NVIDIA_API_KEY\"] = \"<YOUR_NVIDIA_API_KEY>\"\n",
"os.environ[\"NVIDIA_API_KEY\"] = \"[YOUR NVIDIA API KEY HERE]\"\n",
"\n",
"llm = ChatNVIDIA(model=\"meta/llama-3.1-405b-instruct\")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "b547a19a-9ada-4a40-a246-6d7bc4d24482",
"execution_count": 17,
"id": "77fd17f8-eac0-4457-b6fb-6e5c8ce90c84",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The dog is chasing a squirrel in the front yard.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
@@ -169,7 +158,35 @@
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
")"
+]
+},
+{
+"cell_type": "markdown",
+"id": "cc2ee8fb-a154-46c9-9181-29a035fdcfbb",
+"metadata": {},
+"source": [
+"And now we can ask our pdf questions"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 16,
+"id": "b547a19a-9ada-4a40-a246-6d7bc4d24482",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"'The dog is chasing a squirrel in the front yard.'"
+]
+},
+"execution_count": 16,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
"rag_chain.invoke(\"What is the dog doing and where?\")"
]
},
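Pieced together from the fragments visible in this hunk, the two resulting cells plausibly read as below. The prompt wording and the retriever wiring (the standard LCEL RAG pattern) are assumptions, since the diff elides them; llm and retriever come from the earlier cells:

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Prompt text is an assumption; the diff does not show the template
prompt = PromptTemplate.from_template(
    "Answer the question using only the following context:\n\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Concatenate retrieved document text into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Second cell: query the pdf in natural language
rag_chain.invoke("What is the dog doing and where?")
# expected output: 'The dog is chasing a squirrel in the front yard.'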
6 changes: 3 additions & 3 deletions examples/llama_index_multimodal_rag.ipynb
@@ -21,7 +21,7 @@
"id": "c65edc4b-2084-47c9-a837-733264201802",
"metadata": {},
"source": [
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
]
},
{
@@ -47,7 +47,7 @@
"id": "45412661-9516-47f9-8bea-6f857e0e173f",
"metadata": {},
"source": [
"Then, we'll use NV-Ingest's Ingestor interface extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
"Then, we'll use NV-Ingest's Ingestor interface to extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
]
},
{
@@ -127,7 +127,7 @@
"from llama_index.llms.nvidia import NVIDIA\n",
"\n",
"# TODO: Add your NVIDIA API key\n",
"os.environ[\"NVIDIA_API_KEY\"] = \"<YOUR_NVIDIA_API_KEY>\"\n",
"os.environ[\"NVIDIA_API_KEY\"] = \"[YOUR NVIDIA API KEY HERE]\"\n",
"\n",
"llm = NVIDIA(model=\"meta/llama-3.1-405b-instruct\")\n",
"query_engine = index.as_query_engine(llm=llm)"
4 changes: 2 additions & 2 deletions examples/store_and_display_images.ipynb
@@ -21,15 +21,15 @@
"id": "2a598d15-adf0-406a-95c6-6d49c0939508",
"metadata": {},
"source": [
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
"**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
]
},
{
"cell_type": "markdown",
"id": "380f6180-e90a-4019-8cc9-418884a62444",
"metadata": {},
"source": [
"To start make sure the minio python client is installed and up to date"
"To start, make sure the minio python client is installed and up to date"
]
},
{
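The install and connection steps this cell describes might look like the sketch below; the endpoint and credentials are assumed MinIO defaults, not values taken from the notebook:

%pip install -U minio

from minio import Minio

# Endpoint and credentials are assumed docker-compose defaults
minio_client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)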