diff --git a/examples/langchain_multimodal_rag.ipynb b/examples/langchain_multimodal_rag.ipynb
index 62bbe7e..395834a 100644
--- a/examples/langchain_multimodal_rag.ipynb
+++ b/examples/langchain_multimodal_rag.ipynb
@@ -13,7 +13,7 @@
    "id": "91ece9e3-155a-44f4-81e5-2f9492c62a2f",
    "metadata": {},
    "source": [
-    "This notebook shows how to perform RAG on the table, chart, and text extraction results of nv-ingest's pdf extraction tools using LangChain"
+    "This notebook shows how to perform RAG on the table, chart, and text extraction results of NV-Ingest's PDF extraction tools using LangChain"
    ]
   },
   {
@@ -21,7 +21,7 @@
    "id": "c6905d11-0ec3-43c8-961b-24cb52e36bfe",
    "metadata": {},
    "source": [
-    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
+    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest Python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
    ]
   },
   {
@@ -29,7 +29,7 @@
    "id": "81014734-f765-48fc-8fc2-4c19f5f28eae",
    "metadata": {},
    "source": [
-    "To start, make sure Langchain and pymilvus are installed and up to date"
+    "To start, make sure LangChain and pymilvus are installed and up to date"
    ]
   },
   {
@@ -47,7 +47,7 @@
    "id": "d888ba26-04cf-4577-81a3-5bcd537fc2f6",
    "metadata": {},
    "source": [
-    "Then, we'll use NV-Ingest's Ingestor interface extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
+    "Then, we'll use NV-Ingest's Ingestor interface to extract the tables and charts from a test PDF, embed them, and upload them to our Milvus vector database (VDB)"
    ]
   },
   {
@@ -80,7 +80,7 @@
    "id": "02131711-31bf-4536-81b7-8c464c7473e3",
    "metadata": {},
    "source": [
-    "Now, the text, table, and chart content is extracted and stored in the Milvus VDB along with the embeddings. Next we'll connect LlamaIndex to Milvus and create a vector store so that we can query our extraction results"
+    "Now, the text, table, and chart content is extracted and stored in the Milvus VDB along with the embeddings. Next, we'll connect LangChain to Milvus and create a vector store so that we can query our extraction results"
    ]
   },
   {
@@ -111,7 +111,7 @@
    "id": "b87111b5-e5a8-45a0-9663-2ae6d9ea2ab6",
    "metadata": {},
    "source": [
-    "Finally, we'll create an RAG chain using [llama-3.1-405b-instruct](https://build.nvidia.com/meta/llama-3_1-405b-instruct) that we can use to query our pdf in natural language"
+    "Then, we'll create a RAG chain using [llama-3.1-405b-instruct](https://build.nvidia.com/meta/llama-3_1-405b-instruct) that we can use to query our PDF in natural language"
    ]
   },
   {
@@ -125,28 +125,17 @@
     "from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
     "\n",
     "# TODO: Add your NVIDIA API key\n",
-    "os.environ[\"NVIDIA_API_KEY\"] = \"\"\n",
+    "os.environ[\"NVIDIA_API_KEY\"] = \"[YOUR NVIDIA API KEY HERE]\"\n",
     "\n",
     "llm = ChatNVIDIA(model=\"meta/llama-3.1-405b-instruct\")"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 16,
-   "id": "b547a19a-9ada-4a40-a246-6d7bc4d24482",
+   "execution_count": 17,
+   "id": "77fd17f8-eac0-4457-b6fb-6e5c8ce90c84",
    "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "'The dog is chasing a squirrel in the front yard.'"
-      ]
-     },
-     "execution_count": 16,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
+   "outputs": [],
    "source": [
     "from langchain_core.prompts import PromptTemplate\n",
     "from langchain_core.runnables import RunnablePassthrough\n",
@@ -169,7 +158,35 @@
     " | prompt\n",
     " | llm\n",
     " | StrOutputParser()\n",
-    ")\n",
+    ")"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "cc2ee8fb-a154-46c9-9181-29a035fdcfbb",
+   "metadata": {},
+   "source": [
+    "Now we can ask our PDF questions"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "id": "b547a19a-9ada-4a40-a246-6d7bc4d24482",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'The dog is chasing a squirrel in the front yard.'"
+      ]
+     },
+     "execution_count": 16,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
     "rag_chain.invoke(\"What is the dog doing and where?\")"
    ]
   },
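For reference, the cells touched above assemble a LangChain RAG chain over the Milvus collection that NV-Ingest populates. Here is a minimal sketch of that end-to-end flow, not the notebook's exact code: the Milvus URI uses the docker-compose default, and the collection name (`nv_ingest_collection`) and embedding model (`nvidia/nv-embedqa-e5-v5`) are assumptions that must match whatever the notebook's Ingestor upload step actually used.

```python
import os

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_milvus import Milvus
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings

os.environ["NVIDIA_API_KEY"] = "[YOUR NVIDIA API KEY HERE]"

# Query-time embeddings must match the model used at ingest time (assumed here).
embeddings = NVIDIAEmbeddings(model="nvidia/nv-embedqa-e5-v5")

# Point LangChain at the collection NV-Ingest uploaded to (name assumed).
vectorstore = Milvus(
    embedding_function=embeddings,
    collection_name="nv_ingest_collection",
    connection_args={"uri": "http://localhost:19530"},
)
retriever = vectorstore.as_retriever()

prompt = PromptTemplate.from_template(
    "Using the following context, answer the question.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)
llm = ChatNVIDIA(model="meta/llama-3.1-405b-instruct")

# Retrieved documents flow into {context}; the raw question into {question}.
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What is the dog doing and where?"))
```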
diff --git a/examples/llama_index_multimodal_rag.ipynb b/examples/llama_index_multimodal_rag.ipynb
index 96482e1..a60e7d1 100644
--- a/examples/llama_index_multimodal_rag.ipynb
+++ b/examples/llama_index_multimodal_rag.ipynb
@@ -21,7 +21,7 @@
    "id": "c65edc4b-2084-47c9-a837-733264201802",
    "metadata": {},
    "source": [
-    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
+    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest Python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
    ]
   },
   {
@@ -47,7 +47,7 @@
    "id": "45412661-9516-47f9-8bea-6f857e0e173f",
    "metadata": {},
    "source": [
-    "Then, we'll use NV-Ingest's Ingestor interface extract the tables and charts from a test pdf, embed them, and upload them to our Milvus vector database (VDB)"
+    "Then, we'll use NV-Ingest's Ingestor interface to extract the tables and charts from a test PDF, embed them, and upload them to our Milvus vector database (VDB)"
    ]
   },
   {
@@ -127,7 +127,7 @@
     "from llama_index.llms.nvidia import NVIDIA\n",
     "\n",
     "# TODO: Add your NVIDIA API key\n",
-    "os.environ[\"NVIDIA_API_KEY\"] = \"\"\n",
+    "os.environ[\"NVIDIA_API_KEY\"] = \"[YOUR NVIDIA API KEY HERE]\"\n",
     "\n",
     "llm = NVIDIA(model=\"meta/llama-3.1-405b-instruct\")\n",
     "query_engine = index.as_query_engine(llm=llm)"
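The LlamaIndex notebook follows the same pattern with a query engine instead of a chain. A comparable sketch, again assuming the docker-compose Milvus endpoint and a hypothetical collection name, embedding model, and embedding dimension that must match the ingest-time configuration:

```python
import os

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.nvidia import NVIDIAEmbedding
from llama_index.llms.nvidia import NVIDIA
from llama_index.vector_stores.milvus import MilvusVectorStore

os.environ["NVIDIA_API_KEY"] = "[YOUR NVIDIA API KEY HERE]"

# Collection name and embedding dimension are assumptions; they must match
# what NV-Ingest wrote to Milvus during upload.
vector_store = MilvusVectorStore(
    uri="http://localhost:19530",
    collection_name="nv_ingest_collection",
    dim=1024,
)

# Query-time embeddings should use the same model as ingest-time embeddings.
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store,
    embed_model=NVIDIAEmbedding(model="nvidia/nv-embedqa-e5-v5"),
)

llm = NVIDIA(model="meta/llama-3.1-405b-instruct")
query_engine = index.as_query_engine(llm=llm)
print(query_engine.query("What is the dog doing and where?"))
```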
diff --git a/examples/store_and_display_images.ipynb b/examples/store_and_display_images.ipynb
index f74f50a..e5a31b4 100644
--- a/examples/store_and_display_images.ipynb
+++ b/examples/store_and_display_images.ipynb
@@ -21,7 +21,7 @@
    "id": "2a598d15-adf0-406a-95c6-6d49c0939508",
    "metadata": {},
    "source": [
-    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
+    "**Note:** In order to run this notebook, you'll need to have the NV-Ingest microservice running along with all of the other included microservices. To do this, make sure all of the services are uncommented in the file: [docker-compose.yaml](https://github.com/NVIDIA/nv-ingest/blob/main/docker-compose.yaml) and follow the [quickstart guide](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#quickstart) to start everything up. You'll also need to have the NV-Ingest Python client installed as demonstrated [here](https://github.com/NVIDIA/nv-ingest?tab=readme-ov-file#step-2-installing-python-dependencies)."
    ]
   },
   {
@@ -29,7 +29,7 @@
    "id": "380f6180-e90a-4019-8cc9-418884a62444",
    "metadata": {},
    "source": [
-    "To start make sure the minio python client is installed and up to date"
+    "To start, make sure the MinIO Python client is installed and up to date"
    ]
   },
   {
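Finally, the store_and_display_images notebook builds toward fetching images that NV-Ingest has written to MinIO and displaying them. A rough sketch of that retrieval step: the endpoint and credentials below are the docker-compose defaults (assumed), and the bucket and object names are hypothetical; take the real locations from the metadata NV-Ingest returns for each extracted image.

```python
from io import BytesIO

from minio import Minio
from PIL import Image

# docker-compose defaults (assumed); adjust if you changed the MinIO service.
client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

# Hypothetical bucket and object names -- read the actual ones from your
# ingest results rather than hardcoding them.
response = client.get_object("nv-ingest", "extracted/image_0.png")
try:
    image = Image.open(BytesIO(response.read()))
    image.show()
finally:
    response.close()
    response.release_conn()
```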