LlamaIndex RAG chat app with Azure OpenAI and Azure AI Search (JavaScript)

This solution creates a ChatGPT-like, Retrieval-Augmented Generation (RAG) agentic application over your own documents, powered by LlamaIndex.TS. It uses Azure OpenAI Service to access GPT and embedding models, and Azure AI Search for data indexing and retrieval.
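
To make the pattern concrete, here is a minimal sketch of the core LlamaIndex.TS flow: index a document, then ask a question answered from the retrieved chunks. It assumes the library's default in-memory vector store and LLM/embedding credentials supplied via environment variables; the deployed sample instead configures Azure OpenAI models and an Azure AI Search vector store, so the real code differs in setup.

import { Document, VectorStoreIndex } from "llamaindex";

// Minimal sketch only: uses LlamaIndex.TS defaults (in-memory vector store,
// credentials from environment variables). The deployed sample swaps in
// Azure OpenAI models and Azure AI Search for retrieval.
async function main() {
  // Placeholder text standing in for the sample essay in ./data.
  const document = new Document({ text: "What I Worked On ..." });

  // Chunk and embed the document, then build a vector index over it.
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Retrieve the most relevant chunks and let the LLM answer from them.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What did the author work on before college?",
  });
  console.log(response.toString());
}

main().catch(console.error);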

Learn more about developing AI apps using Azure AI Services.

Open in GitHub Codespaces Open in Dev Containers

Important Security Notice

This template, and the application code and configuration it contains, has been built to showcase Microsoft Azure-specific services and tools. We strongly advise our customers not to make this code part of their production environments without implementing or enabling additional security features. See our productionizing guide for tips, and consult the Azure OpenAI Landing Zone reference architecture for more best practices.

Table of Contents

  • Chat screen
  • Architecture Diagram
  • Azure account requirements
  • Cost estimation
  • Getting Started
  • Deploying
  • Running the development server
  • Using the app
  • Clean up
  • Guidance

Chat screen

The repo includes sample data, so it's ready to try end to end. This sample application uses one of Paul Graham's essays, What I Worked On, and lets you ask questions about the essay.

Architecture Diagram

RAG Architecture

Azure account requirements

IMPORTANT: In order to deploy and run this example, you'll need:

  • Azure account. If you're new to Azure, get an Azure account for free and you'll get some free Azure credits to get started.
  • Azure account permissions:
    • Your Azure account must have Microsoft.Authorization/roleAssignments/write permissions, such as Role Based Access Control Administrator, User Access Administrator, or Owner. If you don't have subscription-level permissions, you must be granted RBAC for an existing resource group and deploy to that existing group.
    • Your Azure account also needs Microsoft.Resources/deployments/write permissions on the subscription level.

Cost estimation

Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can try the Azure pricing calculator for the resources below.

  • Azure Container Apps: Consumption plan with 1 CPU core, 2.0 GB RAM. Pricing with Pay-as-You-Go. Pricing
  • Azure OpenAI: Standard tier, gpt-4o-mini and text-embedding-3-large models. Pricing per 1K tokens used. Pricing
  • Azure AI Search: Standard tier, 1 replica, free level of semantic search. Pricing per hour. Pricing
  • Azure Blob Storage: Standard tier with ZRS (Zone-redundant storage). Pricing per storage and read operations. Pricing
  • Azure Monitor: Pay-as-you-go tier. Costs based on data ingested. Pricing

To reduce costs, you can switch to free SKUs for various services, but those SKUs have limitations.

To avoid unnecessary costs, remember to take down your app if it's no longer in use, either by deleting the resource group in the Portal or running azd down.

Getting Started

You have a few options for setting up this project. The easiest way to get started is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally if desired.

GitHub Codespaces

You can run this repo virtually by using GitHub Codespaces, which will open a web-based VS Code in your browser:

Open in GitHub Codespaces

Once the codespace opens (this may take several minutes), open a terminal window.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project: Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

Local environment

  1. Install the required tools: the Azure Developer CLI (azd), Node.js, and Git.
  2. Create a new folder and switch to it in the terminal.
  3. Run this command to download the project code:
azd init -t llama-index-vector-search-javascript

Note

This command will initialize a git repository, so you do not need to clone this repository.

Deploying

The steps below will provision Azure resources and deploy the application code to Azure Container Apps.

Login to your Azure account:

azd auth login

For GitHub Codespaces users, if the previous command fails, try:

azd auth login --use-device-code

Create a new azd environment:

azd env new

Enter a name that will be used for the resource group. This will create a new folder in the .azure folder, and set it as the active environment for any calls to azd going forward.

Provision the infrastructure needed to run the application.

azd provision

Important

This application requires some environment variables to be available during the packaging phase, which is why the infrastructure must be provisioned before the app is packaged and deployed. In most cases, simply running azd up will package, provision, and deploy your app.
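
As an illustration of why that ordering matters, the packaged app reads values such as service endpoints from its environment while being built. The snippet below is a hypothetical example; the variable names are made up and are not necessarily the ones this sample reads:

// Hypothetical illustration: variable names are invented for this sketch and
// may not match the ones the sample actually uses.
const searchEndpoint = process.env.AZURE_SEARCH_ENDPOINT;
const openAiEndpoint = process.env.AZURE_OPENAI_ENDPOINT;

if (!searchEndpoint || !openAiEndpoint) {
  // Before azd provision has run, these values do not exist, so packaging
  // cannot produce a working build.
  throw new Error("Missing Azure endpoints; run azd provision first.");
}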

Package and deploy the app to Azure:

azd package
azd deploy

Note

This will deploy the sample to the provisioned Azure resources, including building the search index based on the files found in the ./data folder.

After the application has been successfully deployed, you will see a URL printed to the console. Click that URL to interact with the application in your browser.

It will look like the following:

(Screenshot: output from running azd up)

Note

It may take 5-10 minutes after you see 'SUCCESS' for the application to be fully deployed.

Important

Beware that the resources created by this command will incur immediate costs, primarily from the AI Search resource. These resources may accrue costs even if you interrupt the command before it is fully executed. You can run azd down or delete the resources manually to avoid unnecessary spending.

You will be prompted to select two locations: one for the majority of resources and one for the Azure OpenAI resource, which currently supports a shorter list of regions. That location list is based on the OpenAI model availability table and may become outdated as availability changes.

Deploying again

If you've only changed the Next.js app code in the app folder, then you don't need to re-provision the Azure resources. You can just run:

azd deploy

If you've changed the infrastructure files (infra folder or azure.yaml), then you'll need to re-provision the Azure resources. You can do that by running:

azd up

Running the development server

You can run a development server locally after having successfully run the azd up (or simply azd provision) command. If you haven't yet, follow the deploying steps above.

First, run azd auth login to authenticate to your Azure account.

Then, install the project dependencies:

npm install

Next, generate the embeddings of the documents in the ./data directory:

npm run generate

Third, run the development server:

npm run dev

Open http://localhost:3000 with your browser to see the result.
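
Conceptually, the generate step loads the files in ./data, computes embeddings for them, and stores the result so the chat experience can retrieve from it. The sketch below approximates that flow with LlamaIndex.TS defaults (a locally persisted index); it is an assumption, since the sample's actual script targets Azure OpenAI embeddings and Azure AI Search, and the reader/storage helpers may live in different packages depending on the LlamaIndex.TS version:

import {
  SimpleDirectoryReader,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

// Approximate sketch of the ingestion step: load ./data, embed, persist.
// The sample's real generate script writes to Azure AI Search rather than
// a local ./storage folder.
async function generate() {
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: "./data",
  });
  const storageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });
  await VectorStoreIndex.fromDocuments(documents, { storageContext });
}

generate().catch(console.error);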

Using the app

  • In Azure: navigate to the Azure app deployed by azd. The URL is printed out when azd completes (as "Endpoint"), or you can find it in the Azure portal.
  • Running locally: navigate to http://localhost:3000

Clean up

To clean up all the resources created by this sample:

  1. Run azd down
  2. When asked if you are sure you want to continue, enter y
  3. When asked if you want to permanently delete the resources, enter y

NOTE: you can also run azd down --purge --force.

The resource group and all the resources will be deleted.

Guidance

You can find extensive documentation in the docs folder.
