wonkishtofu/next_doc_search

Document-grounded Generation Search for EMA

Try the demo here: Demo Website

Background

This project was conceived as part of an annual Hackathon at DSAID, GovTech (Singapore) in 2023.

The Energy Market Authority of Singapore (EMA) faced a perennial problem that plagues public officers Government-wide:

  • Members of the public are often unable to find information that is publicly available on Government websites.
  • They email public officers with their enquiries, and our public officers are flooded with common questions already addressed in their publicly available FAQs.

Often, the answers to public enquiries are scattered across our websites and require cross-referencing between various sources to arrive at a coherent answer. While traditional search systems may point users to information sources, more is needed to synthesise the search output into the answers users seek.

Prototype Solution

Document Grounded Generation was first posited by Shrimai Prabhumoye at CMU ML, as a method for grounding Generative AI output with context documents.

In our prototype solution, we built a search assistant that applies Document Grounded Generation to search results derived from a traditional vector search on EMA's knowledge base.

We envision that this could be deployed alongside existing search systems on government websites, as a production-ready module that can be redeployed easily, and maintained centrally.

Deploy

Deploy this starter to Vercel. The Supabase integration will automatically set the required environment variables and configure your Database Schema. All you have to do is set your OPENAI_KEY and you're ready to go!

Deploy with Vercel

Using Docker

  1. Build your container: docker build -t ema-doc-search .
  2. Run your container: docker run -p 3000:3000 ema-doc-search

You can list the images you have built with docker images.

Technical Details

Building your own custom Document Grounded Generation involves four steps:

  1. [👷 Build time] Pre-process the knowledge base (your .mdx files in your pages folder).
  2. [👷 Build time] Store embeddings in Postgres with pgvector.
  3. [🏃 Runtime] Perform vector similarity search to find the content that's relevant to the question.
  4. [🏃 Runtime] Inject content into OpenAI GPT-3 text completion prompt and stream response to the client.
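To make step 1 concrete, here is a minimal sketch of how an .mdx page might be chunked into sections by its markdown headings, so that each section can be embedded separately. The function name and exact chunking rules are hypothetical; the actual generate-embeddings script is more involved.

```typescript
// Hypothetical sketch of step 1: split an .mdx document into sections
// at markdown headings, so each section gets its own embedding.
export function chunkIntoSections(
  mdx: string
): { heading: string; content: string }[] {
  const sections: { heading: string; content: string }[] = [];
  let current = { heading: "", content: "" };
  for (const line of mdx.split("\n")) {
    if (/^#{1,6}\s/.test(line)) {
      // A new heading closes the previous section (if it had anything in it).
      if (current.heading || current.content.trim()) sections.push(current);
      current = { heading: line.replace(/^#{1,6}\s+/, ""), content: "" };
    } else {
      current.content += line + "\n";
    }
  }
  if (current.heading || current.content.trim()) sections.push(current);
  return sections;
}
```

Chunking at heading boundaries keeps each embedded unit topically coherent, which improves the quality of the similarity search in step 3.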

👷 Build time

Steps 1 and 2 happen at build time, e.g. when Vercel builds your Next.js app. During this time the generate-embeddings script is executed, which performs the following tasks:

sequenceDiagram
    participant Vercel
    participant DB (pgvector)
    participant OpenAI (API)
    loop 1. Pre-process the knowledge base
        Vercel->>Vercel: Chunk .mdx pages into sections
        loop 2. Create & store embeddings
            Vercel->>OpenAI (API): create embedding for page section
            OpenAI (API)->>Vercel: embedding vector(1536)
            Vercel->>DB (pgvector): store embedding for page section
        end
    end

In addition to storing the embeddings, this script generates a checksum for each of your .mdx files and stores this in another database table to make sure the embeddings are only regenerated when the file has changed.
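The change-detection idea can be sketched as follows. The function names are hypothetical and md5 is used purely for illustration; the point is that a stored hash lets the script skip files whose contents have not changed.

```typescript
import { createHash } from "node:crypto";

// Hash a file's contents; the hex digest is what gets stored in the
// checksum table alongside the embeddings.
export function checksum(contents: string): string {
  return createHash("md5").update(contents).digest("hex");
}

// Regenerate embeddings only if no checksum is stored yet, or the
// stored checksum no longer matches the current file contents.
export function needsRefresh(
  contents: string,
  storedChecksum: string | null
): boolean {
  return storedChecksum !== checksum(contents);
}
```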

🏃 Runtime

Steps 3 and 4 happen at runtime, any time the user submits a question. When this happens, the following sequence of tasks is performed:

sequenceDiagram
    participant Client
    participant Edge Function
    participant DB (pgvector)
    participant OpenAI (API)
    Client->>Edge Function: { query: lorem ipsum }
    critical 3. Perform vector similarity search
        Edge Function->>OpenAI (API): create embedding for query
        OpenAI (API)->>Edge Function: embedding vector(1536)
        Edge Function->>DB (pgvector): vector similarity search
        DB (pgvector)->>Edge Function: relevant docs content
    end
    critical 4. Inject content into prompt
        Edge Function->>OpenAI (API): completion request prompt: query + relevant docs content
        OpenAI (API)-->>Client: text/event-stream: completions response
    end

The relevant files for this are the SearchDialog (Client) component and the vector-search (Edge Function).
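The similarity search itself runs inside Postgres via pgvector; the plain TypeScript below is only meant to give intuition for what that search computes. Cosine similarity is one common distance measure for embeddings (pgvector also supports others), and the OpenAI embeddings here are 1536-dimensional, though any length works in this sketch.

```typescript
// Cosine similarity between two embedding vectors: the dot product
// normalised by both magnitudes, ranging from -1 to 1.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored sections by similarity to the query embedding and keep
// the top k — conceptually what the vector similarity search returns.
export function topK(
  query: number[],
  sections: { id: string; embedding: number[] }[],
  k: number
): { id: string; embedding: number[] }[] {
  return [...sections]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

The content of the top-ranked sections is what gets injected into the completion prompt in step 4.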

The initialization of the database, including the setup of the pgvector extension is stored in the supabase/migrations folder which is automatically applied to your local Postgres instance when running supabase start.

Local Development

Configuration

  • cp .env.example .env
  • Set your OPENAI_KEY in the newly created .env file.
  • Set NEXT_PUBLIC_SUPABASE_ANON_KEY and SUPABASE_SERVICE_ROLE_KEY in the newly created .env file.

Note: You have to run supabase status (with Supabase running locally) to retrieve these keys.

Start Supabase

Make sure you have Docker installed and running locally. Then run

supabase start

To retrieve NEXT_PUBLIC_SUPABASE_ANON_KEY and SUPABASE_SERVICE_ROLE_KEY run:

supabase status

Start the Next.js App

In a new terminal window, run

pnpm dev

Using your custom .mdx docs

  1. By default, your documentation needs to be in .mdx format. This can be done by renaming existing (or compatible) markdown .md files to .mdx.
  2. Run pnpm run embeddings to regenerate embeddings.

    Note: Make sure Supabase is running. To check, run supabase status. If it is not running, run supabase start.

  3. Run pnpm dev again to refresh the page rendered by Next.js at localhost:3000.

Credits

We based our solution on the fantastic work done by Greg over at Rabbit Hole Syndrome. You can follow his work on Twitter at @ggrdson.
