EvaDB is an AI-SQL database system for developing applications powered by AI models. We aim to simplify the development and deployment of AI-powered applications that operate on structured (tables, feature stores) and unstructured data (text documents, videos, PDFs, podcasts, etc.).
EvaDB accelerates AI pipelines by 10x using a collection of performance optimizations inspired by time-tested SQL database systems, including data-parallel query execution, function caching, sampling, and cost-based predicate reordering. EvaDB supports an AI-oriented query language tailored for analyzing both structured and unstructured data. It has first-class support for PyTorch, Hugging Face, YOLO, and OpenAI models.
The high-level Python and SQL APIs allow even beginners to use EvaDB in a few lines of code. Advanced users can define custom user-defined functions that wrap around any AI model or Python library. EvaDB is fully implemented in Python and licensed under the Apache license.
- Features
- Quick Start
- Documentation
- Roadmap
- Architecture Diagram
- Illustrative Applications
- Screenshots
- Community and Support
- Contributing
- License
- 🔮 Build simpler AI-powered applications using short Python or SQL queries
- ⚡️ 10x faster applications using AI-centric query optimization
- 💰 Save money spent on GPUs
- 🚀 First-class support for your custom deep learning models through user-defined functions
- 📦 Built-in caching to eliminate redundant model invocations across queries
- ⌨️ First-class support for PyTorch, Hugging Face, YOLO, and OpenAI models
- 🐍 Installable via pip and fully implemented in Python
Here are some illustrative EvaDB-powered applications (each Jupyter notebook can be opened on Google Colab):
- 🔮 Reddit Image Similarity Search
- 🔮 ChatGPT-based video question answering
- 🔮 Querying PDF documents
- 🔮 Analysing traffic flow with YOLO
- 🔮 Examining emotion palette of a movie
- 🔮 Image segmentation with Hugging Face
- 🔮 Recognizing license plates
- 🔮 Analysing toxicity of social media memes
- Detailed Documentation
- The Getting Started page shows how you can use EvaDB for different AI tasks and how you can easily extend EvaDB to support your custom deep learning model through user-defined functions.
- The User Guides section contains Jupyter Notebooks that demonstrate how to use various features of EvaDB. Each notebook includes a link to Google Colab, where you can run the code yourself.
- Tutorials
- Join us on Slack
- Follow us on Twitter
- Medium-Term Roadmap
- Demo
- Step 1: Install EvaDB using pip. EvaDB supports Python versions >= 3.8:
pip install evadb
- Step 2: Write your AI app!
import evadb
# Grab an EvaDB cursor to load data and run queries
cursor = evadb.connect().cursor()
# Load a collection of news videos into the 'news_videos' table
# This command returns a Pandas Dataframe with the query's output
# In this case, the output indicates the number of loaded videos
cursor.load(
file_regex="news_videos/*.mp4",
format="VIDEO",
table_name="news_videos"
).df()
# Define a function that wraps around a speech-to-text (Whisper) model
# Such functions are known as user-defined functions or UDFs
# So, we are creating a Whisper UDF here
# After creating the UDF, we can use the function in any query
cursor.create_udf(
udf_name="SpeechRecognizer",
type="HuggingFace",
task='automatic-speech-recognition',
model='openai/whisper-base'
).df()
# EvaDB automatically extracts the audio from the video
# We only need to run the SpeechRecognizer UDF on the 'audio' column
# to get the transcript and persist it in a table called 'transcripts'
cursor.query(
"""CREATE TABLE transcripts AS
SELECT SpeechRecognizer(audio) from news_videos;"""
).df()
# We next incrementally construct the ChatGPT query using EvaDB's Python API
# The query is based on the 'transcripts' table
# This table has a column called 'text' with the transcript text
query = cursor.table('transcripts')
# Since ChatGPT is a built-in function, we don't have to define it
# We can just directly use it in the query
# We need to set the OPENAI_KEY as an environment variable
import os
os.environ["OPENAI_KEY"] = OPENAI_KEY
query = query.select("ChatGPT('Is this video summary related to LLMs', text)")
# Finally, we run the query to get the results as a dataframe
response = query.df()
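Since `query.df()` returns a regular Pandas DataFrame, the result can be post-processed with ordinary Pandas operations. A minimal sketch with a stand-in DataFrame (the column name and contents here are illustrative assumptions, not actual EvaDB output):

```python
import pandas as pd

# Stand-in for the DataFrame returned by query.df(); the real column
# name and contents depend on the query's select expression.
response = pd.DataFrame({"response": [
    "Yes, this summary discusses large language models.",
    "No, this summary is about traffic congestion.",
]})

# Ordinary Pandas operations apply to the query result.
llm_related = response[response["response"].str.startswith("Yes")]
print(len(llm_related))
```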
- Write functions to wrap around your custom deep learning models
# Define a function that wraps around a speech-to-text (Whisper) model
# Such functions are known as user-defined functions or UDFs
# So, we are creating a Whisper UDF here
# After creating the UDF, we can use the function in any query
cursor.create_udf(
udf_name="SpeechRecognizer",
type="HuggingFace",
task='automatic-speech-recognition',
model='openai/whisper-base'
).df()
- Chain multiple models in a single query to set up useful AI pipelines
# Analyse emotions of actors in an Interstellar movie clip using PyTorch models
query = cursor.table("Interstellar")
# Get faces using a `FaceDetector` function
query = query.cross_apply("UNNEST(FaceDetector(data))", "Face(bounding_box, confidence)")
# Focus only on frames 100 through 200 in the clip
query = query.filter("id > 100 AND id < 200")
# Get the emotions of the detected faces using an `EmotionDetector` function
query = query.select("id, bounding_box, EmotionDetector(Crop(data, bounding_box))")
# Run the query and get the query result as a dataframe
response = query.df()
EvaDB runs queries faster using its AI-centric query optimizer. Two key optimizations are:
💾 Caching: EvaDB automatically caches and reuses previous query results (especially model inference results), eliminating redundant computation and reducing query processing time.
🎯 Predicate Reordering: EvaDB optimizes the order in which the query predicates are evaluated (e.g., runs the faster, more selective model first), leading to faster queries and lower inference costs.
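The caching idea can be illustrated with a small self-contained sketch (not EvaDB's actual implementation, which persists inference results across queries): memoize an expensive model call so that a second query touching overlapping inputs only pays for the new ones.

```python
from functools import lru_cache

# Hypothetical stand-in for an expensive model invocation.
def run_model(frame_id: int) -> str:
    run_model.calls += 1  # count real invocations to show reuse
    return f"label-for-{frame_id}"

run_model.calls = 0

@lru_cache(maxsize=None)
def cached_model(frame_id: int) -> str:
    return run_model(frame_id)

# Query 1 touches frames 0-9; Query 2 touches frames 5-14.
results_q1 = [cached_model(i) for i in range(10)]
results_q2 = [cached_model(i) for i in range(5, 15)]

# Only the 5 new frames (10-14) triggered fresh inference in Query 2.
assert run_model.calls == 15
```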
-- Query 1: Find all images of black-colored dogs
SELECT id, bbox FROM dogs
JOIN LATERAL UNNEST(Yolo(data)) AS Obj(label, bbox, score)
WHERE Obj.label = 'dog'
AND Color(Crop(data, bbox)) = 'black';
-- Query 2: Find all Great Danes that are black-colored
SELECT id, bbox FROM dogs
JOIN LATERAL UNNEST(Yolo(data)) AS Obj(label, bbox, score)
WHERE Obj.label = 'dog'
AND DogBreedClassifier(Crop(data, bbox)) = 'great dane'
AND Color(Crop(data, bbox)) = 'black';
By reusing the results of the first query and reordering the predicates based on the available cached inference results, EvaDB runs the second query 10x faster!
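A toy sketch of why predicate order matters (the costs and selectivities below are made-up numbers, not EvaDB's cost model): each predicate only runs on rows that survived the previous ones, so evaluating cheap, selective predicates first shrinks the input before the expensive model runs.

```python
# Hypothetical per-predicate statistics: evaluation cost (e.g. model
# latency) and selectivity (fraction of rows that pass).
predicates = [
    {"name": "DogBreedClassifier = 'great dane'", "cost": 50.0, "selectivity": 0.05},
    {"name": "Color = 'black'",                   "cost": 5.0,  "selectivity": 0.30},
    {"name": "Obj.label = 'dog'",                 "cost": 0.1,  "selectivity": 0.50},
]

def pipeline_cost(order, num_rows=1000):
    """Total cost of evaluating predicates in order: each predicate
    runs only on the rows surviving the previous predicates."""
    total, rows = 0.0, num_rows
    for p in order:
        total += rows * p["cost"]
        rows *= p["selectivity"]
    return total

# Classic rank-based heuristic: order by (selectivity - 1) / cost,
# so cheap and selective predicates run first.
ordered = sorted(predicates, key=lambda p: (p["selectivity"] - 1) / p["cost"])

naive = pipeline_cost(predicates)      # expensive classifier first
optimized = pipeline_cost(ordered)     # cheap label filter first
assert optimized < naive
```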
This diagram presents the key components of EvaDB. EvaDB's AI-centric Query Optimizer takes a parsed query as input and generates a query plan that is then executed by the Query Engine. The Query Engine hits multiple storage engines to retrieve the data required for efficiently running the query:
- Structured data (SQL database systems connected via sqlalchemy).
- Unstructured media data (on cloud buckets or the local filesystem).
- Vector data (vector database systems).
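The routing between storage engines can be pictured with a highly simplified sketch (class and method names here are illustrative, not EvaDB's internals): the query engine dispatches each table scan to the engine that owns that kind of data.

```python
# Illustrative stand-ins for the three storage engines described above.
class StructuredStore:
    def scan(self, table: str) -> str:
        return f"rows from SQL table '{table}' (via sqlalchemy)"

class MediaStore:
    def scan(self, table: str) -> str:
        return f"frames/documents for '{table}' (cloud bucket or local filesystem)"

class VectorStore:
    def scan(self, table: str) -> str:
        return f"embeddings for '{table}' (vector database)"

ENGINES = {
    "structured": StructuredStore(),
    "media": MediaStore(),
    "vector": VectorStore(),
}

def execute_scan(table: str, kind: str) -> str:
    """Dispatch a table scan to the storage engine that owns the data."""
    return ENGINES[kind].scan(table)

print(execute_scan("news_videos", "media"))
```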
🔮 Traffic Analysis (Object Detection Model): source video and query result screenshots.
🔮 PDF Question Answering (Question Answering Model): app screenshot.
🔮 MNIST Digit Recognition (Image Classification Model): source video and query result screenshots.
🔮 Movie Emotion Analysis (Face Detection + Emotion Classification Models): source video and query result screenshots.
🔮 License Plate Recognition (Plate Detection + OCR Extraction Models): query result screenshot.
👋 If you have general questions about EvaDB, want to say hello or just follow along, we'd like to invite you to join our Slack Community and to follow us on Twitter.
If you run into any problems or issues, please create a GitHub issue and we'll try our best to help.
Don't see a feature in the list? Search our issue tracker to see if someone has already requested it and add a comment explaining your use case, or open a new issue if not. We prioritize our roadmap based on user feedback, so we'd love to hear from you.
EvaDB is built by many contributors, and all kinds of contributions are appreciated. To file a bug or request a feature, please use GitHub issues. Pull requests are welcome.
For more information, see our contribution guide.
Copyright (c) 2018-present Georgia Tech Database Group. Licensed under Apache License.