All-in-one embeddings database


txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.


Embeddings databases are a union of vector indexes (sparse and dense), graph networks and relational databases. This enables vector search with SQL, topic modeling, retrieval augmented generation and more.

Embeddings databases can stand on their own or serve as a powerful knowledge source for large language model (LLM) prompts.
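For example, with content storage enabled, vector search can be combined with SQL. The snippet below is a minimal sketch; the query and score filter are illustrative.

# Minimal sketch: vector search combined with SQL
import txtai

# Enable content storage so SQL queries can return stored text
embeddings = txtai.Embeddings(content=True)
embeddings.index(["Correct", "Not what we hoped"])

# similar() runs vector search inside the SQL query; score filters weak matches
embeddings.search("SELECT id, text, score FROM txtai WHERE similar('positive') AND score >= 0.15")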

Summary of txtai features:

  • 🔎 Vector search with SQL, object storage, topic modeling, graph analysis and multimodal indexing
  • 📄 Create embeddings for text, documents, audio, images and video
  • 💡 Pipelines powered by language models that run LLM prompts, question-answering, labeling, transcription, translation, summarization and more
  • ↪️ Workflows to join pipelines together and aggregate business logic. txtai processes can be simple microservices or multi-model workflows.
  • ⚙️ Build with Python or YAML. API bindings available for JavaScript, Java, Rust and Go.
  • ☁️ Run local or scale out with container orchestration

txtai is built with Python 3.8+, Hugging Face Transformers, Sentence Transformers and FastAPI. txtai is open-source under an Apache 2.0 license.

Why txtai?


New vector databases, LLM frameworks and everything in between are sprouting up daily. Why build with txtai?

  • Up and running in minutes with pip or Docker
# Get started in a couple lines
import txtai

embeddings = txtai.Embeddings()
embeddings.index(["Correct", "Not what we hoped"])
embeddings.search("positive", 1)
#[(0, 0.29862046241760254)]
  • Built-in API makes it easy to develop applications using your programming language of choice
# app.yml
embeddings:
    path: sentence-transformers/all-MiniLM-L6-v2

# Start the API and run a search
CONFIG=app.yml uvicorn "txtai.api:app"
curl -X GET "http://localhost:8000/search?query=positive"
  • Run local - no need to ship data off to disparate remote services
  • Work with micromodels all the way up to large language models (LLMs)
  • Low footprint - install additional dependencies and scale up when needed
  • Learn by example - notebooks cover all available functionality

Use Cases

The following sections introduce common txtai use cases. A comprehensive set of over 50 example notebooks and applications is also available.

Semantic Search

Build semantic/similarity/vector/neural search applications.


Traditional search systems use keywords to find data. Semantic search has an understanding of natural language and identifies results that have the same meaning, not necessarily the same keywords.
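As a quick illustration, the sketch below (with illustrative sample data) finds the best match for a query that shares no keywords with it.

# Illustrative sketch: the best match shares meaning, not keywords, with the query
import txtai

data = ["US tops 5 million confirmed virus cases",
        "Canada's last fully intact ice shelf has suddenly collapsed",
        "Maine man wins $1M from $25 lottery ticket"]

embeddings = txtai.Embeddings()
embeddings.index(data)

# "feel good story" has no keyword overlap with the lottery result, it matches on meaning
uid = embeddings.search("feel good story", 1)[0][0]
print(data[uid])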


Get started with the following examples.

Notebook | Description
Introducing txtai ▶️ | Overview of the functionality provided by txtai
Similarity search with images | Embed images and text into the same space for search
Build a QA database | Question matching with semantic search
Semantic Graphs | Explore topics, data connectivity and run network analysis

LLM Orchestration

Prompt-driven search, retrieval augmented generation (RAG), pipelines and workflows that interface with large language models (LLMs).


Integrate conversational search, LLM chains and self-critique.
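A minimal retrieval augmented generation (RAG) sketch is shown below. It assumes the LLM pipeline with an instruction-tuned model; the model id and prompt wording are illustrative.

# Hedged RAG sketch: retrieve context with embeddings search, then prompt an LLM
import txtai
from txtai.pipeline import LLM

# Index source documents with content storage so search returns text
embeddings = txtai.Embeddings(content=True)
embeddings.index(["Maine man wins $1M from $25 lottery ticket",
                  "Canada's last fully intact ice shelf has suddenly collapsed"])

# Model id is illustrative; any supported instruction-tuned model works
llm = LLM("google/flan-t5-large")

question = "Who won the lottery?"
context = "\n".join(x["text"] for x in embeddings.search(question, 1))

print(llm(f"Answer the question using only the context below.\n"
          f"Question: {question}\nContext: {context}"))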

Notebook | Description
Prompt-driven search with LLMs | Embeddings-guided and prompt-driven search with Large Language Models (LLMs)
Prompt templates and task chains | Build model prompts and connect tasks together with workflows

Language Model Workflows

Language model workflows, also known as semantic workflows, connect language models together to build intelligent applications.


While LLMs are powerful, there are plenty of smaller, more specialized models that work better and faster for specific tasks. This includes models for extractive question-answering, automatic summarization, text-to-speech, transcription and translation.
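For example, smaller task-specific pipelines can be chained with a workflow. The sketch below summarizes input text and then translates each summary to French; the chaining shown is illustrative.

# Hedged sketch: chain task-specific pipelines with a Workflow
from txtai.pipeline import Summary, Translation
from txtai.workflow import Task, Workflow

# Task-specific pipelines: abstractive summarization and machine translation
summary = Summary()
translate = Translation()

# Summarize each input, then translate the summaries to French
workflow = Workflow([
    Task(lambda texts: summary(texts)),
    Task(lambda texts: translate(texts, "fr"))
])

list(workflow(["<long article text>"]))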

Notebook | Description
Run pipeline workflows ▶️ | Simple yet powerful constructs to efficiently process data
Building abstractive text summaries | Run abstractive text summarization
Transcribe audio to text | Convert audio files to text
Translate text between languages | Streamline machine translation and language detection

Installation


The easiest way to install is via pip and PyPI.

pip install txtai

Python 3.8+ is supported. Using a Python virtual environment is recommended.

See the detailed install instructions for more information, covering optional dependencies, environment-specific prerequisites, installing from source, conda support and how to run with containers.
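For example, optional extras pull in additional functionality. The extras shown below are illustrative; the install docs list the full set.

# Install pipeline extras (models for summarization, transcription, translation, ...)
pip install txtai[pipeline]

# Install all optional dependencies
pip install txtai[all]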

Model guide


See the table below for the current recommended models. These models all allow commercial use and offer a blend of speed and performance.

Component | Model(s)
Embeddings | all-MiniLM-L6-v2, E5-base-v2
Image Captions | BLIP
Labels - Zero Shot | BART-Large-MNLI
Labels - Fixed | Fine-tune with training pipeline
Large Language Model (LLM) | Flan T5 XL, Falcon 7B Instruct
Summarization | DistilBART
Text-to-Speech | ESPnet JETS
Transcription | Whisper
Translation | OPUS Model Series

Models can be loaded as either a path from the Hugging Face Hub or a local directory. Model paths are optional; defaults are loaded when not specified. For tasks with no recommended model, txtai uses the default models shown in the Hugging Face Tasks guide.
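For example, a recommended embeddings model can be set by path. A minimal sketch follows; the Hub id matches the table above and the local directory is hypothetical.

# Hedged sketch: set the vector model path for an embeddings index
import txtai

# Load a vector model by Hugging Face Hub id
embeddings = txtai.Embeddings(path="intfloat/e5-base-v2")

# Or point path at a local model directory (directory shown is hypothetical)
# embeddings = txtai.Embeddings(path="/models/e5-base-v2")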

See the documentation to learn more.

Powered by txtai

The following applications are powered by txtai.


Application | Description
txtchat | Conversational search and workflows for all
paperai | Semantic search and workflows for medical/scientific papers
codequestion | Semantic search for developers
tldrstory | Semantic search for headlines and story text

Beyond this list, many other open-source projects, published research and closed proprietary/commercial projects have built on txtai in production.

Further Reading


Documentation

Full documentation on txtai, including configuration settings for embeddings, pipelines, workflows and the API, along with a FAQ covering common questions and issues, is available.

Contributing

For those who would like to contribute to txtai, please see this guide.
