paper-qa: Python package for RAG with a focus on scientific literature
PaperQA2
PaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs or text files, with a focus on the scientific literature.
See our 2024 paper for examples of PaperQA2's superhuman performance on scientific tasks like question answering, summarization, and contradiction detection.
Quickstart
In this example we take a folder of research paper PDFs, magically get their metadata (including citation counts, with a retraction check), then parse and cache the PDFs into a full-text search index, and finally answer the user's question with an LLM agent.
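In code, that flow can be as short as the following sketch (the paper_directory name is an illustrative placeholder, and an OpenAI key is assumed to be set):

from paperqa import Settings, ask

# Index the papers in "my_papers" and answer the question with the default agent
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=Settings(temperature=0.5, paper_directory="my_papers"),
)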
Example Output
Question: Has anyone designed neural networks that compute with proteins or DNA?
What is PaperQA2
PaperQA2 is engineered to be the best agentic RAG model for working with scientific papers. By default, it uses OpenAI embeddings and models with a NumPy vector DB to embed and search documents. However, you can easily use other closed-source or open-source models and embeddings (see details below).
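For example, a minimal sketch of swapping both the LLM and the embedding model (the model names below are illustrative placeholders; any LiteLLM-compatible model name should work):

from paperqa import Settings, ask

answer_response = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm="anthropic/claude-3-5-sonnet-20240620",  # illustrative LiteLLM model name
        summary_llm="anthropic/claude-3-5-sonnet-20240620",
        embedding="text-embedding-3-small",  # illustrative embedding model name
    ),
)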
PaperQA2 depends on some awesome libraries/APIs that make our repo possible, including Crossref, Semantic Scholar, and LiteLLM.
PaperQA2 vs PaperQA
We've been working hard on fundamental upgrades for a while and have mostly followed SemVer, meaning we've incremented the major version number on each breaking change. This brings us to the current major version number, v5. So why is the repo now called PaperQA2? We wanted to mark the fact that we've exceeded human performance on many important metrics. So we arbitrarily call version 5 and onward PaperQA2, and versions before it PaperQA1, to denote the significant change in performance. We recognize that we are challenged at naming and counting at FutureHouse, so we reserve the right at any time to arbitrarily change the name to PaperCrow.
What's New in Version 5 (aka PaperQA2)?
Version 5 added, among other things, a new CLI (pqa) and an overhauled Docs object. Note that Docs objects pickled from prior versions of PaperQA are incompatible with version 5 and will need to be rebuilt. Also, our minimum Python version was increased to Python 3.11.

PaperQA2 Algorithm
To understand PaperQA2, let's start with the pieces of the underlying algorithm. The default workflow of PaperQA2 is as follows:

1. Paper Search: get candidate papers from an LLM-generated keyword query, then chunk, embed, and add them to the state.
2. Gather Evidence: embed the question into a vector, rank the top-k document chunks, and create a scored summary of each chunk in the context of the current question.
3. Generate Answer: put the best summaries into a prompt with context and generate an answer.
The tools can be invoked in any order by a language agent. For example, an LLM agent might do a narrow and a broad search, or use different phrasing for the gather evidence step than for the generate answer step.
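If you'd rather skip the agent and drive the pieces yourself, a minimal sketch using the Docs object looks something like this (the file names are illustrative placeholders, and OpenAI defaults are assumed):

from paperqa import Docs

# Build a document collection from local files (.pdf, .txt, etc.)
docs = Docs()
for doc_path in ("myfile.pdf", "myotherfile.pdf"):
    docs.add(doc_path)

# Gather evidence and generate an answer over the collection
answer_response = docs.query(
    "What manufacturing challenges are unique to bispecific antibodies?"
)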
Installation
For a non-development setup, install PaperQA2 (aka version 5) from PyPI. Note version 5 requires Python 3.11+.
pip install "paper-qa>=5"
For development setup, please refer to the CONTRIBUTING.md file.
PaperQA2 uses an LLM to operate, so you'll need to either set an appropriate API key environment variable (e.g.

export OPENAI_API_KEY=sk-...

) or set up an open source LLM server (e.g. using llamafile). Any LiteLLM-compatible model can be configured for use with PaperQA2.

If you need to index a large set of papers (100+), you will likely want an API key for both Crossref and Semantic Scholar, which will allow you to avoid hitting public rate limits on these metadata services. Those can be exported as the CROSSREF_API_KEY and SEMANTIC_SCHOLAR_API_KEY environment variables.
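For example (with your own keys in place of the placeholders):

export CROSSREF_API_KEY=...
export SEMANTIC_SCHOLAR_API_KEY=...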
CLI Usage
The fastest way to test PaperQA2 is via the CLI. First navigate to a directory with some papers and use the pqa CLI:

$ pqa ask 'What manufacturing challenges are unique to bispecific antibodies?'
You will see PaperQA2 index your local PDF files, gathering the necessary metadata for each of them (using Crossref and Semantic Scholar), search over that index, then break the files into chunked evidence contexts, rank them, and ultimately generate an answer. The next time this directory is queried, your index will already be built (save for any differences detected, like newly added papers), so it will skip the indexing and chunking steps.
All prior answers will be indexed and stored; you can view them by querying via the search subcommand, or access them yourself in your PQA_HOME directory, which defaults to ~/.pqa/.

PaperQA2 is highly configurable; when running from the command line, pqa --help shows all options and short descriptions. For example, to run with a higher temperature:

$ pqa --temperature 0.5 ask 'What manufacturing challenges are unique to bispecific antibodies?'
You can view all settings with pqa view. Another useful feature is switching to other templated settings; for example, fast is a setting that answers more quickly, and you can see it with pqa -s fast view.
Maybe you have some new settings you want to save? You can do that with the save subcommand (the temperature override below is just an example of an option to bake in):
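pqa -s my_new_settings --temperature 0.5 save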
and then you can use it with:
pqa -s my_new_settings ask 'What manufacturing challenges are unique to bispecific antibodies?'
If you run pqa with a command which requires a new index, say if you change the default chunk_size, a new index will automatically be created for you:

pqa --parsing.chunk_size 5000 ask 'What manufacturing challenges are unique to bispecific antibodies?'
You can also use pqa to do full-text search with use of LLMs via the search command. For example, let's save the index from a directory and give it a name:

pqa -i nanomaterials index

Now I can search for papers about thermoelectrics:

pqa -i nanomaterials search thermoelectric

or I can use the normal ask:
pqa -i nanomaterials ask 'Are there nm scale features in thermoelectric materials?'
Both the CLI and module have pre-configured settings based on prior performance and our publications; they can be invoked as follows:
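From the CLI:

pqa -s fast ask 'Are there nm scale features in thermoelectric materials?'

Or imperatively, a minimal sketch (assuming Settings.from_name is used to load a bundled configuration by name):

from paperqa import Settings, ask

# Load the bundled "fast" configuration and answer with it
answer_response = ask(
    "Are there nm scale features in thermoelectric materials?",
    settings=Settings.from_name("fast"),
)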
Bundled Settings
Inside paperqa/configs we bundle known useful settings, including:

high_quality: a highly performant, relatively expensive (evidence_k = 15) query using a ToolSelector agent.
fast: a setting that answers more quickly.
tier<1-5>_limits: settings that match the OpenAI API rate limits for a given tier; use tier<1-5>_limits to specify the tier.

Rate Limits
If you are hitting rate limits, say with the OpenAI Tier 1 plan, you can add them into PaperQA2. For each OpenAI tier, a pre-built setting exists to limit usage:

pqa --settings 'tier1_limits' ask 'What manufacturing challenges are unique to bispecific antibodies?'

This will limit your system to use the tier1_limits, and slow down your queries to accommodate.
You can also specify them manually with any rate limit string that matches the specification in the limits module:
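For example (the model name here is an illustrative placeholder):

pqa --summary_llm_config '{"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}' ask 'What manufacturing challenges are unique to bispecific antibodies?'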
Or by adding into a Settings object, if calling imperatively:
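For instance, a minimal sketch (the model name and limit values are illustrative placeholders):

from paperqa import Settings, ask

# Apply the same rate limit string to both the answering and summary LLMs
answer_response = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
        summary_llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
    ),
)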