
paper-qa: python package for RAG with a focus on scientific literature #949

ShellLM opened this issue Nov 13, 2024 · 1 comment
Labels
AI-Agents · AI-Chatbots · Algorithms · llm · llm-applications · Papers · RAG


ShellLM commented Nov 13, 2024

PaperQA2


PaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs or text files, with a focus on the scientific literature.
See our 2024 paper for examples of PaperQA2's superhuman performance on scientific tasks like question answering, summarization, and contradiction detection.

Quickstart

In this example we take a folder of research paper PDFs, magically get their metadata (including citation counts, with a retraction check), then parse and cache the PDFs into a full-text search index, and finally answer the user's question with an LLM agent.

pip install paper-qa
cd my_papers
pqa ask 'How can carbon nanotubes be manufactured at a large scale?'

Example Output

Question: Has anyone designed neural networks that compute with proteins or DNA?

The claim that neural networks have been designed to compute with DNA is supported by multiple sources. The work by Qian, Winfree, and Bruck demonstrates the use of DNA strand displacement cascades to construct neural network components, such as artificial neurons and associative memories, using a DNA-based system (Qian2011Neural pages 1-2, Qian2011Neural pages 15-16, Qian2011Neural pages 54-56). This research includes the implementation of a 3-bit XOR gate and a four-neuron Hopfield associative memory, showcasing the potential of DNA for neural network computation. Additionally, the application of deep learning techniques to genomics, which involves computing with DNA sequences, is well-documented. Studies have applied convolutional neural networks (CNNs) to predict genomic features such as transcription factor binding and DNA accessibility (Eraslan2019Deep pages 4-5, Eraslan2019Deep pages 5-6). These models leverage DNA sequences as input data, effectively using neural networks to compute with DNA. While the provided excerpts do not explicitly mention protein-based neural network computation, they do highlight the use of neural networks in tasks related to protein sequences, such as predicting DNA-protein binding (Zeng2016Convolutional pages 1-2). However, the primary focus remains on DNA-based computation.

What is PaperQA2

PaperQA2 is engineered to be the best agentic RAG model for working with scientific papers. Here are some features:

  • A simple interface to get good answers with grounded responses containing in-text citations.
  • State-of-the-art implementation including document metadata-awareness in embeddings and LLM-based re-ranking and contextual summarization (RCS).
  • Support for agentic RAG, where a language agent can iteratively refine queries and answers.
  • Automatic redundant fetching of paper metadata, including citation and journal quality data from multiple providers.
  • A usable full-text search engine for a local repository of PDF/text files.
  • A robust interface for customization, with default support for all LiteLLM models.

By default, it uses OpenAI embeddings and models with a NumPy vector DB to embed and search documents. However, you can easily use other closed-source or open-source models and embeddings (see details below).
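For example, swapping in different models can be sketched like this (the model names below are illustrative placeholders, not recommendations; any LiteLLM-compatible name should work):

```python
from paperqa import Settings, ask

# Illustrative sketch: the llm/summary_llm/embedding values are placeholders
# for whatever LiteLLM-compatible models you have access to.
answer = ask(
    "How can carbon nanotubes be manufactured at a large scale?",
    settings=Settings(
        llm="ollama/llama3.1",
        summary_llm="ollama/llama3.1",
        embedding="text-embedding-3-small",
    ),
)
```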

PaperQA2 depends on some awesome libraries/APIs that make our repo possible. Here are some, in no particular order:

  1. Semantic Scholar
  2. Crossref
  3. Unpaywall
  4. Pydantic
  5. tantivy
  6. LiteLLM
  7. pybtex
  8. PyMuPDF

PaperQA2 vs PaperQA

We've been working hard on fundamental upgrades for a while and have mostly followed SemVer, meaning we've incremented the major version number on each breaking change. This brings us to the current major version number, v5. So why is the repo now called PaperQA2? We wanted to mark the fact that we've exceeded human performance on many important metrics. So we arbitrarily call version 5 and onward PaperQA2, and versions before it PaperQA1, to denote the significant change in performance. We recognize that we are challenged at naming and counting at FutureHouse, so we reserve the right to arbitrarily change the name to PaperCrow at any time.

What's New in Version 5 (aka PaperQA2)?

Version 5 added:

  • A CLI pqa
  • Agentic workflows invoking tools for paper search, gathering evidence, and generating an answer
  • Removal of much of the statefulness from the Docs object
  • A migration to LiteLLM for compatibility with many LLM providers as well as centralized rate limits and cost tracking
  • A bundled set of configurations (read here) containing known-good hyperparameters

Note that Docs objects pickled from prior versions of PaperQA are incompatible with version 5, and will need to be rebuilt. Also, our minimum Python version was increased to Python 3.11.

PaperQA2 Algorithm

To understand PaperQA2, let's start with the pieces of the underlying algorithm. The default workflow of PaperQA2 is as follows:

1. Paper Search
  • Get candidate papers from an LLM-generated keyword query
  • Chunk, embed, and add candidate papers to state
2. Gather Evidence
  • Embed the query into a vector
  • Rank the top k document chunks in the current state
  • Create a scored summary of each chunk in the context of the current query
  • Use an LLM to re-score and select the most relevant summaries
3. Generate Answer
  • Put the best summaries into a prompt with context
  • Generate the answer with that prompt

The tools can be invoked in any order by a language agent. For example, an LLM agent might do both a narrow and a broad search, or use different phrasing for the gather evidence step than for the generate answer step.
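As a rough, self-contained illustration of the gather-evidence and generate-answer phases (a toy bag-of-words "embedding" stands in for PaperQA2's real dense embeddings and LLM re-scoring):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def gather_evidence(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Embed the query, rank all chunks by similarity, and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "carbon nanotubes grown by chemical vapor deposition",
    "thermoelectric materials with nanoscale features",
    "large scale manufacturing of carbon nanotubes",
]
# The generate-answer phase would place these top chunks into an LLM prompt.
top = gather_evidence("carbon nanotubes manufacturing scale", chunks)
```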

Installation

For a non-development setup, install PaperQA2 (aka version 5) from PyPI. Note version 5 requires Python 3.11+.

pip install "paper-qa>=5"

For development setup, please refer to the CONTRIBUTING.md file.

PaperQA2 uses an LLM to operate, so you'll need to either set an appropriate API key environment variable (e.g. export OPENAI_API_KEY=sk-...) or set up an open-source LLM server (e.g. using llamafile). Any LiteLLM-compatible model can be configured for use with PaperQA2.

If you need to index a large set of papers (100+), you will likely want an API key for both Crossref and Semantic Scholar, which will allow you to avoid hitting public rate limits using these metadata services. Those can be exported as CROSSREF_API_KEY and SEMANTIC_SCHOLAR_API_KEY variables.
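For example (placeholder values shown; substitute your actual keys):

```shell
# Placeholder keys shown; substitute your own credentials.
export CROSSREF_API_KEY="your-crossref-key"
export SEMANTIC_SCHOLAR_API_KEY="your-semantic-scholar-key"
```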

CLI Usage

The fastest way to test PaperQA2 is via the CLI. First navigate to a directory with some papers and use the pqa cli:

$ pqa ask 'What manufacturing challenges are unique to bispecific antibodies?'

You will see PaperQA2 index your local PDF files, gathering the necessary metadata for each of them (using Crossref and Semantic Scholar), search over that index, break the files into chunked evidence contexts, rank them, and ultimately generate an answer. The next time this directory is queried, your index will already be built (save for any differences detected, like newly added papers), so it will skip the indexing and chunking steps.

All prior answers will be indexed and stored; you can view them by querying via the search subcommand, or access them directly in your PQA_HOME directory, which defaults to ~/.pqa/.

$ pqa search -i 'answers' 'antibodies'

PaperQA2 is highly configurable; when running from the command line, pqa --help shows all options with short descriptions. For example, to run with a higher temperature:

$ pqa --temperature 0.5 ask 'What manufacturing challenges are unique to bispecific antibodies?'

You can view all settings with pqa view. Another useful option is to switch to other templated settings - for example, fast is a setting that answers more quickly, and you can see it with pqa -s fast view.

Maybe you have some new settings you want to save? You can do that with

pqa -s my_new_settings --temperature 0.5 --llm foo-bar-5 save

and then you can use it with

pqa -s my_new_settings ask 'What manufacturing challenges are unique to bispecific antibodies?'

If you run pqa with a command that requires re-indexing, say if you change the default chunk_size, a new index will automatically be created for you.

pqa --parsing.chunk_size 5000 ask 'What manufacturing challenges are unique to bispecific antibodies?'

You can also use pqa to do full-text search without the use of LLMs via the search command. For example, let's save the index from a directory and give it a name:

pqa -i nanomaterials index

Now I can search for papers about thermoelectrics:

pqa -i nanomaterials search thermoelectrics

or I can use the normal ask:

pqa -i nanomaterials ask 'Are there nm scale features in thermoelectric materials?'

Both the CLI and module have pre-configured settings based on prior performance and our publications; they can be invoked as follows:

pqa --settings <setting name> ask 'Are there nm scale features in thermoelectric materials?'

Bundled Settings

Inside paperqa/configs we bundle known useful settings:

  • high_quality - Highly performant, relatively expensive (due to evidence_k = 15) query using a ToolSelector agent.
  • fast - Setting to get answers cheaply and quickly.
  • wikicrow - Setting to emulate the Wikipedia article writing used in our WikiCrow publication.
  • contracrow - Setting to find contradictions in papers; your query should be a claim that needs to be flagged as a contradiction (or not).
  • debug - Setting useful solely for debugging.
  • tier1_limits - Settings that match OpenAI rate limits for each tier; use tier<1-5>_limits to specify the tier.

Rate Limits

If you are hitting rate limits, say with the OpenAI Tier 1 plan, you can add them into PaperQA2. For each OpenAI tier, a pre-built setting exists to limit usage.

pqa --settings 'tier1_limits' ask 'Are there nm scale features in thermoelectric materials?'

This will limit your system to the tier1_limits rate limits and slow down your queries to accommodate them.

You can also specify them manually with any rate limit string that matches the specification in the limits module:

pqa --summary_llm_config '{"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}' ask 'Are there nm scale features in thermoelectric materials?'

Or by adding into a Settings object, if calling imperatively:

from paperqa import Settings, ask

answer = ask(
    "What manufacturing challenges are unique to bispecific antibodies?",
    settings=Settings(
        llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
        summary_llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
    ),
)

ShellLM commented Nov 13, 2024

Related content

#762 similarity score: 0.9
#948 similarity score: 0.9
#877 similarity score: 0.89
#625 similarity score: 0.89
#778 similarity score: 0.89
#494 similarity score: 0.89

@irthomasthomas irthomasthomas changed the title paper-qa/README.md at main · Future-House/paper-qa paper-qa: python package for RAG with a focus on scientific literature Nov 13, 2024
@ShellLM ShellLM mentioned this issue Nov 16, 2024