High-accuracy RAG for answering questions from scientific documents, with citations
Python · 6.7k stars · 667 forks
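The entry above describes retrieval-augmented generation (RAG) over scientific documents with citations. As a rough illustration of the general pattern, not this project's API, here is a minimal sketch: rank passages against the question, keep the top hits as evidence, and return them alongside their source identifiers as citations. All names (`answer_with_citations`, the toy overlap scorer) are illustrative; a real system would hand the retrieved context to an LLM.

```python
from collections import Counter

def tokenize(text):
    """Crude whitespace tokenizer; a real system would use embeddings."""
    return [w.lower().strip(".,?") for w in text.split()]

def overlap_score(question, passage):
    """Count shared tokens between question and passage."""
    q = Counter(tokenize(question))
    p = Counter(tokenize(passage))
    return sum(min(q[w], p[w]) for w in q)

def answer_with_citations(question, corpus, k=2):
    """corpus: list of (doc_id, passage) pairs.
    Returns the concatenated top-k evidence plus the cited doc ids."""
    ranked = sorted(corpus, key=lambda d: overlap_score(question, d[1]), reverse=True)
    evidence = ranked[:k]
    citations = [doc_id for doc_id, _ in evidence]
    context = " ".join(passage for _, passage in evidence)
    # Here a real pipeline would prompt an LLM with `context`;
    # this sketch just returns the evidence and its citations.
    return context, citations
```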
Gymnasium framework for training language model agents on constructive tasks
Python · 84 stars · 11 forks
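A Gymnasium-style framework, as described above, centers on an environment exposing `reset` and `step` so an agent can be trained in a rollout loop. The toy environment below is a generic sketch of that interface (observation, reward, terminated, truncated, info), not this project's actual environments; `CounterEnv` and `rollout` are illustrative names.

```python
class CounterEnv:
    """Toy environment with the Gymnasium-style reset/step API:
    the agent earns a reward once the count reaches a target."""

    def __init__(self, target=3):
        self.target = target
        self.count = 0

    def reset(self):
        self.count = 0
        return {"count": self.count}, {}  # observation, info

    def step(self, action):
        if action == "increment":
            self.count += 1
        terminated = self.count >= self.target
        reward = 1.0 if terminated else 0.0
        # observation, reward, terminated, truncated, info
        return {"count": self.count}, reward, terminated, False, {}

def rollout(env, policy):
    """Run one episode, returning the total reward."""
    obs, _ = env.reset()
    total, terminated = 0.0, False
    while not terminated:
        obs, reward, terminated, truncated, _ = env.step(policy(obs))
        total += reward
    return total
```

In this pattern a language-model agent plays the role of `policy`, mapping observations (e.g. tool outputs rendered as text) to actions.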
Agent framework for constructing language model agents and training them on constructive tasks.
Python · 43 stars · 7 forks
36 stars · 2 forks
LitQA Eval: A difficult set of scientific questions that require context of full-text research papers to answer
Python · 35 stars · 5 forks
Evaluation dataset for AI systems intended to benchmark capabilities foundational to scientific research in biology
Python · 32 stars · 2 forks
Central LLM client for use by Aviary and PaperQA
Fork of upstream
April 2024 Hackathon Crow Project