diff --git a/README.md b/README.md
index c510917..10be605 100644
--- a/README.md
+++ b/README.md
@@ -1,25 +1,56 @@
 # gpt-resolve
 
 Can GPT solve Brazilian university entrance exams?
 
-This project is a simple implementation of how to use LLMs to solve challenging Brazilian university entrance exams.
+This project is an implementation of how to use LLMs to solve challenging Brazilian university entrance exams.
 
-We'll use `o1-preview`, which is the best OpenAI model so far with reasoning capabilities, and `gpt-4o` to describe the exam images so that `o1-preview` can solve them (as it does not have image capabilities yet). Results are saved as txt files with LaTeX formatting, and you can optionally convert them to a nice PDF or using some LaTeX editor.
+We'll use `o1-preview`, which is the best OpenAI model so far with reasoning capabilities, and `gpt-4o` to describe the exam images so that `o1-preview` can solve them one question at a time (as it does not have image capabilities yet). Results are saved as `.txt` files with LaTeX formatting, and you can optionally convert them to a nice PDF or open them in some LaTeX editor.
 
-The first exam to be solved is the ITA (Instituto Tecnológico de Aeronáutica) exam for admissions in 2025, which is considered one of the most challenging exams in Brazil. This exam currently has two phases: the first one is a multiple choice test and a second one with a 4-hour essay test with 10 questions. The project will start by solving the second phase of the Math section, which is the essay test. This is particularly interesting because (i) the exam happened very recently on the 5th of November 2024 and (ii) the essay test requires a deep understanding of the subjects and the ability to write the answer step by step, which we'll evaluate as well.
+The project begins with the ITA (Instituto Tecnológico de Aeronáutica) 2025 exam, focusing first on the Math essay section. This section, from the recent exam on November 5, 2024, demands deep subject understanding and step-by-step solutions. More details are in the [report](exams/ita_2025/report.md).
 
-After the first exam is solved, the project will try to solve the multiple choice test for Math and expand to other sections and eventually other exams. Feel free to contribute with ideas and implementations of other exams!
+After the first ITA 2025 exam is fully solved, the project will try to expand to other sections and eventually other exams. Feel free to contribute with ideas and implementations of other exams!
 
 Table of exams to be solved:
 
-| Exam | Phase | Section | Type | Model | Status | Score |
-|------|-------|---------|------|-------|--------|-------|
-| ITA | 2025 | Math | Essay | o1-preview | 🚧 In Progress | - |
+| Exam | Year | Model | Status | Score | Report |
+|------|------|-------|--------|-------|--------|
+| ITA | 2025 | o1-preview | 🚧 In Progress | - | [Report](exams/ita_2025/report.md) |
 
-## How to use
-So far, with just one exam, you just need to run `python src/resolve.py`. It will process a `exam_path` and it will save the results in the subfolder `solutions` as `.txt` files, one for each question. Make sure to set your env var `OPENAI_API_KEY` in the `.env` file. See section [Convert to LaTeX PDF](#convert-to-latex-pdf) to see how to convert the `.txt` files to a PDF.
+## Installation and How to use
 
-## Convert to LaTeX PDF
-🚧 In Progress...
+
+```bash
+pip install gpt-resolve
+```
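+
+To check that everything is wired up after installing, you can ask the CLI for help (a quick sanity check; this assumes the `gpt-resolve` entry point is on your `PATH`):
+
+```bash
+# list the available commands
+gpt-resolve --help
+# options for the solver, including question selection and token limits
+gpt-resolve resolve --help
+```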
+
+`gpt-resolve` provides a simple CLI with two main commands: `resolve` for solving exam questions and `compile-solutions` for generating PDFs from the solutions.
+
+### Solve exams
+
+To generate solutions for an exam:
+- save the exam images in the exam folder `exam_path`, one question per image file
+- add `OPENAI_API_KEY` to your global environment variables or to a `.env` file in the current directory
+- run `gpt-resolve resolve -p exam_path` and grab a coffee while it runs.
+
+If you want to test the process without making real API calls, you can use the `--dry-run` flag. See `gpt-resolve resolve --help` for more details about solving only a subset of questions or controlling token usage.
+
+### Compile solutions into a single PDF
+
+Once you have the solutions in your exam folder `exam_path`, you can compile them into a single PDF:
+- run `gpt-resolve compile-solutions -p exam_path --title "Your Exam Title"`
+
+For that command to work, you'll need a LaTeX distribution installed on your system. See some guidelines [here](https://www.tug.org/texlive/) (MacTeX on macOS was used to start this project).
+
+## Troubleshooting
+
+Occasionally, `o1-preview` produces invalid LaTeX by nesting display math environments (such as `\[...\]` and `\begin{align*} ... \end{align*}` together). The current prompt for `o1-preview` adds an instruction to avoid this, which works most of the time. If it still happens, you can solve the question again by running `gpt-resolve resolve -p exam_path -q <question_number>`, adjust the prompt further, or fix the output LaTeX code manually.
+
+## Costs
+
+The `o1-preview` model is so far [available only for Tiers 3, 4 and 5](https://help.openai.com/en/articles/9824962-openai-o1-preview-and-o1-mini-usage-limits-on-chatgpt-and-the-api). It is [6x more expensive](https://openai.com/api/pricing/) than `gpt-4o` and also consumes many more tokens to "reason" (see more [here](https://platform.openai.com/docs/guides/reasoning/controlling-costs#controlling-costs)), so be mindful of the number of questions you are solving and how many max tokens you allow gpt-resolve to use (see `gpt-resolve resolve --help` to control `max-tokens-question-answer`, which drives the cost). You can roughly estimate an upper bound for the cost of solving an exam by
+```
+(number of questions) * (max_tokens_question_answer / 1_000_000) * (price per 1M tokens)
+```
+At the current o1-preview prices of $15/$60 per 1M input/output tokens, a 10-question exam with 10,000 max tokens per question would cost at most 10 * (10,000 / 1,000,000) * $60 = $6 in output tokens.
 
 ## Contributing
diff --git a/exams/ita_2025/report.md b/exams/ita_2025/report.md
new file mode 100644
index 0000000..3292f20
--- /dev/null
+++ b/exams/ita_2025/report.md
@@ -0,0 +1,31 @@
+# ITA 2025 Math Essay Exam Report
+
+## Overview
+The Instituto Tecnológico de Aeronáutica (ITA) entrance exam for 2025 consists of two phases:
+
+- **Phase 1**:
+  - Written exams covering the subjects listed in the Examination Program found in ANNEX D and available on the ITA Vestibular website.
+  - The exam includes 48 multiple-choice questions, divided into:
+    - 12 questions in Mathematics
+    - 12 questions in Physics
+    - 12 questions in Chemistry
+    - 12 questions in English
+
+- **Phase 2**:
+  - Essay exams in Mathematics, Physics, and Chemistry, each consisting of 10 questions.
+  - An argumentative essay.
+  - 15 objective questions in Portuguese.
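+
+For reference, the solutions reported below can be reproduced with commands along these lines (a sketch; paths follow this repository's layout, and the title string is illustrative):
+
+```bash
+# solve all questions of the Math essay exam
+gpt-resolve resolve -p exams/ita_2025/math/essays
+# re-run a single question if needed (e.g., question 10)
+gpt-resolve resolve -p exams/ita_2025/math/essays -q 10
+# compile the resulting .txt solutions into a single PDF
+gpt-resolve compile-solutions -p exams/ita_2025/math/essays --title "ITA 2025 Math Essay"
+```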
+
+## Results
+
+| Exam | Year | Section | Type | Model | Status | Score |
+|------|------|---------|------|-------|--------|-------|
+| ITA | 2025 | Math | Essay | o1-preview | ✅ Completed | 90% |
+| ITA | 2025 | Physics | Essay | o1-preview | 🚧 TODO | - |
+| ITA | 2025 | Chemistry | Essay | o1-preview | 🚧 TODO | - |
+| ITA | 2025 | Portuguese | Essay | o1-preview | 🚧 TODO | - |
+| ITA | 2025 | Math | Multiple Choice | o1-preview | 🚧 TODO | - |
+
+## Comments
+
+`o1-preview` got almost all questions correct in the Math essay exam. The only question it got wrong was question 10, a spatial geometry problem, an area where LLMs are known to be weak. When that question is rerun several times, the model sometimes gets it right, but not always. Since it did not get it right on the first try, it was counted as wrong. Check one of the correct reruns [here](math/essays/solutions/q10_solution_rerun.txt).
\ No newline at end of file
diff --git a/src/gpt_resolve/resolve.py b/src/gpt_resolve/resolve.py
index a982c8c..8f2d3b7 100644
--- a/src/gpt_resolve/resolve.py
+++ b/src/gpt_resolve/resolve.py
@@ -235,7 +235,7 @@ def resolve(
         False, help="Run in dry-run mode without making actual API calls"
     ),
     max_tokens_question_description: int = typer.Option(
-        400, help="Maximum tokens for question description"
+        400, help="Maximum tokens for question description from image"
     ),
     max_tokens_question_answer: int = typer.Option(
         5000, help="Maximum completion tokens"