
Running instruction? #4

Open
jacklxc opened this issue Nov 21, 2024 · 2 comments

Comments


jacklxc commented Nov 21, 2024

Hi, is there any instruction to run this refactor benchmark?

paul-gauthier (Collaborator) commented

Thanks for your interest in aider and the refactor benchmark.

You can run it like the main aider benchmark:

https://github.com/Aider-AI/aider/tree/main/benchmark

You just need to use --exercises-dir to point it at the refactor exercises.
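A run following those instructions might look like the sketch below. This assumes the `benchmark.py` harness described in the linked benchmark README; the run name, model, edit format, and exercises directory name here are placeholders, not values confirmed in this thread.

```shell
# Hypothetical invocation of the aider benchmark harness, pointed at
# the refactor exercises instead of the default Exercism ones.
# "my-refactor-run" and "refactor-benchmark" are placeholder names.
./benchmark/benchmark.py my-refactor-run \
  --model gpt-4o \
  --edit-format whole \
  --exercises-dir refactor-benchmark \
  --threads 10
```

See the benchmark README linked above for the full setup (it is normally run inside the provided Docker container) and the complete list of flags.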


jacklxc commented Nov 24, 2024

Thanks for your quick reply. I was diving into the source code but am still confused. Can you explain what the LLM outputs and how that output is applied to the original file? Does the LLM output the full updated file content, or only the difference between the original and updated file content?

I am trying to use my own LLM for this benchmark, thank you.
