I don't know of any. But I am running it on the repositories of a team of ~35 developers, and it does prove somewhat useful — even if you run against gpt-3.5-turbo instead of GPT-4 models. It's no replacement for human reviewers and the context awareness/nuance they bring. It can be a bit nit-picky about things like variable names, but it also helps call out potential null pointer exceptions, race conditions, and so on.
I have a fork over at lukehollenback/ai-codereviewer@dev that adds support for custom prompts, among other things, which you can use to tune down some of the nit-pickiness.
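Roughly, the idea is that the custom prompt comes in as an extra action input and takes the place of the built-in review instructions. The sketch below is only an illustration of that approach; the input name `custom_prompt` and the helper name are assumptions, not the fork's actual API.

```typescript
import * as core from "@actions/core";

// Hypothetical: read an optional custom prompt from the action inputs.
// The input name "custom_prompt" is assumed for illustration only.
const CUSTOM_PROMPT: string = core.getInput("custom_prompt");

// Simplified stand-in for the built-in review instructions.
const DEFAULT_INSTRUCTIONS = `Review the following diff.
- Only comment on genuine problems (bugs, null dereferences, race conditions).
- Do not comment on style or variable naming unless it hides a defect.`;

// If a custom prompt is provided, it replaces the default instructions,
// which is one way to dial back nit-picky feedback.
export function buildInstructions(): string {
  return CUSTOM_PROMPT.trim().length > 0 ? CUSTOM_PROMPT : DEFAULT_INSTRUCTIONS;
}
```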
There are no evaluations of ai-codereviewer as far as I know. If you find any, I would be very curious! However, you can easily look at how the prompt is generated to get a sense of how the model is instructed.
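As a rough sketch of the shape that takes: the instructions, the PR title and description, and the diff hunk are concatenated into one prompt that asks for JSON-formatted review comments. This is a simplified illustration, not the action's literal code; the function and field names are made up for the example.

```typescript
// Simplified illustration of how a diff-review prompt can be assembled.
// Names here are hypothetical, not the action's real API.
interface PullRequestDetails {
  title: string;
  description: string;
}

export function buildReviewPrompt(
  details: PullRequestDetails,
  fileName: string,
  diffHunk: string
): string {
  return [
    "Your task is to review a pull request diff.",
    'Respond only with JSON of the form {"reviews": [{"lineNumber": <number>, "reviewComment": "<comment>"}]}.',
    "Do not give positive comments; only flag things that need to change.",
    "",
    `Pull request title: ${details.title}`,
    `Pull request description: ${details.description}`,
    "",
    `File to review: ${fileName}`,
    "Diff:",
    diffHunk,
  ].join("\n");
}
```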
@lukehollenback awesome! Are you planning to take the fork in a different direction from this project, or would you be willing to consider merging your changes here? It would be great to get more contributions to make this more robust than its current state. For example, better context awareness would be great, but I don't have time to work on that. Nit-pickiness is also maybe the biggest problem I'm personally having with the reviewer, and I haven't had much success tuning it down.
Are there any blog posts or examples showing how well using GPT-4 for reviews works? Ideally using this action.