Tips for prompting Vicuna-13b #5
Thanks for raising this @rlancemartin! In general, I believe the preferred prompt format for this model is:
However, even with that format, I still observed some pretty verbose responses. I ran some tests with one of your examples and settled on the following:
Which yielded:
Cool, very helpful! Thanks.
White Classical Watch, Cyberpunk, sweat drops, neon hubs, 80s aesthetics, insane, intricate, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, unreal engine 5, 8k, art by artgerm and greg rutkowski and alphonse mucha
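For anyone landing here: the exact template joehoover settled on isn't reproduced in this thread, but as an assumption, Vicuna v1.1 checkpoints are commonly prompted with a `USER:`/`ASSISTANT:` turn format. A minimal sketch of assembling such a prompt (the helper name `vicuna_prompt` is hypothetical):

```python
# Sketch of the Vicuna v1.1-style turn format. This is an assumption based
# on common usage, not the exact template discussed above.
def vicuna_prompt(user_message: str, system: str = "") -> str:
    """Assemble a single-turn prompt in the USER:/ASSISTANT: style."""
    parts = []
    if system:
        parts.append(system)  # optional system preamble
    parts.append(f"USER: {user_message}")
    # Ending with "ASSISTANT:" cues the model to emit only its reply.
    parts.append("ASSISTANT:")
    return "\n".join(parts)

print(vicuna_prompt("Describe a watch in one line.", "Be concise."))
```

Keeping a terse instruction like "Be concise." in the system slot is one way to curb the verbose completions mentioned above.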
@joehoover thanks mate, I've been playing a bit with Vicuna-13b and was struggling to get consistent completions; this has been very helpful! ✌️
I'm not sure this is the correct place for this ticket, but I wanted to add it here anyway because @daanelson has been very helpful.
We're using the Replicate Vicuna-13b API.
We're evaluating it for question-answering using an app we developed w/ LangChain here.
I noticed that Vicuna's question responses are quite verbose: all are here.
We are using GPT-4 as the grader, and currently Vicuna is failing every question :-P (vs. GPT-3.5-turbo's 70% pass rate).
We are using this prompt template:
Perhaps Vicuna benefits from an alternative prompt?
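One direction worth trying: a QA template that pairs the Vicuna turn format with an explicit instruction to answer briefly. The template below is a hypothetical illustration (it is not the template used in the linked LangChain app), implemented with plain `str.format` so it maps directly onto a LangChain `PromptTemplate`:

```python
# Hypothetical QA prompt template for Vicuna-13b -- an illustrative
# assumption, not the template used in the app discussed above.
QA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: Use the following context to answer the question. "
    "Answer in one or two sentences and do not elaborate.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "ASSISTANT:"
)

def build_qa_prompt(context: str, question: str) -> str:
    """Fill the template; equivalent to PromptTemplate.format in LangChain."""
    return QA_TEMPLATE.format(context=context, question=question)

print(build_qa_prompt("Vicuna-13b is a chat model.", "What is Vicuna-13b?"))
```

Asking for one or two sentences up front may also make the GPT-4 grading fairer, since shorter answers are easier to match against reference answers.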