Vicuna v1.5 released with 4K and 16K context lengths #431
ianscrivener started this conversation in General

Replies: 1 comment · 3 replies
-

"Vicuna v1.5 series, featuring 4K and 16K context lengths with improved performance on almost all benchmarks... based on the commercial-friendly Llama 2"

Excited to see llama.cpp support soon?!

-

The 16k model should already work using the parameter
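The thread doesn't preserve which parameter the reply was pointing at; a plausible candidate is the RoPE frequency-scaling option that llama.cpp exposes for linearly scaled models. Below is a minimal sketch, assuming the llama-cpp-python bindings and a hypothetical model path: Vicuna v1.5 16K extends the 4K base window by a linear RoPE factor of 4, which maps to rope_freq_scale = 4096 / 16384 = 0.25.

```python
# Minimal sketch, assuming the llama-cpp-python bindings; the model path is
# hypothetical and the parameter choice is an inference, not quoted from the thread.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/vicuna-13b-v1.5-16k.q4_0.bin",  # hypothetical local path
    n_ctx=16384,           # request the full 16K context window
    rope_freq_scale=0.25,  # linear RoPE scaling: 4096 (base) / 16384 (target)
)

out = llm(
    "USER: Summarize the Vicuna v1.5 release notes.\nASSISTANT:",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

The standalone llama.cpp CLI exposes the same knobs, e.g. `-c 16384 --rope-freq-scale 0.25`.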