
Reduced performance/bottleneck with concurrent requests and Llama-3.1 #127

Open
thigger opened this issue Aug 1, 2024 · 1 comment


thigger commented Aug 1, 2024

Using TabbyAPI/exllamav2 with Llama 3.1 8B on a Threadripper Pro / A6000 GPU.

A single unconstrained request runs at ~70 t/s, dropping to ~35 t/s with lm-format-enforcer (JSON schema).
With 30 simultaneous requests, performance drops to ~1-2 t/s per request and CUDA utilisation falls to ~10%. This does not happen when lm-format-enforcer is not used (90-100% CUDA utilisation with 10-20 t/s on each request).

@turboderp has been able to replicate this and suggests it is caused by the large Llama 3.1 vocabulary combined with the GIL, which forces the per-token filtering work to run effectively single-threaded.
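
For illustration, here is a minimal, self-contained sketch (not TabbyAPI or exllamav2 code) of why that combination stops scaling: the schema filtering is pure-Python CPU work over a large vocabulary, so the GIL serialises it even when many requests are handled "concurrently" on threads. The `build_token_mask` function is a stand-in, not the real lm-format-enforcer logic.

```python
import threading
import time

VOCAB_SIZE = 128_256  # approximate Llama 3.1 vocabulary size


def build_token_mask(step: int) -> list[bool]:
    # Stand-in for the schema-constrained filtering pass over the vocabulary;
    # like the real filtering step, it is CPU-bound Python code.
    return [(token_id + step) % 7 == 0 for token_id in range(VOCAB_SIZE)]


def time_threads(n_requests: int) -> float:
    # Run one mask computation per "request" on its own thread.
    start = time.perf_counter()
    threads = [threading.Thread(target=build_token_mask, args=(i,))
               for i in range(n_requests)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start


if __name__ == "__main__":
    # Wall-clock time grows roughly linearly with the number of requests,
    # because only one thread can execute Python bytecode at a time.
    for n in (1, 8, 30):
        print(f"{n:>2} concurrent mask builds: {time_threads(n):.2f}s")
```
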

Is this likely to be fixable or is it too complex? Thanks!

noamgat (Owner) commented Sep 3, 2024

I think the correct way to approach this would probably be to use some multiprocessing / queue setup, but it would have to be deeply integrated with exllamav2.
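
As a rough sketch of that multiprocessing/queue idea: the schema-constrained mask computation could be fanned out to worker processes, which sidesteps the GIL, while the sampling loop consumes the results per batch. Everything below is assumed for illustration; `compute_allowed_tokens` is a hypothetical stand-in, and the real integration points with exllamav2's generator are not shown.

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Sequence


def compute_allowed_tokens(prefix_token_ids: Sequence[int]) -> list[int]:
    # Hypothetical stand-in for the per-sequence filtering step that returns
    # the token ids allowed by the JSON schema at the current position.
    return list(range(16))


class ParallelMaskWorker:
    """Fans mask computation for many concurrent sequences out to processes."""

    def __init__(self, max_workers: int = 8) -> None:
        self.pool = ProcessPoolExecutor(max_workers=max_workers)

    def masks_for_batch(self, batch: list[Sequence[int]]) -> list[list[int]]:
        # One task per active sequence; results come back in submission order.
        return list(self.pool.map(compute_allowed_tokens, batch))

    def close(self) -> None:
        self.pool.shutdown()


if __name__ == "__main__":
    worker = ParallelMaskWorker(max_workers=4)
    fake_batch = [[1, 2, 3], [4, 5], [6]]
    print([len(mask) for mask in worker.masks_for_batch(fake_batch)])
    worker.close()
```

The hard part, as noted above, is the integration: the per-sequence parser state would need to live in (or be cheaply shipped to) the worker processes on every decoding step, which is why this can't be bolted on from outside exllamav2.
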
