🚀 The feature, motivation and pitch

Hi folks,
Thank you for your great effort in implementing KV cache compression methods in vLLM. I recently tried running experiments with tensor parallelism enabled, and I wanted to ask whether there are any plans to support tensor parallelism, as it would be very helpful. Thanks again for your work!
Alternatives
No response
Additional context
File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 2089, in __post_init__
self.cache_config.verify_with_parallel_config(self.parallel_config)
File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 703, in verify_with_parallel_config
raise ValueError("KV-Compress with multi-GPU not yet supported")
ValueError: KV-Compress with multi-GPU not yet supported
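For reference, the error comes from CacheConfig.verify_with_parallel_config in the fork's vllm/config.py. Below is a minimal sketch of what such a guard looks like; it is not the fork's actual code, and field names such as enable_kvcompress are illustrative assumptions, not the real attributes.

```python
# Minimal sketch (not the fork's actual code) of the guard that produces the
# error above; field names like `enable_kvcompress` are illustrative guesses.
class ParallelConfig:
    def __init__(self, tensor_parallel_size: int = 1,
                 pipeline_parallel_size: int = 1) -> None:
        # Total number of GPUs the engine would use.
        self.world_size = tensor_parallel_size * pipeline_parallel_size


class CacheConfig:
    def __init__(self, enable_kvcompress: bool = False) -> None:
        self.enable_kvcompress = enable_kvcompress

    def verify_with_parallel_config(self, parallel_config: ParallelConfig) -> None:
        # KV-Compress is currently single-GPU only, so any multi-GPU setup
        # is rejected during engine config validation.
        if self.enable_kvcompress and parallel_config.world_size > 1:
            raise ValueError("KV-Compress with multi-GPU not yet supported")


# Reproduces the check: two tensor-parallel ranks trigger the ValueError.
CacheConfig(enable_kvcompress=True).verify_with_parallel_config(
    ParallelConfig(tensor_parallel_size=2))
```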
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
We'll be adding support for several vLLM features (initially excluded for simplicity) as we upstream this work over the next few weeks. TP should be a quick one--I'll update here once it's supported :)
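For anyone planning experiments once this lands: a tensor-parallel launch should follow the standard vLLM pattern sketched below. The model name is only an example, and any KV-Compress-specific engine arguments from this fork are omitted because their exact names may differ.

```python
# Standard vLLM tensor-parallel launch (sketch); KV-Compress-specific engine
# arguments from the fork are intentionally omitted here.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # example model only
    tensor_parallel_size=2,  # shard the model (and its KV cache) across 2 GPUs
)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```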