
[Feature]: Support tensor parallel #1

Open
iofu728 opened this issue Oct 10, 2024 · 2 comments

iofu728 commented Oct 10, 2024

🚀 The feature, motivation and pitch

Hi folks,

Thank you for your great effort in implementing KV cache compression methods in vLLM. I recently tried running experiments with tensor parallel enabled, and I wanted to ask if there are any plans to support tensor parallel, as it would be very helpful. Thanks again for your work!
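
For context, the setup is roughly the following. This is just a minimal sketch with a placeholder model (the KV-Compress-specific arguments are omitted); it's the `tensor_parallel_size > 1` setting that triggers the error below:

```python
from vllm import LLM

# Minimal sketch of the setup I'm trying to run (placeholder model;
# KV-Compress-specific arguments omitted). Setting tensor_parallel_size > 1
# is what trips the check shown in the traceback below.
llm = LLM(
    model="facebook/opt-125m",   # placeholder
    tensor_parallel_size=2,      # shard the model across 2 GPUs
)

outputs = llm.generate(["Hello, my name is"])
print(outputs[0].outputs[0].text)
```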

Alternatives

No response

Additional context

  File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 2089, in __post_init__
    self.cache_config.verify_with_parallel_config(self.parallel_config)
  File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 703, in verify_with_parallel_config
    raise ValueError("KV-Compress with multi-GPU not yet supported")
ValueError: KV-Compress with multi-GPU not yet supported
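
Judging from the traceback, the limitation is a guard in `CacheConfig.verify_with_parallel_config`; a rough sketch of what that check presumably looks like (the attribute name is my guess based on stock vLLM configs, not this fork's actual code):

```python
# Hypothetical reconstruction of the guard that raises the error above.
# parallel_config.world_size is a guess based on stock vLLM, not this fork's code.
def verify_with_parallel_config(self, parallel_config) -> None:
    if parallel_config.world_size > 1:
        raise ValueError("KV-Compress with multi-GPU not yet supported")
```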

IsaacRe (Owner) commented Oct 10, 2024

Hi, thanks for your interest,

We'll be adding support for several vLLM features (initially excluded for simplicity) as we work to upstream these changes over the next few weeks. TP should be a quick one; I'll update here once it's supported :)


iofu728 commented Oct 11, 2024

Thanks for your response! Looking forward to the next version!
