[Core] Faster logit_bias_logits_processor #13334
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Signed-off-by: Xu Song <[email protected]>
@imkero Thanks for your suggestion. A new commit has been added which avoids a duplicated tensor copy. After this change, the time cost is reduced to 0.01 ms.
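The idea behind avoiding the duplicated copy can be sketched as follows: build the index and value tensors once when the processor is created, so the per-call hot path no longer re-copies the Python dict into tensors on every step. The function name `make_logit_bias_processor` is illustrative, not the PR's actual code.

```python
import torch

def make_logit_bias_processor(logit_bias: dict, device: str = "cpu"):
    """Sketch: build bias tensors once so each call skips the dict-to-tensor copy."""
    # One-time conversion of the Python dict into index/value tensors.
    indices = torch.tensor(list(logit_bias.keys()), dtype=torch.long, device=device)
    values = torch.tensor(list(logit_bias.values()), dtype=torch.float32, device=device)

    def processor(token_ids: list, logits: torch.Tensor) -> torch.Tensor:
        # Single vectorized update instead of a Python-level loop per call.
        logits[indices] += values.to(dtype=logits.dtype)
        return logits

    return processor
```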
This PR changes Python-level ops to tensor ops, which reduces the time cost from 106 ms to 0.01 ms.
Before

The above approach is time-consuming, especially when `len(logit_bias)` is very large.

After
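As a hedged sketch of the contrast (function names are illustrative, not the PR's code): the "before" version performs one Python-level operation per biased token, while the "after" version does a single vectorized indexed add.

```python
import torch

def apply_bias_loop(logits: torch.Tensor, logit_bias: dict) -> torch.Tensor:
    # Before: one Python-level op per biased token,
    # so cost grows with len(logit_bias).
    for token_id, bias in logit_bias.items():
        logits[token_id] += bias
    return logits

def apply_bias_tensor(logits: torch.Tensor, logit_bias: dict) -> torch.Tensor:
    # After: one vectorized indexed add; the Python-level work
    # no longer scales with len(logit_bias).
    indices = torch.tensor(list(logit_bias.keys()), dtype=torch.long)
    values = torch.tensor(list(logit_bias.values()), dtype=logits.dtype)
    logits[indices] += values
    return logits
```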
Time cost: before -> v1 -> v2

Experiment settings:
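Timings like these can be reproduced with a small harness along the following lines; the iteration count and cloning strategy here are assumptions, not the PR's exact benchmark setup.

```python
import time
import torch

def bench(fn, logits: torch.Tensor, logit_bias: dict, iters: int = 100) -> float:
    """Return average milliseconds per call of fn(logits, logit_bias)."""
    start = time.perf_counter()
    for _ in range(iters):
        # Clone so every iteration starts from the same logits tensor.
        fn(logits.clone(), logit_bias)
    return (time.perf_counter() - start) / iters * 1e3
```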
Impl history:
- vllm/vllm/entrypoints/openai/logits_processors.py, lines 50 to 51 in a50b0ea: 106ms -> 0.4ms
- vllm/vllm/entrypoints/openai/logits_processors.py, lines 50 to 53 in 62a74e3: 106ms -> 0.01ms
- vllm/vllm/entrypoints/openai/logits_processors.py, lines 70 to 74 in cd9f33f