
[Feature] lora can batch inference in lmdeploy? #3060

Open

MichoChan opened this issue Jan 21, 2025 · 1 comment

@MichoChan

Motivation

As the title asks: does lmdeploy support batch inference with LoRA adapters?

Related resources

No response

Additional context

No response

MichoChan changed the title from "[Feature] lora can batch inference?" to "[Feature] lora can batch inference in lmdeploy?" on Jan 21, 2025
@RunningLeon (Collaborator)

@MichoChan Yes, lmdeploy supports LoRA with continuous batching. If you run into any problem, feel free to post your error message, environment, and sample code to reproduce it.
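
For reference, a minimal sketch of serving a LoRA adapter with the PyTorch engine, which schedules a list of prompts via continuous batching; the base model, adapter path, and adapter name (`mylora`) below are placeholders, not taken from this issue:

```python
# Minimal sketch: batched inference with a LoRA adapter on lmdeploy's PyTorch engine.
# The model ID, adapter path, and adapter name are hypothetical placeholders.
from lmdeploy import pipeline, GenerationConfig, PytorchEngineConfig

backend_config = PytorchEngineConfig(
    session_len=2048,
    adapters=dict(mylora='path/to/lora_adapter'),  # register the LoRA adapter by name
)
pipe = pipeline('internlm/internlm2-chat-7b', backend_config=backend_config)

# Passing a list of prompts lets the engine batch them with continuous batching.
prompts = ['Hi, please introduce yourself', 'Shanghai is']
responses = pipe(prompts,
                 gen_config=GenerationConfig(max_new_tokens=256),
                 adapter_name='mylora')
for r in responses:
    print(r.text)
```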
