Add guided decoding to TGIS gRPC API #31
Conversation
// Output will follow the provided regex pattern
string regex = 5;
// Output will be exactly one of the specified choices
StringChoices choice = 6;
Unfortunately you cannot have `repeated` fields directly within `oneof`s :(
384d566 to f9ee133
enum ResponseFormat {
  // Plain text, no constraints
  TEXT = 0;
  // Valid JSON
  JSON = 1;
}

message StringChoices {
  repeated string choices = 1;
}

// Mutually-exclusive guided decoding options
oneof guided {
  // Output will be in the specified format
  ResponseFormat format = 3;
  // Output will follow the provided JSON schema
  string json_schema = 4;
  // Output will follow the provided regex pattern
  string regex = 5;
  // Output will be exactly one of the specified choices
  StringChoices choice = 6;
  // Output will follow the provided context-free grammar
  string grammar = 7;
}

Signed-off-by: Nick Hill <[email protected]>
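To make the mutual exclusion concrete, here is a small Python sketch of how a server might resolve which guided-decoding option a request set. This is not the generated gRPC code; `GuidedDecoding` and `which_oneof` are illustrative stand-ins for the real protobuf message and its `WhichOneof()` helper.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GuidedDecoding:
    # Mirrors the oneof above: at most one of these should be set.
    format: Optional[str] = None        # "TEXT" or "JSON"
    json_schema: Optional[str] = None
    regex: Optional[str] = None
    choice: Optional[List[str]] = None  # wraps StringChoices.choices
    grammar: Optional[str] = None


def which_oneof(params: GuidedDecoding) -> Optional[str]:
    # Emulates protobuf's WhichOneof(): return the name of the single
    # set field, or raise if more than one guided option was provided.
    set_fields = [name for name, value in vars(params).items()
                  if value is not None]
    if len(set_fields) > 1:
        raise ValueError(
            f"guided options are mutually exclusive: {set_fields}")
    return set_fields[0] if set_fields else None


print(which_oneof(GuidedDecoding(regex=r"\d{4}")))  # regex
print(which_oneof(GuidedDecoding()))                # None
```

With real protobuf-generated classes, the `oneof` makes setting one field clear the others, so the multi-field check is only needed for hand-built request objects like this sketch.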
if outlines_decoding.global_thread_pool is None:
    outlines_decoding.global_thread_pool = (
        concurrent.futures.ThreadPoolExecutor(max_workers=2))
I haven't looked much at logits processors; why does this require its own thread pool?
It's the same code as here:
global_thread_pool = concurrent.futures.ThreadPoolExecutor(
Yes, that's right. The code is the same as that in the HTTP API. It's dispatched to a thread pool to avoid blocking the asyncio event loop, but I think it could be made more efficient, since we only care about this when the logits processor is not already cached. In any case, we can fix that as a follow-on, since we need to fix the related concurrency bug anyhow.
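The pattern under discussion can be sketched as follows: a minimal, self-contained illustration of offloading a blocking build step to a shared, lazily-created `ThreadPoolExecutor` so the asyncio event loop stays responsive. `build_logits_processor` is a hypothetical stand-in for the expensive outlines FSM compilation, not the real function.

```python
import asyncio
import concurrent.futures
import time

# Shared pool, created lazily like outlines_decoding.global_thread_pool.
global_thread_pool = None


def build_logits_processor(pattern: str) -> str:
    # Hypothetical stand-in for the blocking compilation step.
    time.sleep(0.05)
    return f"processor({pattern})"


async def get_guided_processor(pattern: str) -> str:
    global global_thread_pool
    if global_thread_pool is None:
        global_thread_pool = concurrent.futures.ThreadPoolExecutor(
            max_workers=2)
    loop = asyncio.get_running_loop()
    # Dispatch the blocking call to the pool so the event loop can keep
    # serving other requests while the processor is being built.
    return await loop.run_in_executor(
        global_thread_pool, build_logits_processor, pattern)


print(asyncio.run(get_guided_processor(r"\d+")))  # processor(\d+)
```

The follow-on optimization mentioned above would amount to checking the cache first and only paying the executor round-trip on a cache miss.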
@@ -118,7 +120,8 @@ def __init__(self, engine: AsyncLLMEngine, args: argparse.Namespace):

     async def _post_init(self):
         self.config = await self.engine.get_model_config()
-        self.tokenizer_group = await self.engine.get_tokenizer_group()
+        # self.tokenizer_group = await self.engine.get_tokenizer_group()
+        self.tokenizer_group = self.engine.engine.tokenizer
I've seen versions of the code where the `get_tokenizer_group` function exists and others where it doesn't. What's happening with this function?
@maxdebayser that's from this upstream PR vllm-project/vllm#3512
It didn't get merged in a timely manner and is now buried in conflicts :(
Since the bug reported in issue https://github.ibm.com/ai-foundation/fmaas-inference-server/issues/718 is not caused by the code in this PR, I think we can merge it and fix the problem in a separate PR.
Within the existing `decoding` request parameter section: