
Overloaded MultipleCompletionLLMModel.call type #13

Merged (23 commits) on Dec 9, 2024
Conversation

maykcaldas (Collaborator)

Overloaded the typing of MultipleCompletionLLMModel.call: it now returns either a single LLMResult or a list of LLMResult, depending on how many completions are requested.

This PR is motivated by the fact that LDP had a dummy class, LLMModel, which only filtered the return value of MultipleCompletionLLMModel. To make this more general, MultipleCompletionLLMModel now adapts its call return type to be either LLMResult (if only one completion, n=1, is requested) or list[LLMResult] (if n > 1).
The call method was overloaded to satisfy pydantic.

…her a list or a single element of LLMResult depending on how many completions are requested
@maykcaldas maykcaldas self-assigned this Dec 5, 2024
@maykcaldas maykcaldas marked this pull request as ready for review December 6, 2024 23:30
Comment on lines 883 to 904
@overload
async def call(
self,
messages: list[Message],
callbacks: list[Callable] | None = None,
output_type: type[BaseModel] | None = None,
tools: list[Tool] | None = None,
tool_choice: Tool | str | None = TOOL_CHOICE_REQUIRED,
n: Literal[1] = 1,
**chat_kwargs,
) -> LLMResult: ...

@overload
async def call(
self,
messages: list[Message],
callbacks: list[Callable] | None = None,
output_type: type[BaseModel] | None = None,
tools: list[Tool] | None = None,
tool_choice: Tool | str | None = TOOL_CHOICE_REQUIRED,
n: int | None = None,
**chat_kwargs,
) -> list[LLMResult]: ...
Contributor:
When would we expect someone to use these overloads instead of the dedicated methods call_single and call_multiple?

IMO, it would be easier to maintain just two methods:

async def call(self, ..., n: int) -> list[LLMResult]:
    assert n > 0
    ...

async def call_single(self, ...) -> LLMResult:
    return (await self.call(..., n=1))[0]

Contributor:
I like Sid's suggestion too. Also, let's add a docstring somewhere explaining what n does on the "back end". Readers won't intuitively know what n means; it can refer to so many things.

On a related note, MultipleCompletionLLMModel.achat calls litellm.acompletion. Can we rename MultipleCompletionLLMModel.achat to be MultipleCompletionLLMModel.acompletion to standardize with the actual API endpoint ultimately being invoked?

Contributor:
It looks like we still have the overloads, call_single, and call_multiple here - can we reduce this to call and call_single?

uv.lock (resolved)
tests/test_llms.py (outdated; resolved)
llmclient/llms.py (outdated; resolved)
uv.lock (outdated; resolved)
Comment on lines 961 to 965
if not n or n <= 0:
    logger.info(
        "Invalid number of completions `n` requested to the call function. "
        "Will get it from the model's configuration."
    )
Contributor:
We should raise an error if n<=0. And I don't think we need to emit a logging message if n is unspecified, since that will be a common case

Messages are not implemented in llmclient anymore
@maykcaldas maykcaldas changed the base branch from main to update_init December 9, 2024 22:23
Contributor @sidnarayanan left a comment:
Nice!

Base automatically changed from update_init to main December 9, 2024 22:42
@maykcaldas maykcaldas merged commit 5a2deb7 into main Dec 9, 2024
5 checks passed
@maykcaldas maykcaldas deleted the over-mult branch December 9, 2024 22:44
3 participants