
Resolving PR comments #39

Merged: 3 commits merged into refactor-llms on Jan 23, 2025

Conversation

jamesbraza (Contributor)

@jamesbraza added the bug (Something isn't working) label on Jan 23, 2025
@jamesbraza self-assigned this on Jan 23, 2025
@@ -368,25 +358,26 @@ async def call_single(
return results[0]


LLMModelOrChild = TypeVar("LLMModelOrChild", bound=LLMModel)
P = ParamSpec("P")
@jamesbraza (Contributor, Author) commented:

This was key

@mskarlin left a comment:

This looks good to me. The initial intention to limit this to LLMModelOrChild was that it was tightly coupled to LLMModel methods (it was typed that way to handle the self argument of those methods). It makes sense that swapping to a ParamSpec worked, but I'm not clear on how the ModelResponse types ever worked, or what they were meant to capture, since the LLMModel methods return Chunk types.

@jamesbraza merged commit 10953bc into refactor-llms on Jan 23, 2025
7 checks passed
@jamesbraza deleted the james-changes branch on January 23, 2025 17:39
maykcaldas added a commit that referenced this pull request Jan 24, 2025
* WIP: committed partially refactored llms to merge from main

* Removed Chunk. Now LiteLLMModel's methods return an LLMResult

* Refactored LiteLLMModel and MultipleCompletionsLLMModel

MultipleCompletionsLLMModel is deprecated

* updated tests

* Updated uv.lock for new uv version

* Updated test checking if an asyncIterator was a list

* Reverted uv.lock changes

* calling async callbacks with asyncio.gather

* Dropped support for run_prompt

* Updated cassettes for test_call

* Fixed typing of llm_result_callback

* Fix refurb error

* added missing new cassette

* Removed support for completion models

This also renamed achat to acompletion to align better with the litellm interface

* Updated uv.lock

It seems some entries had the old 'platform_system' marker due to my old uv version. It is now updated to 'sys_platform'

* added typeguard to pyproject.toml

* Fixed rate_limited typing

* Fixed typing check in rate_limited

* Avoided vcr for test_call_w_figure

* Cast results in rate_limited to avoid `type: ignore` comments

* Prepared to get deepseek reasoning from litellm

Waiting for their release to validate this commit

* Added atext_completion back to PassThroughRouter

* Renamed LLM models in tests with more caution

gpt-4o-mini was renamed to OPENAI_TEST, gpt-4o to GPT_4O, and gpt-3.5-turbo to GPT_35. As support for gpt-3.5-turbo-instruct was dropped, those tests were adapted to ANTHROPIC_TEST

* Ruff fix

* Implemented logprobs calculation in streaming response

* Added .mailmap so pre-commit passes

* Many fixes to typing in llms

* Formatted cassettes

* added deepseek test

* Bumped litellm version for deepseek

* Getting reasoning_content from deepseek models

* Resolving PR comments (#39)

* Added .mailmap so pre-commit passes

* Many fixes to typing in llms

* Formatted cassettes

* Removed litellm from dev and removed python version checking

* Changed attribute description for reasoning_content

* removed deprecated MultipleCompletionLLMModel from llm.__all__

* adding formatted uv.lock

* Fixing LiteLLM `Router.acompletion` typing issue (#43)

* Cleaned up cassettes

---------

Co-authored-by: James Braza <[email protected]>
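One of the commits above switches to "calling async callbacks with asyncio.gather". A minimal sketch of that pattern under assumed names (the `Callback` type and `run_callbacks` helper are illustrative, not the library's actual API):

```python
import asyncio
from typing import Any, Awaitable, Callable

# Hypothetical callback type: each callback receives a result chunk.
Callback = Callable[[str], Awaitable[Any]]


async def run_callbacks(callbacks: list[Callback], chunk: str) -> None:
    """Invoke all async callbacks concurrently rather than awaiting them one by one."""
    await asyncio.gather(*(cb(chunk) for cb in callbacks))


async def main() -> list[str]:
    seen: list[str] = []

    async def record(chunk: str) -> None:
        seen.append(chunk)

    await run_callbacks([record, record], "hello")
    return seen


print(asyncio.run(main()))  # ['hello', 'hello']
```

With sequential `await cb(chunk)` calls, a slow callback would delay all the others; `asyncio.gather` lets them overlap while still awaiting every one before returning.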