
fix: resolve issue #29513 - cannot retrieve reasoning_content while streaming #29540

Open · wants to merge 1 commit into master
Conversation

@codergma commented Feb 2, 2025

Extract reasoning_content in the _convert_delta_to_message_chunk function

fix: resolve issue #29513 - cannot retrieve reasoning_content while streaming

Extract reasoning_content in the _convert_delta_to_message_chunk function
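As a rough sketch of the idea (using a plain-dict stand-in, not the actual langchain-openai `_convert_delta_to_message_chunk` signature), the fix surfaces DeepSeek's `reasoning_content` field from each streamed delta into the chunk's `additional_kwargs`:

```python
from typing import Any, Dict

def convert_delta_to_message_chunk(delta: Dict[str, Any]) -> Dict[str, Any]:
    """Plain-dict stand-in for langchain-openai's _convert_delta_to_message_chunk.

    DeepSeek's streaming API puts chain-of-thought text in a non-standard
    reasoning_content field on each delta; copy it into additional_kwargs
    so callers can retrieve it while streaming.
    """
    additional_kwargs: Dict[str, Any] = {}
    reasoning = delta.get("reasoning_content")
    if reasoning is not None:
        additional_kwargs["reasoning_content"] = reasoning
    return {
        "content": delta.get("content") or "",
        "additional_kwargs": additional_kwargs,
    }

chunk = convert_delta_to_message_chunk(
    {"content": "", "reasoning_content": "Let me think step by step..."}
)
```

Without a step like this, the non-standard field is silently dropped when the raw delta is converted to a message chunk, which is the symptom reported in #29513.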
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Feb 2, 2025

@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Feb 2, 2025
@ccurme (Collaborator) left a comment

Can we do this on the Deepseek integration, similar to how it's done for invoke?

rtn = super()._create_chat_result(response, generation_info)
if not isinstance(response, openai.BaseModel):
    return rtn
if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore
    rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
        response.choices[0].message.reasoning_content  # type: ignore
    )
return rtn
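A streaming analogue of that invoke-path pattern might look like the following sketch (the class and method names here are hypothetical stand-ins, not the real ChatDeepSeek internals): the subclass post-processes each chunk yielded by the base class and attaches `reasoning_content` whenever the raw delta carries it.

```python
from typing import Any, Dict, Iterator, List

class OpenAICompatChat:
    # Hypothetical stand-in for the OpenAI-compatible base chat model.
    def stream_chunks(self, deltas: List[Dict[str, Any]]) -> Iterator[Dict[str, Any]]:
        for delta in deltas:
            yield {"content": delta.get("content") or "", "additional_kwargs": {}}

class DeepSeekChat(OpenAICompatChat):
    # Mirrors the invoke-path pattern above: let the base class build each
    # chunk, then attach reasoning_content from the raw delta if present.
    def stream_chunks(self, deltas: List[Dict[str, Any]]) -> Iterator[Dict[str, Any]]:
        for delta, chunk in zip(deltas, super().stream_chunks(deltas)):
            reasoning = delta.get("reasoning_content")
            if reasoning is not None:
                chunk["additional_kwargs"]["reasoning_content"] = reasoning
            yield chunk

chunks = list(
    DeepSeekChat().stream_chunks(
        [{"reasoning_content": "thinking..."}, {"content": "Answer."}]
    )
)
```

The appeal of this shape is that the DeepSeek-specific field handling stays in the DeepSeek integration rather than in the shared OpenAI delta-conversion code.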

@ccurme ccurme self-assigned this Feb 4, 2025
@liuruibin

anything new?

@codergma (Author) commented Feb 7, 2025

Can we do this on the Deepseek integration, similar to how it's done for invoke?

rtn = super()._create_chat_result(response, generation_info)
if not isinstance(response, openai.BaseModel):
    return rtn
if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore
    rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
        response.choices[0].message.reasoning_content  # type: ignore
    )
return rtn

Yes, you are right, but it seems that there isn't a suitable way to do it at the moment.

@codergma (Author) commented Feb 7, 2025

Hi @baskaryan , could you please take a look at this PR?

Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature size:XS This PR changes 0-9 lines, ignoring generated files.
Projects
Status: In review
Development

Successfully merging this pull request may close issue #29513.

3 participants