Search prod ready #114
Conversation
Search tool
Just some comments on cleanup and structure. The core logic looks good, but I have some questions. I also want to test it out locally before full approval.
```diff
@@ -63,13 +84,31 @@ def think(cls, cache: Conversation, input: str):

     @classmethod
     @sentry_sdk.trace
-    def respond(cls, cache: Conversation, thought: str, input: str):
+    async def respond(cls, cache: Conversation, thought: str, input: str):
```
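To make the interface concern below concrete, here is a minimal sketch (class and argument names reduced from the diff; the body is a placeholder) of why switching `respond` to `async def` ripples through every call site:

```python
import asyncio

class Conversation:
    """Stand-in for the real Conversation cache type."""
    pass

class BloomChain:
    @classmethod
    async def respond(cls, cache: Conversation, thought: str, input: str) -> str:
        # async version: every call site must now await this coroutine
        return f"{thought}: {input}"

# a previously synchronous call site now needs an event loop (or an await)
result = asyncio.run(BloomChain.respond(Conversation(), "thought", "hi"))
```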
The inferences for the search tool are all done sequentially, so there isn't any added benefit to running it with async, and it changes the interface everywhere the `respond` method is used. For consistency we should remove those. I'm also wondering whether the logic for tools should be encoded within the `respond` method, or whether the `BloomChain` class should have a set of methods for tool support that we then add to the `chat` and `stream` methods.
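A rough sketch of the second option being suggested, with hypothetical method names and placeholder bodies (not the actual implementation): tool support lives in its own method on `BloomChain`, and `chat` opts into it, leaving `respond` tool-agnostic.

```python
class BloomChain:
    @classmethod
    def run_search_tool(cls, query: str) -> str:
        # hypothetical dedicated tool-support method, kept out of respond
        return f"search results for {query!r}"

    @classmethod
    def respond(cls, thought: str, input: str) -> str:
        # respond stays tool-agnostic and synchronous
        return f"{thought} -> {input}"

    @classmethod
    def chat(cls, input: str, use_search: bool = False) -> str:
        thought = "initial thought"
        if use_search:
            # tool output is folded into the thought before responding
            thought += f" ({cls.run_search_tool(input)})"
        return cls.respond(thought, input)
```

The same `use_search` hook could be added to a `stream` method without touching `respond` at all.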
Hm, I did include a synchronous implementation for the search tool, but right now it's set up to use the async implementation, which runs each inference asynchronously and is much faster in my testing.
I agree on the interface issue. Do you think using threads instead might be cleaner here?
I think for now, `nest_asyncio` should work to make the `search_tool` call synchronous. It seems like Langchain doesn't really work with threading yet.
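A minimal sketch of that suggestion, assuming a hypothetical async `search_tool` coroutine: a synchronous wrapper that uses a plain `asyncio.run` when no event loop is running, and falls back to `nest_asyncio.apply()` to permit a nested run when one is.

```python
import asyncio

async def search_tool(query: str) -> str:
    # stand-in for the real async search implementation
    await asyncio.sleep(0)
    return f"results for {query!r}"

def search_tool_sync(query: str) -> str:
    """Synchronous wrapper around the async search tool."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no loop running: a plain asyncio.run is enough
        return asyncio.run(search_tool(query))
    # already inside a running loop (e.g. called from async code):
    # patch it with nest_asyncio so a nested asyncio.run is allowed
    import nest_asyncio
    nest_asyncio.apply()
    return asyncio.run(search_tool(query))
```

This keeps the `respond` interface synchronous while still reusing the async implementation internally.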
I've also decoupled the search logic from respond, so feel free to let me know your thoughts on that too!
* implement autoscroll
  - Refactored imports in `page.tsx` to improve readability and maintainability.
  - Updated the type of the `input` ref to be more specific.
  - Added a new state and ref to track the scroll position of the message container.
  - Set up an event listener to update the state when the scroll position changes.
  - Made adjustments to the scroll position in certain event handlers.
* update scroll

Co-authored-by: Vineeth Voruganti <[email protected]>
- Update langchain to fix thoughts not appearing issue
- Switch to regular HTML Loader instead of playwright for simplicity and performance
- Update search prompt to fix citation formatting issues
- Downgrade remark to prevent Math rendering error (source: remarkjs/remark-math#89)
This issue is stale and out of date. Closing for now.
No description provided.