ChatMessageRetriever flexibility issues #72

Closed
anakin87 opened this issue Sep 4, 2024 · 2 comments

@anakin87
Member

anakin87 commented Sep 4, 2024

While preparing a notebook on Tools, I needed some form of short-term memory of the conversation.
I tried to use ChatMessageStore, ChatMessageWriter, and ChatMessageRetriever.
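
Roughly, this is the kind of memory loop I was after (a minimal sketch; the import paths and exact signatures are my assumptions based on haystack-experimental, the notebook has the actual code):

```python
# Sketch of short-term conversation memory with the experimental components.
# Import paths and constructor/run signatures are assumptions, not verbatim from the notebook.
from haystack.dataclasses import ChatMessage
from haystack_experimental.chat_message_stores.in_memory import InMemoryChatMessageStore
from haystack_experimental.components.retrievers import ChatMessageRetriever
from haystack_experimental.components.writers import ChatMessageWriter

store = InMemoryChatMessageStore()
writer = ChatMessageWriter(store)
retriever = ChatMessageRetriever(store)

# after each turn, persist the new messages...
writer.run(messages=[ChatMessage.from_user("What is the capital of France?")])

# ...and before the next turn, pull the history back out to build the prompt
history = retriever.run()["messages"]
```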

Unfortunately, I have encountered some roadblocks with the ChatMessageRetriever.
You can find my experiments in this notebook.


Main problems I have encountered:
ChatMessageRetriever has no input sockets (see the sketch after this list)

  • cannot receive any input from other components
  • cannot be placed in a conditional branch
  • must be placed at the beginning of a Pipeline
  • since it does not need inputs, it always runs
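
To make this concrete, here is roughly the only wiring that works today. This is an illustrative sketch: the component names, the output socket name, and the consuming component (ChatPromptBuilder) are my assumptions, not taken from the notebook.

```python
# Rough sketch of the current constraint: the retriever can only sit at the start.
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack_experimental.chat_message_stores.in_memory import InMemoryChatMessageStore
from haystack_experimental.components.retrievers import ChatMessageRetriever

pipe = Pipeline()
pipe.add_component("memory_retriever", ChatMessageRetriever(InMemoryChatMessageStore()))
pipe.add_component("prompt_builder", ChatPromptBuilder(variables=["memories", "query"]))

# memory_retriever exposes outputs but no inputs, so nothing can be routed into it,
# it cannot live behind a conditional branch, and it runs on every pipe.run() call
# whether or not the conversation history is actually needed downstream.
pipe.connect("memory_retriever.messages", "prompt_builder.memories")
```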

I am aware of the ongoing work on the ChatMessageRetriever (see deepset-ai/haystack#8258), but my impression is that most of these issues will remain.

Given that my use case is not very exotic, I think it would be beneficial to reflect on how to improve this experimental component and make it more flexible.

shadeMe transferred this issue from deepset-ai/haystack on Sep 4, 2024
@TuanaCelik
Contributor

@anakin87 @shadeMe - the discussion for this component is ready, so you can move this there and close the issues: #75

@shadeMe
Contributor

shadeMe commented Sep 6, 2024

Moved to the discussion.

shadeMe closed this as completed on Sep 6, 2024