
Better support for different types of responses and interaction #7

Open
dave1010 opened this issue Sep 17, 2023 · 1 comment

@dave1010 (Owner)

Clipea is designed to give you a quick one-line shell command to run. This should always be the priority.

With the shell integration, Clipea currently exits as soon as it gets the first shell command from the LLM.

But ideally Clipea would handle other responses better:

  • more than one command to run
  • explanatory text before or after the command

It should also handle other interaction modes:

  • a continue-chat mode, to refine the response without starting from scratch
  • a REPL-like mode, where the LLM continually monitors the shell (expensive due to context length!)
  • a multi-stage mode, which would let the LLM get input and/or parse output
  • a ReAct-style autonomous agent mode?

Need to consider how this works with shell integration. It would be easier if Clipea spawned subprocesses, but that's not ideal.

Clipea could save the LLM output to a file and then stream it again if you run ?? by itself.
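A minimal sketch of that idea, assuming a cache file under the user's home directory; the path and function names here are hypothetical, not Clipea's actual implementation:

```python
# Hypothetical sketch: cache the last LLM response so a bare `??`
# can replay it later. Cache path is an assumption.
import sys
from pathlib import Path

CACHE = Path.home() / ".cache" / "clipea" / "last_response.txt"

def save_response(text: str) -> None:
    """Persist the full LLM output for later replay."""
    CACHE.parent.mkdir(parents=True, exist_ok=True)
    CACHE.write_text(text)

def replay() -> None:
    """Stream the cached response again, e.g. when `??` is run alone."""
    if CACHE.exists():
        sys.stdout.write(CACHE.read_text())
    else:
        print("No cached response yet.", file=sys.stderr)
```

Streaming from a file like this would also make it cheap to re-show explanatory text without another LLM call.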

@vividfog

I asked ?? to write a simple template for a Docker build file and found that line-by-line output is indeed a different use case. As for explanations, perhaps this would help:

?? -v delete all files that are cache files from this subfolder

The -v flag would make it explain what it's about to do and why.
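A sketch of how the proposed flag might be parsed and turned into a prompt; the -v flag and the prompt wording are assumptions from this comment, not shipped Clipea behavior:

```python
# Hypothetical sketch of the proposed -v flag: when present, ask the
# LLM to explain before giving the command. Not Clipea's actual CLI.
import argparse

def parse_query(argv: list[str]) -> tuple[bool, str]:
    """Split a `?? [-v] <request>` invocation into (verbose, request)."""
    parser = argparse.ArgumentParser(prog="??", add_help=False)
    parser.add_argument("-v", action="store_true")
    parser.add_argument("words", nargs="+")
    args = parser.parse_args(argv)
    return args.v, " ".join(args.words)

def build_prompt(verbose: bool, request: str) -> str:
    """Ask for a bare command, or a command plus a short explanation."""
    if verbose:
        return ("Briefly explain what you are about to do and why, "
                f"then give one shell command: {request}")
    return f"Reply with a single shell command only: {request}"
```

For example, `parse_query(["-v", "delete", "cache", "files"])` would yield `(True, "delete cache files")`.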

There's a risk of going too far down the multiline and chat rabbit hole, though.

There's open-interpreter, which is basically that: a chatbot with execution capabilities. But the same thing that makes it powerful also makes it needy. Embracing the premise of being a Really Smart Autocomplete is what brought me here, so personally I'd like to see all extensions remain true to that premise.
