As a user, I switch between LLMs often (including between Ollama models and external OpenAI and Anthropic models), and I currently find it confusing, and often impossible, to predict which of my LLMs will be used for a given thread.
- The LLM can be set in the instance settings
- The LLM can be set in the workspace settings
- The LLM can be set in the workspace settings (but specific to agents)
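To illustrate the confusion, here is a rough sketch of how I would *expect* these layers to resolve, with a proposed per-thread override taking precedence. All names here (`Settings`, `resolveModel`, the field names) are my own assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical model-resolution order: most specific setting wins,
// falling back to the instance-wide default. Field names are assumptions.
type Settings = {
  instanceModel: string;   // instance settings (always present)
  workspaceModel?: string; // workspace settings
  agentModel?: string;     // workspace settings, specific to agents
  threadModel?: string;    // proposed: per-thread override
};

function resolveModel(s: Settings, forAgent: boolean): string {
  if (s.threadModel) return s.threadModel;
  if (forAgent && s.agentModel) return s.agentModel;
  if (s.workspaceModel) return s.workspaceModel;
  return s.instanceModel;
}
```

If something like this precedence were documented (or shown in the UI), it would be much easier to predict which model a thread will use.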
As far as I can tell, if I change the chat agent in my workspace, existing threads remain unaffected, which feels like a bug to me. I would expect the thread's context to be carried over (where possible) to the newly chosen LLM.
Ideally, as in apps like ChatGPT or TypingMind, the LLM could be chosen inside the chat thread itself. For example, here are a few UX experiences I find very beneficial:
Just my 2 cents. Keep up the great work!