Replies: 6 comments
-
I think that's okay if it's the same API. Let's see if @gptlang and @deathbeam agree on this.
-
From what I understand, support for other models would depend on an API-level translation layer. In that case, I don't think we should hard-code the models or have separate URL fields. Just allow user configuration of the completion URL and model. For authentication, maybe also allow the user to send a custom header. Including code in the plugin itself that requires the user to self-host or rely on a third-party service is not worth it.
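To illustrate what such an API-level translation layer involves, here is a minimal sketch that maps an OpenAI-style chat request onto a Gemini `generateContent`-style payload. The field names follow Gemini's public API shape, but treat this as an illustrative sketch, not an exhaustive mapping:

```python
def to_gemini(openai_request):
    """Translate an OpenAI-style chat request into a Gemini
    generateContent-style payload.

    Sketch only: it handles roles and plain-text content, nothing else.
    """
    contents = []
    system_parts = []
    for msg in openai_request["messages"]:
        if msg["role"] == "system":
            # Gemini carries system prompts in a separate field.
            system_parts.append({"text": msg["content"]})
            continue
        contents.append({
            # Gemini uses "model" where OpenAI uses "assistant".
            "role": "model" if msg["role"] == "assistant" else "user",
            "parts": [{"text": msg["content"]}],
        })
    payload = {"contents": contents}
    if system_parts:
        payload["systemInstruction"] = {"parts": system_parts}
    return payload
```

The same pattern (plus a reverse mapping for the response) is all a provider adapter needs, which is why a configurable URL plus a custom auth header covers most backends.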
-
Yea, configuration for
-
Yes, I implemented a self-hosted GPT server; you can add an auth token with the same logic.
-
It would be fantastic to find a way to incorporate this into upstream CopilotChat.nvim. I understand the plugin's name makes it sound GitHub-Copilot-specific, but supporting other LLMs would extend its utility to a wider audience and potentially attract more contributors.
-
Add Support for Gemini and Other AI Providers
Just add support for other AI providers in Copilot Chat, like Gemini or Groq. Does anybody need this?
I also implemented a server with FastAPI that supports the same API interface:
https://github.com/bruceunx/CopilotChat.nvim
Is it OK to send a PR, or should I leave it here?
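For anyone curious what "a server that supports the same API interface" means in practice: the linked fork uses FastAPI, but the shape of the idea fits in a short stdlib-only sketch. The server below exposes an OpenAI-style `/v1/chat/completions` endpoint; the echo response is a stand-in for where a real proxy would forward the translated request to Gemini, Groq, etc. (endpoint path and payload shape assumed from the OpenAI convention, not taken from the fork):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatCompletionsHandler(BaseHTTPRequestHandler):
    """Accepts OpenAI-style POST /v1/chat/completions requests."""

    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # A real proxy would translate `request` for the configured
        # provider, forward it, and translate the answer back.
        # Here we just echo the last message.
        answer = request["messages"][-1]["content"]
        reply = {"choices": [{"message": {"role": "assistant",
                                          "content": f"echo: {answer}"}}]}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(port=0):
    """Start the server on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), ChatCompletionsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Because the plugin only sees a completion URL speaking this interface, the provider behind the server can be swapped without any plugin changes, which is the whole appeal of the approach.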