[Enhancement]: Support GitHub Copilot chat #380
Comments
I just checked the latest GitHub Copilot language server agent.js; it looks like it now supports some chat-related requests, so it seems to be possible now. But the language server code is obfuscated, so I can't guarantee that it can be done right now.
Thanks for getting back so quickly.
There actually is a URL for the API, but I am not sure it's appropriate to include it in the app since it's not publicly available: https://copilot-proxy.githubusercontent.com/v1/chat/completions. For anyone interested, you can always try to wrap it in another API that is OpenAI compatible. I honestly don't know how the bearer token is generated. The auth token is stored in the GitHub Copilot/Support directory inside the app's Application Support folder; the files are hidden. For the actual request that is sent, MITM the requests to that domain. You may have to install "Mac CA VSCode" to allow Node to read certificates in your keychain.
All right, the bearer token comes from https://api.github.com/copilot_internal/v2/token. You can send your auth token in the Authorization header field.
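For illustration, here is a minimal Swift sketch of that token exchange, assuming the locally stored auth token is sent as `token <auth token>` in the Authorization header and that the response JSON carries the bearer token in a `token` field. Both assumptions come from the MITM'd traffic described above, not from any documented API.

```swift
import Foundation

// Hypothetical sketch of the bearer-token exchange described above.
// The header value format and the response field name are assumptions.
struct CopilotTokenResponse: Decodable {
    let token: String  // assumed field name for the bearer token
}

func fetchCopilotBearerToken(authToken: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.github.com/copilot_internal/v2/token")!)
    // Assumption: the locally stored auth token goes in the Authorization header.
    request.setValue("token \(authToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(CopilotTokenResponse.self, from: data).token
}
```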
Interesting... I have not tried it out yet; however, if the API is private and you need to reverse-engineer it with MITM to figure it out, it might not be worth the trouble, since the API could change at any point. Do you know if a public API will be available at some point?
I don't know. But I don't think that they will release an API endpoint for that, since all the heavy work of Copilot X Chat happens inside the extension.
Guess you are right... I could not help myself, so I tried it out anyway... The response from https://api.githubcopilot.com/chat/completions does not make a lot of sense to me, though.
This is the stream result. Every data → choices[0] → delta contains a token of the final result. It actually looks very much like the OpenAI output, so I guess it will not take too much effort to wrap it behind an OpenAI-compatible API.
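A rough Swift sketch of how such a stream could be accumulated, assuming OpenAI-style server-sent events where each `data:` line carries `choices[0].delta.content` and the stream ends with `data: [DONE]`. The field names are assumptions based on the similarity to the OpenAI format noted above.

```swift
import Foundation

// Minimal sketch, assuming OpenAI-style streaming chunks:
// each line looks like `data: {"choices":[{"delta":{"content":"..."}}]}`.
struct StreamChunk: Decodable {
    struct Choice: Decodable {
        struct Delta: Decodable { let content: String? }
        let delta: Delta
    }
    let choices: [Choice]
}

func accumulate(streamLines: [String]) -> String {
    var text = ""
    let decoder = JSONDecoder()
    for line in streamLines where line.hasPrefix("data: ") {
        let payload = String(line.dropFirst("data: ".count))
        guard payload != "[DONE]",
              let data = payload.data(using: .utf8),
              let chunk = try? decoder.decode(StreamChunk.self, from: data)
        else { continue }
        // Concatenate the per-chunk delta tokens into the final message.
        text += chunk.choices.first?.delta.content ?? ""
    }
    return text
}
```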
Ah ok. This is pretty new territory for me... In MITM the responses do seem to contain the actual text, though...
This is the request body; you have to send the whole conversation to the LLM to generate the result.
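As a hedged example, the body presumably looks like an OpenAI-style chat completion request, with every prior turn resent on each call. The exact fields here (the roles, the `stream` flag) are assumptions drawn from the OpenAI format, not a documented contract.

```swift
import Foundation

// Sketch of an OpenAI-style request body; field names are assumptions.
func makeRequestBody() throws -> Data {
    let body: [String: Any] = [
        "messages": [
            // The whole conversation is resent on every request.
            ["role": "system", "content": "You are a coding assistant."],
            ["role": "user", "content": "How do I reverse a string in Swift?"],
        ],
        "stream": true,
    ]
    return try JSONSerialization.data(withJSONObject: body)
}
```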
Argh... of course... I have no idea how I confused the request body with the response body. I need more coffee! ☕
BTW, the only "special" header required in the call is
This issue is stale because it has been open for 30 days with no activity. |
This issue was closed because it has been inactive for 14 days since being marked as stale. |
I just ran into this issue. Do you know if there's any official way of achieving this, @intitni, or at least have an idea of how complicated it would be to add support for it and what the possible limitations are?
@BalestraPatrick As mentioned in #405, the plan is to implement it (if possible) as an extension for Copilot for Xcode. The investigation will start after the custom chat panel and custom chat service are implemented. The good news is that the language server is a JS file that is, at least, readable. The bad news is that the file is obfuscated. Currently, I can only tell that there are some new request types prefixed with "conversation", but they don't seem sufficient to support a chat service (?).
The request types are not even similar to those of the OpenAI API, so I can't tell whether it is possible. The situation with Codeium is better since they have open-sourced their Visual Studio extension.
Looks like someone has successfully converted GitHub Copilot Chat (the private API behind it) into an OpenAI-compatible API: https://github.com/aaamoon/copilot-gpt4-service. Use at your own risk.
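If you deploy that service yourself, calling it should look like calling any OpenAI-compatible endpoint. A minimal Swift sketch, assuming the service listens on http://localhost:8080 and exposes /v1/chat/completions; the host, port, and path are assumptions, so check the service's own documentation.

```swift
import Foundation

// Minimal sketch, assuming a locally deployed OpenAI-compatible service.
// The host, port, and path are assumptions; adjust to your deployment.
func askLocalCopilotProxy(question: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "http://localhost:8080/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "messages": [["role": "user", "content": question]]
    ])
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```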
I have deployed copilot-gpt4-service and it is running. How should I set up the chat model in CopilotForXcode?
@moonclock I don't think you need an API key. Please try selecting No API Key and leaving the model name blank.
I set it up as follows (I chose GPT 4.0 as the model name, because the model name cannot be empty), clicked Test, and the server returned 401.
@moonclock Please check the FAQ section in that repo for details.
👍 Thanks for your reply, I'll try again.
I tried to follow the links provided here, but they led nowhere, since this repo has been removed: https://github.com/aaamoon/copilot-gpt4-service. In JetBrains' AppCode the integration works seamlessly. When I purchased the plus version, I hoped there would soon be something similar for Xcode. Using a ChatGPT API key is not an option for many corporate users, since this runs in a private instance and IT won't provide an API key. We are also not allowed to use our private ChatGPT accounts.
@brainray I never said that GitHub Copilot Chat "will" be available soon. It's not as simple as them providing an API and me calling it somewhere.
You can still find the secret behind copilot-gpt4-service in this issue if you want to use it urgently. If your company allows Codeium, we are working with Codeium to bring Codeium chat to the app, too.
@intitni Thanks for the quick answer and clarification. Well, strange behaviour on their part, but okay, it is what it is. Thanks for all of your work anyway ❤️
We now have a proof-of-concept implementation in 0.33.5 beta. |
Before Requesting
What feature do you want?
It would be great to have support for the new Copilot chat, which is part of Copilot X.
Update
A proof-of-concept implementation has been available since 0.33.5 beta, but it still needs more work.