Add support for Copilot extension agents
https://docs.github.com/en/copilot/building-copilot-extensions/about-building-copilot-extensions

- Change @buffer and @buffers to #buffer and #buffers
- Add support for @agent agent selection
- Add support for config.agent for specifying default agent
- Add :CopilotChatAgents for listing agents (and showing selected agent)
- Remove :CopilotChatModel, instead show which model is selected in :CopilotChatModels
- Remove early errors from curl so we can actually get response body for the error
- Add info to README about models, agents and contexts
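
As a quick illustration of the syntax change (a hedged sketch; `ask()` is the plugin's prompt API and the question text is illustrative):

```lua
-- Old: '@buffer' / '@buffers' selected chat context.
-- New: '#' selects context, '@' selects an agent.
require('CopilotChat').ask('#buffer What does this code do?')
```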

Closes #466

Signed-off-by: Tomas Slusny <[email protected]>
deathbeam committed Nov 16, 2024
1 parent a1d97c7 commit c98301c
Showing 5 changed files with 308 additions and 117 deletions.
42 changes: 38 additions & 4 deletions README.md
@@ -110,7 +110,7 @@ Verify "[Copilot chat in the IDE](https://github.com/settings/copilot)" is enabled
- `:CopilotChatLoad <name>?` - Load chat history from file
- `:CopilotChatDebugInfo` - Show debug information
- `:CopilotChatModels` - View and select available models. This is reset when a new instance is made. Please set your model in `init.lua` for persistence.
- `:CopilotChatModel` - View the currently selected model.
- `:CopilotChatAgents` - View and select available agents. This is reset when a new instance is made. Please set your agent in `init.lua` for persistence.

#### Commands coming from default prompts

@@ -122,6 +122,39 @@ Verify "[Copilot chat in the IDE](https://github.com/settings/copilot)" is enabled
- `:CopilotChatTests` - Please generate tests for my code
- `:CopilotChatCommit` - Write commit message for the change with commitizen convention

### Models, Agents and Contexts

#### Models

You can list available models with the `:CopilotChatModels` command. The model determines which AI model answers the chat.
The default models are:

- `gpt-4o` - This is the default Copilot Chat model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. GPT-4o is hosted on Azure.
- `claude-3.5-sonnet` - This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. GitHub Copilot uses Claude 3.5 Sonnet hosted on Amazon Web Services.
- `o1-preview` - This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the gpt-4o model. You can make 10 requests to this model per day. o1-preview is hosted on Azure.
- `o1-mini` - This is the faster version of the o1-preview model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. You can make 50 requests to this model per day. o1-mini is hosted on Azure.

For more information about models, see [here](https://docs.github.com/en/copilot/using-github-copilot/asking-github-copilot-questions-in-your-ide#ai-models-for-copilot-chat).
You can use more models from [here](https://github.com/marketplace/models) by using the `@models` agent from [here](https://github.com/marketplace/models-github) (example: `@models Using Mistral-small, what is 1 + 11`).
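
For example, to persist a non-default model across sessions (a minimal sketch; the ID must match one listed by `:CopilotChatModels`):

```lua
-- init.lua: pin the chat model so it survives new chat instances
require('CopilotChat').setup({
  model = 'claude-3.5-sonnet', -- any model ID from :CopilotChatModels
})
```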

#### Agents

You can list available agents with the `:CopilotChatAgents` command. The agent determines which assistant (Copilot itself or an installed extension) answers the chat.
You can select an agent in the prompt by typing `@` followed by the agent name.
The default "noop" agent is `copilot`.

For more information about extension agents, see [here](https://docs.github.com/en/copilot/using-github-copilot/using-extensions-to-integrate-external-tools-with-copilot-chat).
You can install more agents from [here](https://github.com/marketplace?type=apps&copilot_app=true).
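
For example (a sketch; `perplexityai` is a hypothetical agent slug, check `:CopilotChatAgents` for what is actually installed):

```lua
-- init.lua: choose the default agent used when a prompt has no '@' mention
require('CopilotChat').setup({
  agent = 'copilot', -- the default "noop" agent
})

-- Or pick an agent for a single prompt with '@' (hypothetical slug)
require('CopilotChat').ask('@perplexityai What is new in Neovim 0.10?')
```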

#### Contexts

Contexts determine what editor content is embedded in the chat request.
You can set the context in the prompt by typing `#` followed by the context name, as shown in the sketch after the list below.
Supported contexts are:

- `buffers` - Includes all open buffers in chat context
- `buffer` - Includes only the current buffer in chat context
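
For example (a sketch, assuming the `ask()` helper from the API section below):

```lua
-- '#buffers' embeds every open buffer in the request, '#buffer' just the current one
require('CopilotChat').ask('#buffers How do these files fit together?')

-- Or set a default context once so every prompt includes the current buffer
require('CopilotChat').setup({ context = 'buffer' })
```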

### API

```lua
@@ -202,8 +235,10 @@ Also see [here](/lua/CopilotChat/config.lua):
allow_insecure = false, -- Allow insecure server connections

system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
model = 'gpt-4o', -- GPT model to use, see ':CopilotChatModels' for available models
temperature = 0.1, -- GPT temperature
model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models
agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via #).
temperature = 0.1, -- GPT result temperature

question_header = '## User ', -- Header to use for user questions
answer_header = '## Copilot ', -- Header to use for AI answers
Expand All @@ -218,7 +253,6 @@ Also see [here](/lua/CopilotChat/config.lua):
clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
highlight_selection = true, -- Highlight selection in the source buffer when in the chat window

context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
callback = nil, -- Callback to use when ask response is received

10 changes: 6 additions & 4 deletions lua/CopilotChat/config.lua
@@ -69,6 +69,8 @@ local select = require('CopilotChat.select')
---@field allow_insecure boolean?
---@field system_prompt string?
---@field model string?
---@field agent string?
---@field context string?
---@field temperature number?
---@field question_header string?
---@field answer_header string?
@@ -80,7 +82,6 @@ local select = require('CopilotChat.select')
---@field auto_insert_mode boolean?
---@field clear_chat_on_new_prompt boolean?
---@field highlight_selection boolean?
---@field context string?
---@field history_path string?
---@field callback fun(response: string, source: CopilotChat.config.source)?
---@field selection nil|fun(source: CopilotChat.config.source):CopilotChat.config.selection?
@@ -94,8 +95,10 @@ return {
allow_insecure = false, -- Allow insecure server connections

system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
model = 'gpt-4o', -- GPT model to use, see ':CopilotChatModels' for available models
temperature = 0.1, -- GPT temperature
model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models
agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via #).
temperature = 0.1, -- GPT result temperature

question_header = '## User ', -- Header to use for user questions
answer_header = '## Copilot ', -- Header to use for AI answers
@@ -110,7 +113,6 @@ return {
clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
highlight_selection = true, -- Highlight selection

context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
callback = nil, -- Callback to use when ask response is received

116 changes: 109 additions & 7 deletions lua/CopilotChat/copilot.lua
@@ -13,6 +13,7 @@
---@field end_row number?
---@field system_prompt string?
---@field model string?
---@field agent string?
---@field temperature number?
---@field on_progress nil|fun(response: string):nil

@@ -29,6 +30,7 @@
---@field load fun(self: CopilotChat.Copilot, name: string, path: string):table
---@field running fun(self: CopilotChat.Copilot):boolean
---@field list_models fun(self: CopilotChat.Copilot):table
---@field list_agents fun(self: CopilotChat.Copilot):table

local async = require('plenary.async')
local log = require('plenary.log')
@@ -340,6 +342,7 @@ local Copilot = class(function(self, proxy, allow_insecure)
self.sessionid = nil
self.machineid = machine_id()
self.models = nil
self.agents = nil
self.claude_enabled = false
self.current_job = nil
self.request_args = {
@@ -362,9 +365,6 @@ local Copilot = class(function(self, proxy, allow_insecure)
'--no-keepalive', -- Don't reuse connections
'--tcp-nodelay', -- Disable Nagle's algorithm for faster streaming
'--no-buffer', -- Disable output buffering for streaming
'--fail', -- Return error on HTTP errors (4xx, 5xx)
'--silent', -- Don't show progress meter
'--show-error', -- Show errors even when silent
},
}
end)
@@ -461,6 +461,39 @@ function Copilot:fetch_models()
return out
end

function Copilot:fetch_agents()
if self.agents then
return self.agents
end

local response, err = curl_get(
'https://api.githubcopilot.com/agents',
vim.tbl_extend('force', self.request_args, {
headers = self:authenticate(),
})
)

if err then
error(err)
end

if response.status ~= 200 then
error('Failed to fetch agents: ' .. tostring(response.status))
end

local agents = vim.json.decode(response.body)['agents']
local out = {}
for _, agent in ipairs(agents) do
out[agent['slug']] = agent
end

out['copilot'] = { name = 'Copilot', default = true }

log.info('Agents fetched')
self.agents = out
return out
end

function Copilot:enable_claude()
if self.claude_enabled then
return true
@@ -510,6 +543,7 @@ function Copilot:ask(prompt, opts)
local selection = opts.selection or {}
local system_prompt = opts.system_prompt or prompts.COPILOT_INSTRUCTIONS
local model = opts.model or 'gpt-4o-2024-05-13'
local agent = opts.agent or 'copilot'
local temperature = opts.temperature or 0.1
local on_progress = opts.on_progress
local job_id = uuid()
@@ -522,10 +556,21 @@
log.debug('Filename: ' .. filename)
log.debug('Filetype: ' .. filetype)
log.debug('Model: ' .. model)
log.debug('Agent: ' .. agent)
log.debug('Temperature: ' .. temperature)

local models = self:fetch_models()
local capabilities = models[model] and models[model].capabilities
local agents = self:fetch_agents()
local agent_config = agents[agent]
if not agent_config then
error('Agent not found: ' .. agent)
end
local model_config = models[model]
if not model_config then
error('Model not found: ' .. model)
end

local capabilities = model_config.capabilities
local max_tokens = capabilities.limits.max_prompt_tokens -- FIXME: Is max_prompt_tokens the right limit?
local max_output_tokens = capabilities.limits.max_output_tokens
local tokenizer = capabilities.tokenizer
@@ -582,6 +627,7 @@
local errored = false
local finished = false
local full_response = ''
local full_references = ''

local function finish_stream(err, job)
if err then
@@ -631,6 +677,22 @@
return
end

if content.copilot_references then
for _, reference in ipairs(content.copilot_references) do
local metadata = reference.metadata
if metadata and metadata.display_name and metadata.display_url then
full_references = full_references
.. '\n'
.. '['
.. metadata.display_name
.. ']'
.. '('
.. metadata.display_url
.. ')'
end
end
end

if not content.choices or #content.choices == 0 then
return
end
@@ -668,8 +730,13 @@
self:enable_claude()
end

local url = 'https://api.githubcopilot.com/chat/completions'
if not agent_config.default then
url = 'https://api.githubcopilot.com/agents/' .. agent .. '?chat'
end

local response, err = curl_post(
'https://api.githubcopilot.com/chat/completions',
url,
vim.tbl_extend('force', self.request_args, {
headers = self:authenticate(),
body = temp_file(body),
@@ -694,6 +761,25 @@
end

if response.status ~= 200 then
if response.status == 401 then
local ok, content = pcall(vim.json.decode, response.body, {
luanil = {
object = true,
array = true,
},
})

if ok and content.authorize_url then
error(
'Failed to authenticate. Visit following url to authorize '
.. content.slug
.. ':\n'
.. content.authorize_url
)
return
end
end

error('Failed to get response: ' .. tostring(response.status) .. '\n' .. response.body)
return
end
@@ -708,6 +794,14 @@
return
end

if full_references ~= '' then
full_references = '\n\n**`References:`**' .. full_references
full_response = full_response .. full_references
if on_progress then
on_progress(full_references)
end
end

log.trace('Full response: ' .. full_response)
log.debug('Last message: ' .. vim.inspect(last_message))

@@ -727,10 +821,10 @@
end

--- List available models
---@return table
function Copilot:list_models()
local models = self:fetch_models()

-- Group models by version and shortest ID
local version_map = {}
for id, model in pairs(models) do
local version = model.version
@@ -739,10 +833,18 @@
end
end

-- Map to IDs and sort
local result = vim.tbl_values(version_map)
table.sort(result)
return result
end

--- List available agents
---@return table
function Copilot:list_agents()
local agents = self:fetch_agents()

local result = vim.tbl_keys(agents)
table.sort(result)
return result
end

