CopilotChat.nvim is a Neovim plugin that brings GitHub Copilot Chat capabilities directly into your editor. It provides:
- 🤖 GitHub Copilot Chat integration with official model support (GPT-4o, Claude 3.7 Sonnet, Gemini 2.0 Flash, and more)
- 💻 Rich workspace context powered by smart embeddings system
- 🔒 Explicit data sharing - only sends what you specifically request, either as a resource or a selection (visual selection by default)
- 🔌 Modular provider architecture supporting both official and custom LLM backends (Ollama, Gemini, Mistral.ai and more)
- 📝 Interactive chat UI with completion, diffs and quickfix integration
- 🎯 Powerful prompt system with composable templates and sticky prompts
- 🔄 Extensible function calling system for granular workspace understanding (buffers, files, git diffs, URLs, and more)
- ⚡ Efficient token usage with tiktoken token counting and history management
- Neovim 0.10.0+ - Older versions are not officially supported
- curl - Version 8.0.0+ recommended for best compatibility
- Copilot chat in the IDE enabled in GitHub settings
Warning
For Neovim < 0.11.0, add `noinsert` or `noselect` to your `completeopt`, otherwise chat autocompletion will not work. For the best autocompletion experience, also add `popup` to your `completeopt` (even on Neovim 0.11.0+).
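One way to apply this in an `init.lua` (a minimal sketch, assuming you configure options in Lua):

```lua
-- Required on Neovim < 0.11.0 so chat autocompletion works,
-- and 'popup' improves the completion menu on all versions
vim.opt.completeopt:append('noinsert')
vim.opt.completeopt:append('popup')
```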
- `tiktoken_core` - For accurate token counting
  - Arch Linux: Install `luajit-tiktoken-bin` or `lua51-tiktoken-bin` from AUR
  - Via luarocks: `sudo luarocks install --lua-version 5.1 tiktoken_core`
  - Manual: Download from lua-tiktoken releases and save as `tiktoken_core.so` in your Lua path
- `git` - For git diff context features
- `ripgrep` - For improved search performance
- `lynx` - For improved URL context features
For various plugin pickers to work correctly, you need to replace `vim.ui.select` with your desired picker (as the default `vim.ui.select` is very basic). Here are some examples:

- fzf-lua - call `require('fzf-lua').register_ui_select()`
- telescope - set up the `telescope-ui-select.nvim` plugin
- snacks.picker - enable the `ui_select` config
- mini.pick - set `vim.ui.select = require('mini.pick').ui_select`
Plugin features that use the picker:

- `:CopilotChatPrompts` - for selecting prompts
- `:CopilotChatModels` - for selecting models
- `#<function>:<input>` - for selecting function input
```lua
return {
  {
    "CopilotC-Nvim/CopilotChat.nvim",
    dependencies = {
      { "github/copilot.vim" }, -- or zbirenbaum/copilot.lua
      { "nvim-lua/plenary.nvim", branch = "master" }, -- for curl, log and async functions
    },
    build = "make tiktoken", -- Only on macOS or Linux
    opts = {
      -- See Configuration section for options
    },
    -- See Commands section for default commands if you want to lazy load on them
  },
}
```
See @jellydn for configuration
Similar to the lazy setup, you can use the following configuration:
```vim
call plug#begin()
Plug 'github/copilot.vim'
Plug 'nvim-lua/plenary.nvim'
Plug 'CopilotC-Nvim/CopilotChat.nvim'
call plug#end()

lua << EOF
require("CopilotChat").setup {
  -- See Configuration section for options
}
EOF
```
- Put the files in the right place:

```shell
mkdir -p ~/.config/nvim/pack/copilotchat/start
cd ~/.config/nvim/pack/copilotchat/start
git clone https://github.com/github/copilot.vim
git clone https://github.com/nvim-lua/plenary.nvim
git clone https://github.com/CopilotC-Nvim/CopilotChat.nvim
```
- Add to your configuration (e.g. `~/.config/nvim/init.lua`):

```lua
require("CopilotChat").setup {
  -- See Configuration section for options
}
```
See @deathbeam for configuration
Commands are used to control the chat interface:
| Command | Description |
|---|---|
| `:CopilotChat <input>?` | Open chat with optional input |
| `:CopilotChatOpen` | Open chat window |
| `:CopilotChatClose` | Close chat window |
| `:CopilotChatToggle` | Toggle chat window |
| `:CopilotChatStop` | Stop current output |
| `:CopilotChatReset` | Reset chat window |
| `:CopilotChatSave <name>?` | Save chat history |
| `:CopilotChatLoad <name>?` | Load chat history |
| `:CopilotChatPrompts` | View/select prompt templates |
| `:CopilotChatModels` | View/select available models |
| `:CopilotChat<PromptName>` | Use specific prompt template |
Default mappings in the chat interface:
| Insert | Normal | Action |
|---|---|---|
| `<Tab>` | - | Trigger/accept completion menu for tokens |
| `<C-c>` | `q` | Close the chat window |
| `<C-l>` | `<C-l>` | Reset and clear the chat window |
| `<C-s>` | `<CR>` | Submit the current prompt |
| - | `grr` | Toggle sticky prompt for line under cursor |
| - | `grx` | Clear all sticky prompts in prompt |
| `<C-y>` | `<C-y>` | Accept nearest diff |
| - | `gj` | Jump to section of nearest diff |
| - | `gqa` | Add all answers from chat to quickfix list |
| - | `gqd` | Add all diffs from chat to quickfix list |
| - | `gy` | Yank nearest diff to register |
| - | `gd` | Show diff between source and nearest diff |
| - | `gi` | Show info about current chat |
| - | `gc` | Show current chat context |
| - | `gh` | Show help message |
The mappings can be customized by setting the `mappings` table in your configuration. Each mapping can have:

- `normal`: Key for normal mode
- `insert`: Key for insert mode
For example, to change the submit prompt mapping or show_diff full diff option:
```lua
{
  mappings = {
    submit_prompt = {
      normal = '<Leader>s',
      insert = '<C-s>'
    },
    show_diff = {
      full_diff = true
    }
  }
}
```
Predefined prompt templates for common tasks. Reference them with `/PromptName` in chat, use `:CopilotChat<PromptName>`, or use `:CopilotChatPrompts` to select them:
| Prompt | Description |
|---|---|
| `Explain` | Write an explanation for the selected code |
| `Review` | Review the selected code |
| `Fix` | Rewrite the code with bug fixes |
| `Optimize` | Optimize code for performance and readability |
| `Docs` | Add documentation comments to the code |
| `Tests` | Generate tests for the code |
| `Commit` | Write commit message using commitizen convention |
Define your own prompts in the configuration:
```lua
{
  prompts = {
    MyCustomPrompt = {
      prompt = 'Explain how it works.',
      system_prompt = 'You are very good at explaining stuff',
      mapping = '<leader>ccmc',
      description = 'My custom prompt description',
    }
  }
}
```
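A custom prompt defined this way can be used like the built-in templates. For the example name `MyCustomPrompt` above, that would be its generated command:

```vim
:CopilotChatMyCustomPrompt
```

or typing `/MyCustomPrompt` directly in the chat prompt.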
System prompts define the AI model's behavior. Reference them with `/PROMPT_NAME` in chat:
| Prompt | Description |
|---|---|
| `COPILOT_BASE` | All prompts should be built on top of this |
| `COPILOT_INSTRUCTIONS` | Base instructions |
| `COPILOT_EXPLAIN` | Adds coding tutor behavior |
| `COPILOT_REVIEW` | Adds code review behavior with diagnostics |
Define your own system prompts in the configuration (similar to `prompts`):

```lua
{
  prompts = {
    Yarrr = {
      system_prompt = 'You are fascinated by pirates, so please respond in pirate speak.',
    },
    NiceInstructions = {
      system_prompt = 'You are a nice coding tutor, so please respond in a friendly and helpful manner.' .. require('CopilotChat.config.prompts').COPILOT_BASE.system_prompt,
    }
  }
}
```
Sticky prompts persist across chat sessions. They're useful for maintaining model or resource selection. They work as follows:
- Prefix text with `>` using markdown blockquote syntax
- The prompt will be copied at the start of every new chat prompt
- Edit sticky prompts freely while maintaining the `>` prefix
Examples:
```markdown
> #glob:`*.lua`
> List all files in the workspace

> @models Using Mistral-small
> What is 1 + 11
```
You can also set default sticky prompts in the configuration:
```lua
{
  sticky = {
    '#glob:*.lua',
  }
}
```
You can control which AI model to use in three ways:
- List available models with `:CopilotChatModels`
- Set the model in your prompt with `$model_name`
- Configure the default model via the `model` config key
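For example, the default model can be set in the config (the model name below is illustrative; list the real ones with `:CopilotChatModels`):

```lua
require("CopilotChat").setup({
  model = 'gpt-4o', -- illustrative; check :CopilotChatModels for what your account offers
})
```

A single prompt can still override this inline by starting with `$model_name`.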
For supported models, see:
- Copilot Chat Models
- GitHub Marketplace Models (experimental, limited usage)
Functions provide additional information and behaviour to the chat.
Tools can be organized into groups by setting the `group` property. Tools assigned to a group are not automatically made available to the LLM; they must be explicitly activated.

To use grouped tools in your prompt, include `@group_name` in your message. This allows the LLM to access and use all tools in that group during the current interaction.

Add tools using the `#tool_name[:input]` syntax:
| Function | Input Support | Description |
|---|---|---|
| `buffer` | ✓ (name) | Retrieves content from a specific buffer |
| `buffers` | ✓ (scope) | Fetches content from multiple buffers (listed/visible) |
| `diagnostics` | ✓ (scope) | Collects code diagnostics (errors, warnings) |
| `file` | ✓ (path) | Reads content from a specified file path |
| `gitdiff` | ✓ (sha) | Retrieves git diff information (unstaged/staged/sha) |
| `gitstatus` | - | Retrieves git status information |
| `glob` | ✓ (pattern) | Lists filenames matching a pattern in workspace |
| `grep` | ✓ (pattern) | Searches for a pattern across files in workspace |
| `quickfix` | - | Includes content of files in quickfix list |
| `register` | ✓ (register) | Provides access to specified Vim register |
| `url` | ✓ (url) | Fetches content from a specified URL |
Examples:
```markdown
> #buffer:init.lua
> #buffers:visible
> #diagnostics:current
> #file:path/to/file.js
> #gitdiff:staged
> #glob:`**/*.lua`
> #grep:`function setup`
> #quickfix
> #register:+
> #url:https://example.com
```
Define your own functions in the configuration with input handling and schema:
```lua
{
  functions = {
    birthday = {
      description = "Retrieves birthday information for a person",
      schema = {
        type = 'object',
        required = { 'name' },
        properties = {
          name = {
            type = 'string',
            enum = { 'Alice', 'Bob', 'Charlie' },
            description = "Person's name",
          },
        },
      },
      resolve = function(input)
        return {
          {
            type = 'text',
            data = input.name .. ' birthday info',
          }
        }
      end
    }
  }
}
```
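With a custom function like the `birthday` example above registered, it can be invoked from chat using the same `#tool_name:input` syntax described earlier:

```markdown
> #birthday:Alice
When is her birthday?
```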
For external function implementations, see the discussion page.
Selections determine the source content for chat interactions.
Available selections are located in `local select = require("CopilotChat.select")`:
| Selection | Description |
|---|---|
| `visual` | Current visual selection |
| `buffer` | Current buffer content |
| `line` | Current line content |
| `unnamed` | Unnamed register (last deleted/changed/yanked content) |
You can set a default selection in the configuration:
```lua
{
  -- Uses visual selection or falls back to buffer
  selection = function(source)
    return select.visual(source) or select.buffer(source)
  end
}
```
Providers are modules that implement integration with different AI providers.
- `copilot` - Default GitHub Copilot provider used for chat
- `github_models` - Provider for GitHub Marketplace models
- `copilot_embeddings` - Provider for Copilot embeddings, not standalone
Custom providers can implement these methods:
```lua
{
  -- Optional: Disable provider
  disabled?: boolean,
  -- Optional: Embeddings provider name or function
  embed?: string|function,
  -- Optional: Get extra request headers with optional expiration time
  get_headers?(): table<string,string>, number?,
  -- Optional: Get API endpoint URL
  get_url?(opts: CopilotChat.Provider.options): string,
  -- Optional: Prepare request input
  prepare_input?(inputs: table<CopilotChat.Provider.input>, opts: CopilotChat.Provider.options): table,
  -- Optional: Prepare response output
  prepare_output?(output: table, opts: CopilotChat.Provider.options): CopilotChat.Provider.output,
  -- Optional: Get available models
  get_models?(headers: table): table<CopilotChat.Provider.model>,
}
```
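A minimal custom provider might look like the following sketch for an OpenAI-compatible endpoint. The URL, environment variable, and payload fields here are illustrative assumptions, not a tested integration; real backends may need different fields:

```lua
{
  providers = {
    my_openai_like = {
      get_headers = function()
        -- hypothetical: read an API key from the environment
        return { Authorization = 'Bearer ' .. (os.getenv('MY_API_KEY') or '') }
      end,
      get_url = function(opts)
        return 'https://api.example.com/v1/chat/completions' -- placeholder endpoint
      end,
      prepare_input = function(inputs, opts)
        -- assumes an OpenAI-style chat payload; field names may differ per backend
        return {
          model = opts.model.id, -- assumption about the opts shape
          messages = inputs,
          stream = true,
        }
      end,
    },
  },
}
```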
For external providers (Ollama, LM Studio, Mistral.ai), see the providers discussion page.
Below are all available configuration options with their default values:
```lua
{
  -- Shared config starts here (can be passed to functions at runtime and configured via setup function)
  system_prompt = 'COPILOT_INSTRUCTIONS', -- System prompt to use (can be specified manually in prompt via /).
  model = 'gpt-4o-2024-11-20', -- Default model to use, see ':CopilotChatModels' for available models (can be specified manually in prompt via $).
  group = nil, -- Default group of tools or array of groups to use (can be specified manually in prompt via @).
  sticky = nil, -- Default sticky prompt or array of sticky prompts to use at start of every new chat (can be specified manually in prompt via >).
  temperature = 0.1, -- GPT result temperature
  headless = false, -- Do not write to chat buffer and use history (useful for using custom processing)
  stream = nil, -- Function called when receiving stream updates (returned string is appended to the chat buffer)
  callback = nil, -- Function called when full response is received (returned string is stored to history)
  remember_as_sticky = true, -- Remember model as sticky prompts when asking questions

  -- default selection
  -- see select.lua for implementation
  selection = select.visual,

  -- default window options
  window = {
    layout = 'vertical', -- 'vertical', 'horizontal', 'float', 'replace', or a function that returns the layout
    width = 0.5, -- fractional width of parent, or absolute width in columns when > 1
    height = 0.5, -- fractional height of parent, or absolute height in rows when > 1
    -- Options below only apply to floating windows
    relative = 'editor', -- 'editor', 'win', 'cursor', 'mouse'
    border = 'single', -- 'none', 'single', 'double', 'rounded', 'solid', 'shadow'
    row = nil, -- row position of the window, default is centered
    col = nil, -- column position of the window, default is centered
    title = 'Copilot Chat', -- title of chat window
    footer = nil, -- footer of chat window
    zindex = 1, -- determines if window is on top or below other floating windows
  },

  show_help = true, -- Shows help message as virtual lines when waiting for user input
  highlight_selection = true, -- Highlight selection
  highlight_headers = true, -- Highlight headers in chat, disable if using markdown renderers (like render-markdown.nvim)
  references_display = 'virtual', -- 'virtual', 'write', Display references in chat as virtual text or write to buffer
  auto_follow_cursor = true, -- Auto-follow cursor in chat
  auto_insert_mode = false, -- Automatically enter insert mode when opening window and on new prompt
  insert_at_end = false, -- Move cursor to end of buffer when inserting text
  clear_chat_on_new_prompt = false, -- Clears chat on every new prompt

  -- Static config starts here (can be configured only via setup function)
  debug = false, -- Enable debug logging (same as `log_level = 'debug'`)
  log_level = 'info', -- Log level to use, 'trace', 'debug', 'info', 'warn', 'error', 'fatal'
  proxy = nil, -- [protocol://]host[:port] Use this proxy
  allow_insecure = false, -- Allow insecure server connections
  chat_autocomplete = true, -- Enable chat autocompletion (when disabled, requires manual `mappings.complete` trigger)
  log_path = vim.fn.stdpath('state') .. '/CopilotChat.log', -- Default path to log file
  history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
  question_header = '# User ', -- Header to use for user questions
  answer_header = '# Copilot ', -- Header to use for AI answers
  error_header = '# Error ', -- Header to use for errors
  separator = '───', -- Separator to use in chat

  -- default providers
  -- see config/providers.lua for implementation
  providers = {
    copilot = {},
    github_models = {},
    copilot_embeddings = {},
  },

  -- default tools
  -- see config/tools.lua for implementation
  tools = {
    buffer = {},
    buffers = {},
    file = {},
    glob = {},
    grep = {},
    quickfix = {},
    diagnostics = {},
    gitdiff = {},
    gitstatus = {},
    url = {},
    register = {},
  },

  -- default prompts
  -- see config/prompts.lua for implementation
  prompts = {
    Explain = {
      prompt = 'Write an explanation for the selected code as paragraphs of text.',
      system_prompt = 'COPILOT_EXPLAIN',
    },
    Review = {
      prompt = 'Review the selected code.',
      system_prompt = 'COPILOT_REVIEW',
    },
    Fix = {
      prompt = 'There is a problem in this code. Identify the issues and rewrite the code with fixes. Explain what was wrong and how your changes address the problems.',
    },
    Optimize = {
      prompt = 'Optimize the selected code to improve performance and readability. Explain your optimization strategy and the benefits of your changes.',
    },
    Docs = {
      prompt = 'Please add documentation comments to the selected code.',
    },
    Tests = {
      prompt = 'Please generate tests for my code.',
    },
    Commit = {
      prompt = 'Write commit message for the change with commitizen convention. Keep the title under 50 characters and wrap message at 72 characters. Format as a gitcommit code block.',
      sticky = '#gitdiff:staged',
    },
  },

  -- default mappings
  -- see config/mappings.lua for implementation
  mappings = {
    complete = {
      insert = '<Tab>',
    },
    close = {
      normal = 'q',
      insert = '<C-c>',
    },
    reset = {
      normal = '<C-l>',
      insert = '<C-l>',
    },
    submit_prompt = {
      normal = '<CR>',
      insert = '<C-s>',
    },
    toggle_sticky = {
      normal = 'grr',
    },
    clear_stickies = {
      normal = 'grx',
    },
    accept_diff = {
      normal = '<C-y>',
      insert = '<C-y>',
    },
    jump_to_diff = {
      normal = 'gj',
    },
    quickfix_answers = {
      normal = 'gqa',
    },
    quickfix_diffs = {
      normal = 'gqd',
    },
    yank_diff = {
      normal = 'gy',
      register = '"', -- Default register to use for yanking
    },
    show_diff = {
      normal = 'gd',
      full_diff = false, -- Show full diff instead of unified diff when showing diff window
    },
    show_info = {
      normal = 'gi',
    },
    show_context = {
      normal = 'gc',
    },
    show_help = {
      normal = 'gh',
    },
  },
}
```
Types of copilot buffers:
- `copilot-chat` - Main chat buffer
- `copilot-overlay` - Overlay buffers (e.g. help, info, diff)
You can set local options for plugin buffers like this:
```lua
vim.api.nvim_create_autocmd('BufEnter', {
  pattern = 'copilot-*',
  callback = function()
    -- Set buffer-local options
    vim.opt_local.relativenumber = false
    vim.opt_local.number = false
    vim.opt_local.conceallevel = 0
  end
})
```
Types of copilot highlights:
- `CopilotChatHeader` - Header highlight in chat buffer
- `CopilotChatSeparator` - Separator highlight in chat buffer
- `CopilotChatStatus` - Status and spinner in chat buffer
- `CopilotChatHelp` - Help messages in chat buffer (help, references)
- `CopilotChatSelection` - Selection highlight in source buffer
- `CopilotChatKeyword` - Keyword highlight in chat buffer (e.g. prompts, tools)
- `CopilotChatAnnotation` - Annotation highlight in chat buffer (file headers, tool call headers, tool call body)
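These groups can be re-linked or overridden with the standard Neovim highlight API; for example (the chosen target groups are just a suggestion):

```lua
-- Link chat headers to the Title highlight and dim annotations
vim.api.nvim_set_hl(0, 'CopilotChatHeader', { link = 'Title' })
vim.api.nvim_set_hl(0, 'CopilotChatAnnotation', { link = 'Comment' })
```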
```lua
local chat = require("CopilotChat")

-- Basic Chat Functions
chat.ask(prompt, config) -- Ask a question with optional config
chat.response() -- Get the last response text
chat.resolve_prompt() -- Resolve prompt references
chat.resolve_tools() -- Resolve tools that are available for automatic use by LLM
chat.resolve_model() -- Resolve model from prompt (WARN: async, requires plenary.async.run)

-- Window Management
chat.open(config) -- Open chat window with optional config
chat.close() -- Close chat window
chat.toggle(config) -- Toggle chat window visibility with optional config
chat.reset() -- Reset the chat
chat.stop() -- Stop current output

-- Source Management
chat.get_source() -- Get the current source buffer and window
chat.set_source(winnr) -- Set the source window

-- Selection Management
chat.get_selection() -- Get the current selection
chat.set_selection(bufnr, start_line, end_line, clear) -- Set or clear selection

-- Prompt & Model Management
chat.select_prompt(config) -- Open prompt selector with optional config
chat.select_model() -- Open model selector
chat.prompts() -- Get all available prompts

-- Completion
chat.trigger_complete() -- Trigger completion in chat window
chat.complete_info() -- Get completion info for custom providers
chat.complete_items() -- Get completion items (WARN: async, requires plenary.async.run)

-- History Management
chat.save(name, history_path) -- Save chat history
chat.load(name, history_path) -- Load chat history

-- Configuration
chat.setup(config) -- Update configuration
chat.log_level(level) -- Set log level (debug, info, etc.)
```
You can also access the chat window UI methods through the `chat.chat` object:
```lua
local window = require("CopilotChat").chat

-- Chat UI State
window:visible() -- Check if chat window is visible
window:focused() -- Check if chat window is focused

-- Content Management
window:get_prompt() -- Get current prompt from chat window
window:set_prompt(prompt) -- Set prompt in chat window
window:add_sticky(sticky) -- Add sticky prompt to chat window
window:append(text) -- Append text to chat window
window:clear() -- Clear chat window content
window:finish() -- Finish writing to chat window

-- Navigation
window:follow() -- Move cursor to end of chat content
window:focus() -- Focus the chat window

-- Advanced Features
window:get_closest_section() -- Get section closest to cursor
window:get_closest_block() -- Get code block closest to cursor
window:overlay(opts) -- Show overlay with specified options
```
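As a small sketch combining these methods, you could focus the chat only when it is already open:

```lua
local chat = require("CopilotChat")
local window = chat.chat

if window:visible() and not window:focused() then
  window:focus() -- jump to the chat window if it is open but unfocused
end
```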
```lua
-- Open chat, ask a question and handle response
require("CopilotChat").open()
require("CopilotChat").ask("#buffer Explain this code", {
  callback = function(response)
    vim.notify("Got response: " .. response:sub(1, 50) .. "...")
    return response
  end,
})

-- Save and load chat history
require("CopilotChat").save("my_debugging_session")
require("CopilotChat").load("my_debugging_session")

-- Use custom sticky and model
require("CopilotChat").ask("How can I optimize this?", {
  model = "gpt-4o",
  sticky = {"#buffer", "#gitdiff:staged"}
})
```
For more examples, see the examples wiki page.
To set up the environment:
- Clone the repository:

```shell
git clone https://github.com/CopilotC-Nvim/CopilotChat.nvim
cd CopilotChat.nvim
```

- Install development dependencies:

```shell
# Install pre-commit hooks
make install-pre-commit
```

To run tests:

```shell
make test
```
- Fork the repository
- Create your feature branch
- Make your changes
- Run tests and lint checks
- Submit a pull request
See CONTRIBUTING.md for detailed guidelines.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!