Currently supports: Anthropic, Ollama and OpenAI adapters
> [!IMPORTANT]
> This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support, and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-cases. Feel free to fork the project and customize it to your needs, but please understand that my involvement in further development will be intermittent. To be notified of breaking changes in the plugin, please subscribe to this issue.
- 💬 A Copilot Chat experience in Neovim
- 🔌 Support for OpenAI, Anthropic and Ollama
- 🚀 Inline code creation and refactoring
- 🤖 Variables, Agents and Workflows to improve LLM output
- ✨ Built in prompts for LSP errors and code advice
- 🏗️ Create your own custom prompts for Neovim
- 💾 Save and restore your chats
- 💪 Async execution for improved performance
*(Demo videos: Chat Buffer, Inline Coding)*
- The `curl` library installed
- Neovim 0.9.2 or greater
- (Optional) An API key for your chosen LLM
Install the plugin with your preferred package manager:
lazy.nvim:

```lua
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "nvim-telescope/telescope.nvim", -- Optional
    {
      "stevearc/dressing.nvim", -- Optional: Improves the default Neovim UI
      opts = {},
    },
  },
  config = true
}
```

packer:

```lua
use({
  "olimorris/codecompanion.nvim",
  config = function()
    require("codecompanion").setup()
  end,
  requires = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
    "nvim-telescope/telescope.nvim", -- Optional
    "stevearc/dressing.nvim" -- Optional: Improves the default Neovim UI
  }
})
```

The default configuration can be found in the config.lua file. You can change any of the defaults by calling the `setup` function. For example:
require("codecompanion").setup({
opts = {
send_code = false
}
})Adapters
> [!WARNING]
> Depending on your chosen adapter, you may need to set an API key.
The plugin uses adapters to connect to LLMs. Currently, the plugin supports:
- Anthropic (`anthropic`) - Requires an API key
- Ollama (`ollama`)
- OpenAI (`openai`) - Requires an API key
Strategies are the different ways that a user can interact with the plugin. The chat and agent strategies harness a buffer to allow direct conversation with the LLM. The inline strategy allows for output from the LLM to be written directly into a pre-existing Neovim buffer.
To specify a different adapter to the defaults, simply change the `strategies.*` table:
require("codecompanion").setup({
strategies = {
chat = {
adapter = "ollama",
},
inline = {
adapter = "ollama",
},
agent = {
adapter = "anthropic",
},
},
})Tip
To create your own adapter please refer to the ADAPTERS guide.
### Configuring environment variables
You can customise an adapter's configuration as follows:
require("codecompanion").setup({
adapters = {
anthropic = function()
return require("codecompanion.adapters").use("anthropic", {
env = {
api_key = "ANTHROPIC_API_KEY_1"
},
})
end,
},
strategies = {
chat = {
adapter = "anthropic",
},
},
})In the example above, we're using the base of the Anthropic adapter but changing the name of the default API key which it uses.
Having API keys in plain text in your shell is not always safe. Thanks to this PR, you can run commands from within the configuration:
require("codecompanion").setup({
adapters = {
openai = function()
return require("codecompanion.adapters").use("openai", {
env = {
api_key = "cmd:op read op://personal/OpenAI/credential --no-newline",
},
})
end,
strategies = {
chat = {
adapter = "openai",
},
},
},
})In this example, we're using the 1Password CLI to read an OpenAI credential.
### Configuring adapter settings
LLMs have many settings, such as `model`, `temperature` and `max_tokens`. In an adapter, these sit within a `schema` table and can be configured during setup:
require("codecompanion").setup({
adapters = {
llama3 = function()
return require("codecompanion.adapters").use("ollama", {
schema = {
model = {
default = "llama3:latest",
},
num_ctx = {
default = 16384,
},
num_predict = {
default = -1,
},
},
})
end,
},
})Tip
Refer to your chosen adapter to see the settings available.
## Highlight Groups
The plugin sets the following highlight groups during setup:
- `CodeCompanionChatHeader` - The headers in the chat buffer
- `CodeCompanionChatSeparator` - Separator between headings in the chat buffer
- `CodeCompanionChatTokens` - Virtual text in the chat buffer showing the token count
- `CodeCompanionChatTool` - Tools in the chat buffer
- `CodeCompanionChatVariable` - Variables in the chat buffer
- `CodeCompanionVirtualText` - All other virtual text in the plugin
> [!TIP]
> You can change which highlight group these link to in your configuration.
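For example, a minimal sketch (the group name is from the list above; linking to the built-in `Title` group is just an illustrative choice, not a plugin default):

```lua
-- Re-link a CodeCompanion highlight group to one your colorscheme defines.
-- "Title" here is only an example target.
vim.api.nvim_set_hl(0, "CodeCompanionChatHeader", { link = "Title" })
```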
## Inline Prompting
*(Demo video: Inline Prompting)*
To start interacting with the plugin, you can run `:CodeCompanion <your prompt>` from the command line. You can also make a visual selection in Neovim and run `:'<,'>CodeCompanion <your prompt>` to send it as context. The plugin will initially use an LLM to classify your prompt in order to determine where in Neovim to place the response. You can find more about the classifications in the inline prompting section.
For convenience, you can also call default prompts from the command line via slash commands:
- `/explain` - Explain how selected code in a buffer works
- `/tests` - Generate unit tests for selected code
- `/fix` - Fix the selected code
- `/buffer` - Send the current buffer to the LLM alongside a prompt
- `/lsp` - Explain the LSP diagnostics for the selected code
- `/commit` - Generate a commit message
Running `:'<,'>CodeCompanion /fix` will trigger the plugin to start following the fix prompt as defined in the config. Some of the slash commands can also take custom prompts. For example, running `:'<,'>CodeCompanion /buffer refactor this code` sends the whole buffer as context alongside a prompt to refactor the selected code.
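If you reach for a particular slash command often, you could wrap it in a mapping. A sketch (the `/fix` command is from the list above; the key choice is arbitrary):

```lua
-- Visual-mode mapping: pressing ":" from visual mode pre-fills the '<,'>
-- range, so this runs :'<,'>CodeCompanion /fix on the current selection.
vim.keymap.set("v", "<LocalLeader>f", ":CodeCompanion /fix<CR>", { silent = true })
```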
There are also keymaps available to accept or reject edits from the LLM in the inline prompting section.
## Chat Buffer
The chat buffer is where you'll likely spend most of your time when interacting with the plugin. Running `:CodeCompanionChat` or `:'<,'>CodeCompanionChat` will open up a chat buffer where you can converse directly with an LLM. As a convenience, you can use `:CodeCompanionToggle` to toggle the visibility of a chat buffer.
When in the chat buffer you have access to the following variables:
- `#buffer` - Share the current buffer's content with the LLM. You can also specify line numbers with `#buffer:8-20`
- `#buffers` - Share all current open buffers with the LLM
- `#editor` - Share the buffers and lines that you see in the editor's viewport
- `#lsp` - Share LSP information and code for the current buffer
> [!NOTE]
> When in the chat buffer, the `?` keymap brings up all of the keymaps, variables and tools available to you.
## Agents / Tools
*(Demo video: Agents)*
The plugin also supports LLMs acting as agents by calling external tools. In the video above, we're asking an LLM to execute the contents of the buffer via the `@code_runner` tool, all from within a chat buffer.
When in the chat buffer you have access to the following tools:
- `@code_runner` - The LLM can trigger the running of any code from within a Docker container
- `@rag` - The LLM can browse and search the internet for real-time information to supplement its response
- `@buffer_editor` - The LLM can edit code in a Neovim buffer by searching and replacing blocks
> [!IMPORTANT]
> Agents are currently at an alpha stage, and I'm using the terms agent and tool interchangeably.
## Action Palette
The `:CodeCompanionActions` command will open the Action Palette, giving you access to all of the functionality in the plugin. The Prompts section is where the default prompts and your custom ones can be accessed from. You'll notice that some prompts have a slash command in their description, such as `/commit`. This enables you to trigger them from the command line with `:CodeCompanion /commit`. Some of these prompts also have keymaps assigned to them (which can be overwritten!), offering an even easier route to triggering them.
> [!NOTE]
> Some actions will only be visible in the Action Palette if you're in Visual mode.
## List of commands
Below is the full list of commands that are available in the plugin:
- `CodeCompanionActions` - To open the Action Palette
- `CodeCompanion` - Inline prompting of the plugin
- `CodeCompanion <slash_cmd>` - Inline prompting of the plugin with a slash command, e.g. `/commit`
- `CodeCompanionChat` - To open up a new chat buffer
- `CodeCompanionChat <adapter>` - To open up a new chat buffer with a specific adapter
- `CodeCompanionToggle` - To toggle a chat buffer
- `CodeCompanionAdd` - To add visually selected chat to the current chat buffer
## Suggested workflow
For an optimum workflow, I recommend the following options:
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "ga", "<cmd>CodeCompanionAdd<cr>", { noremap = true, silent = true })
-- Expand 'cc' into 'CodeCompanion' in the command line
vim.cmd([[cab cc CodeCompanion]])A RECIPES guide has been created to show you how you can add your own prompts to the Action Palette.
The chat buffer is where you can converse with an LLM, directly from Neovim. It behaves as a regular markdown buffer with some clever additions. When the buffer is written (or "saved"), autocmds trigger the sending of its content to the LLM in the form of prompts. These prompts are segmented by H1 headers: user, system and assistant. When a response is received, it is then streamed back into the buffer. The result is that you experience the feel of conversing with your LLM from within Neovim.
As noted in the Getting Started section, there are a number of variables that you can make use of whilst in the chat buffer. Use `#` to bring up the completion menu to see the available options.
### Keymaps
When in the chat buffer, there are a number of keymaps available to you:

- `?` - Bring up the help menu
- `<CR>`|`<C-s>` - Send the buffer to the LLM
- `<C-c>` - Close the buffer
- `q` - Cancel the request from the LLM
- `ga` - Change the adapter
- `gx` - Clear the buffer's contents
- `gc` - Add a codeblock
- `gs` - Save the chat to disk
- `}` - Move to the next chat
- `{` - Move to the previous chat
- `[` - Move to the next header
- `]` - Move to the previous header
### Saved Chats
Chat buffers are not saved to disk by default, but they can be by pressing `gs` in the buffer. Saved chats can then be restored via the Action Palette and the Load saved chats action.
### Settings
If `display.chat.show_settings` is set to `true`, the adapter's model parameters will appear at the very top of the chat buffer; these can be changed to tweak the response from the LLM. You can find more detail by moving the cursor over them.
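As a sketch, enabling this in your setup (the `display.chat.show_settings` path is taken from the sentence above):

```lua
require("codecompanion").setup({
  display = {
    chat = {
      show_settings = true, -- surface the adapter's schema at the top of the chat buffer
    },
  },
})
```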
### Open Chats
From the Action Palette, the Open Chats action enables users to easily navigate between their open chat buffers. A chat buffer can be deleted (and removed from memory) by pressing `<C-c>`.
> [!NOTE]
> If `send_code = false` then this will take precedence and no code will be sent to the LLM.
Inline prompts can be triggered via the `:CodeCompanion <your prompt>` command. As mentioned in the Getting Started section, you can also leverage visual selections and slash commands like `:'<,'>CodeCompanion /buffer what does this code do?`, where the slash command points to a default prompt and any words after that act as a custom prompt to the LLM.
One of the challenges with inline editing is determining how the LLM's response should be handled in the buffer. If you've prompted the LLM to "create a table of 5 common text editors" then you may wish for the response to be placed after the cursor's position in the current buffer. However, if you asked the LLM to "refactor this function" then you'd expect the response to overwrite a visual selection. The plugin will use the inline LLM you've specified in your config to determine if the response should follow any of the placements below:
- after - after the visual selection/cursor
- before - before the visual selection/cursor
- new - in a new buffer
- replace - replacing the visual selection
- chat - in a chat buffer
There are also keymaps available to you after an inline edit has taken place:
- `ga` - Accept an inline edit
- `gr` - Reject an inline edit
> [!NOTE]
> Please see the RECIPES guide in order to add your own prompts to the Action Palette and as a slash command.
The plugin comes with a number of default prompts (as per the config) which can be called via keymaps and/or slash commands. These prompts have been carefully curated to mimic those in GitHub's Copilot Chat.
As outlined by Andrew Ng in Agentic Design Patterns Part 3, Tool Use, LLMs can act as agents by leveraging external tools. Andrew notes some common examples such as web searching or code execution that have obvious benefits when using LLMs.
In the plugin, agents are simply context that's given to an LLM via a system prompt. This gives the LLM knowledge and a defined schema, which it can include in its response for the plugin to parse, execute and provide feedback on. Agents can be added as participants in a chat buffer by using the `@` key.
More information on how agents work and how you can create your own can be found in the AGENTS guide.
> [!WARNING]
> Workflows may result in significant token consumption if you're using an external LLM.
As outlined by Andrew Ng, agentic workflows have the ability to dramatically improve the output of an LLM. In fact, it's possible for older models like GPT 3.5, when used in an agentic workflow, to outperform newer models that rely on traditional zero-shot inference. Andrew discussed how an agentic workflow can be utilised via multiple prompts that invoke the LLM to self-reflect. Implementing Andrew's advice, the plugin supports this notion via the use of workflows. At various stages of a pre-defined workflow, the plugin will automatically prompt the LLM without any input or triggering required from the user.
Currently, the plugin comes with the following workflows:
- Adding a new feature
- Refactoring code
Of course you can add new workflows by following the RECIPES guide.
## Hooks / User events
The plugin fires the following events during its lifecycle:
- `CodeCompanionRequest` - Fired during the API request. Outputs `data.status` with a value of `started` or `finished`
- `CodeCompanionChatSaved` - Fired after a chat has been saved to disk
- `CodeCompanionChat` - Fired at various points during the chat buffer. Comes with the following attributes:
  - `data.action = hide_buffer` - For when a chat buffer is hidden
- `CodeCompanionInline` - Fired during the inline API request alongside `CodeCompanionRequest`. Outputs `data.status` with a value of `started` or `finished` and `data.placement` with the placement of the text from the LLM
- `CodeCompanionAgent` - Fired when an agent is running. Outputs `data.status` with a value of `started` or `success`/`failure`
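These events can be consumed with a `User` autocmd. As a minimal sketch (using only the `CodeCompanionAgent` event and the `data.status` values listed above), you could surface a notification when an agent run completes:

```lua
-- Notify when an agent run has finished; data.status is "started" while
-- the agent is running, then "success" or "failure" when it completes.
vim.api.nvim_create_autocmd("User", {
  pattern = "CodeCompanionAgent",
  callback = function(args)
    if args.data.status ~= "started" then
      vim.notify("CodeCompanion agent: " .. args.data.status)
    end
  end,
})
```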
The same pattern applies to the other events. For example, to format the buffer once an inline request has finished:
```lua
local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})

vim.api.nvim_create_autocmd({ "User" }, {
  pattern = "CodeCompanionInline",
  group = group,
  callback = function(args)
    if args.data.status == "finished" then
      -- Format the buffer after the inline request has completed
      require("conform").format({ bufnr = args.buf })
    end
  end,
})
```

## Statuslines
You can incorporate a visual indication to show when the plugin is communicating with an LLM in your Neovim configuration. Below are examples for two popular statusline plugins.
lualine.nvim:
```lua
local M = require("lualine.component"):extend()

M.processing = false
M.spinner_index = 1

local spinner_symbols = {
  "⠋",
  "⠙",
  "⠹",
  "⠸",
  "⠼",
  "⠴",
  "⠦",
  "⠧",
  "⠇",
  "⠏",
}
local spinner_symbols_len = 10

-- Initializer
function M:init(options)
  M.super.init(self, options)

  local group = vim.api.nvim_create_augroup("CodeCompanionHooks", {})

  vim.api.nvim_create_autocmd({ "User" }, {
    pattern = "CodeCompanionRequest",
    group = group,
    callback = function(request)
      self.processing = (request.data.status == "started")
    end,
  })
end

-- Function that runs every time statusline is updated
function M:update_status()
  if self.processing then
    self.spinner_index = (self.spinner_index % spinner_symbols_len) + 1
    return spinner_symbols[self.spinner_index]
  else
    return nil
  end
end

return M
```

heirline.nvim:

```lua
local CodeCompanion = {
  static = {
    processing = false,
  },
  update = {
    "User",
    pattern = "CodeCompanionRequest",
    callback = function(self, args)
      self.processing = (args.data.status == "started")
      vim.cmd("redrawstatus")
    end,
  },
  {
    condition = function(self)
      return self.processing
    end,
    provider = " ",
    hl = { fg = "yellow" },
  },
}
```

## Legendary.nvim
The plugin also supports the amazing legendary.nvim plugin. Simply enable it in your config:
```lua
require('legendary').setup({
  extensions = {
    codecompanion = true,
  },
})
```

I am open to contributions, but they will be implemented at my discretion. Feel free to open up a discussion before embarking on a big PR, and please make sure you've read the CONTRIBUTING.md guide.
- Steven Arcangeli for his genius creation of the chat buffer and his feedback
- Wtf.nvim for the LSP assistant action
- ChatGPT.nvim for the calculation of tokens


