Vim quick question (vim-qq)

Note: currently undergoing major experimental changes.

AI plugin for Vim/NeoVim focused on local model evaluation, flexible context, and aggressive KV cache warmup to hide latency, with support for chat forks and multiple models.

(Demo video: vimqq_thor_dst.mp4)

Features (including experimental)

  • Support for both remote models through paid APIs (Claude, DeepSeek) and local models via the llama.cpp server;
  • automated KV cache warmup for local model evaluation (see the sketch after this list);
  • dynamic warmup while typing: for long questions, the cache is prefilled with the partially typed question itself;
  • human-readable hierarchical project indexing;
  • LLM agents in different roles (engineer, reviewer, etc.);
  • fully closing the loop: implementing complex features end to end.
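
The two warmup features above can be illustrated with a short sketch. This is a minimal, hypothetical example of the underlying idea, not vimqq's actual code: the /completion endpoint and the n_predict and cache_prompt parameters are real llama.cpp server API surface, while the server address, the warmup helper, and the prompt contents are assumptions made for illustration.

```python
# Minimal sketch of KV cache warmup against a llama.cpp server.
# NOTE: the server URL, helper name, and prompt text are illustrative
# assumptions; only /completion, n_predict, and cache_prompt come from
# the actual llama.cpp server API.
import json
import urllib.request

LLAMA_SERVER = "http://127.0.0.1:8080/completion"  # assumed local server

def warmup(context: str) -> None:
    """Have the server evaluate `context` without generating any tokens,
    filling its KV cache so a later real request reuses the prefix."""
    payload = {
        "prompt": context,
        "n_predict": 0,        # evaluate the prompt only, generate nothing
        "cache_prompt": True,  # keep the evaluated prefix cached between requests
    }
    req = urllib.request.Request(
        LLAMA_SERVER,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # body is ignored; the useful work happened server-side

# Static warmup: prefill the chat history and file context as soon as a
# chat is opened, before the user asks anything.
warmup("...chat history and attached file context...")

# Dynamic warmup: as a long question is typed, periodically prefill the
# context plus the partial question, so only the last few tokens remain
# to be evaluated when the user presses enter.
warmup("...chat history and attached file context...\nUser: How does the ind")
```

With cache_prompt enabled, the server reuses the common prefix between a new request and its cached tokens, so each incremental warmup should only pay for the newly typed suffix, and the final request can start generating almost immediately.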
