Continue
9.2/10
Best OSS BYO-LLM extension for VS Code and JetBrains
The OSS BYO-LLM extension for VS Code and JetBrains with auditable code and no middleman markup on token costs.
| Plan | Monthly | What you get |
|---|---|---|
| OSS (free) | Free | Apache 2 licensed extension for VS Code and JetBrains with BYO LLM (any provider) and custom slash commands and rules |
| Continue Hub (free) | Free | Free shared assistant directory with custom assistants and contexts, public hub of recipes and rules, and sync across machines |
Continue is an Apache 2 licensed OSS AI coding extension, launched in 2023 in San Francisco. The wedge against Aider is the IDE form factor: Continue is a VS Code or JetBrains extension, whereas Aider is a CLI. The wedge against Copilot is the open-source license and BYO-LLM freedom: the dev controls model selection and pays the provider directly, with no middleman markup.
The OSS extension is Apache 2 licensed for VS Code and JetBrains, supports BYO LLM (OpenAI, Anthropic, Mistral, and Ollama for local inference), and ships with custom slash commands and rules. Continue Hub is a free shared assistant directory with custom assistants and contexts, a public hub of recipes and rules, and sync across machines. Both tiers are free, so there is no typical monthly spend; the evaluation weight shifts from price to features, free-tier generosity, and fit.
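As a sketch of what BYO-LLM setup looks like, here is a hypothetical Continue `config.yaml` wiring up one cloud model, one local Ollama model, and a custom slash command. Field names follow Continue's YAML config format but may differ between versions, and the assistant name, model choices, and prompt text are all illustrative; check the official docs for your release.

```yaml
# Hypothetical Continue config: one cloud model and one local Ollama model,
# plus a custom slash command. Field names are illustrative and may vary
# across Continue versions.
name: my-assistant
version: 0.0.1
models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-3-5-sonnet-latest
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}  # billed to you by Anthropic directly
  - name: Local Llama
    provider: ollama
    model: llama3.1:8b                        # served locally by `ollama serve`
prompts:
  - name: review
    description: Review the selected code
    prompt: |
      Review the highlighted code for bugs, unclear naming, and
      missing error handling. Suggest concrete fixes.
```

The custom command would be invoked as `/review` in the chat sidebar. Because the API key points straight at the provider, token spend lands on your own Anthropic bill with no markup, and the local Ollama entry needs no key at all.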
The catch is the BYO-LLM operational tax. Users manage API keys and billing, the UX is less polished than Cursor or Copilot, and the community is smaller than Microsoft-backed tools. For OSS-aligned devs comfortable running their own model keys, the cost transparency and auditability outweigh the rough edges.
Pros
- Apache 2 OSS license; codebase auditable + extensible
- BYO LLM (OpenAI, Anthropic, Mistral, Ollama, etc.)
- No middleman markup on token costs
- Continue Hub for shared assistants + recipes
- VS Code + JetBrains plugins with custom slash commands
Cons
- BYO LLM means user manages API keys + billing
- Less polished UX than Cursor or GitHub Copilot
Best for: OSS-aligned devs, BYO-LLM enthusiasts, local-inference users running Ollama, and any team wanting auditable extension code.
| Criterion | Score (/10) |
|---|---|
| Code privacy | 10 |
| Completion latency | 8 |
| Daily UX | 7 |
| Value | 10 |
| Support | 7 |