# CodeContinue - AI Code Completion for Sublime Text

An LLM-powered Sublime Text plugin that provides intelligent inline code completion suggestions using OpenAI-compatible APIs. Check out the CodeContinue blog post here.
## Features
- Fast inline code completion powered by your choice of LLM
- Simple keyboard shortcuts: just `Enter` to suggest, `Tab` to accept (⚠️ Note: keybindings are not enabled by default)
- Context-aware suggestions based on surrounding code
- Configurable for multiple languages (Python, C++, JavaScript, etc.)
- Works with any OpenAI-compatible API endpoint
## Installation

### Option 1: Install via Package Control (coming soon)
We provide cross-platform installers for Windows, macOS, and Linux.
- Install Package Control
  - Open the command palette via `Ctrl + Shift + P` (or `Cmd + Shift + P` on Mac)
  - Type "Install Package Control"
  - Select it from the menu
- Install the package
  - Open the command palette and search for CodeContinue
  - Press Enter to install
- Configure CodeContinue
  - After installing CodeContinue, a setup wizard appears automatically
  - Enter the API endpoint and model name. See the Configuration section below for details.
### Option 2: Manual Install (For Developers)
If you prefer manual setup, clone the repo and use either the CLI-based installer or the GUI installer.
- **Python CLI Installer**

  ```bash
  python install.py
  ```

  Interactive command-line installer. Detects Sublime Text automatically and walks you through configuration.

- **GUI Installer**

  ```bash
  python install_gui.py
  ```

  Graphical installer built with Tkinter. Pre-loads your existing settings into a configurable interface.
## Configuration

<details>
<summary>Click to expand configuration options</summary>

Settings are saved automatically, and you can reconfigure at any time with `Ctrl+Shift+P` → "CodeContinue: Configure". The available options are:

- `endpoint`: Your OpenAI-compatible API endpoint (v1 format) - **Required**
  - OpenAI: `https://api.openai.com/v1/chat/completions`
  - Local server: `http://localhost:8000/v1/chat/completions`
  - Other providers: use their v1-compatible endpoint
- `model`: The model to use for completions - **Required**
  - Models tested so far: `gpt-oss-20b`, `Qwen/Qwen2.5-Coder-1.5B-Instruct`
- `api_key`: Authentication key (optional)
  - Only needed if your endpoint requires it
  - For OpenAI: `sk-...`
  - Leave blank if not needed
- `max_context_lines`: Number of lines of context to send (default: 30)
  - Increase for more context, decrease for faster responses
- `timeout_ms`: Request timeout in milliseconds (default: 20000)
  - Increase if using slower endpoints
- `trigger_language`: Array of language scopes for which the plugin is enabled
  - Examples: `python`, `cpp`, `javascript`, `typescript`, `java`, `go`, `rust`, etc.
- `debug`: Enable debug logging to the console (default: `false`)
  - Set to `true` to see detailed logs in `View → Show Console`
  - The plugin is silent by default
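Putting these together, a complete settings file might look like the sketch below. The specific values (a local endpoint, a Qwen model, the default context size and timeout) are illustrative assumptions, not values you must use:

```json
{
    // Illustrative values only - point these at your own endpoint and model
    "endpoint": "http://localhost:8000/v1/chat/completions",
    "model": "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    "api_key": "",
    "max_context_lines": 30,
    "timeout_ms": 20000,
    "trigger_language": ["python", "cpp", "javascript"],
    "debug": false
}
```

Sublime Text settings files accept `//` comments, so the comment line can stay or be removed.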
</details>

## API Authentication

For endpoints requiring authentication (like OpenAI):

**Using the Configure command:**

- Press `Ctrl+Shift+P` → "CodeContinue: Configure"
- When prompted for the API key, enter your key (e.g., `sk-...` for OpenAI)
- Settings are saved automatically

**Or edit settings directly:**

- Open `Preferences > Package Settings > CodeContinue > Settings`
- Add your API key:

  ```json
  {
      "api_key": "sk-your-api-key-here"
  }
  ```

## Requirements
- Sublime Text 4
- Internet connection (for API access)
- Access to an OpenAI-compatible API endpoint
## Troubleshooting
<details>
<summary>Click to expand troubleshooting tips</summary>
### Setup wizard not appearing
- Restart Sublime Text: File -> Exit, then reopen
- Check Sublime Text console (View -> Show Console) for errors
- Manually run: `Preferences > Package Settings > CodeContinue > Settings`
- Or use `Ctrl+Shift+P` → "CodeContinue: Configure"
### Suggestions not appearing
- Check that your language is in the `trigger_language` list
- Verify your API endpoint is accessible and correct
- Enable debug logging: set `"debug": true` in settings, then check `View → Show Console`
- Try `Ctrl+Shift+P` -> "CodeContinue: Configure" to verify settings
- Make sure you have an active API key if required
### Timeout errors
- Increase `timeout_ms` in settings (default: 20000ms)
- Try a faster model or local endpoint
- Check your internet connection
- Verify your API key is valid
### Authentication/Connection errors
- Verify your API key is correct
- Make sure endpoint URL is exactly right (copy-paste to avoid typos)
- Check if the endpoint is currently running/available
- Use `Ctrl+Shift+P` -> "CodeContinue: Configure" to update credentials
### Keybindings not working
- Make sure you've set up keybindings (they're not enabled by default; see the sketch below this list)
- Go to `Preferences > Package Settings > CodeContinue > Key Bindings`
- Check for conflicts with other packages
- Try alternative key combinations
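For reference, a user key-binding entry has the general shape shown below. This is only a sketch: the command names `code_continue_suggest` and `code_continue_accept` are placeholders assumed for illustration, not confirmed names. Copy the actual entries from `Preferences > Package Settings > CodeContinue > Key Bindings` rather than this example.

```json
[
    // Placeholder command names - replace them with the ones from the
    // package's default key bindings file before using this.
    { "keys": ["enter"], "command": "code_continue_suggest" },
    { "keys": ["tab"], "command": "code_continue_accept" }
]
```

Sublime Text key-binding files are JSON arrays like this and accept `//` comments; bindings on plain `Enter`/`Tab` are normally guarded with a `context` clause so they don't interfere with ordinary editing.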
</details>
## Advanced Configuration
<details>
<summary>Click to expand advanced configuration examples</summary>
CodeContinue works with any OpenAI-compatible v1 API. Examples:
**OpenAI:**
```
endpoint: https://api.openai.com/v1/chat/completions
model: gpt-3.5-turbo or gpt-4
api_key: sk-...
```
**Local LLM (LLaMA, Mistral, etc.):**
```
endpoint: http://localhost:8000/v1/chat/completions
model: (whatever model you're running)
api_key: (usually not needed)
```
**Hugging Face Inference API:**
```
endpoint: https://api-inference.huggingface.co/v1/chat/completions
model: HuggingFaceH4/zephyr-7b-beta
api_key: hf...
```
**Other providers:**
Any endpoint supporting OpenAI's v1 chat completion format will work.
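Concretely, "OpenAI's v1 chat completion format" means the endpoint accepts a POST body shaped roughly like the sketch below; the prompt text shown is made up for illustration and is not the prompt CodeContinue actually sends:

```json
{
    "model": "gpt-3.5-turbo",
    "messages": [
        { "role": "system", "content": "Complete the user's code." },
        { "role": "user", "content": "def fibonacci(n):" }
    ],
    "max_tokens": 64
}
```

A compatible server responds with a `choices` array, with the completion text in `choices[0].message.content`.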
</details>
## License
See [LICENSE](LICENSE) file for details.