OpenAI completion
First-class Sublime Text AI assistant with o1, o3-mini and ollama support!
OpenAI Sublime Text Plugin
tldr;
Cursor-level AI assistance for Sublime Text. I mean it.
Works with any OpenAI-ish API: llama.cpp server, ollama, or whatever third-party LLM hosting. Claude API support is coming soon.
[!NOTE] The 5.0.0 release is around the corner! Check out the release notes for details.
Features
- Chat mode powered by whatever model you'd like.
- o1 and o3-mini support.
- llama.cpp server, ollama, and any other OpenAI-compatible API.
- Dedicated chat histories and assistant settings per project.
- Ability to send whole files or parts of them as expanded context.
- Phantoms: get non-disruptive inline answers from the model right in the view.
- Markdown syntax with per-language code highlighting (Chat mode only).
- Server-Sent Events (SSE) streaming support.
- Various status bar info: model name, mode, sent/received tokens.
- Proxy support.
Requirements
- Sublime Text 4
- llama.cpp or ollama installed, OR
- an API key for a remote LLM service provider, e.g. OpenAI
- Anthropic API key [coming soon].
Installation
- Install the Sublime Text Package Control plugin if you haven't done so before.
- Open the command palette and type `Package Control: Install Package`.
- Type `OpenAI` and press `Enter`.
[!NOTE] Highly recommended complementary packages:
- https://github.com/SublimeText-Markdown/MarkdownCodeExporter
- https://sublimetext-markdown.github.io/MarkdownEditing
Usage
AI Assistance use case
ChatGPT mode works the following way:
- Select some text, or even whole tabs, to include them in the request.
- Run either the `OpenAI: Chat Model Select` or the `OpenAI: Chat Model Select With Tabs` command.
- Input a request in the input window, if any.
- The model will print its response in the output panel by default, but you can switch that to a separate tab with `OpenAI: Open in Tab`.
- To get an existing chat in a new window, run `OpenAI: Refresh Chat`.
- To reset the history, the `OpenAI: Reset Chat History` command comes to the rescue.
[!NOTE] For convenience, it is suggested that you bind at least `OpenAI: New Message`, `OpenAI: Chat Model Select`, and `OpenAI: Show output panel`; you can do that in the plugin settings.
Chat history management
You can separate the chat history and assistant settings for a given project by appending the following snippet to its project settings:
```json
{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
```
Additional request context management
You can add a few things to your request:
- a multi-line selection within a single file
- multiple files within a single View Group

To do the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a newline).

To append whole file(s) to the request, `super+button1` on them so that their tabs become visible in a single view group, then run the `OpenAI: Add Sheets to Context` command. Sheets can be deselected with the same command.

You can check the number of added sheets in the status bar, and in the preview section when calling the `OpenAI: Chat Model Select` command.
Image handling
Image handling can be invoked with the `OpenAI: Handle Image` command.
It expects an absolute path to an image to be selected in the buffer or stored in the clipboard when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, a prompt can be provided via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.
[!NOTE] Currently the plugin expects a link, or a list of links separated by newlines, to be selected in the buffer or stored in the clipboard.
In-buffer llm use case
Phantom use case
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
- [optional] Select some text to pass as context to manipulate.
- Pick `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
- You can apply actions to the llm response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
- You can hit `ctrl+c` to stop prompting, the same as in `panel` mode.
Other features
Open Source models support (llama.cpp, ollama)
- Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `http://localhost:8080/v1/chat/completions`).
- Provide a `"token"` if your provider requires one.
- Tweak `"chat_model"` to a model of your choice and you're set.
[!NOTE] You can set both `url` and `token` either globally or per assistant instance, thus being able to switch freely between closed-source and open-source models within a single session.
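For a local ollama setup, the steps above might translate into a settings entry like the following sketch. The `"assistants"` key layout, the `"name"` label, and the model name are illustrative assumptions; check the plugin's shipped default settings for the authoritative schema:

```json
{
    "assistants": [
        {
            "name": "Local ollama",  // hypothetical label
            "url": "http://localhost:11434/v1/chat/completions",
            "token": "",             // most local servers don't require one
            "chat_model": "llama3.1"
        }
    ]
}
```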
Settings
The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings via `Preferences` -> `Package Settings` -> `OpenAI` -> `Settings` and paste your API key into the `token` property, as follows:
```json
{
    "token": "sk-your-token",
}
```
Advertisement disabling
To disable advertisements, add an `"advertisement": false` line to the settings of any assistant where you want them disabled.
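For instance, an assistant settings entry with advertisements turned off might look like this minimal sketch (only the `"advertisement"` key comes from the documentation above; the `"chat_model"` line is illustrative):

```json
{
    "chat_model": "gpt-4o",   // illustrative
    "advertisement": false
}
```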
Key bindings
You can bind keys for a given plugin command under `Preferences` -> `Package Settings` -> `OpenAI` -> `Key Bindings`. For example, you can bind the "New Message" command, with active tabs included as context, like this:
```json
{
    "keys": [ "super+k", "super+'" ],
    "command": "openai", // or "openai_panel"
    "args": { "files_included": true }
},
```
Proxy support
You can set it up by overriding the `proxy` property in the `OpenAI completion` settings like so:
```json
"proxy": {
    "address": "127.0.0.1", // required
    "port": 9898, // required
    "username": "account",
    "password": "sOmEpAsSwOrD"
}
```
Disclaimers
[!WARNING] All selected code will be sent to the OpenAI servers (if not using a custom API provider) for processing, so make sure you have all the necessary permissions to do so.
[!NOTE] Dedicated to GPT-3.5, the one who initially wrote 80% of this back then. It felt like pure magic!