OpenAI Sublime Text Plugin
tldr;
Cursor level of AI assistance for Sublime Text. I mean it.
Works with any OpenAI-like API: llama.cpp server, Ollama, or whatever third-party LLM hosting you prefer.
Features
- Code manipulation: append to, insert into, or edit the selected code with OpenAI models.
- Phantoms: get non-disruptive answers from the model inline, right in the view.
- Chat mode powered by whatever model you'd like.
- gpt-o1 support.
- llama.cpp server, Ollama, and all other OpenAI-like API compatible hosts supported.
- Dedicated chat histories and assistant settings per project.
- Ability to send whole files, or parts of them, as additional context.
- Markdown syntax with syntax highlighting for code blocks (chat mode only).
- Server-side streaming (SSE) (i.e. you don't have to wait for ages until GPT-4 prints something out).
- Various status bar info: model name, mode, sent/received tokens.
- Proxy support.
ChatGPT completion demo
video sped up to 1.7x
Requirements
- Sublime Text 4
- llama.cpp or ollama installed, OR
- an API key for a remote LLM service provider, e.g. OpenAI
Installation
1. Install the Sublime Text Package Control plugin if you haven't done so before.
2. Open the command palette and type `Package Control: Install Package`.
3. Type `OpenAI` and press `Enter`.
[!NOTE] Highly recommended complementary packages:
- https://github.com/SublimeText-Markdown/MarkdownCodeExporter
- https://sublimetext-markdown.github.io/MarkdownEditing
Usage
AI Assistance use case
ChatGPT mode works the following way:
1. Select some text, or even whole tabs, to include them in the request.
2. Run either the `OpenAI: Chat Model Select` or the `OpenAI: Chat Model Select With Tabs` command.
3. Type your request in the input window, if prompted.
4. The model prints its response to the output panel by default, but you can switch that to a separate tab with `OpenAI: Open in Tab`.
5. To get an existing chat in a new window, run `OpenAI: Refresh Chat`.
6. To reset the history, the `OpenAI: Reset Chat History` command comes to the rescue.
[!NOTE] It's suggested to bind at least `OpenAI: New Message`, `OpenAI: Chat Model Select` and `OpenAI: Show output panel` for the sake of convenience; you can do that in the plugin settings (see the Key bindings section below).
Chat history management
You can keep a separate chat history and assistant settings for a given project by appending the following snippet to its project settings (e.g. via Project -> Edit Project):
```json
{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "your_project_name"
        }
    }
}
```
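With distinct `cache_prefix` values, each project keeps its own chat history and assistant state even when the global plugin settings are shared.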
Additional request context management
You can add a few things to your request:
- a multi-line selection within a single file
- multiple files within a single view group

To do the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a newline).
To send whole file(s) along with the request, `super+button1` (click) on their tabs to make them all visible in a single view group, then run the `[New Message|Chat Model] with Sheets` command as shown on the screen below. Note that in the given example only `README.md` and `4.0.0.md` will be sent to the server, but not the content of the `AI chat`.

[!NOTE] It also doesn't matter whether a file persists on disk or is just a virtual buffer with some text in it: if it's selected, its content will be sent either way.
Image handling
Image handling can be triggered with the `OpenAI: Handle Image` command.

It expects an absolute path to the image to be selected in a buffer or stored in the clipboard when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, an instruction can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.

[!NOTE] Currently the plugin expects a link, or a list of links separated by newlines, to be selected in the buffer or stored in the clipboard.
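For example, the selected text (or clipboard content) might look like this, one absolute path per line (the second path is purely illustrative):

```
/Users/username/Documents/Project/image.png
/Users/username/Documents/Project/screenshot.jpg
```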
In-buffer llm use case
Phantom use case
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
1. Set `"prompt_mode": "phantom"` for the AI assistant in its settings.
2. [optional] Select some text to pass as context to manipulate with.
3. Hit `OpenAI: New Message` or `OpenAI: Chat Model Select` and ask whatever you'd like in the popup input pane.
4. The phantom will appear below the cursor position, or at the beginning of the selection, while the LLM answer is streaming.
5. You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
6. You can hit `ctrl+c` to stop prompting, same as in `panel` mode.
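For reference, a minimal assistant entry with phantom output could look something like the sketch below; `prompt_mode` and `chat_model` are settings named in this README, while the `name` key and the model value are illustrative assumptions.

```json
{
    "name": "Phantom assistant",  // hypothetical label, adjust to your setup
    "chat_model": "gpt-4o-mini",  // illustrative model choice
    "prompt_mode": "phantom"      // render the streamed answer inline as a phantom
}
```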
[!IMPORTANT] Note that this is a standalone mode, i.e. the existing chat history won't be sent to the server with such a request.
[!NOTE] A more detailed manual, including various assistant configuration examples, can be found within the plugin settings.
[!WARNING] The following in-buffer commands are deprecated and will be removed in the 5.0 release.
1. [DEPRECATED] You can pick one of the following modes: `append`, `replace`, `insert`. They're quite self-descriptive. They should be set up in the assistant settings to take effect.
2. [DEPRECATED] Select some text to manipulate with (the commands are useless otherwise) and hit `OpenAI: New Message`.
3. [DEPRECATED] The plugin will respond accordingly, appending, replacing or inserting the text.
Other features
Open Source models support (llama.cpp, ollama)
1. Replace the `"url"` setting of a given model to point at whatever host your server is running on (e.g. `"http://localhost:8080"`).
2. [Optional] Provide a `"token"` if your provider requires one (temporarily mandatory, see the warning below).
3. Tweak `"chat_model"` to a model of your choice and you're set.
[!WARNING] Due to a known issue, a token value of 10 or more characters is currently required even for unsecured servers. More details here.
[!NOTE] You can set both `url` and `token` either globally or per assistant instance, and are thus able to switch freely between closed-source and open-source models within a single session.
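Putting those three settings together, a per-assistant override for a local server might look like the following sketch; the URL reuses the example above, and the model name is an illustrative assumption.

```json
{
    "url": "http://localhost:8080",  // your local llama.cpp or Ollama endpoint
    "token": "0123456789",           // dummy 10+ character token, see the warning above
    "chat_model": "llama3.1"         // illustrative name, use whatever your server serves
}
```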
Settings
The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings via `Preferences` -> `Package Settings` -> `OpenAI` -> `Settings` and paste your API key into the token property, as follows:
```json
{
    "token": "sk-your-token",
}
```
Advertisement disabling
To disable the advertisement, add an `"advertisement": false` line to the settings of every assistant where you want it disabled.
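For example (a sketch; the line goes inside the relevant assistant's settings entry):

```json
{
    "advertisement": false
}
```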
Key bindings
You can bind keys for a given plugin command in `Preferences` -> `Package Settings` -> `OpenAI` -> `Key Bindings`. For example, you can bind a “New Message” command that includes the active tabs as context like this:
```json
{
    "keys": [ "super+k", "super+'" ],
    "command": "openai", // or "openai_panel"
    "args": { "files_included": true }
},
```
Proxy support
You can set it up by overriding the `proxy` property in the `OpenAI completion` settings as follows:
"proxy": {
"address": "127.0.0.1", // required
"port": 9898, // required
"username": "account",
"password": "sOmEpAsSwOrD"
}
Disclaimers
[!WARNING] All selected code will be sent to the OpenAI servers (if not using custom API provider) for processing, so make sure you have all necessary permissions to do so.
[!NOTE] This one was initially written at 80% by GPT-3.5 back then. I was there mostly for debugging purposes rather than digging into the ST API. It's pure magic, I swear!