
OpenAI completion

First-class Sublime Text AI assistant with o1, o3-mini, gpt-4.5 and ollama support!


Installs

  • Total 7K
  • Win 3K
  • Mac 3K
  • Linux 1K

Readme


[!WARNING] Package Control has not fetched any updates for two weeks now, and there's nothing I can do about that. If you want to use the latest version of this package, you have to clone it and install it manually.

OpenAI Sublime Text Plugin

tldr;

Cursor-level AI assistance for Sublime Text. I mean it.

Works with any OpenAI'ish API: llama.cpp server, ollama or whatever third-party LLM hosting. Claude API support is coming soon.

[!NOTE] 5.0.0 release is around the corner! Check out release notes for details.

Features

  • Chat mode powered by whatever model you'd like.
  • o3-mini and o1 support.
  • gpt-4.5-preview support.
  • llama.cpp server, ollama and all other OpenAI'ish API-compatible backends.
  • Dedicated chat histories and assistant settings per project.
  • Ability to send whole files or parts of them as additional context.
  • Phantoms: get non-disruptive inline answers from the model right in the view.
  • Markdown syntax with code language syntax highlighting (Chat mode only).
  • Server-Sent Events (SSE) streaming support.
  • Various status bar info: model name, mode, sent/received tokens.
  • Proxy support.

Requirements

  • Sublime Text 4
  • llama.cpp or ollama installed, OR
  • An API key for a remote LLM service provider, e.g. OpenAI
  • Anthropic API key [coming soon].

Installation

  1. Install the Sublime Text Package Control plugin if you haven't done this before.
  2. Open the command palette and type Package Control: Install Package.
  3. Type OpenAI and press Enter.

[!NOTE] Highly recommended complementary packages:
  • https://github.com/SublimeText-Markdown/MarkdownCodeExporter
  • https://sublimetext-markdown.github.io/MarkdownEditing

Usage

AI Assistance use case

ChatGPT mode works the following way:

  1. Select some text or even whole tabs to include them in the request.
  2. Run either the OpenAI: Chat Model Select or the OpenAI: Chat Model Select With Tabs command.
  3. Type your request in the input window, if prompted.
  4. The model prints its response in the output panel by default, but you can switch to a separate tab with OpenAI: Open in Tab.
  5. To get an existing chat in a new window, run OpenAI: Refresh Chat.
  6. To reset the history, the OpenAI: Reset Chat History command comes to the rescue.

[!NOTE] For convenience, you're encouraged to bind at least OpenAI: New Message, OpenAI: Chat Model Select and OpenAI: Show output panel; you can do that in the plugin settings.

Chat history management

You can keep a separate chat history and assistant settings for a given project by appending the following snippet to its project settings:

{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}

Additional request context management

You can add a few things to your request:

  • a multi-line selection within a single file
  • multiple files within a single View Group

To do the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk separated by a newline).

To append whole file(s) to a request, super+click (super+button1) their tabs so that they all become visible in a single view group, then run the OpenAI: Add Sheets to Context command. Sheets can be deselected with the same command.

You can check the number of added sheets in the status bar and in the preview section of the OpenAI: Chat Model Select command.

Image handling

Image handling can be invoked with the OpenAI: Handle Image command.

It expects an absolute path to an image to be selected in the buffer or stored in the clipboard when the command is called (something like /Users/username/Documents/Project/image.png). In addition, an instruction can be passed via the input panel to process the image with special treatment. Only png and jpg images are supported.

[!NOTE] Currently the plugin expects a link, or a list of links separated by newlines, to be selected in the buffer or stored in the clipboard.
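For instance, before running the command, the buffer selection (or clipboard) could hold something like the following — these paths are purely illustrative:

```
/Users/username/Documents/Project/diagram.png
/Users/username/Documents/Project/screenshot.jpg
```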

In-buffer llm use case

Phantom use case

Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.

  1. [optional] Select some text to pass as context to work with.
  2. Pick Phantom as the output mode in the OpenAI: Chat Model Select quick panel.
  3. You can apply actions to the llm output; they're quite self-descriptive and follow the behavior of the deprecated in-buffer commands.
  4. You can hit ctrl+c to stop streaming, just as in panel mode.

Other features

Open Source models support (llama.cpp, ollama)

  1. Point the "url" setting of a given model to whatever host your server is running on (e.g. http://localhost:8080/v1/chat/completions).
  2. Provide a "token" if your provider requires one.
  3. Tweak "chat_model" to a model of your choice and you're set.
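The steps above boil down to something like the following assistant entry — a sketch assuming a local llama.cpp or ollama server; the name and model fields are just examples, use whatever your server actually serves:

```
{
    "name": "Local model", // illustrative
    "url": "http://localhost:8080/v1/chat/completions",
    "token": "", // only needed if your provider requires one
    "chat_model": "llama3.1" // example; any model your server hosts
}
```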

[!NOTE] You can set both url and token either globally or per assistant instance, so you can freely switch between closed-source and open-source models within a single session.

Settings

The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings via Preferences -> Package Settings -> OpenAI -> Settings and paste your API key into the token property, as follows:

{
    "token": "sk-your-token"
}

Advertisement disabling

To disable advertisements, add an "advertisement": false line to the settings of each assistant where you want them disabled.
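A minimal sketch of such an assistant entry (all other fields elided):

```
{
    "advertisement": false
}
```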

Key bindings

You can bind keys for any plugin command in Preferences -> Package Settings -> OpenAI -> Key Bindings. For example, you can bind the "New Message" command, including active tabs as context, like this:

{
    "keys": [ "super+k", "super+'" ],
    "command": "openai", // or "openai_panel"
    "args": { "files_included": true }
},
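The comment above mentions "openai_panel"; binding it works the same way — the key chord below is just an illustrative example:

```
{
    "keys": [ "super+k", "super+o" ],
    "command": "openai_panel"
},
```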

Proxy support

You can set it up by overriding the proxy property in the OpenAI completion settings, as follows:

"proxy": {
    "address": "127.0.0.1", // required
    "port": 9898, // required
    "username": "account",
    "password": "sOmEpAsSwOrD"
}

Disclaimers

[!WARNING] All selected code will be sent to the OpenAI servers (unless you use a custom API provider) for processing, so make sure you have all the necessary permissions to do so.

[!NOTE] Dedicated to GPT-3.5, the one who initially wrote about 80% of this back then. It felt like pure magic!