VaultChat

by Kevin Pulikkottil
Multi-provider AI chat inside Obsidian with file reading, editing, creation, and deletion. Supports Anthropic (Claude), OpenAI, Google Gemini, OpenRouter, and Ollama.

API keys: You need a standard API key from each provider you want to use. Claude Code OAuth tokens (sk-ant-oat01-...) do not work; Anthropic blocks them from third-party API calls.


Features

  • Multi-provider - switch between Anthropic, OpenAI, Gemini, OpenRouter, and Ollama from a single dropdown
  • File reading - attach any vault file to the conversation with the + button; the AI sees the full contents
  • File editing - the AI proposes edits in a diff format; auto-applied with Confirm/Revert, or manual Apply (configurable)
  • File creation - ask the AI to create new notes in any folder
  • File deletion - the AI can propose file deletions with a double-confirmation safety prompt
  • Vault awareness - the AI sees your full file tree so it uses correct paths
  • Chat history - persistent sessions stored per-vault, grouped by date, searchable, and resumable
  • Stop button - cancel any streaming response mid-generation
  • Include current note - one-click toggle to send your active note as context
  • Streaming - real-time token streaming from all providers
  • Dynamic model lists - Ollama shows installed models; OpenRouter fetches all available models

Providers

| Provider | API key source | Notes |
| --- | --- | --- |
| Anthropic | console.anthropic.com | Uses x-api-key header |
| OpenAI | platform.openai.com | Bearer token |
| Google Gemini | aistudio.google.com | OpenAI-compatible endpoint |
| OpenRouter | openrouter.ai | Access 100+ models with one key |
| Ollama | No key needed | Runs locally; set your base URL |
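
The auth differences in the table above can be sketched as a small helper that builds the right headers per provider. This is an illustrative sketch, not VaultChat's actual code; the `authHeaders` function and the `anthropic-version` value are assumptions based on each provider's public API conventions.

```typescript
// Sketch: per-provider auth headers. Hypothetical helper, not the
// plugin's real implementation.
type Provider = "anthropic" | "openai" | "gemini" | "openrouter" | "ollama";

function authHeaders(provider: Provider, apiKey: string): Record<string, string> {
  switch (provider) {
    case "anthropic":
      // Anthropic uses a custom x-api-key header rather than Bearer auth
      return { "x-api-key": apiKey, "anthropic-version": "2023-06-01" };
    case "ollama":
      // Local server: no key needed
      return {};
    default:
      // OpenAI, Gemini (via its OpenAI-compatible endpoint), and
      // OpenRouter all accept standard Bearer auth
      return { Authorization: `Bearer ${apiKey}` };
  }
}
```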

Installation

From Obsidian (recommended)

  1. Open Settings > Community Plugins
  2. Click Browse and search for VaultChat
  3. Click Install, then Enable

Build from source

If you prefer to install manually or want to contribute:

git clone https://github.com/kpulik/VaultChat
cd VaultChat
npm install
npm run build

Then copy the built files into your vault:

mkdir -p /path/to/your/vault/.obsidian/plugins/VaultChat
cp main.js manifest.json styles.css /path/to/your/vault/.obsidian/plugins/VaultChat/

Restart Obsidian. The plugin will appear in Settings > Community Plugins.

Ollama setup

Ollama lets you run models locally. No API key, no cost, no data leaving your machine.

1. Install Ollama

Download from ollama.com and run the installer. On Mac it runs as a menu bar app that starts automatically.

2. Pull a model

Open Terminal and run one of these:

ollama pull llama3.2        # 2GB, good general purpose
ollama pull llama3.2:1b     # 1GB, fastest/smallest
ollama pull mistral         # 4GB, strong reasoning
ollama pull codellama       # 4GB, code-focused
ollama pull gemma3          # 5GB, Google's open model
ollama pull qwen2.5         # 4GB, strong multilingual support

To see what you have installed: ollama list

3. Open VaultChat

Select Ollama from the provider dropdown. The model list will auto-populate from your installed models. The base URL defaults to http://localhost:11434. Only change it if you're running Ollama on a different machine.

Note: Ollama must be running for the model list to load. If the dropdown shows "Fetch failed", open the Ollama app or run ollama serve in Terminal, then hit the refresh button.
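
Under the hood, a model list like this comes from Ollama's local REST API (GET /api/tags returns the installed models). A minimal sketch of that lookup, assuming the default base URL; the function names here are illustrative, not VaultChat's:

```typescript
// Sketch: list installed Ollama models via the local REST API.
// GET /api/tags responds with JSON like:
//   { "models": [{ "name": "llama3.2" }, { "name": "mistral" }] }

interface TagsResponse {
  models: { name: string }[];
}

// Pure helper: extract model names from the /api/tags payload
function parseModelNames(body: TagsResponse): string[] {
  return body.models.map((m) => m.name);
}

async function listOllamaModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  return parseModelNames((await res.json()) as TagsResponse);
}
```

If this request fails, the dropdown has nothing to show, which is why the "Fetch failed" state above asks you to start Ollama first.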


Usage

  • Click the bot icon in the left ribbon to open the chat panel
  • Select your provider and model from the dropdowns in the header
  • Use the + button to attach vault files as context. The AI reads their full contents
  • Check Include current note to also send your active note
  • Press Enter to send, Cmd+Enter for a new line
  • Click Stop to cancel a streaming response at any time
  • Hover over any assistant message to see Copy and Insert buttons
  • When the AI proposes file edits, you'll see a diff with Confirm/Revert (auto-apply mode) or Preview/Apply (manual mode)
  • File deletions always require a double confirmation before anything is removed
  • Click the menu icon in the header to open chat history. Sessions are grouped by date and show attached files
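
One way to picture the edit flow above: a proposed edit names an exact span to find and its replacement, and applying it fails if the file has changed underneath. This is a hypothetical shape for illustration; VaultChat's actual diff format may differ.

```typescript
// Sketch: applying a proposed search/replace edit to a note's content.
// The edit shape is hypothetical, not VaultChat's real format.
interface ProposedEdit {
  search: string;  // exact text expected to exist in the note
  replace: string; // text to substitute for it
}

function applyEdit(content: string, edit: ProposedEdit): string {
  if (!content.includes(edit.search)) {
    // The note changed since the AI proposed the edit; refuse to guess
    throw new Error("Proposed edit no longer matches the file");
  }
  // Replace only the first occurrence of the matched span
  return content.replace(edit.search, edit.replace);
}
```

Reverting is then just restoring the pre-edit content, which is what the Confirm/Revert buttons expose.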

Settings

Go to Settings > VaultChat to configure:

  • API keys for each provider (stored locally, obfuscated after entry)
  • Default model per provider
  • Custom base URL per provider (for proxies or self-hosted endpoints)
  • Context window (num_ctx) for Ollama - controls RAM usage (default 4096 tokens)
  • System prompt - customize the AI's behavior
  • Max tokens - maximum response length
  • Auto-apply edits - toggle between auto-apply (with Confirm/Revert) and manual Apply mode
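
The num_ctx setting maps onto Ollama's request options: /api/chat accepts an options.num_ctx field that sets the context window (and with it, RAM usage). A sketch of building such a request body; the builder function is illustrative, not VaultChat's actual payload code:

```typescript
// Sketch: an Ollama /api/chat request body with a configurable
// context window. Field names follow Ollama's API; the builder
// itself is a hypothetical illustration.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildOllamaRequest(model: string, messages: ChatMessage[], numCtx = 4096) {
  return {
    model,
    messages,
    stream: true,                 // stream tokens as they are generated
    options: { num_ctx: numCtx }, // larger values use more RAM
  };
}
```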

Development

npm run dev   # watch mode, rebuilds on every save

For live reloading inside Obsidian, install the Hot Reload community plugin and symlink the project folder into your vault's plugins directory:

ln -s /path/to/VaultChat /path/to/vault/.obsidian/plugins/VaultChat

Open Obsidian's developer tools with Cmd+Option+I to debug.

License

MIT
