Local LLM Hub

by takeshy

Chat with local LLMs (Ollama, LM Studio) with local embeddings RAG, file encryption, edit history, slash commands, and workflow automation.

24 stars · Updated 2d ago · MIT · Discovered via Obsidian Unofficial Plugins

Local LLM Hub for Obsidian

Your company's security policy blocks cloud APIs. But you refuse to give up AI-powered note automation.

Local LLM Hub brings the full power of Gemini Helper's workflow automation, RAG, MCP integration, and agent skills to a completely local environment. Ollama, LM Studio, vLLM, or AnythingLLM — your data never leaves your machine.

Workflow Execution


Why Local?

Every byte stays on your machine. No API keys sent to the cloud. No vault contents uploaded anywhere. This isn't a privacy "option" — it's the architecture.

| What | Where it stays |
| --- | --- |
| Chat history | Markdown files in your vault |
| RAG index | Local embeddings in workspace folder |
| LLM requests | localhost only (Ollama / LM Studio / vLLM / AnythingLLM) |
| MCP servers | Local child processes via stdio |
| Encrypted files | Encrypted/decrypted locally |
| Edit history | In-memory (cleared on restart) |

If you use Gemini Helper at home but need something for work — this is it. Same workflow engine, same UX, zero cloud dependency.


Workflow Automation — The Core Feature

Describe what you want in plain language. The AI builds the workflow. No YAML knowledge required.

Create Workflows & Skills with AI

Create Workflow with AI

  1. Open the Workflow tab → select + New (AI)
  2. Describe: "Convert the current page into an infographic and save it"
  3. Check "Create as agent skill" to generate an agent skill instead of a standalone workflow
  4. Click Generate — done
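Generated workflows are stored as YAML under the hood. Purely as a hypothetical sketch (the node types come from the node table below, but the field names and schema here are illustrative only — the real reference is WORKFLOW_NODES.md), the infographic request above might produce something roughly like:

```yaml
# Hypothetical workflow definition — not the plugin's actual schema.
name: page-to-infographic
nodes:
  - id: read
    type: note-read          # read the active note
  - id: generate
    type: command            # ask the local LLM for infographic content
    prompt: "Turn this note into an infographic description: {{read.output}}"
  - id: save
    type: file-save          # write the result next to the source note
    path: "{{note.basename}}-infographic.md"
    content: "{{generate.output}}"
```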

Don't have a powerful local model? Click Copy Prompt, paste into Claude/GPT/Gemini, paste the response back, and click Apply.

Create Skill with External LLM

Modify with AI

Load any workflow, click AI Modify, describe the change. Reference execution history to debug failures.

Modify Workflow with AI

Visual Node Editor

23 node types across 12 categories:

| Category | Nodes |
| --- | --- |
| Variables | variable, set |
| Control | if, while |
| LLM | command |
| Data | http, json |
| Notes | note, note-read, note-search, note-list, folder-list, open |
| Files | file-explorer, file-save |
| Prompts | prompt-file, prompt-selection, dialog |
| Composition | workflow (sub-workflows) |
| RAG | rag-sync |
| Script | script (sandboxed JavaScript) |
| External | obsidian-command |
| Utility | sleep |

Workflow Panel

Event Triggers & Hotkeys

  • Event triggers — auto-run workflows on file create / modify / delete / rename / open
  • Hotkey support — assign keyboard shortcuts to any named workflow
  • Execution history — review past runs with step-by-step details

See WORKFLOW_NODES.md for the complete node reference.


AI Chat

Streaming chat with your local LLM. Thinking display, file attachments, @ mentions for vault notes, multiple sessions.

Chat with RAG

Vault Tools (Function Calling)

Models with function calling support (Qwen, Llama 3.1+, Mistral) can directly interact with your vault:

read_note · create_note · update_note · rename_note · create_folder · search_notes · list_notes · list_folders · get_active_note · propose_edit · execute_javascript

Three modes — All, No Search, Off — selectable from the input area.
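These tools follow the standard function-calling loop: the model emits a tool call, the plugin executes it locally against the vault, and the result is fed back into the conversation. A minimal sketch of the dispatch side — the tool names match the list above, but the in-memory "vault" and handler shapes are assumptions for illustration, not the plugin's actual code:

```typescript
// Hypothetical shape of a model-issued tool call.
type ToolCall = { name: string; arguments: Record<string, unknown> };

// A tiny in-memory stand-in for Obsidian's vault API.
const vault = new Map<string, string>([
  ["Ideas.md", "- build a local RAG pipeline"],
]);

// Route a tool call to a local handler and return its result as text.
function dispatchToolCall(call: ToolCall): string {
  switch (call.name) {
    case "read_note":
      return vault.get(call.arguments.path as string) ?? "(not found)";
    case "create_note":
      vault.set(call.arguments.path as string, call.arguments.content as string);
      return "created";
    case "list_notes":
      return [...vault.keys()].join("\n");
    default:
      return `unknown tool: ${call.name}`;
  }
}
```

In the real plugin each handler goes through Obsidian's file API, and results such as `propose_edit` are surfaced in the UI rather than returned as plain strings.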

Tool Settings

MCP Servers

Connect local MCP servers to extend the AI with external tools. MCP tools are merged with vault tools and routed via function calling — all running as local child processes.

Chat with MCP

RAG (Local Embeddings)

Index your vault with a local embedding model (e.g. nomic-embed-text). Relevant notes are automatically included as context. Everything computed and stored locally.
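Retrieval of this kind typically reduces to cosine similarity between the query embedding and each note's stored vector. A minimal sketch, assuming the embeddings have already been computed by the local model (the plugin's actual index format and ranking are not shown here):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the paths of the k notes most similar to the query embedding.
function topK(query: number[], index: Map<string, number[]>, k = 3): string[] {
  return [...index.entries()]
    .map(([path, vec]) => ({ path, score: cosine(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.path);
}
```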

Agent Skills

Inject reusable instructions into the system prompt via SKILL.md files. Activate per conversation. Skills can also expose workflows that the AI can invoke as tools during chat.

Create skills the same way as workflows — select + New (AI), check "Create as agent skill", and describe what you want. The AI generates both the SKILL.md instructions and the workflow.
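As a purely hypothetical sketch of the layout (the frontmatter fields here are invented for illustration — SKILLS.md documents the real format), a SKILL.md file might look roughly like:

```markdown
---
name: meeting-notes
description: Summarize meeting notes into action items
---

When the user asks for a meeting summary:

1. Extract attendees, decisions, and open questions.
2. Output action items as a task list, one checkbox per owner.
```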

Agent Skills

See SKILLS.md for details.

Slash Commands & Compact History

  • Custom prompt templates triggered by /
  • /compact to compress long conversations while preserving context

File Encryption

Password-protect sensitive notes. Encrypted files are invisible to AI chat tools but remain accessible to workflows after a password prompt — ideal for storing API keys or credentials.
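The README doesn't document the cipher, but local password-based file encryption of this kind is typically built from an authenticated cipher plus a password-derived key. A minimal illustrative sketch (AES-256-GCM with an scrypt-derived key via Node's crypto module — an assumption for illustration, not the plugin's actual implementation):

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Encrypt text with a password; output packs salt, IV, auth tag, and ciphertext.
function encrypt(plaintext: string, password: string): string {
  const salt = randomBytes(16);
  const key = scryptSync(password, salt, 32);      // derive a 256-bit key
  const iv = randomBytes(12);                      // GCM-recommended 96-bit IV
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([salt, iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

// Reverse the packing, re-derive the key, and verify the auth tag.
function decrypt(blob: string, password: string): string {
  const buf = Buffer.from(blob, "base64");
  const salt = buf.subarray(0, 16);
  const iv = buf.subarray(16, 28);
  const tag = buf.subarray(28, 44);
  const ciphertext = buf.subarray(44);
  const key = scryptSync(password, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Decryption with a wrong password fails the GCM tag check and throws, which is what makes the scheme safe to apply to credentials.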

Edit History

Automatic tracking of AI-made changes with diff view and one-click restore.


Setup

Requirements

Quick Start

  1. Install and start your LLM server
  2. Open plugin settings → select framework (Ollama / LM Studio / vLLM / AnythingLLM)
  3. Set the server URL (defaults pre-filled)
  4. Fetch and select your chat model
  5. Click Verify connection

LLM Settings

RAG Setup

  1. Enable RAG in settings
  2. Fetch and select the embedding model
  3. Configure target folders (optional — defaults to entire vault)
  4. Click Sync to build the index

RAG Settings

MCP Server Setup

  1. Settings → MCP servers → Add server
  2. Configure: name, command (e.g. npx), arguments, optional env vars
  3. Toggle on — connects automatically via stdio
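For example, a filesystem server (the real @modelcontextprotocol/server-filesystem package) could be configured with values like these — shown here as JSON for readability, though the plugin collects them as individual settings fields:

```json
{
  "name": "filesystem",
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/folder"],
  "env": {}
}
```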

MCP & Encryption Settings

Workspace Settings

Workspace Settings

Supported Frameworks

| Framework | Chat Endpoint | Streaming | Thinking | Function Calling |
| --- | --- | --- | --- | --- |
| Ollama | /api/chat (native) | Real-time | message.thinking field | tools parameter |
| LM Studio (OpenAI compatible) | /v1/chat/completions | SSE | `<think>` tags | tools parameter |
| vLLM | /v1/chat/completions | SSE | `<think>` tags | tools parameter |
| AnythingLLM | /v1/openai/chat/completions | SSE | `<think>` tags | tools parameter |
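For the OpenAI-compatible frameworks, the thinking display means separating inline `<think>…</think>` content from the visible answer once the response arrives. A minimal sketch of that split — the function name and shape are illustrative, not the plugin's actual code:

```typescript
// Split a model response into hidden reasoning and the visible answer.
// Assumes thinking is emitted inline as <think>...</think>, per the table above.
function splitThinking(text: string): { thinking: string; answer: string } {
  const m = text.match(/<think>([\s\S]*?)<\/think>/);
  if (!m) return { thinking: "", answer: text.trim() };
  return {
    thinking: m[1].trim(),
    answer: text.replace(m[0], "").trim(),
  };
}
```

Ollama's native endpoint makes this unnecessary, since it already delivers reasoning in a separate message.thinking field.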

Using Cloud LLMs (OpenAI, Gemini, etc.)

The "LM Studio (OpenAI compatible)" framework works with any OpenAI-compatible API endpoint, including cloud services:

| Service | Base URL | API Key |
| --- | --- | --- |
| OpenAI | https://api.openai.com | Your OpenAI API key |
| Google Gemini | https://generativelanguage.googleapis.com/v1beta/openai | Your Gemini API key |

RAG with cloud LLMs: Cloud LLMs cannot use local embedding models directly. To use RAG, configure the Embedding server URL in RAG settings to point to a local Ollama instance (e.g. http://localhost:11434) and select an embedding model like nomic-embed-text.


Installation

BRAT (Recommended)

  1. Install BRAT plugin
  2. Open BRAT settings → "Add Beta plugin"
  3. Enter: https://github.com/takeshy/obsidian-local-llm-hub
  4. Enable the plugin in Community plugins settings

Manual

  1. Download main.js, manifest.json, styles.css from releases
  2. Create local-llm-hub folder in .obsidian/plugins/
  3. Copy files and enable in Obsidian settings

From Source

```shell
git clone https://github.com/takeshy/obsidian-local-llm-hub
cd obsidian-local-llm-hub
npm install
npm run build
```

Relationship to Gemini Helper

This plugin is the local-only sibling of obsidian-gemini-helper. Same workflow engine, same UX patterns, but designed for environments where cloud APIs are not an option.

| | Gemini Helper | Local LLM Hub |
| --- | --- | --- |
| LLM Backend | Google Gemini API / CLI | Ollama / LM Studio / vLLM / AnythingLLM / OpenAI-compatible APIs |
| Data destination | Google servers | localhost only |
| Workflow engine | ✅ | ✅ (same architecture) |
| RAG | Google File Search | Local embeddings |
| MCP | ✅ | ✅ (stdio only) |
| Agent Skills | ✅ | ✅ |
| Image generation | ✅ (Gemini) | ❌ |
| Web search | ✅ (Google) | ❌ |
| Cost | Free / Pay-per-use | Free forever (your hardware) |

Choose Gemini Helper when you want cutting-edge cloud models. Choose Local LLM Hub when privacy is non-negotiable.
