Lingxi
🧠 Skill-driven AI Chat Plugin — Make Obsidian Your AI Second Brain
Chat with AI in the Obsidian sidebar, drive your creative workflow with Skill templates, and automatically archive conversations as structured notes.
Optimized for Chinese LLMs. All data stored locally in your Vault — zero cloud dependency.
📖 Table of Contents
- ✨ Features
- 📸 Preview
- 🚀 Getting Started
- 🤖 LLM Integration Guide
- 🎯 Scenes & Skill System
- 🖼️ Image Input
- ☁️ Multi-device Sync: Remotely Save + Tencent Cloud COS
- ⚙️ Settings Overview
- 🛠️ Developer Guide
- ❓ FAQ
- 📄 License
✨ Features
| Feature | Description |
|---|---|
| 🗣️ Chat & Archive | Conversations are automatically archived as structured notes |
| ⚡ Skill-driven | Define AI behavior through Markdown Skill templates — flexible and extensible |
| 🎭 Scene Management | Organize Skills and Rules by scene, with intelligent scoring-based matching |
| 🛠️ Everything via Chat | Create, update, delete Skills and Scenes through natural conversation — powered by Function Calling |
| 💾 Conversation Persistence | Chat history is auto-saved to disk and seamlessly restored after restart |
| 🇨🇳 Chinese LLM First | Built-in support for DeepSeek / Qwen / Doubao / Kimi / Zhipu — ready out of the box |
| 🖼️ Image Recognition | Paste or drag images to send to vision models for analysis |
| 📡 Streaming Output | Real-time typewriter-style AI responses, with stop generation support |
| 🔒 Fully Local Data | All data stored in your Obsidian Vault — zero cloud dependency |
| 📱 Multi-device Sync | Sync across desktop and mobile with Remotely Save + COS |
| 🔍 Knowledge Retrieval (RAG) | Automatically retrieves relevant knowledge from your Vault notes, enabling AI to answer based on your notes |
📸 Preview
Chat Interface
📷 Coming soon: Chat interface screenshot
Scene Switching
📷 Coming soon: Scene switching screenshot
Settings Panel
📷 Coming soon: Settings panel screenshot
🚀 Getting Started
Installation
Option 1: Community Plugin Marketplace (Recommended)
- Open Obsidian → Settings → Community Plugins
- Turn off Restricted Mode (if enabled)
- Click Browse, search for "Lingxi"
- Click Install, then Enable
Option 2: Manual Installation
- Go to GitHub Releases and download the latest `main.js`, `styles.css`, and `manifest.json`
- Create the plugin directory in your Vault and place the files:

  ```
  YourVault/.obsidian/plugins/lingxi/
  ├── main.js
  ├── styles.css
  └── manifest.json
  ```

- Restart Obsidian → Settings → Community Plugins → Enable "Lingxi"
Option 3: BRAT Installation (Beta Testing)
- Install the BRAT plugin
- Go to BRAT Settings → Add Beta Plugin → Enter this project's GitHub repo URL
- BRAT will automatically download and install the latest version
Configure Model API Key
- Open Obsidian → Settings → Lingxi
- Expand the model provider you want to use (e.g., DeepSeek)
- Enter your API Key (see the guides below for each model)
- Click "Test Connection" to verify ✅
- Select the configured model in the "Default Model" section
💡 You only need one configured model to get started. We recommend starting with DeepSeek for the best cost-performance ratio.
🤖 LLM Integration Guide
This plugin supports all LLM services compatible with the OpenAI API format. Below are detailed integration guides for 5 major Chinese LLM providers.
1. DeepSeek
🌟 Recommended — Excellent cost-performance ratio, strong Chinese language capabilities, 128K context
Get API Key:
- Visit DeepSeek Platform
- Register/Login
- Go to "API Keys" page
- Click "Create API Key", copy and save
Plugin Configuration:
| Setting | Value |
|---|---|
| Base URL | https://api.deepseek.com/v1 |
| Default Model | deepseek-chat |
| API Key | Your created key |
Available Models:
| Model | Description | Context |
|---|---|---|
| deepseek-chat | General chat, best value | 128K |
| deepseek-reasoner | Deep reasoning model | 128K |
Pricing: New users receive free credits. Regular usage costs approximately ¥1/million tokens (input) — extremely affordable.
2. Qwen (Alibaba Cloud)
Alibaba Cloud's LLM, multiple tiers available, generous free quota
Get API Key:
- Visit Alibaba Cloud Dashscope
- Register/Login with an Alibaba Cloud account
- Activate "Model Service Lingji"
- Go to "API-KEY Management" page
- Click "Create New API-KEY", copy and save
Plugin Configuration:
| Setting | Value |
|---|---|
| Base URL | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Default Model | qwen-plus |
| API Key | Your created key |
Available Models:
| Model | Description | Context |
|---|---|---|
| qwen-turbo | Lightweight and fast | 128K |
| qwen-plus | Balanced choice | 128K |
| qwen-max | Most powerful | 32K |
| qwen-vl-plus | Vision model (image support) | 32K |
Pricing: Free quota available. qwen-turbo has the most generous free tier.
3. Doubao (ByteDance)
ByteDance's LLM, accessed through Volcengine Ark platform
Get API Key:
- Visit Volcengine Console
- Register/Login with a Volcengine account
- Activate "Model Inference" service
- Create a key in "API Key Management"
- Create an inference endpoint in "Online Inference", note the Endpoint ID
⚠️ Note: Doubao uses your Endpoint ID (like `ep-xxx`) as the model name, not the generic model name.
Plugin Configuration:
| Setting | Value |
|---|---|
| Base URL | https://ark.cn-beijing.volces.com/api/v3 |
| Default Model | Your Endpoint ID (e.g., ep-20240xxxxx) |
| API Key | Your created key |
Available Models:
| Model | Description |
|---|---|
| Doubao-pro-32k | General chat, 32K context |
| Doubao-pro-128k | Long text chat, 128K context |
| Doubao-lite-32k | Lightweight and fast |
| Doubao-vision-pro | Vision model (image support) |
Pricing: New users receive free credits, competitively priced.
4. Kimi (Moonshot AI)
Known for ultra-long context support up to 200K tokens, ideal for long document processing
Get API Key:
- Visit Moonshot Platform
- Register/Login
- Go to "API Key Management"
- Click "Create", copy and save
Plugin Configuration:
| Setting | Value |
|---|---|
| Base URL | https://api.moonshot.cn/v1 |
| Default Model | moonshot-v1-32k |
| API Key | Your created key |
Available Models:
| Model | Description | Context |
|---|---|---|
| moonshot-v1-8k | Lightweight and fast | 8K |
| moonshot-v1-32k | Balanced choice | 32K |
| moonshot-v1-128k | Ultra-long context | 128K |
Pricing: Free quota available, pay-as-you-go.
5. Zhipu GLM
Tsinghua-backed LLM, GLM-4 series with comprehensive capabilities
Get API Key:
- Visit Zhipu Open Platform
- Register/Login
- Go to "API Keys" page
- Click "Create API Key", copy and save
Plugin Configuration:
| Setting | Value |
|---|---|
| Base URL | https://open.bigmodel.cn/api/paas/v4 |
| Default Model | glm-4 |
| API Key | Your created key |
Available Models:
| Model | Description | Context |
|---|---|---|
| glm-4 | Flagship model | 128K |
| glm-4-flash | Fast version (free) | 128K |
| glm-4v | Vision model (image support) | 2K |
Pricing: glm-4-flash is completely free, great for daily use.
6. Other Models
This plugin is compatible with all OpenAI API format model services, including but not limited to:
| Service | Base URL Example | Description |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | Requires international network access |
| OpenRouter | https://openrouter.ai/api/v1 | Multi-model aggregation proxy |
| Ollama (Local) | http://localhost:11434/v1 | Locally deployed open-source models |
| LM Studio (Local) | http://localhost:1234/v1 | Locally deployed open-source models |
| Custom Proxy | Your proxy URL | Self-hosted API proxy service |
Simply modify any provider's Base URL and Model Name in the settings panel to connect.
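Because every provider above speaks the same protocol, one request path covers them all. Below is a minimal sketch of what an OpenAI-compatible chat call looks like — the `baseUrl`, `apiKey`, and model name are whatever you configured in settings; this is illustrative, not the plugin's actual code:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body every OpenAI-compatible endpoint accepts.
function buildChatRequest(model: string, messages: ChatMessage[], stream = false) {
  return {
    model,
    messages,
    stream, // set true for typewriter-style streaming output
  };
}

// POST to {baseUrl}/chat/completions and return the reply text.
async function chat(baseUrl: string, apiKey: string, body: object): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  const data: any = await res.json();
  return data.choices[0].message.content;
}
```

Swapping providers means changing only the `baseUrl` and `model` arguments — which is exactly why the settings panel exposes just those two fields.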
🎯 Scenes & Skill System
Lingxi's core feature is its Scene-based Skill System — define AI roles, behaviors, and output formats through Markdown files, organized by scenes.
Directory Structure
```
YourVault/
├── skills-scenes/                      # Scene root directory (configurable in settings)
│   ├── _global_rules/                  # 🌐 Global Rules (shared across all scenes)
│   │   └── general-assistant-rules.md
│   ├── _scenes_index.md                # 📋 Scene index (helps AI auto-match scenes)
│   │
│   ├── 系统管理/                       # ⚙️ Scene: System Management (built-in)
│   │   ├── _scene.md                   # Scene description
│   │   ├── _rules/
│   │   └── _skills/
│   │       └── 系统/
│   │           └── skill-manager.md    # Built-in Skill Manager (CRUD Skills & Scenes via chat)
│   │
│   └── content-creation/               # 📱 Scene: Content Creation
│       ├── _scene.md                   # Scene description + workflow + Skills overview
│       ├── _rules/                     # Scene-level rules
│       │   ├── persona.md
│       │   ├── methodology.md
│       │   └── ...
│       └── _skills/                    # Scene-level Skills
│           ├── topic-management/
│           │   ├── topic-discovery.md
│           │   └── topic-refinement.md
│           ├── content-writing/
│           │   ├── draft-generation.md
│           │   └── content-rewriting.md
│           └── ...
│
├── content-creation/                   # 📁 Archive: content creation scene output (auto-created)
│   ├── topic-management/               # ← Skill output_folder
│   ├── drafts/
│   └── ...
│
├── learning/                           # 📁 Archive: learning scene output
│   ├── 深度反思/
│   ├── 知识卡片/
│   └── ...
│
└── AI笔记/                             # 📁 Archive: default folder for unmatched content
```
System Prompt Loading Strategy
The plugin uses a multi-layer stacking approach to build the complete System Prompt:
```
┌───────────────────────────────┐
│ Layer 0: Current Time         │ ← Auto-injected (prevents AI time hallucination)
│ Layer 1: Global Rules         │ ← _global_rules/*.md (loaded for every conversation)
│ Layer 2: Scene Rules          │ ← content-creation/_rules/*.md (loaded when scene is selected)
│ Layer 3: Skill Prompt         │ ← topic-discovery.md's System Prompt section
│ Layer 4: RAG Knowledge Context│ ← Related content retrieved from Vault notes
└───────────────────────────────┘
                ↓
       Final System Prompt
```
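The stacking above amounts to concatenating the non-empty layers in order. A minimal illustration (the layer labels and separator here are assumptions, not the plugin's exact format):

```typescript
// Join the prompt layers top-to-bottom, skipping layers that are
// empty for this conversation (e.g. no scene selected, no RAG hits).
function buildSystemPrompt(layers: { label: string; content: string }[]): string {
  return layers
    .filter((l) => l.content.trim().length > 0)
    .map((l) => `# ${l.label}\n${l.content}`)
    .join("\n\n");
}

const prompt = buildSystemPrompt([
  { label: "Current Time", content: "2024-01-01 09:00" },
  { label: "Global Rules", content: "Always answer in the user's language." },
  { label: "Scene Rules", content: "" }, // no scene selected — dropped
  { label: "Skill Prompt", content: "You are a senior content strategist..." },
]);
```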
Skill File Specification
Each Skill is a .md file containing Frontmatter metadata and a System Prompt:
```markdown
---
name: Topic Discovery
description: Help you discover content ideas from trends, comments, and competitors
trigger_keywords: ["topic", "ideas", "trends"]
category: Topic Management
output_folder: 选题管理
output_template: note
model_preference: text
---

## System Prompt

You are a senior content strategist...
(Full system prompt)

## Output Format

Please output in the following structure:
1. Topic Direction
2. Specific Angle
3. Viral Potential Score
```
Frontmatter Field Reference:
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | ✅ | Display name of the Skill |
| description | string | ✅ | Brief description |
| trigger_keywords | string[] | ❌ | Keywords for auto-matching |
| category | string | ❌ | Category name (used for panel grouping) |
| output_folder | string | ❌ | Archive target folder |
| output_template | card / note / raw | ❌ | Output template type |
| model_preference | text / vision / any | ❌ | Model preference |
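For a sense of how such a Frontmatter block is read, here is a minimal sketch — a real implementation would use a proper YAML parser; this illustrative version handles only the simple `key: value` and `key: ["a", "b"]` forms shown above:

```typescript
// Extract the YAML block between the leading "---" fences and parse
// each line as either a plain string or a quoted string array.
function parseFrontmatter(md: string): Record<string, string | string[]> {
  const match = md.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fields: Record<string, string | string[]> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim();
    const raw = line.slice(idx + 1).trim();
    fields[key] = raw.startsWith("[")
      ? raw.slice(1, -1).split(",").map((s) => s.trim().replace(/^"|"$/g, ""))
      : raw;
  }
  return fields;
}

const meta = parseFrontmatter(`---
name: Topic Discovery
trigger_keywords: ["topic", "ideas", "trends"]
output_folder: 选题管理
---

## System Prompt
...`);
```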
Using Skills
Method 1: Auto-matching by Keywords
Simply type a message containing Skill keywords (e.g., "Help me find trending topics for today"), and the plugin will automatically match the "Topic Discovery" Skill, responding with the corresponding System Prompt and output format.
Method 2: Scene Quick Entry
Click scene buttons on the welcome page (e.g., "📱 Content Creation") to quickly enter the corresponding scene context. Scene Rules and Skills are activated automatically.
Keyword Matching Mechanism:
The plugin uses a scoring mechanism for keyword matching, rather than simple first-match:
- More matching keywords in user message = higher score
- Longer keywords have higher weight (avoids short-word false triggers)
- System automatically selects the highest-scoring Scene/Skill
- Minimum keyword length is 2 characters; single characters won't trigger matching
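The scoring rules above can be illustrated with a small sketch — the plugin's exact weighting may differ; here longer keywords simply contribute more points:

```typescript
// Score a user message against one Skill's keyword list:
// multiple hits accumulate, longer keywords weigh more, and
// keywords shorter than 2 characters never trigger.
function scoreMatch(message: string, keywords: string[]): number {
  let score = 0;
  for (const kw of keywords) {
    if (kw.length < 2) continue;           // single characters are ignored
    if (message.includes(kw)) score += kw.length; // longer keyword = higher weight
  }
  return score;
}

// Pick the highest-scoring item instead of the first match.
function pickBest<T extends { keywords: string[] }>(message: string, items: T[]): T | null {
  let best: T | null = null;
  let bestScore = 0;
  for (const item of items) {
    const s = scoreMatch(message, item.keywords);
    if (s > bestScore) {
      bestScore = s;
      best = item;
    }
  }
  return best;
}

const skills = [
  { name: "Topic Discovery", keywords: ["topic", "ideas", "trends"] },
  { name: "Draft Generation", keywords: ["draft", "write"] },
];
const matched = pickBest("Help me find trending topics for today", skills);
```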
Managing Skills and Scenes via Chat
Lingxi includes a built-in Skill Manager (powered by Function Calling) that lets you perform all management operations through natural language conversation, without manually creating files:
Supported Operations:
| Example | Operation |
|---|---|
| "Create a reading notes skill" | Guided skill creation |
| "Update keywords for topic discovery" | Update specific skill |
| "Delete XX skill" | Delete skill after confirmation |
| "List my skills" | List all skills |
| "Create a health management scene" | Create new scene |
| "Show all scenes" | List scene overview |
Guided Dialogue Flow:
```
User: Help me create a reading notes skill

Lingxi: I've designed this Skill for you, please check:

  📌 Basic Info
  - Name: Deep Reading Notes
  - Scene: 📚 Learning
  - Category: Reading Management
  - Description: Guide you through deep reading and output structured notes
  - Keywords: ["reading notes", "book notes", "deep reading"]

  🧠 System Prompt Preview
  > You are an experienced reading coach who guides users
  > through deep reading and produces structured notes...

  Reply "Confirm" to create, or tell me what to adjust 😊

User: Confirm

Lingxi: ✅ Skill "Deep Reading Notes" has been created!
        File: skills-scenes/学习/_skills/阅读管理/深度阅读笔记.md
```
💡 The Skill Manager automatically infers name, category, keywords, and designs professional System Prompts — just describe your needs and confirm.
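Under the hood, Function Calling means operations like these are described to the model as tools. Here is a hypothetical sketch of what a "create skill" tool definition could look like in the OpenAI tool format — the tool name and parameter schema are illustrative, not the plugin's actual schema:

```typescript
// Illustrative OpenAI-style tool definition: the model fills in
// these parameters from the conversation, and the plugin executes
// the corresponding file operation after user confirmation.
const createSkillTool = {
  type: "function" as const,
  function: {
    name: "create_skill",
    description: "Create a new Skill file in the current scene",
    parameters: {
      type: "object",
      properties: {
        name: { type: "string", description: "Display name of the Skill" },
        scene: { type: "string", description: "Target scene folder" },
        trigger_keywords: { type: "array", items: { type: "string" } },
        system_prompt: { type: "string", description: "Full System Prompt body" },
      },
      required: ["name", "scene", "system_prompt"],
    },
  },
};
```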
🖼️ Image Input
Send images to vision models for analysis:
- Paste images: press `Ctrl/Cmd + V` to paste images from the clipboard
- Drag & drop: drag image files into the input area
- Images are previewed as thumbnails; multiple images supported
- Images are sent to the model in base64 format
⚠️ Image functionality requires selecting a vision-capable model in "Default Vision Model", such as `qwen-vl-plus`, `glm-4v`, or `doubao-vision-pro`.
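For reference, an OpenAI-compatible vision request carries the image as a base64 data URL inside a multi-part user message. A minimal sketch (illustrative, not the plugin's actual code):

```typescript
// Package text + one PNG image into an OpenAI-compatible vision
// message: the binary is base64-encoded into a data URL.
// (Buffer is available in Obsidian's Electron/Node environment.)
function buildVisionMessage(text: string, pngBytes: Uint8Array) {
  const b64 = Buffer.from(pngBytes).toString("base64");
  return {
    role: "user" as const,
    content: [
      { type: "text", text },
      { type: "image_url", image_url: { url: `data:image/png;base64,${b64}` } },
    ],
  };
}
```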
☁️ Multi-device Sync: Remotely Save + Tencent Cloud COS
Use the Remotely Save plugin + Tencent Cloud Object Storage (COS) to sync your Obsidian Vault across desktop, mobile, and tablet in real-time.
Why COS
| Solution | Pros | Cons |
|---|---|---|
| iCloud | Native support | Apple devices only, no Android/Windows |
| OneDrive | Microsoft native | Slow in China, frequent sync failures |
| Tencent Cloud COS | Fast in China, stable, affordable | Requires simple setup |
| Alibaba Cloud OSS | Stable in China | Similar setup, also a good option |
Pricing: Personal note storage is minimal. COS standard storage costs about ¥0.1/GB/month — less than ¥1/month for 10GB of notes. New users typically get free quota.
Setup Steps
1. Create a COS Bucket at the Tencent Cloud COS Console
   - Set access permission to Private Read/Write ⚠️
   - Note the bucket name and region
2. Create API Credentials at Tencent Cloud API Key Management
   - Note the SecretId and SecretKey
3. Install & Configure Remotely Save
   - Install it from Obsidian Community Plugins
   - Set the remote service type to S3 or S3-compatible
   - Configure the S3 Endpoint: `cos.{your-region}.myqcloud.com`
   - Fill in your credentials and bucket name
   - Click "Check" to test the connection
4. Multi-device Usage
   - Install Obsidian + Remotely Save on each device
   - Configure the same COS connection info
   - The first sync will pull all data from the cloud

💡 Tip: Lingxi settings (including API Keys) are stored in `.obsidian/plugins/lingxi/data.json` and will sync to all devices via Remotely Save. Configure once, use everywhere.
Sync Tips
- Avoid editing the same file on multiple devices simultaneously to prevent conflicts
- Before first sync, ensure the new device's Vault is empty (or contains only default files)
- Plugin files (`.obsidian/plugins/`) are also synced, so all installed plugins transfer automatically
- If sync conflicts occur, Remotely Save keeps both versions for manual resolution
- Enable "Sync on Startup" to ensure you always get the latest data
⚙️ Settings Overview
| Category | Setting | Type | Default |
|---|---|---|---|
| Models | Provider API Keys | Password | Empty |
| | Provider Base URLs | Text | Pre-filled with official URLs |
| | Default Text Model | Dropdown | deepseek:deepseek-chat |
| | Default Vision Model | Dropdown | Empty |
| Scenes | Scene Root Directory | Text | skills-scenes |
| Archive | Default Archive Folder | Text | AI笔记 |
| | Auto-archive AI Replies | Toggle | On (archives when Skill matched or long reply) |
| Knowledge Retrieval | Enable RAG | Toggle | Off |
| | Embedding Provider | Dropdown | Empty |
| | Embedding Model | Text | text-embedding-v3 |
| | Retrieval Top K | Slider | 3 |
| | Similarity Threshold | Slider | 0.3 |
| Interface | Send Shortcut | Dropdown | Enter |
| | Streaming Output | Toggle | On |
| | Temperature | Slider | 0.7 |
| | Context Messages | Slider | 20 |
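To see how "Retrieval Top K" and "Similarity Threshold" interact, here is a minimal sketch of the retrieval step (illustrative — the plugin's actual ranking code may differ): chunks scoring below the threshold are discarded, the rest are ranked by cosine similarity, and the top K survive.

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Filter by threshold, rank by similarity, keep the top K texts.
function retrieve(
  query: number[],
  chunks: { text: string; vec: number[] }[],
  topK = 3,
  threshold = 0.3,
): string[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(query, c.vec) }))
    .filter((c) => c.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((c) => c.text);
}
```

Raising the threshold trades recall for precision; raising Top K injects more context at the cost of prompt length.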
🛠️ Developer Guide
Tech Stack
| Layer | Choice |
|---|---|
| Language | TypeScript |
| UI | React 18 |
| Model Integration | OpenAI-compatible API |
| Data Storage | Local Vault Markdown |
| Build Tool | esbuild |
Local Development
```bash
# Clone the project
git clone https://github.com/zzyong24/obsidian-lingxi.git
cd obsidian-lingxi

# Install dependencies
npm install

# Development mode (auto-rebuild on file changes)
npm run dev

# Production build
npm run build
```
Debugging
1. Create a symlink to your Vault's plugin directory:
   `ln -s /path/to/obsidian-lingxi /path/to/vault/.obsidian/plugins/lingxi`
2. Run `npm run dev`
3. In Obsidian, press `Cmd/Ctrl + R` to reload the plugin
4. Press `Cmd + Option + I` (macOS) or `Ctrl + Shift + I` (Windows) to open Developer Tools
Project Structure
```
src/
├── main.ts                     # Plugin entry point
├── types.ts                    # Global type definitions + default settings
├── constants.ts                # Constants
├── settings.ts                 # Settings management
├── providers/
│   ├── OpenAICompatible.ts     # OpenAI-compatible Provider (covers all models)
│   └── ProviderRegistry.ts     # Provider registration & management
├── skills/
│   ├── SceneManager.ts         # Scene-based Skill/Rules management (scoring match + incremental hot update)
│   ├── SkillFileOperator.ts    # Skill/Scene file CRUD operator
│   └── ToolCallHandler.ts      # Function Calling tool registration & execution
├── search/
│   ├── EmbeddingService.ts     # Embedding vectorization service
│   ├── VectorStore.ts          # Local vector storage (JSON)
│   └── RAGManager.ts           # RAG retrieval orchestrator
├── conversation/
│   └── ConversationManager.ts  # Conversation context management (with disk persistence)
├── archive/
│   └── AutoArchiver.ts         # Auto-archive to Vault
├── utils/
│   └── markdown.ts             # Markdown utility functions
└── ui/
    ├── ChatView.tsx            # Obsidian sidebar view
    ├── Chat.tsx                # Chat main component (with Tool Call loop & stop generation)
    ├── MessageBubble.tsx       # Message bubble
    ├── InputArea.tsx           # Input area (text + image + note reference)
    ├── ModelSelector.tsx       # Model switcher
    ├── SettingsPanel.tsx       # Settings panel (React)
    └── SettingsTab.tsx         # Obsidian PluginSettingTab wrapper
```
❓ FAQ
Q: "Please configure a model API Key first"?
Go to Settings → Lingxi, configure a valid API Key for at least one model provider, and select that provider in the "Default Model" section.
Q: Streaming output not working / response stuck?
- Try disabling "Streaming Output" in settings and use non-streaming mode
- Check network proxy settings (VPN may interfere)
- Verify your API Key is valid and has remaining quota
- Open the Developer Console (`Cmd + Option + I` on macOS, `Ctrl + Shift + I` on Windows) to check error logs
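For context when debugging streaming issues: an OpenAI-compatible streaming response is a Server-Sent Events stream of `data: {json}` lines terminated by `data: [DONE]`. A minimal sketch of extracting the text deltas (illustrative, not the plugin's actual parser):

```typescript
// Parse one SSE line from a streaming chat response.
// Returns the text delta, or null for non-data lines and [DONE].
function parseSSELine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return null;
  const json: any = JSON.parse(payload);
  return json.choices?.[0]?.delta?.content ?? null;
}
```

If a proxy buffers the whole response instead of forwarding these lines as they arrive, the UI appears stuck until the end — which is why disabling streaming or checking proxy settings often resolves it.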
Q: Skill files not being loaded?
- Confirm files are in the scene directory specified in settings (default `skills-scenes/`)
- Files must be in `.md` format
- Files must contain valid Frontmatter (a YAML header wrapped in `---`)
- Files must contain a `## System Prompt` section
- Open the Developer Console and check `[Lingxi]`-related logs
Q: Where are archived notes?
When a Skill is matched, archives go to sceneName/output_folder (e.g., content-creation/topic-management/). When no Skill is matched, archives go to the default archive folder (default AI笔记/). All paths are configurable.
Q: How to use locally deployed models (Ollama, etc.)?
In the settings panel, change any provider's Base URL to your local service address (e.g., http://localhost:11434/v1), as long as the service is compatible with the OpenAI API format.
Q: Remotely Save sync failed?
- Check if the COS SecretId / SecretKey are correct
- Verify the bucket name includes the APPID suffix (full name)
- Confirm the S3 Endpoint format is `cos.{region}.myqcloud.com`
- Check that your Tencent Cloud account is not in arrears (COS is suspended on an unpaid balance)
- Click Remotely Save's "Check" button to re-test the connection
Q: Plugin settings lost after multi-device sync?
Lingxi settings are stored in .obsidian/plugins/lingxi/data.json. Make sure Remotely Save is not excluding the .obsidian directory. Restart Obsidian after sync completes to load the new settings.
🙏 Acknowledgements
- Obsidian — Powerful local-first knowledge management tool
- Obsidian Copilot — Architectural inspiration for Lingxi
- Remotely Save — Core plugin for multi-device sync
- DeepSeek, Qwen, Doubao, Kimi, Zhipu — Excellent Chinese LLM providers
📄 License
If Lingxi helps you, please consider giving it a ⭐ Star!