
🇨🇳 中文文档

Lingxi

🧠 Skill-driven AI Chat Plugin — Make Obsidian Your AI Second Brain


Chat with AI in the Obsidian sidebar, drive your creative workflow with Skill templates, and automatically archive conversations as structured notes.
Optimized for Chinese LLMs. All data stored locally in your Vault — zero cloud dependency.




✨ Features

| Feature | Description |
| --- | --- |
| 🗣️ Chat & Archive | Conversations are automatically archived as structured notes |
| Skill-driven | Define AI behavior through Markdown Skill templates — flexible and extensible |
| 🎭 Scene Management | Organize Skills and Rules by scene, with intelligent scoring-based matching |
| 🛠️ Everything via Chat | Create, update, delete Skills and Scenes through natural conversation — powered by Function Calling |
| 💾 Conversation Persistence | Chat history is auto-saved to disk and seamlessly restored after restart |
| 🇨🇳 Chinese LLM First | Built-in support for DeepSeek / Qwen / Doubao / Kimi / Zhipu — ready out of the box |
| 🖼️ Image Recognition | Paste or drag images to send to vision models for analysis |
| 📡 Streaming Output | Real-time typewriter-style AI responses, with stop generation support |
| 🔒 Fully Local Data | All data stored in your Obsidian Vault — zero cloud dependency |
| 📱 Multi-device Sync | Sync across desktop and mobile with Remotely Save + COS |
| 🔍 Knowledge Retrieval (RAG) | Automatically retrieves relevant knowledge from your Vault notes, enabling AI to answer based on your notes |

📸 Preview

Chat Interface

📷 Coming soon: Chat interface screenshot

Scene Switching

📷 Coming soon: Scene switching screenshot

Settings Panel

📷 Coming soon: Settings panel screenshot


🚀 Getting Started

Installation

Option 1: Community Plugin Marketplace (Recommended)

  1. Open Obsidian → Settings → Community Plugins
  2. Turn off Restricted Mode (if enabled)
  3. Click Browse, search for "Lingxi"
  4. Click Install, then Enable

Option 2: Manual Installation

  1. Go to GitHub Releases and download the latest main.js, styles.css, and manifest.json

  2. Create the plugin directory in your Vault and place the files:

```
YourVault/.obsidian/plugins/lingxi/
├── main.js
├── styles.css
└── manifest.json
```

  3. Restart Obsidian → Settings → Community Plugins → Enable "Lingxi"

Option 3: BRAT Installation (Beta Testing)

  1. Install the BRAT plugin
  2. Go to BRAT Settings → Add Beta Plugin → Enter this project's GitHub repo URL
  3. BRAT will automatically download and install the latest version

Configure Model API Key

  1. Open Obsidian → Settings → Lingxi
  2. Expand the model provider you want to use (e.g., DeepSeek)
  3. Enter your API Key (see the guides below for each model)
  4. Click "Test Connection" to verify ✅
  5. Select the configured model in the "Default Model" section

💡 You only need to configure one model to get started. We recommend starting with DeepSeek for the best cost-performance ratio.


🤖 LLM Integration Guide

This plugin supports all LLM services compatible with the OpenAI API format. Below are detailed integration guides for 5 major Chinese LLM providers.

1. DeepSeek

🌟 Recommended — Excellent cost-performance ratio, strong Chinese language capabilities, 128K context

Get API Key:

  1. Visit DeepSeek Platform
  2. Register/Login
  3. Go to "API Keys" page
  4. Click "Create API Key", copy and save

Plugin Configuration:

| Setting | Value |
| --- | --- |
| Base URL | `https://api.deepseek.com/v1` |
| Default Model | `deepseek-chat` |
| API Key | Your created key |

Available Models:

| Model | Description | Context |
| --- | --- | --- |
| `deepseek-chat` | General chat, best value | 128K |
| `deepseek-reasoner` | Deep reasoning model | 128K |

Pricing: New users receive free credits. Regular usage costs approximately ¥1/million tokens (input) — extremely affordable.
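Because every provider in this guide speaks the OpenAI-compatible chat-completions format, the table above can be sanity-checked with a tiny request builder. This is an illustrative sketch — `buildChatRequest` and its types are not part of the plugin — but the URL and payload shape match what any OpenAI-compatible endpoint expects:

```typescript
// Build an OpenAI-compatible chat-completions request.
// buildChatRequest is an illustrative helper, not the plugin's API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
  stream = true,
) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // placeholder key below
      },
      body: JSON.stringify({ model, messages, stream }),
    },
  };
}

const req = buildChatRequest(
  "https://api.deepseek.com/v1",
  "sk-...", // placeholder
  "deepseek-chat",
  [{ role: "user", content: "Hello" }],
);
// req.url → "https://api.deepseek.com/v1/chat/completions"
```

Swapping the Base URL and model name (for example, to Qwen's compatible-mode endpoint) reuses the exact same shape.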


2. Qwen (Alibaba Cloud)

Alibaba Cloud's LLM, multiple tiers available, generous free quota

Get API Key:

  1. Visit Alibaba Cloud Dashscope
  2. Register/Login with an Alibaba Cloud account
  3. Activate "Model Service Lingji"
  4. Go to "API-KEY Management" page
  5. Click "Create New API-KEY", copy and save

Plugin Configuration:

| Setting | Value |
| --- | --- |
| Base URL | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| Default Model | `qwen-plus` |
| API Key | Your created key |

Available Models:

| Model | Description | Context |
| --- | --- | --- |
| `qwen-turbo` | Lightweight and fast | 128K |
| `qwen-plus` | Balanced choice | 128K |
| `qwen-max` | Most powerful | 32K |
| `qwen-vl-plus` | Vision model (image support) | 32K |

Pricing: Free quota available. qwen-turbo has the most generous free tier.


3. Doubao (ByteDance)

ByteDance's LLM, accessed through Volcengine Ark platform

Get API Key:

  1. Visit Volcengine Console
  2. Register/Login with a Volcengine account
  3. Activate "Model Inference" service
  4. Create a key in "API Key Management"
  5. Create an inference endpoint in "Online Inference", note the Endpoint ID

⚠️ Note: Doubao uses your Endpoint ID (like ep-xxx) as the model name, not the generic model name.

Plugin Configuration:

| Setting | Value |
| --- | --- |
| Base URL | `https://ark.cn-beijing.volces.com/api/v3` |
| Default Model | Your Endpoint ID (e.g., `ep-20240xxxxx`) |
| API Key | Your created key |

Available Models:

| Model | Description |
| --- | --- |
| Doubao-pro-32k | General chat, 32K context |
| Doubao-pro-128k | Long text chat, 128K context |
| Doubao-lite-32k | Lightweight and fast |
| Doubao-vision-pro | Vision model (image support) |

Pricing: New users receive free credits, competitively priced.


4. Kimi (Moonshot AI)

Known for ultra-long context support up to 200K tokens, ideal for long document processing

Get API Key:

  1. Visit Moonshot Platform
  2. Register/Login
  3. Go to "API Key Management"
  4. Click "Create", copy and save

Plugin Configuration:

| Setting | Value |
| --- | --- |
| Base URL | `https://api.moonshot.cn/v1` |
| Default Model | `moonshot-v1-32k` |
| API Key | Your created key |

Available Models:

| Model | Description | Context |
| --- | --- | --- |
| `moonshot-v1-8k` | Lightweight and fast | 8K |
| `moonshot-v1-32k` | Balanced choice | 32K |
| `moonshot-v1-128k` | Ultra-long context | 128K |

Pricing: Free quota available, pay-as-you-go.


5. Zhipu GLM

Tsinghua-backed LLM, GLM-4 series with comprehensive capabilities

Get API Key:

  1. Visit Zhipu Open Platform
  2. Register/Login
  3. Go to "API Keys" page
  4. Click "Create API Key", copy and save

Plugin Configuration:

| Setting | Value |
| --- | --- |
| Base URL | `https://open.bigmodel.cn/api/paas/v4` |
| Default Model | `glm-4` |
| API Key | Your created key |

Available Models:

| Model | Description | Context |
| --- | --- | --- |
| `glm-4` | Flagship model | 128K |
| `glm-4-flash` | Fast version (free) | 128K |
| `glm-4v` | Vision model (image support) | 2K |

Pricing: glm-4-flash is completely free, great for daily use.


6. Other Models

This plugin is compatible with all OpenAI API format model services, including but not limited to:

| Service | Base URL Example | Description |
| --- | --- | --- |
| OpenAI | `https://api.openai.com/v1` | Requires international network access |
| OpenRouter | `https://openrouter.ai/api/v1` | Multi-model aggregation proxy |
| Ollama (Local) | `http://localhost:11434/v1` | Locally deployed open-source models |
| LM Studio (Local) | `http://localhost:1234/v1` | Locally deployed open-source models |
| Custom Proxy | Your proxy URL | Self-hosted API proxy service |

Simply modify any provider's Base URL and Model Name in the settings panel to connect.


🎯 Scenes & Skill System

Lingxi's core feature is its Scene-based Skill System — define AI roles, behaviors, and output formats through Markdown files, organized by scenes.

Directory Structure

```
YourVault/
├── skills-scenes/                    # Scene root directory (configurable in settings)
│   ├── _global_rules/                # 🌐 Global Rules (shared across all scenes)
│   │   └── general-assistant-rules.md
│   ├── _scenes_index.md              # 📋 Scene index (helps AI auto-match scenes)
│   │
│   ├── 系统管理/                      # ⚙️ Scene: System Management (built-in)
│   │   ├── _scene.md                 # Scene description
│   │   ├── _rules/
│   │   └── _skills/
│   │       └── 系统/
│   │           └── skill-manager.md  # Built-in Skill Manager (CRUD Skills & Scenes via chat)
│   │
│   └── content-creation/             # 📱 Scene: Content Creation
│       ├── _scene.md                 # Scene description + workflow + Skills overview
│       ├── _rules/                   # Scene-level rules
│       │   ├── persona.md
│       │   ├── methodology.md
│       │   └── ...
│       └── _skills/                  # Scene-level Skills
│           ├── topic-management/
│           │   ├── topic-discovery.md
│           │   └── topic-refinement.md
│           ├── content-writing/
│           │   ├── draft-generation.md
│           │   └── content-rewriting.md
│           └── ...
│
├── content-creation/                 # 📁 Archive: content creation scene output (auto-created)
│   ├── topic-management/             #     ← Skill output_folder
│   ├── drafts/
│   └── ...
│
├── learning/                         # 📁 Archive: learning scene output
│   ├── 深度反思/
│   ├── 知识卡片/
│   └── ...
│
└── AI笔记/                           # 📁 Archive: default folder for unmatched content
```

System Prompt Loading Strategy

The plugin uses a multi-layer stacking approach to build the complete System Prompt:

```
┌─────────────────────────────────┐
│ Layer 0: Current Time           │  ← Auto-injected (prevents AI time hallucination)
│ Layer 1: Global Rules           │  ← _global_rules/*.md (loaded for every conversation)
│ Layer 2: Scene Rules            │  ← content-creation/_rules/*.md (loaded when scene is selected)
│ Layer 3: Skill Prompt           │  ← topic-discovery.md's System Prompt section
│ Layer 4: RAG Knowledge Context  │  ← Related content retrieved from Vault notes
└─────────────────────────────────┘
                ↓
        Final System Prompt
```
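The layering above amounts to concatenating the non-empty layers in order. A minimal sketch (the function name and the blank-line separator are assumptions, not the plugin's actual internals):

```typescript
// Stack prompt layers into one System Prompt, skipping empty layers.
// Illustrative helper; the plugin's real builder may differ.
function buildSystemPrompt(layers: {
  currentTime: string;    // Layer 0: auto-injected
  globalRules: string[];  // Layer 1: _global_rules/*.md
  sceneRules: string[];   // Layer 2: <scene>/_rules/*.md
  skillPrompt?: string;   // Layer 3: Skill's "## System Prompt" section
  ragContext?: string;    // Layer 4: retrieved note snippets
}): string {
  const parts = [
    `Current time: ${layers.currentTime}`,
    ...layers.globalRules,
    ...layers.sceneRules,
    layers.skillPrompt ?? "",
    layers.ragContext ?? "",
  ];
  // Drop empty layers, then join with blank lines.
  return parts.filter((p) => p.trim().length > 0).join("\n\n");
}

const prompt = buildSystemPrompt({
  currentTime: "2024-01-01 09:00",
  globalRules: ["Always answer in the user's language."],
  sceneRules: [],
  skillPrompt: "You are a senior content strategist...",
});
```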

Skill File Specification

Each Skill is a .md file containing Frontmatter metadata and a System Prompt:

```markdown
---
name: Topic Discovery
description: Help you discover content ideas from trends, comments, and competitors
trigger_keywords: ["topic", "ideas", "trends"]
category: Topic Management
output_folder: 选题管理
output_template: note
model_preference: text
---

## System Prompt

You are a senior content strategist...

(Full system prompt)

## Output Format

Please output in the following structure:

1. Topic Direction
2. Specific Angle
3. Viral Potential Score
```

Frontmatter Field Reference:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | string | | Display name of the Skill |
| `description` | string | | Brief description |
| `trigger_keywords` | string[] | | Keywords for auto-matching |
| `category` | string | | Category name (used for panel grouping) |
| `output_folder` | string | | Archive target folder |
| `output_template` | `card` / `note` / `raw` | | Output template type |
| `model_preference` | `text` / `vision` / `any` | | Model preference |
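A loader following this spec needs two things from each file: the YAML frontmatter block and the `## System Prompt` section. A minimal, illustrative parser — not the plugin's real one; frontmatter values are kept as raw strings here:

```typescript
// Minimal Skill-file reader: frontmatter wrapped in "---" plus a
// required "## System Prompt" section. Illustrative sketch only.
function parseSkillFile(
  source: string,
): { frontmatter: Record<string, string>; systemPrompt: string } | null {
  const match = source.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return null; // spec: frontmatter is required
  const frontmatter: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) frontmatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  const promptMatch = match[2].match(/## System Prompt\n([\s\S]*?)(?=\n## |$)/);
  if (!promptMatch) return null; // spec: section is required
  return { frontmatter, systemPrompt: promptMatch[1].trim() };
}

const sample = [
  "---",
  "name: Topic Discovery",
  "output_folder: 选题管理",
  "---",
  "",
  "## System Prompt",
  "",
  "You are a senior content strategist...",
].join("\n");
const skill = parseSkillFile(sample);
```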

Using Skills

Method 1: Auto-matching by Keywords

Simply type a message containing Skill keywords (e.g., "Help me find trending topics for today"), and the plugin will automatically match the "Topic Discovery" Skill, responding with the corresponding System Prompt and output format.

Method 2: Scene Quick Entry

Click scene buttons on the welcome page (e.g., "📱 Content Creation") to quickly enter the corresponding scene context. Scene Rules and Skills are activated automatically.

Keyword Matching Mechanism:

The plugin uses a scoring mechanism for keyword matching, rather than simple first-match:

  1. More matching keywords in user message = higher score
  2. Longer keywords have higher weight (avoids short-word false triggers)
  3. System automatically selects the highest-scoring Scene/Skill
  4. Minimum keyword length is 2 characters; single characters won't trigger matching
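Those four rules can be sketched in a few lines. This is an assumption-level sketch — the plugin's actual weighting may differ; here a keyword's length doubles as its weight:

```typescript
// Score a message against trigger keywords:
// more matches => higher score; longer keywords weigh more;
// keywords shorter than 2 characters never match.
function scoreKeywords(message: string, keywords: string[]): number {
  let score = 0;
  for (const kw of keywords) {
    if (kw.length < 2) continue;                  // rule 4: minimum length
    if (message.includes(kw)) score += kw.length; // rules 1–2: sum, length-weighted
  }
  return score;
}

// Rule 3: select the highest-scoring candidate.
function bestSkill<T extends { keywords: string[] }>(
  message: string,
  skills: T[],
): T | null {
  let best: T | null = null;
  let bestScore = 0;
  for (const s of skills) {
    const sc = scoreKeywords(message, s.keywords);
    if (sc > bestScore) {
      best = s;
      bestScore = sc;
    }
  }
  return best;
}
```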

Managing Skills and Scenes via Chat

Lingxi includes a built-in Skill Manager (powered by Function Calling) that lets you perform all management operations through natural language conversation, without manually creating files:

Supported Operations:

| Example | Operation |
| --- | --- |
| "Create a reading notes skill" | Guided skill creation |
| "Update keywords for topic discovery" | Update specific skill |
| "Delete XX skill" | Delete skill after confirmation |
| "List my skills" | List all skills |
| "Create a health management scene" | Create new scene |
| "Show all scenes" | List scene overview |

Guided Dialogue Flow:

```
User: Help me create a reading notes skill

Lingxi: I've designed this Skill for you, please check:

        📌 Basic Info
        - Name: Deep Reading Notes
        - Scene: 📚 Learning
        - Category: Reading Management
        - Description: Guide you through deep reading and output structured notes
        - Keywords: ["reading notes", "book notes", "deep reading"]

        🧠 System Prompt Preview
        > You are an experienced reading coach who guides users
        > through deep reading and produces structured notes...

        Reply "Confirm" to create, or tell me what to adjust 😊

User: Confirm

Lingxi: ✅ Skill "Deep Reading Notes" has been created!
        File: skills-scenes/学习/_skills/阅读管理/深度阅读笔记.md
```

💡 The Skill Manager automatically infers name, category, keywords, and designs professional System Prompts — just describe your needs and confirm.


🖼️ Image Input

Send images to vision models for analysis:

  • Paste images: Directly Ctrl/Cmd + V to paste images from clipboard
  • Drag & drop: Drag image files into the input area
  • Images are previewed as thumbnails; multiple images supported
  • Images are sent to the model in base64 format

⚠️ Image functionality requires selecting a vision-capable model in "Default Vision Model", such as qwen-vl-plus, glm-4v, or doubao-vision-pro.
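Under the hood, a pasted image ends up as a base64 data URL inside an OpenAI-style multimodal message. A sketch of that payload (hypothetical helper, not the plugin's actual code):

```typescript
// Wrap text plus base64-encoded images into an OpenAI-style vision message.
// Illustrative; the plugin's real message construction may differ.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

function buildVisionMessage(
  text: string,
  imagesBase64: string[],
  mime = "image/png",
) {
  const content: ContentPart[] = [{ type: "text", text }];
  for (const b64 of imagesBase64) {
    // Images travel inline as data URLs, per the base64 note above.
    content.push({ type: "image_url", image_url: { url: `data:${mime};base64,${b64}` } });
  }
  return { role: "user" as const, content };
}

const msg = buildVisionMessage("What is in this image?", ["iVBORw0KGgo="]);
```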


☁️ Multi-device Sync: Remotely Save + Tencent Cloud COS

Use the Remotely Save plugin + Tencent Cloud Object Storage (COS) to sync your Obsidian Vault across desktop, mobile, and tablet in real-time.

Why COS

| Solution | Pros | Cons |
| --- | --- | --- |
| iCloud | Native support | Apple devices only, no Android/Windows |
| OneDrive | Microsoft native | Slow in China, frequent sync failures |
| Tencent Cloud COS | Fast in China, stable, affordable | Requires simple setup |
| Alibaba Cloud OSS | Stable in China | Similar setup, also a good option |

Pricing: Personal note storage is minimal. COS standard storage costs about ¥0.1/GB/month — less than ¥1/month for 10GB of notes. New users typically get free quota.

Setup Steps

  1. Create a COS Bucket at Tencent Cloud COS Console

    • Set access permission to Private Read/Write ⚠️
    • Note the bucket name and region
  2. Create API Credentials at Tencent Cloud API Key Management

    • Note the SecretId and SecretKey
  3. Install & Configure Remotely Save

    • Install from Obsidian Community Plugins
    • Set remote service type to S3 or S3-compatible
    • Configure S3 Endpoint: cos.{your-region}.myqcloud.com
    • Fill in your credentials and bucket name
    • Click "Check" to test the connection
  4. Multi-device Usage

    • Install Obsidian + Remotely Save on each device
    • Configure the same COS connection info
    • First sync will pull all data from the cloud
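The endpoint string from step 3 is purely mechanical, so a one-line helper can prevent typos (illustrative; `ap-shanghai` is just an example region):

```typescript
// Build the S3-compatible endpoint for a Tencent Cloud COS region,
// following the "cos.{your-region}.myqcloud.com" format above.
function cosEndpoint(region: string): string {
  return `cos.${region}.myqcloud.com`;
}

// e.g. cosEndpoint("ap-shanghai") → "cos.ap-shanghai.myqcloud.com"
```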

💡 Tip: Lingxi settings (including API Keys) are stored in .obsidian/plugins/lingxi/data.json and will sync to all devices via Remotely Save. Configure once, use everywhere.

Sync Tips

  1. Avoid editing the same file on multiple devices simultaneously to prevent conflicts
  2. Before first sync, ensure the new device's Vault is empty (or contains only default files)
  3. Plugin files (.obsidian/plugins/) are also synced, so all installed plugins transfer automatically
  4. If sync conflicts occur, Remotely Save keeps both versions for manual resolution
  5. Enable "Sync on Startup" to ensure you always get the latest data

⚙️ Settings Overview

| Category | Setting | Type | Default |
| --- | --- | --- | --- |
| Models | Provider API Keys | Password | Empty |
| | Provider Base URLs | Text | Pre-filled with official URLs |
| | Default Text Model | Dropdown | `deepseek:deepseek-chat` |
| | Default Vision Model | Dropdown | Empty |
| Scenes | Scene Root Directory | Text | `skills-scenes` |
| Archive | Default Archive Folder | Text | `AI笔记` |
| | Auto-archive AI Replies | Toggle | On (archives when Skill matched or long reply) |
| Knowledge Retrieval | Enable RAG | Toggle | Off |
| | Embedding Provider | Dropdown | Empty |
| | Embedding Model | Text | `text-embedding-v3` |
| | Retrieval Top K | Slider | 3 |
| | Similarity Threshold | Slider | 0.3 |
| Interface | Send Shortcut | Dropdown | Enter |
| | Streaming Output | Toggle | On |
| | Temperature | Slider | 0.7 |
| | Context Messages | Slider | 20 |

🛠️ Developer Guide

Tech Stack

| Layer | Choice |
| --- | --- |
| Language | TypeScript |
| UI | React 18 |
| Model Integration | OpenAI-compatible API |
| Data Storage | Local Vault Markdown |
| Build Tool | esbuild |

Local Development

```bash
# Clone the project
git clone https://github.com/zzyong24/obsidian-lingxi.git
cd obsidian-lingxi

# Install dependencies
npm install

# Development mode (auto-rebuild on file changes)
npm run dev

# Production build
npm run build
```

Debugging

  1. Create a symlink to your Vault's plugin directory:

```bash
ln -s /path/to/obsidian-lingxi /path/to/vault/.obsidian/plugins/lingxi
```
  2. Run npm run dev

  3. In Obsidian, press Cmd/Ctrl + R to reload the plugin

  4. Press Cmd + Option + I (macOS) or Ctrl + Shift + I (Windows) to open Developer Tools

Project Structure

```
src/
├── main.ts                    # Plugin entry point
├── types.ts                   # Global type definitions + default settings
├── constants.ts               # Constants
├── settings.ts                # Settings management
├── providers/
│   ├── OpenAICompatible.ts    # OpenAI-compatible Provider (covers all models)
│   └── ProviderRegistry.ts    # Provider registration & management
├── skills/
│   ├── SceneManager.ts        # Scene-based Skill/Rules management (scoring match + incremental hot update)
│   ├── SkillFileOperator.ts   # Skill/Scene file CRUD operator
│   └── ToolCallHandler.ts     # Function Calling tool registration & execution
├── search/
│   ├── EmbeddingService.ts    # Embedding vectorization service
│   ├── VectorStore.ts         # Local vector storage (JSON)
│   └── RAGManager.ts          # RAG retrieval orchestrator
├── conversation/
│   └── ConversationManager.ts # Conversation context management (with disk persistence)
├── archive/
│   └── AutoArchiver.ts        # Auto-archive to Vault
├── utils/
│   └── markdown.ts            # Markdown utility functions
└── ui/
    ├── ChatView.tsx           # Obsidian sidebar view
    ├── Chat.tsx               # Chat main component (with Tool Call loop & stop generation)
    ├── MessageBubble.tsx      # Message bubble
    ├── InputArea.tsx          # Input area (text + image + note reference)
    ├── ModelSelector.tsx      # Model switcher
    ├── SettingsPanel.tsx      # Settings panel (React)
    └── SettingsTab.tsx        # Obsidian PluginSettingTab wrapper
```

❓ FAQ

Q: Seeing "Please configure a model API Key first"?

Go to Settings → Lingxi, configure a valid API Key for at least one model provider, and select that provider in the "Default Model" section.

Q: Streaming output not working / response stuck?
  1. Try disabling "Streaming Output" in settings and use non-streaming mode
  2. Check network proxy settings (VPN may interfere)
  3. Verify your API Key is valid and has remaining quota
  4. Open Developer Console (Cmd+Option+I) to check error logs
Q: Skill files not being loaded?
  1. Confirm files are in the scene directory specified in settings (default skills-scenes/)
  2. Files must be in .md format
  3. Files must contain valid Frontmatter (YAML header wrapped in ---)
  4. Files must contain a ## System Prompt section
  5. Open Developer Console and check [Lingxi] related logs
Q: Where are archived notes?

When a Skill is matched, archives go to sceneName/output_folder (e.g., content-creation/topic-management/). When no Skill is matched, archives go to the default archive folder (default AI笔记/). All paths are configurable.
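The routing described above is a simple two-branch decision; sketched as a hypothetical helper (folder names come from the example above):

```typescript
// Resolve the archive folder for a reply: sceneName/output_folder when
// a Skill matched, otherwise the default archive folder. Illustrative.
function resolveArchiveFolder(
  matched: { scene: string; outputFolder: string } | null,
  defaultFolder = "AI笔记",
): string {
  return matched ? `${matched.scene}/${matched.outputFolder}` : defaultFolder;
}

// With a matched Skill: "content-creation/topic-management"
// With no match: the default "AI笔记"
```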

Q: How to use locally deployed models (Ollama, etc.)?

In the settings panel, change any provider's Base URL to your local service address (e.g., http://localhost:11434/v1), as long as the service is compatible with the OpenAI API format.

Q: Remotely Save sync failed?
  1. Check if the COS SecretId / SecretKey are correct
  2. Verify the bucket name includes the APPID suffix (full name)
  3. Confirm S3 Endpoint format is cos.{region}.myqcloud.com
  4. Make sure your Tencent Cloud account is not in arrears (an overdue balance suspends the service)
  5. Click Remotely Save's "Check" button to re-test the connection
Q: Plugin settings lost after multi-device sync?

Lingxi settings are stored in .obsidian/plugins/lingxi/data.json. Make sure Remotely Save is not excluding the .obsidian directory. Restart Obsidian after sync completes to load the new settings.


🙏 Acknowledgements


📄 License

MIT License


If Lingxi helps you, please consider giving it a ⭐ Star!
