# Spark Assistant

by CrossGen AI

AI-powered automation with slash commands, chat widget, and intelligent workflows.

Transform Obsidian into an intelligent business operating system powered by AI.

Spark Assistant enables "markdown files triggering AI agents": your Obsidian vault becomes a living, automated workspace where notes become actions and simple text triggers complex workflows.
## Table of Contents

- What is Spark?
- Quick Start
- CLI Commands
- Repository Structure
- Features
- Architecture
- Configuration
- Development
- Troubleshooting
- Documentation
- Contributing
- Acknowledgments
- Contact
## What is Spark?

Spark provides powerful interfaces for AI interaction in Obsidian:

- **Command Palette** - Notion-style autocomplete for instant, atomic actions (`/summarize`, `@betty`)
- **Chat Widget** - Persistent conversational AI with full vault context (`Cmd+K`)
- **Workflow Builder** - Visual node editor for multi-step AI automations (`Cmd+Shift+W`)
- **Automation Engine** - File changes trigger automated workflows

**Key innovation:** everything is powered by a file-based architecture. The plugin writes markdown, an engine watches and processes it, and results appear automatically. No complex APIs, no fragile integrations; just files.
## Quick Start

### Prerequisites

Minimal requirements for fresh machines:

- `curl` or `wget` (for downloading)
- `bash` (for running the script)
- `tar` (usually pre-installed)

That's it! No Node.js, npm, git, or other tools needed.

Everything else is auto-installed:

- Node.js 18+ (via nvm)
- npm (comes with Node.js)
- Obsidian (optional - example vault included)

Development features (enable with `DEV_MODE=1`):

- Hot Reload plugin (auto-reload on changes)
- GitHub CLI (for contributing)

**Note:** API keys are managed securely in Obsidian plugin settings and stored in `~/.spark/secrets.yaml` (encrypted).
### Installation

**One-Command Install (easiest):**

Fresh machine? No problem! This installs everything:

```bash
# Install to the example vault (for testing/development)
curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash

# Or install to your own vault
curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash -s -- ~/Documents/MyVault
```

What it does:

- Installs Node.js via nvm (if needed)
- Downloads and builds the Spark engine + plugin
- Auto-starts the engine (configures the vault)
- Ready for production use (add an API key in plugin settings)
**Engine Only (for Community Plugins users):**

If you installed the Spark plugin from Obsidian Community Plugins, you only need the engine.

**Option 1: Install from the plugin (recommended)**

The plugin automatically detects a missing engine and prompts you to install it:

- A setup modal appears on first launch
- Or go to Settings → Spark → Engine and click "Install Spark Engine"
- The plugin can also auto-launch the engine when Obsidian starts

**Option 2: Manual installation**

```bash
# Install the engine via script
curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install-engine.sh | bash

# Start the engine
spark start ~/Documents/MyVault
```
**For developers:**

```bash
# Enable development features (hot reload + gh CLI)
DEV_MODE=1 curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash
```

**Environment flags:**

```bash
# Development mode (hot reload, gh CLI)
DEV_MODE=1 curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash

# Skip Node.js installation (if you already have it)
SKIP_NODE=1 curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash

# Skip engine auto-start
AUTO_START=0 curl -fsSL https://raw.githubusercontent.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin/main/install.sh | bash
```
**Development Setup (clone first):**

```bash
# 1. Clone the repository
git clone https://github.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin.git
cd crossgen-spark-obsidian-plugin

# 2. Run the installer (sets up example-vault with hot reload)
./install.sh

# 3. Open example-vault in Obsidian
#    - Plugins are auto-enabled (Spark + Hot Reload)
#    - Add your API key in plugin settings (Settings → Spark → Advanced)
#    - Ready for development!

# 4. Start the engine
spark start example-vault
```
**Manual Installation:**

```bash
# 1. Clone the repository
git clone https://github.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin.git
cd crossgen-spark-obsidian-plugin

# 2. Install and build the engine
cd engine
npm install
npm run build
npm link

# 3. Install and build the plugin
cd ../plugin
npm install
npm run build

# 4. Copy the plugin to your vault
mkdir -p ~/Documents/MyVault/.obsidian/plugins/spark
cp -r dist/* ~/Documents/MyVault/.obsidian/plugins/spark/

# 5. Enable the plugin in Obsidian
#    Settings → Community plugins → Enable "Spark"

# 6. Add an API key in plugin settings
#    Settings → Spark → Advanced → Add your API key for each provider

# 7. Start the engine
spark start ~/Documents/MyVault
```
### First Steps

1. Open `example-vault` in Obsidian
2. Type `@` in any note to see available agents; type `/` to see available commands
3. Try `/summarize` or mention `@betty`
4. Press `Cmd+K` to open the chat widget
5. For development: `cd plugin && npm run dev` for hot reload
## CLI Commands

The `spark` CLI provides debugging and inspection tools:

```bash
# Engine control
spark start [vault-path]             # Start watching vault (foreground)
spark start ~/vault &                # Run in background
spark start ~/vault --debug &        # Background with debug logging
nohup spark start ~/vault > ~/.spark/engine.log 2>&1 &   # Persistent background
spark status                         # Show all running engines
spark status ~/vault                 # Check specific vault
spark stop ~/vault                   # Stop engine gracefully
spark stop ~/vault --force           # Force stop (SIGKILL)

# Configuration
spark config [vault-path]            # Validate configuration
spark inspect [vault-path]           # Show vault info and config

# Debugging & history
spark history [vault-path]           # Show processing history and stats
spark history ~/vault --limit 20     # Show last 20 events
spark history ~/vault --stats        # Show statistics only
spark history ~/vault --clear        # Clear history

# Testing
spark parse <content>                # Test parser on text
spark parse "@betty review @file.md"
spark parse tasks/todo.md --file     # Parse a file

# Info
spark version                        # Show version
spark --help                         # Show all commands
```

**Global registry:** the engine maintains a registry at `~/.spark/registry.json` to track all running engines across different vaults.
### Running as a Background Service

Simple background process:

```bash
# Run in background
spark start ~/Documents/Vault &

# Check status
spark status

# Stop engine
spark stop ~/Documents/Vault

# Stop all engines
spark stop --all
```
## Repository Structure

```
spark/
├── README.md                         # This file
├── PRD.md                            # Original product requirements
├── ARCHITECTURE_QUESTIONS.md         # Architectural decisions
├── DECISIONS_STATUS.md               # Decision tracking
│
├── specs/                            # Detailed specifications & docs
│   ├── PRODUCT_ARCHITECTURE.md       # System architecture
│   ├── MENTION_PARSER.md             # Parsing @mentions and /commands
│   ├── DEVELOPER_EXPERIENCE.md       # DX roadmap and test coverage
│   ├── CI_CD_SETUP.md                # GitHub Actions setup
│   ├── PLUGIN_PROGRESS.md            # Plugin implementation tracking
│   ├── ENGINE_PROGRESS.md            # Engine implementation tracking
│   ├── CONFIGURATION.md              # Config system
│   ├── FILE_FORMATS.md               # Command/agent/trigger formats
│   ├── PLUGIN_UI_SPEC.md             # Plugin interface design
│   ├── RESULT_AND_ERROR_HANDLING.md  # Result/error handling
│   ├── TRIGGER_SYSTEM_CLARIFIED.md   # Trigger automation
│   ├── IMPLEMENTATION_PLAN_PLUGIN.md # Plugin implementation (4-6 weeks)
│   └── IMPLEMENTATION_PLAN_ENGINE.md # Engine implementation (6-8 weeks)
│
├── example-vault/                    # Example Obsidian vault
│   ├── .spark/                       # Spark configuration
│   │   ├── config.yaml
│   │   ├── commands/
│   │   ├── agents/
│   │   └── triggers/
│   ├── emails/                       # Example email automation
│   ├── tasks/                        # Example task management
│   └── README.md
│
├── plugin/                           # Obsidian plugin (UI layer)
│   ├── src/
│   │   ├── main.ts
│   │   ├── settings.ts
│   │   ├── command-palette/
│   │   ├── chat/
│   │   ├── workflows/                # Workflow builder UI
│   │   │   ├── WorkflowCanvas.tsx    # React Flow canvas
│   │   │   ├── WorkflowView.tsx      # Obsidian ItemView
│   │   │   ├── WorkflowListView.tsx  # Workflow list/dashboard
│   │   │   ├── WorkflowManager.ts    # View management
│   │   │   ├── WorkflowStorage.ts    # Persistence layer
│   │   │   ├── Sidebar.tsx           # Properties/code/runs panel
│   │   │   ├── MentionTextarea.tsx   # @mention input component
│   │   │   ├── types.ts              # Shared types
│   │   │   └── nodes/                # Node components
│   │   │       ├── PromptNode.tsx    # AI prompt step
│   │   │       ├── CodeNode.tsx      # JavaScript code step
│   │   │       └── ConditionNode.tsx # Branching condition
│   │   └── types/
│   ├── dist/                         # Build output
│   └── package.json
│
└── engine/                           # Node.js engine (intelligence layer)
    ├── src/
    │   ├── cli.ts                    # CLI entry point
    │   ├── main.ts                   # Main orchestrator
    │   ├── cli/                      # CLI utilities (registry, inspector)
    │   ├── config/                   # Configuration management
    │   ├── watcher/                  # File system watching
    │   ├── parser/                   # Syntax parsing
    │   ├── context/                  # Context loading
    │   ├── logger/                   # Logging (Logger, DevLogger)
    │   ├── chat/                     # Chat queue handler
    │   ├── workflows/                # Workflow execution engine
    │   │   ├── WorkflowExecutor.ts   # Queue processing, graph traversal
    │   │   ├── PromptRunner.ts       # AI prompt execution
    │   │   ├── CodeRunner.ts         # JavaScript code execution
    │   │   ├── ConditionRunner.ts    # Condition evaluation
    │   │   └── types.ts              # Shared types
    │   └── types/                    # TypeScript types
    ├── __tests__/                    # Test suite
    └── package.json
```
## Features

### Slash Commands

Quick, inline actions triggered by typing `/`:

```
/summarize
/extract-tasks
/email-draft
```

How it works:

1. Type `/` in any note
2. Fuzzy search shows available commands
3. Select and press Enter
4. The AI processes and writes the result
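The fuzzy search step can be sketched as a simple in-order character match. This is an illustrative stand-alone example under stated assumptions, not the plugin's actual matcher (`fuzzyMatch` is a hypothetical helper):

```typescript
// Subsequence-based fuzzy match: every query character must appear in the
// candidate, in order. "sum" matches "summarize" but not "email-draft".
function fuzzyMatch(query: string, candidate: string): boolean {
  let qi = 0;
  for (const ch of candidate.toLowerCase()) {
    if (qi < query.length && ch === query[qi].toLowerCase()) qi++;
  }
  return qi === query.length;
}

const commands = ["summarize", "extract-tasks", "email-draft"];
const results = commands.filter((c) => fuzzyMatch("sum", c));
// results: ["summarize"]
```

Real palettes typically layer ranking (prefix bonuses, match density) on top of a filter like this.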
### Agent Mentions

Specialized AI personas with domain expertise:

```
@betty review @tasks/review-q4-finances.md and check if all data sources are accessible
@alice edit @emails/draft-client-proposal.md for clarity and professionalism
```

Available agents:

- `@betty` - Senior Accountant & Financial Analyst (financial reporting, tax compliance, QuickBooks)
- `@alice` - Content Editor & Writing Coach (content editing, grammar, tone and voice)
- `@bob` - System Debugger & Context Validator (context validation, debugging with attitude)

Create your own! Add a new `.md` file to `.spark/agents/` with YAML frontmatter and instructions. The engine picks up new agents instantly; no restart needed.

How it works:

1. Type `@` to see agents and files
2. Chain together agents, files, folders, services, and commands
3. The engine parses the mentions and loads context based on proximity
4. The AI executes with full context
5. Results appear inline in the file
### Chat Assistant

Persistent conversational AI with vault awareness:

```
Press Cmd+K

You: @betty review @tasks/review-q4-finances.md

Betty: I see the Q4 financial review task.
       I'll need access to QuickBooks and finance data.
       Let me check the required data sources...

You: @alice can you improve @emails/draft-client-proposal.md?

Alice: I'll review your proposal for clarity and tone.
       Draft improvements will appear inline.
```

How it works:

1. Press `Cmd+K` to open the floating chat widget
2. Full conversation history is maintained in `.spark/conversations/`
3. Real-time responses come from the engine via the file system
4. Mentions work the same as in documents, with auto-completion
5. Files, folders, and agents can be referenced naturally
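Because conversations live on disk, the chat loop can be approximated as an append-only markdown log. The file name and turn format below are assumptions for illustration only; the real layout under `.spark/conversations/` may differ:

```typescript
import { mkdtempSync, appendFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Illustrative append-only conversation log, matching the file-based design.
const dir = mkdtempSync(join(tmpdir(), "conversations-"));
const log = join(dir, "session-001.md");

// Append one turn to the log (speaker label is a made-up convention).
function appendTurn(speaker: string, text: string): void {
  appendFileSync(log, `**${speaker}:** ${text}\n\n`);
}

appendTurn("You", "@betty review @tasks/review-q4-finances.md");
appendTurn("Betty", "I see the Q4 financial review task.");

// Reading the file back yields the full history, oldest turn first.
const history = readFileSync(log, "utf8");
```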
### Workflow Builder

Visual workflow editor for creating multi-step AI automations:

Press `Cmd+Shift+W` or use the "Spark: Open Workflows" command.

**Step types:**

| Step | Purpose | Example |
|---|---|---|
| Prompt | AI processing with `@agent` support | `@betty analyze $input and suggest improvements` |
| Code | JavaScript data transformation | `return { total: input.items.reduce((a, b) => a + b, 0) };` |
| Condition | Branch logic with loop detection | `input.score > 0.8` → true/false branches |

How it works:

1. Create workflows with drag-and-drop nodes
2. Connect nodes with edges (conditions support true/false branches)
3. Use `@agent` mentions in prompts to specify an AI persona
4. Use `$input` and `$context` variables for data flow (type `$` for autocomplete)
5. Run the workflow and monitor step execution in real time
6. View run history with input/output for each step
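A Condition step's expression can be evaluated against the step input roughly as below. This is a sketch only; the engine's actual `ConditionRunner` may sandbox and validate expressions differently, and `evaluateCondition` is a hypothetical name:

```typescript
// Evaluate a condition expression with `input` in scope and coerce the
// result to a boolean. Note: new Function() runs arbitrary code, so a real
// implementation would need sandboxing.
function evaluateCondition(expr: string, input: unknown): boolean {
  const fn = new Function("input", `return Boolean(${expr});`);
  return fn(input) as boolean;
}

const branch = evaluateCondition("input.score > 0.8", { score: 0.92 })
  ? "true-branch"
  : "false-branch";
// branch: "true-branch"
```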
**Architecture:**

```
┌─────────────────────────┐
│      PLUGIN (UI)        │
│    WorkflowCanvas       │
│  • React Flow editor    │
│  • Node properties      │
│  • Run history          │
└──────────┬──────────────┘
           │ Saves to .spark/workflows/{id}.json
           │ Queues to .spark/workflow-queue/{runId}.json
           ▼
┌─────────────────────────┐
│   ENGINE (Execution)    │
│   WorkflowExecutor      │
│  • Graph traversal      │
│  • Loop detection       │
│  • Step runners         │
└─────────────────────────┘
```
**File structure:**

```
.spark/
├── workflows/              # Workflow definitions
│   └── {id}.json           # Nodes, edges, settings
├── workflow-runs/          # Execution history
│   └── {workflowId}/
│       └── {runId}.json    # Step results, input/output
└── workflow-queue/         # Pending executions
    └── {runId}.json        # Queue items for the engine
```
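The plugin-to-engine handoff is just a JSON file dropped into the queue directory. The sketch below writes and reads one queue item in a temporary directory; the field names are assumptions, not the exact schema:

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical queue-item shape (the real schema may differ).
interface QueueItem {
  runId: string;
  workflowId: string;
  queuedAt: string;
}

// Plugin side: write the queue item as {runId}.json.
const queueDir = mkdtempSync(join(tmpdir(), "workflow-queue-"));
const item: QueueItem = {
  runId: "run-001",
  workflowId: "wf-demo",
  queuedAt: new Date().toISOString(),
};
writeFileSync(join(queueDir, `${item.runId}.json`), JSON.stringify(item, null, 2));

// Engine side: pick up and parse the same file.
const loaded: QueueItem = JSON.parse(
  readFileSync(join(queueDir, "run-001.json"), "utf8"),
);
```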
**Loop detection:**

- A global cycle limit (default: 10) prevents infinite loops
- A per-condition `maxCycles` setting allows controlled iteration
- Visit counts are tracked per node during execution
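The visit-count rule can be sketched in a few lines. `checkVisit` and its return convention are illustrative, not the engine's API:

```typescript
// Increment the visit count for a node; return false once the cycle
// limit is exceeded (at which point the run would be aborted).
function checkVisit(
  visits: Map<string, number>,
  nodeId: string,
  maxCycles = 10,
): boolean {
  const count = (visits.get(nodeId) ?? 0) + 1;
  visits.set(nodeId, count);
  return count <= maxCycles;
}

const visits = new Map<string, number>();
let allowed = true;
for (let i = 0; i < 11 && allowed; i++) {
  allowed = checkVisit(visits, "condition-1"); // 11th visit exceeds the limit
}
// allowed: false — the run would be aborted
```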
### Automation Triggers (Planned)

File changes will trigger automated workflows.

**Example: Kanban email automation**

```yaml
# .spark/triggers/email-automation.yaml
triggers:
  - name: send_email_on_status_change
    watch:
      directory: "emails/"
      frontmatter_field: status
      from_value: draft
      to_value: sent
    instructions: |
      1. Extract recipient from frontmatter
      2. Format content as email
      3. Send via $gmail
      4. Update sent_date
      5. Move to sent/ folder
```

User workflow:

1. Create an email in the `emails/` folder
2. Add frontmatter: `status: draft`
3. Write the email content
4. When ready, change to `status: sent`
5. The email is sent automatically!
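Conceptually, the trigger fires when the watched frontmatter field moves from `from_value` to `to_value` between two versions of a file. A minimal sketch of that check (the type and function names are assumptions, not the engine's API):

```typescript
// Hypothetical shape of a trigger's watch clause.
interface TriggerWatch {
  frontmatterField: string;
  fromValue: string;
  toValue: string;
}

// Fire only on the exact transition fromValue -> toValue.
function shouldFire(
  watch: TriggerWatch,
  before: Record<string, string>,
  after: Record<string, string>,
): boolean {
  return (
    before[watch.frontmatterField] === watch.fromValue &&
    after[watch.frontmatterField] === watch.toValue
  );
}

const watch: TriggerWatch = {
  frontmatterField: "status",
  fromValue: "draft",
  toValue: "sent",
};
const fires = shouldFire(watch, { status: "draft" }, { status: "sent" });
// fires: true
```

Requiring the `from` value as well as the `to` value keeps a trigger from re-firing on every save of an already-sent file.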
## Architecture

### File-Based Event System

```
┌─────────────────────────┐
│    OBSIDIAN PLUGIN      │
│       (UI Only)         │
│  • Command palette      │
│  • Chat widget          │
│  • Notifications        │
└──────────┬──────────────┘
           │
           │ Writes raw text to files
           ▼
┌─────────────────────────┐
│      FILE SYSTEM        │
│  (.md files in vault)   │
└──────────┬──────────────┘
           │
           │ Watches for changes
           ▼
┌─────────────────────────┐
│      SPARK ENGINE       │
│   (All Intelligence)    │
│  • Parse mentions       │
│  • Load context         │
│  • Call Claude API      │
│  • Write results        │
└─────────────────────────┘
```

Why this works:

- The plugin can't crash the engine
- The engine can't crash Obsidian
- Everything is inspectable (it's all files)
- Version-control friendly
- No complex IPC needed
### Mention System

Universal syntax for referencing anything:

| Syntax | Type | Example |
|---|---|---|
| `@name` | Agent | `@betty` |
| `@file.md` | File | `@agents.md` |
| `@folder/` | Folder | `@tasks/` |
| `/command` | Command | `/summarize` |
| `$service` | MCP Service | `$gmail` |
| `#tag` | Tag | `#urgent` |
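The table above maps cleanly onto a small token classifier. The regexes below are a simplified sketch under that assumption; the real parser in `engine/src/parser/` handles more cases:

```typescript
type MentionType = "agent" | "file" | "folder" | "command" | "service" | "tag";

// Classify a single mention token by its sigil and shape.
// Order matters: folder/file forms are checked before the bare-agent form.
function classify(token: string): MentionType | null {
  if (/^@[\w-]+\/$/.test(token)) return "folder";
  if (/^@[\w./-]+\.md$/.test(token)) return "file";
  if (/^@[\w-]+$/.test(token)) return "agent";
  if (/^\/[\w-]+$/.test(token)) return "command";
  if (/^\$[\w-]+$/.test(token)) return "service";
  if (/^#[\w-]+$/.test(token)) return "tag";
  return null;
}

const kinds = ["@betty", "@agents.md", "@tasks/", "/summarize", "$gmail", "#urgent"].map(classify);
// kinds: ["agent", "file", "folder", "command", "service", "tag"]
```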
**Context priority:**

1. Mentioned files (highest priority)
2. The current file (where the command was typed)
3. Sibling files (same directory)
4. Nearby files (by path distance)
5. Other vault files (lowest priority)
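"Path distance" for the nearby-files tier can be sketched as the number of directory hops between two files. This particular heuristic is an assumption for illustration; the engine's actual ranking may differ:

```typescript
import { dirname, sep } from "node:path";

// Distance = directory segments of both paths that are NOT shared.
// Same directory => 0; a subdirectory => 1; a sibling tree => 2; etc.
function pathDistance(a: string, b: string): number {
  const pa = dirname(a).split(sep);
  const pb = dirname(b).split(sep);
  let common = 0;
  while (common < pa.length && common < pb.length && pa[common] === pb[common]) {
    common++;
  }
  return pa.length + pb.length - 2 * common;
}

const current = "tasks/todo.md";
const candidates = ["emails/draft.md", "tasks/done.md", "tasks/archive/old.md"];
const ranked = [...candidates].sort(
  (x, y) => pathDistance(current, x) - pathDistance(current, y),
);
// ranked[0]: "tasks/done.md" (same directory, distance 0)
```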
## Configuration

### Main Config

`.spark/config.yaml` - System configuration

```yaml
version: 1

engine:
  debounce_ms: 300

results:
  add_blank_lines: true

ai:
  defaultProvider: claude-agent
  providers:
    claude-client:
      type: anthropic
      model: claude-3-5-sonnet-20241022
      maxTokens: 4096
      temperature: 0.7
    claude-agent:
      type: anthropic
      model: claude-sonnet-4-5-20250929
      maxTokens: 4096
      temperature: 0.7
  # API keys are managed in plugin settings (~/.spark/secrets.yaml)

logging:
  level: info
  console: true

features:
  slash_commands: true
  chat_assistant: true
  trigger_automation: true
```
### Commands

`.spark/commands/my-command.md` - Define new slash commands

```markdown
---
id: my-command
name: My Custom Command
description: What it does
context: current_file
output: inline
---

Instructions for AI to execute...
```
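To load a command file, the engine has to split the YAML frontmatter from the instruction body. A minimal sketch of that split (`splitFrontmatter` is hypothetical; a real implementation would use a proper YAML parser):

```typescript
// Split "---\n<yaml>\n---\n<body>" into a key/value map and the body.
// Only flat "key: value" lines are handled here.
function splitFrontmatter(md: string): { meta: Record<string, string>; body: string } {
  const m = md.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) return { meta: {}, body: md };
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { meta, body: m[2].trim() };
}

const { meta, body } = splitFrontmatter(
  "---\nid: my-command\nname: My Custom Command\noutput: inline\n---\nInstructions for AI to execute...",
);
// meta.id: "my-command"; body: "Instructions for AI to execute..."
```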
### Agents

`.spark/agents/my-agent.md` - Define AI personas

```markdown
---
name: MyAgent
role: What they do
expertise:
  - Domain 1
  - Domain 2
tools:
  - service1
  - service2
---

You are an expert in...

When doing tasks:
1. Step 1
2. Step 2
```
### Triggers

`.spark/triggers/my-automation.yaml` - Define automated workflows

```yaml
triggers:
  - name: my_trigger
    description: When this happens
    watch:
      directory: "folder/"
      frontmatter_field: status
      to_value: active
    instructions: |
      What to do when triggered...
    priority: 10
```
## Development

### Setup

Prerequisites:

- Node.js 18+
- npm or pnpm
- Git

Quick setup for development:

```bash
git clone https://github.com/CrossGen-AI-Public/crossgen-spark-obsidian-plugin.git
cd crossgen-spark-obsidian-plugin

# Install everything (engine + plugin)
./install.sh

# Or install to a specific vault
./install.sh ~/Documents/MyVault
```

Manual setup:

```bash
# Install dependencies separately
cd plugin && npm install --legacy-peer-deps
cd ../engine && npm install
```

### Plugin Development

```bash
cd plugin
npm install
npm run dev        # Hot reload with esbuild

# Quality checks
npm run check      # Run all checks (format, lint, types)
npm run format     # Auto-format code
npm run lint:fix   # Auto-fix linting issues
```

### Engine Development

```bash
cd engine
npm install
npm run dev        # Watch mode
npm run check      # Format, lint, types, tests
npm test           # Run tests
```

### Quality Standards

The repository enforces strict quality standards through automated checks.

**CI/CD pipeline:**

- Automated testing on every PR and push to main
- Multi-version testing (Node 18.x and 20.x)
- Coverage tracking in CI logs (79% current)
- Build validation for both engine and plugin
- Merges are blocked if checks fail

See CI_CD_SETUP.md for a 2-minute setup.

**Pre-commit hooks:**

- Auto-fix formatting and linting issues locally
- Validate types, tests, and code quality
- Block the commit if any check fails

**Running checks manually:**

```bash
# Check everything before committing (auto-fixes formatting & linting)
cd plugin && npm run check   # Plugin: format, lint, types
cd engine && npm run check   # Engine: format, lint, types, tests

# Individual fixes
npm run format               # Biome formatting
npm run lint:fix             # Biome linting auto-fixes
```

Run `npm run check` before committing to ensure all checks pass.
### Plugin Debugging

1. Open the Obsidian developer tools: `Cmd+Option+I` (Mac) or `Ctrl+Shift+I` (Windows)
2. The Console tab shows plugin logs
3. Use the Sources tab for breakpoints
4. Reload the plugin with `Cmd+R` or Settings → Reload Plugins
## Documentation
- Product Architecture - System design
- Workflow Builder - Visual workflow editor
- Plugin UI Spec - Command palette & chat
- Mention Parser - Parsing syntax
- Configuration - Config reference
- File Formats - Command/agent/trigger formats
- Developer Experience - Testing & DX
- Engine README - Engine-specific docs
## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/name`
3. Make changes and add tests
4. Run `npm run check` in both `plugin/` and `engine/`
5. Commit: `git commit -m "feat: description"`
6. Push and create a PR
### Code Standards

Enforced via pre-commit hooks:

- **TypeScript** - All code in strict mode
- **No `any` types** - The engine enforces explicit typing
- **Biome** - Linting and formatting (strict rules, no unused vars; use a `_` prefix for intentionally unused)
- **Tests** - Required for the engine; all tests must pass
- **Conventional commits** - `feat:`, `fix:`, `docs:`, etc.
Pre-commit checks will:

- Auto-fix formatting and linting issues
- Run type checking
- Run all tests (engine)
- Block the commit if any check fails

**Pro tip:** run `npm run check` before committing to catch issues early!
### Areas to Contribute

- **Plugin UI/UX** - Improve the command palette, chat widget, and workflow builder
- **Workflow Builder** - New node types, execution features, templates
- **Engine Performance** - Optimize file watching and parsing
- **Documentation** - Examples, tutorials, guides
- **Testing** - Unit and integration tests (engine: 81 tests currently)
- **Commands/Agents** - New default commands and personas
- **Bug Fixes** - Check GitHub issues for open bugs
## Troubleshooting

### Engine not processing files

```bash
# Check engine status
spark status

# View logs
tail -f ~/.spark/logs/engine.log

# Restart the engine
spark stop
spark start ~/Documents/Vault
```

### Commands not appearing in the palette

1. Check that the `.spark/commands/` folder exists
2. Verify that command files have proper frontmatter
3. Reload the Obsidian plugin
4. Check the plugin console for errors

### Claude API errors

API keys are stored securely in `~/.spark/secrets.yaml` (encrypted). To check:

```bash
spark inspect ~/vault        # Shows API key status
cat ~/.spark/secrets.yaml    # View encrypted secrets (Base64 encoded)
```

To troubleshoot:

1. Check that the API key is set in plugin settings (Settings → Spark → Advanced)
2. Verify the provider configuration in `.spark/config.yaml`
3. Check the engine logs in `.spark/logs/`
## Acknowledgments

- **Anthropic** - Claude AI platform
- **Obsidian** - Knowledge management platform
- **MCP Protocol** - Model Context Protocol for service integrations

## Contact

- Issues: GitHub Issues
- Discussions: GitHub Discussions

*Transform your notes into actions. Turn your vault into an AI-powered operating system.*

Built with ❤️ for power users who want their tools to work for them.