Calcifer


by Anish S.

AI assistant with RAG, chat, auto-tagging, note organization, and memory for your vault.


🔥 Calcifer

AI-Powered Assistant for Obsidian



Your intelligent AI companion that understands your vault.

RAG-powered chat · Smart auto-tagging · Tool calling · Semantic search · Persistent memory


Features · Installation · Configuration · Commands · Privacy


Features

AI Chat with Vault Context

  • Chat interface in the right sidebar
  • Retrieves relevant notes as context for answers
  • Shows sources for every response
  • Supports Ollama and OpenAI-compatible APIs
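
As a rough sketch of how a RAG chat call is shaped, the snippet below builds an Ollama-style `/api/chat` request with retrieved note chunks injected as context. Only the request shape (`model`, `messages`, `stream`) comes from Ollama's API; the function name, `ContextChunk` type, and prompt layout are illustrative, not the plugin's actual code.

```typescript
// Hypothetical sketch: inject retrieved chunks into a chat request.
interface ContextChunk {
  path: string; // source note, shown as a citation in the UI
  text: string; // retrieved chunk content
}

function buildChatRequest(model: string, question: string, chunks: ContextChunk[]) {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.path})\n${c.text}`)
    .join("\n\n");
  return {
    model,
    stream: false,
    messages: [
      {
        role: "system",
        content: `Answer using the context below. Cite sources by number.\n\n${context}`,
      },
      { role: "user", content: question },
    ],
  };
}

// Example: POST this body to http://localhost:11434/api/chat
const body = buildChatRequest("llama3.2", "What is Calcifer?", [
  { path: "Notes/Calcifer.md", text: "Calcifer is an AI assistant plugin." },
]);
```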

Smart Auto-Tagging

  • Suggests tags based on note content
  • Uses existing vault tags as reference
  • Configurable auto-apply or suggest-only mode

Tool Calling

  • Create, move, rename, and delete notes
  • Create folders and organize vault structure
  • Configurable confirmation for destructive actions

Memory System

  • Remembers facts about you across conversations
  • Stored locally in plugin data (not sent to cloud)
  • Manageable through settings modal

Note Organization

  • Suggests appropriate folders for notes
  • Based on content similarity to existing notes
  • LLM-enhanced folder recommendations

Semantic Search

  • Full vault indexing with embeddings
  • Find notes by meaning, not just keywords
  • Automatic re-indexing on file changes
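
Indexing of this kind typically splits each note into overlapping fixed-size chunks before embedding them. A minimal sketch, assuming the defaults from the settings reference below (chunk size 1000, overlap 200); the function name is illustrative, not the plugin's actual code.

```typescript
// Hypothetical sketch: fixed-size chunking with overlap.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const chunks: string[] = [];
  const step = size - overlap; // advance 800 chars per chunk by default
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk repeats the last `overlap` characters of the previous one, so a sentence split at a chunk boundary still appears whole in at least one chunk.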

Screenshots

Auto Tag Suggestions

Tool Calling


Requirements

  • Obsidian v1.0.0 or higher
  • AI API endpoint (one of the following):
| Provider | Description |
|---|---|
| Ollama | Local or remote — complete privacy |
| OpenAI | Or compatible API (Azure OpenAI, etc.) |

Installation

From Community Plugins (Coming Soon)

  1. Open Settings → Community plugins
  2. Search for "Calcifer"
  3. Click Install, then Enable

Manual Installation

# 1. Download from the latest release:
#    main.js, manifest.json, styles.css

# 2. Create the plugin folder
mkdir -p <vault>/.obsidian/plugins/calcifer/

# 3. Copy files and enable in Settings → Community plugins

Configuration

1. Add an API Endpoint

  1. Open Settings → Calcifer
  2. Click "Add Ollama" or "Add OpenAI"
  3. Configure the endpoint:

     Ollama Configuration

     Base URL:        http://localhost:11434
     Chat Model:      llama3.2
     Embedding Model: nomic-embed-text

     OpenAI Configuration

     Base URL:        https://api.openai.com
     API Key:         sk-...
     Chat Model:      gpt-4o-mini
     Embedding Model: text-embedding-3-small

  4. Click "Test" to verify the connection
  5. Enable the endpoint

2. Index Your Vault

Use command: Calcifer: Re-index Vault
Or enable automatic background indexing in settings.

3. Start Chatting

  • Click the bot icon in the left ribbon
  • Or use command: Calcifer: Open Chat

Commands

| Command | Description |
|---|---|
| Open Chat | Open the chat sidebar |
| Re-index Vault | Rebuild the embedding index |
| Stop Indexing | Stop the current indexing process |
| Clear Embedding Index | Delete all embeddings |
| Index Current File | Index only the active file |
| Show Status | Display indexing stats and provider health |
| Show Memories | Open the memories management modal |
| Suggest Tags for Current Note | Get AI tag suggestions |
| Suggest Folder for Current Note | Get folder placement suggestions |

Settings Reference

Embedding Settings

| Setting | Default | Description |
|---|---|---|
| Enable Embedding | false | Toggle automatic indexing (enable after configuring provider) |
| Batch Size | 1 | Concurrent embedding requests |
| Chunk Size | 1000 | Characters per text chunk |
| Chunk Overlap | 200 | Overlap between chunks |
| Debounce Delay | 5000 | Milliseconds to wait before indexing changed files |
| Exclude Patterns | templates/** | Glob patterns to skip |
RAG Settings

| Setting | Default | Description |
|---|---|---|
| Top K Results | 5 | Context chunks to retrieve |
| Minimum Score | 0.5 | Similarity threshold (0-1) |
| Include Frontmatter | true | Add metadata to context |
| Max Context Length | 8000 | Total context character limit |
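
The retrieval step these settings control can be sketched as: score every indexed chunk against the query embedding by cosine similarity, drop anything below the minimum score, and keep the top K. The types and names below are hypothetical, not the plugin's actual code.

```typescript
// Hypothetical sketch of top-K retrieval with a similarity floor.
interface IndexedChunk {
  path: string;
  embedding: number[];
  text: string;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], index: IndexedChunk[], topK = 5, minScore = 0.5) {
  return index
    .map((c) => ({ chunk: c, score: cosine(query, c.embedding) }))
    .filter((r) => r.score >= minScore)  // Minimum Score
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);                     // Top K Results
}
```

Lowering the minimum score admits more loosely related chunks; raising Top K trades context-window space for broader coverage.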
Chat Settings

| Setting | Default | Description |
|---|---|---|
| System Prompt | (built-in) | Customize assistant behavior |
| Include Chat History | true | Send previous messages |
| Max History Messages | 10 | History limit |
| Temperature | 0.7 | Response creativity (0-2) |
| Max Tokens | 2048 | Response length limit |
Tool Calling Settings

| Setting | Default | Description |
|---|---|---|
| Enable Tool Calling | true | Allow AI to perform vault operations |
| Require Confirmation | false | Ask before executing destructive tools |
Memory Settings

| Setting | Default | Description |
|---|---|---|
| Enable Memory | true | Store persistent memories |
| Max Memories | 100 | Storage limit |
| Include in Context | true | Send memories with queries |
Auto-Tagging Settings

| Setting | Default | Description |
|---|---|---|
| Enable Auto-Tag | false | Activate tagging feature (opt-in) |
| Mode | suggest | auto (apply) or suggest (show modal) |
| Max Suggestions | 5 | Tags per note |
| Use Existing Tags | true | Prefer vault tags |
| Confidence Threshold | 0.8 | Auto-apply threshold (0-1) |
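
One way the mode and confidence threshold could interact is sketched below: in auto mode, only suggestions at or above the threshold are applied silently, and the rest fall back to the suggestion modal. The types and names are hypothetical, not the plugin's actual code.

```typescript
// Hypothetical sketch: split tag suggestions into auto-apply vs ask.
interface TagSuggestion {
  tag: string;
  confidence: number;
}

function splitSuggestions(
  suggestions: TagSuggestion[],
  mode: "auto" | "suggest",
  threshold = 0.8,       // Confidence Threshold
  maxSuggestions = 5,    // Max Suggestions
) {
  const top = [...suggestions]
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, maxSuggestions);
  const apply = mode === "auto" ? top.filter((s) => s.confidence >= threshold) : [];
  const ask = top.filter((s) => !apply.includes(s));
  return { apply, ask };
}
```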
Organization Settings

| Setting | Default | Description |
|---|---|---|
| Enable Auto-Organize | true | Activate folder suggestions |
| Mode | suggest | auto (move) or suggest (ask) |
| Confidence Threshold | 0.9 | Auto-move threshold (0-1) |
Performance Settings

| Setting | Default | Description |
|---|---|---|
| Enable on Mobile | true | Run on mobile devices |
| Rate Limit (RPM) | 60 | API requests per minute |
| Request Timeout | 120 | Seconds before timeout |
| Use Native Fetch | false | Use native fetch for internal CAs |
UI Settings

| Setting | Default | Description |
|---|---|---|
| Show Context Sources | true | Display sources in chat responses |
| Show Indexing Progress | true | Show indexing notifications |

Mobile Support

Calcifer is fully functional on mobile devices:

  • Chat interface optimized for touch
  • Background indexing respects mobile resources
  • All features work offline with local Ollama

Privacy & Security

| Aspect | Details |
|---|---|
| Local Processing | All embeddings stored locally in IndexedDB |
| No Cloud Storage | Plugin data never leaves your device |
| API Choice | Use local Ollama for complete privacy |
| Memory Control | View and delete any stored memories |

Network Usage Disclosure

Important: No data is sent to any server until you configure an API endpoint.

| Service | Purpose | Data Sent |
|---|---|---|
| Ollama (local/remote) | Chat completions, embeddings | Note content for context, user messages |
| OpenAI (or compatible) | Chat completions, embeddings | Note content for context, user messages |

  • You control which provider to use (local Ollama = no external network)
  • Note content is sent as context for AI responses (chunks of ~1000 chars)
  • No telemetry or analytics are collected by this plugin

Development

# Clone the repository
git clone https://github.com/anyesh/obsidian-calcifer.git
cd obsidian-calcifer

# Install dependencies
npm install

# Development mode (watch)
npm run dev

# Production build
npm run build

Troubleshooting

"No provider configured"
  • Add at least one API endpoint in settings
  • Ensure the endpoint is enabled
  • Test the connection
"Connection failed"
  • Check if Ollama is running (ollama serve)
  • Verify the base URL is correct
  • Check firewall/network settings
Indexing is slow
  • Reduce batch size for limited resources
  • Exclude large folders (templates, archives)
  • Mobile devices may need smaller chunk sizes
Chat responses are irrelevant
  • Ensure vault is indexed (check status bar)
  • Lower the minimum score threshold
  • Increase Top K for more context

License

MIT License — see LICENSE for details.


Acknowledgments

Obsidian Ollama

Made for the Obsidian community


Back to Top
