AI Providers

by Pavel Frankov

A hub for setting up AI providers (OpenAI-like, Ollama and more) in one place.

100 stars · 37,457 downloads · Updated 2 months ago · MIT license

Obsidian AI Providers

⚠️ Important Note: This plugin is a configuration tool - it helps you manage your AI settings in one place.

Think of it like a control panel where you can:

  • Store your API keys and settings for AI services
  • Share these settings with other Obsidian plugins
  • Avoid entering the same AI settings multiple times

The plugin itself doesn't do any AI processing - it just helps other plugins connect to AI services more easily.
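
Conceptually, each configured provider is a small settings record that other plugins look up by id instead of storing their own keys. A minimal sketch of that idea is below; the interface and field names are illustrative, not the plugin's actual settings schema:

```typescript
// Hypothetical shape of a stored provider profile (field names are
// illustrative, not the plugin's actual schema).
interface AIProviderProfile {
  id: string;          // stable id that other plugins reference
  type: "openai" | "ollama" | "openrouter";
  url: string;         // base URL of the API
  apiKey?: string;     // omitted for local servers like Ollama
  model: string;       // default model for this profile
}

// One profile, configured once, reusable by any number of plugins:
const ollamaProfile: AIProviderProfile = {
  id: "local-ollama",
  type: "ollama",
  url: "http://localhost:11434",
  model: "gemma2",
};

// A consuming plugin just asks for a profile by id:
function resolveProfile(
  profiles: AIProviderProfile[],
  id: string
): AIProviderProfile | undefined {
  return profiles.find((p) => p.id === id);
}

console.log(resolveProfile([ollamaProfile], "local-ollama")?.model); // "gemma2"
```

The point of the indirection: if the user rotates an API key or switches models, they change one profile, and every plugin that resolves it picks up the new settings.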

Supported providers

  • OpenAI
  • OpenRouter
  • Anthropic
  • Google Gemini
  • Mistral AI
  • Together AI
  • Fireworks AI
  • Perplexity AI
  • DeepSeek
  • xAI (Grok)
  • Cerebras
  • Z.AI
  • Groq
  • 302.AI
  • Novita AI
  • DeepInfra
  • SambaNova
  • LM Studio
  • Ollama (and Open WebUI)
  • OpenAI compatible API

Features

  • Fully encapsulated API for working with AI providers
  • Develop AI plugins faster without dealing directly with provider-specific APIs
  • Easily extend support for additional AI providers in your plugin
  • Available in 11 languages: English, Spanish, French, Italian, Portuguese, German, Russian, Chinese, Japanese, Korean, and Dutch

Installation

Obsidian plugin store (recommended)

This plugin is available in the Obsidian community plugin store https://obsidian.md/plugins?id=ai-providers

BRAT

You can install this plugin via BRAT: pfrankov/obsidian-ai-providers

Create AI provider

Ollama

  1. Install Ollama.
  2. Pull a model, e.g. Gemma 2 with ollama pull gemma2, or any preferred model from the library.
  3. Select Ollama in Provider type.
  4. Click the refresh button and select the model that suits your needs (e.g. gemma2).

Note: if you have issues with streaming completions from Ollama, try setting the OLLAMA_ORIGINS environment variable to *:

  • On macOS, run launchctl setenv OLLAMA_ORIGINS "*".
  • On Linux and Windows, check the docs.

OpenAI

  1. Select OpenAI in Provider type.
  2. Set Provider URL to https://api.openai.com/v1.
  3. Retrieve and paste your API key from the API keys page.
  4. Click the refresh button and select the model that suits your needs (e.g. gpt-4o).

OpenAI compatible server

There are several options for running a local OpenAI-compatible server:

OpenRouter

  1. Select OpenRouter in Provider type.
  2. Set Provider URL to https://openrouter.ai/api/v1.
  3. Retrieve and paste your API key from the API keys page.
  4. Click the refresh button and select the model that suits your needs (e.g. anthropic/claude-3.7-sonnet).

Google Gemini

  1. Select Google Gemini in Provider type.
  2. Set Provider URL to https://generativelanguage.googleapis.com/v1beta/openai.
  3. Retrieve and paste your API key from the API keys page.
  4. Click the refresh button and select the model that suits your needs (e.g. gemini-1.5-flash).

LM Studio

  1. Select LM Studio in Provider type.
  2. Set Provider URL to http://localhost:1234/v1.
  3. Click the refresh button and select the model that suits your needs (e.g. gemma2).

Groq

  1. Select Groq in Provider type.
  2. Set Provider URL to https://api.groq.com/openai/v1.
  3. Retrieve and paste your API key from the API keys page.
  4. Click the refresh button and select the model that suits your needs (e.g. llama3-70b-8192).
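
All of the hosted providers above follow the same pattern: a base URL, a bearer API key, and a model id, aimed at an OpenAI-compatible endpoint. As a rough sketch of what every such profile boils down to (the /chat/completions path and Authorization header are the standard OpenAI convention; the helper itself is hypothetical), the request looks like this:

```typescript
// Hypothetical helper: builds (but does not send) an OpenAI-compatible
// chat completion request from a provider profile's settings.
function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string
): { url: string; headers: Record<string, string>; body: string } {
  return {
    // Standard OpenAI-compatible endpoint path, appended to the profile's URL.
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // bearer-token auth, common to these APIs
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Example with the Groq settings from the steps above (placeholder key):
const req = buildChatRequest(
  "https://api.groq.com/openai/v1",
  "gsk_example",
  "llama3-70b-8192",
  "Hello"
);
console.log(req.url); // "https://api.groq.com/openai/v1/chat/completions"
```

This shared shape is why one plugin can cover so many providers: only the base URL, key, and model list differ between them.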

For plugin developers

Docs: How to integrate AI Providers in your plugin.

Quick reference (details in SDK docs):

try {
	const finalText = await aiProviders.execute({
		provider,
		prompt: "Hello",
		onProgress: (chunk, full) => {/* stream UI update */},
		abortController
	});
	// use finalText
} catch (e) {
	// handle error / abort
}

The onEnd and onError callbacks have been removed: the promise's resolve/reject now covers both, and only onProgress remains, for streaming. The legacy chainable handler is also deprecated.
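
Under that promise-based contract, streaming and cancellation look roughly like this. The aiProviders object below is a local stub standing in for the real SDK object so the example is self-contained; only the shape of execute's options is taken from the snippet above:

```typescript
// Local stub standing in for the real SDK's aiProviders object, so this
// example runs on its own. It "streams" two hard-coded chunks.
const aiProviders = {
  async execute(opts: {
    provider: string;
    prompt: string;
    onProgress?: (chunk: string, full: string) => void;
    abortController?: AbortController;
  }): Promise<string> {
    let full = "";
    for (const chunk of ["Hel", "lo"]) {
      // Rejecting on abort replaces the old onError callback.
      if (opts.abortController?.signal.aborted) throw new Error("aborted");
      full += chunk;
      opts.onProgress?.(chunk, full); // streaming updates still use onProgress
    }
    return full; // resolving replaces the old onEnd callback
  },
};

async function demo(): Promise<string> {
  const abortController = new AbortController();
  const chunks: string[] = [];
  try {
    const finalText = await aiProviders.execute({
      provider: "local-ollama", // hypothetical provider id
      prompt: "Hello",
      onProgress: (chunk) => chunks.push(chunk), // e.g. update the UI per chunk
      abortController, // call abortController.abort() to cancel mid-stream
    });
    return finalText;
  } catch (e) {
    return "cancelled"; // rejection covers both errors and user aborts
  }
}

demo().then((t) => console.log(t)); // "Hello"
```

Wiring abort() to a cancel button is enough to stop a generation: the pending promise rejects, and the single catch branch handles it.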

Roadmap

  • Docs for devs
  • Ollama context optimizations
  • German translations
  • Chinese translations
  • Update to latest OpenAI version and embedding models
  • Russian translations
  • Groq Provider support
  • Passing messages instead of one prompt
  • Anthropic Provider support
  • Shared embeddings to avoid re-embedding the same documents multiple times
  • Spanish, Italian, French, Dutch, Portuguese, Japanese, Korean translations
  • Encapsulated basic RAG search with optional BM25 search

My other Obsidian plugins

  • Local GPT that assists with local AI for maximum privacy and offline access.
  • Colored Tags that colorizes tags in distinguishable colors.
