Mini-RAG

by John Wheatley

Leverage Retrieval Augmented Generation (RAG) for your notes using a locally running LLM or AI.

MIT License

Local Retrieval Augmented Generation for your Obsidian notes


What is Mini-RAG?

Mini-RAG lets you chat with a locally running LLM in the context of selected Obsidian notes and folders. For the LLM, you can use any locally installed Ollama model (see: Configure Mini-RAG).
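To illustrate the RAG idea at work here: the selected notes are supplied to the LLM as context alongside your question, so its answer can draw on their contents. The sketch below is purely conceptual, not the plugin's actual implementation; `build_prompt` and its format are made up for illustration.

```python
# Conceptual sketch of retrieval-augmented generation: selected notes
# become context in the prompt. Not the plugin's actual code.
def build_prompt(question, notes):
    """notes: list of (title, body) pairs for the selected notes/folder."""
    context = "\n\n".join(f"## {title}\n{body}" for title, body in notes)
    return (
        "Answer the question using only the notes below.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

notes = [("Meeting notes", "Launch moved to May."), ("Roadmap", "Q2: ship v2.")]
print(build_prompt("When is the launch?", notes))
```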

Setting Up Mini-RAG

Install Ollama

If you don't already have Ollama installed, download and install it from the Ollama website.

This is necessary because Mini-RAG relies on a locally running instance of Ollama for its responses. This is the same reason that Mini-RAG is currently a desktop-only plugin.
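If you want to confirm that your local Ollama server is up before opening a Mini-RAG chat, a quick check against Ollama's default URL (`http://localhost:11434`) works. This helper is a convenience sketch, not part of the plugin:

```python
import urllib.request
import urllib.error

# Ollama serves on http://localhost:11434 by default. This helper just
# probes that URL; it is a standalone convenience, not plugin code.
OLLAMA_URL = "http://localhost:11434"

def ollama_running(url=OLLAMA_URL, timeout=2):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_running())
```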

Configure Mini-RAG

Open the options by clicking the gear icon, then navigate to Community Plugins > Mini-RAG > Options. Here you can set the:

  • Ollama URL: If left unset, Ollama's default URL is used
  • Model: From a dropdown list of AI Models installed on your local Ollama setup
  • Temperature: Higher temperatures give more creative responses, but also lead to more hallucinations
  • Enable context-free chats: Provides the option to chat with an LLM without the context of a note or folder
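For context on the Model and Temperature settings: in Ollama's HTTP API, the model name and temperature travel in the request body sent to the `/api/generate` endpoint, with temperature under an `options` object. The sketch below shows that shape; how Mini-RAG assembles its requests internally is an assumption here.

```python
import json

# Sketch of the request body a client sends to Ollama's /api/generate
# endpoint. "options.temperature" is how temperature is passed in
# Ollama's API; Mini-RAG's internals are not shown in its README.
def build_request(model, prompt, temperature=0.8):
    return {
        "model": model,          # e.g. any model from `ollama list`
        "prompt": prompt,
        "options": {"temperature": temperature},
        "stream": False,         # request one complete response
    }

body = build_request("llama3", "Summarize this note.", temperature=0.2)
print(json.dumps(body, indent=2))
```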

Using Mini-RAG

Opening a Mini-RAG Chat

This is done from the right-click context menu. You will see the "Mini-RAG" option when you:

  • Right-Click within a note
  • Right-Click a note in the sidebar
  • Right-Click a folder in the sidebar
  • Open a note's triple-dot menu

Saving Conversations

To save a Mini-RAG conversation, click the "Save" button. If you continue the conversation after this, you will need to click "Save" again to update the saved conversation.


Author

For more about the author, visit JJWheatley.com
