LLM Test Generator
Approved by Aldo E George
Generate AI-powered test questions from your notes with multiple LLM providers (OpenAI, Claude, Mistral, Gemini, DeepSeek) to enhance your learning and retention.
LLM Testing Plugin
Test your knowledge with AI-generated questions based on your Obsidian notes. This plugin works with any language, creating contextually relevant questions from your notes and providing instant feedback in your own language using various Large Language Models.
Features
- AI-Generated Questions: Automatically create test questions based on your notes using multiple LLM providers. Now supports any language—questions and feedback will match the language of your source notes.
- Dynamic Model Selection: Automatically fetches newly released models from providers and filters out incompatible ones to ensure a smooth experience.
- Knowledge Assessment: Test your understanding with customized questions at different difficulty levels
- Instant Feedback: Get immediate feedback on your answers in the language of the test
- Score Tracking: Track your progress with detailed scoring
- Organized Dashboard: View and manage all your tests in one place
Installation
From Obsidian Community Plugins
- Open Obsidian Settings
- Go to "Community Plugins" and disable Safe Mode
- Click "Browse" and search for "LLM Test Generator"
- Install the plugin and enable it
Setup
- After installation, go to the plugin settings in Obsidian
- Select your preferred LLM provider (OpenAI, Anthropic Claude, Mistral, Gemini, or DeepSeek)
- Enter your API key for the selected provider
- Choose your preferred model from the available options
- Click the test flask icon in the ribbon or use the command "Open Test Dashboard"
Getting API Keys
To use this plugin, you'll need an API key from one of the supported providers:
- OpenAI: Get your API key from OpenAI Platform
- Anthropic Claude: Get your API key from the Anthropic Console
- Mistral AI: Get your API key from Mistral Console
- Google Gemini: Get your API key from Google AI Studio
- DeepSeek: Get your API key from the DeepSeek website
- Ollama: No API key required. Download and install Ollama from Ollama.com and run the models locally
You can choose from a wide range of models for each provider. The list of models is dynamically updated—just save your API key and hit refresh in the settings to get the latest releases (like GPT-5, o3-mini, Gemini 2.0, etc.). The plugin proactively filters out specialized non-chat models to prevent API errors.
Models with larger context windows can handle longer notes, while smaller models may be more cost-effective for frequent testing.
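As a rough illustration of how non-chat models could be screened out before they reach the model dropdown, here is a minimal sketch. The fragment list and function names are assumptions for this example, not the plugin's actual implementation:

```typescript
interface ModelInfo {
  id: string;
}

// Model ids containing these fragments are typically not chat-capable
// (embedding, moderation, speech, and image models), so we drop them.
// This list is illustrative, not the plugin's real filter.
const NON_CHAT_FRAGMENTS = ["embedding", "moderation", "whisper", "tts", "dall-e"];

function filterChatModels(models: ModelInfo[]): ModelInfo[] {
  return models.filter(
    (m) => !NON_CHAT_FRAGMENTS.some((frag) => m.id.toLowerCase().includes(frag))
  );
}

// Example: of these three fetched ids, only the chat model survives.
const fetched: ModelInfo[] = [
  { id: "gpt-4o" },
  { id: "text-embedding-3-small" },
  { id: "whisper-1" },
];
const chatModels = filterChatModels(fetched).map((m) => m.id);
```

Filtering by id substrings is simple and provider-agnostic, which is why it works across all five APIs even as new models appear.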
Usage
Creating Tests
- Open the Test Dashboard from the ribbon or command palette
- Click "Refresh" to scan your vault for notes
- Select the notes you want to create tests for by checking the boxes
- Click "Create Tests" to generate questions based on the selected notes
Taking Tests
- From the Test Dashboard, click on any test with a "Start" badge
- Answer the questions in the test document
- Click "Mark" to receive feedback and scoring
- Review your results and improve your understanding
Bulk Marking
The plugin allows you to mark multiple tests at once:
- Complete answers in multiple test documents
- Return to the Test Dashboard
- Click the "Mark All Tests" button at the bottom right
- All tests with answers will be graded simultaneously
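Grading tests "simultaneously" can be sketched as firing one marking request per test and awaiting them together. `markOne` below is a hypothetical stand-in for a per-test LLM call, not the plugin's actual function:

```typescript
interface MarkResult {
  test: string;
  score: number;
}

// Stand-in for a single marking call; the real plugin would send the
// test's answers to the configured LLM provider here.
async function markOne(testName: string): Promise<MarkResult> {
  return { test: testName, score: 0 };
}

// Promise.all starts every marking request at once and resolves when
// all of them have finished, preserving the input order.
async function markAll(testNames: string[]): Promise<MarkResult[]> {
  return Promise.all(testNames.map(markOne));
}
```

Running the requests concurrently rather than one after another is what keeps bulk marking fast even with many completed tests.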
How It Works
This plugin uses Retrieval-Augmented Generation (RAG) with various LLM models to:
- Index and analyze your Obsidian notes
- Generate contextually relevant questions based on the content
- Mark your answers by comparing them to the original note content
- Provide helpful feedback to improve your understanding
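The generate-then-mark flow above can be sketched as two prompt builders: one that turns a note into a question-generation request, and one that asks the model to grade an answer against the source note. The prompt wording and function names here are assumptions for illustration, not the plugin's real prompts:

```typescript
interface Question {
  text: string;
  marks: 1 | 2 | 3;
}

// Build a question-generation prompt from a selected note.
function buildGenerationPrompt(noteTitle: string, noteContent: string): string {
  return [
    "You are a tutor. Using ONLY the note below, write a mix of",
    "1-mark, 2-mark, and 3-mark questions in the note's own language.",
    "",
    `Note: ${noteTitle}`,
    noteContent,
  ].join("\n");
}

// Build a marking prompt that grades an answer against the source note,
// so feedback stays grounded in what the note actually says.
function buildMarkingPrompt(q: Question, answer: string, noteContent: string): string {
  return [
    `Mark this ${q.marks}-mark answer against the source note.`,
    `Question: ${q.text}`,
    `Answer: ${answer}`,
    `Source note: ${noteContent}`,
  ].join("\n");
}

const genPrompt = buildGenerationPrompt("Photosynthesis", "Plants convert light to energy.");
const markPrompt = buildMarkingPrompt(
  { text: "What do plants convert?", marks: 1 },
  "Light into energy.",
  "Plants convert light to energy."
);
```

Grounding both prompts in the original note content is what lets marking compare your answer to the source rather than to the model's general knowledge.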
Requirements
- Obsidian v0.15.0 or higher
- An API key from one of the supported providers (OpenAI, Anthropic, Mistral, Google, or DeepSeek), or a local Ollama installation
FAQ & Troubleshooting
Q: Why do I need an API key?
A: The plugin uses LLM APIs to generate questions and mark answers. You need an API key to access these services.
Q: Will my notes be sent to the LLM provider?
A: Yes, the plugin sends the content of the notes you select for test generation to the API of your chosen provider. Only use this plugin with notes that you're comfortable sharing with the selected service.
Q: I'm getting an error about context length exceeding limits.
A: LLM models have token limits. Try:
- Selecting smaller notes
- Splitting larger notes into multiple files
- Using a model with a larger context window (like GPT-4o, Claude 3 Opus, or Gemini 1.5 Pro)
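To judge whether a note will fit before sending it, you can use the common rule of thumb of roughly 4 characters per token for English text. This heuristic and the `reservedForOutput` budget are assumptions for illustration, not exact tokenizer counts:

```typescript
// Rough token estimate: ~4 characters per token is a common heuristic
// for English text; real tokenizers will differ somewhat.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// A note fits if its estimated tokens plus room reserved for the
// model's response stay within the context window.
function fitsContext(
  noteText: string,
  contextWindow: number,
  reservedForOutput = 1024
): boolean {
  return estimateTokens(noteText) + reservedForOutput <= contextWindow;
}

// A ~40,000-character note is ~10,000 tokens: too big for an 8k-token
// window, but comfortable in a 128k-token one.
const longNote = "x".repeat(40_000);
const fitsSmall = fitsContext(longNote, 8_192);
const fitsLarge = fitsContext(longNote, 128_000);
```

If `fitsContext` returns false, splitting the note or switching to a larger-context model are the two practical fixes, matching the troubleshooting advice above.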
Q: Can I customize the types of questions generated?
A: Currently, the plugin generates a mix of short (1-mark), long (2-mark), and extended (3-mark) questions. Future versions may include customization options.
Privacy
This plugin sends the content of selected notes to your chosen LLM provider for processing. Please review the privacy policy of your selected provider before using this plugin:
- OpenAI Privacy Policy
- Anthropic Privacy Policy
- Mistral AI Privacy Policy
- Google AI Privacy Policy
- DeepSeek Privacy Policy
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
License
This project is licensed under the MIT License.
Acknowledgements
- Built with Obsidian Plugin API
- Uses LLM APIs from OpenAI, Anthropic, Mistral, Google, and DeepSeek for test generation and grading