Welcome to the RLM-REPL documentation! This directory contains comprehensive guides for using the library.
- Getting Started Guide - Installation, setup, and your first query
- Installation instructions
- Model setup (Ollama, OpenAI, etc.)
- First examples
- Common issues
- API Reference - Complete API documentation
- Core classes (`RLMREPL`, `RLMConfig`, `DatabaseConfig`) - All methods and parameters
- Data classes and events
- Complete examples
- Configuration Guide - All configuration options
- `RLMConfig` parameters
- `DatabaseConfig` options
- Environment variables
- Best practices
- Configuration examples
- Examples - Comprehensive usage examples
- Basic usage
- Streaming events
- Persistent database
- Building applications
- Advanced queries
- Error handling
- Architecture - How the system works
- System overview
- Core components
- Reading process
- Event system
- Performance considerations
- Troubleshooting Guide - Common issues and solutions
- Installation issues
- Configuration problems
- API connection issues
- Query problems
- Performance issues
- Main README - Project overview and quick start
- Examples Directory - Working code examples
- GitHub Repository - Source code
```
docs/
├── README.md            # This file
├── getting-started.md   # Installation and first steps
├── api-reference.md     # Complete API documentation
├── configuration.md     # Configuration options
├── examples.md          # Usage examples
├── architecture.md      # System architecture
└── troubleshooting.md   # Common issues and solutions
```
- New to RLM-REPL? Start with the Getting Started Guide
- Need examples? See Examples
- API questions? See the API Reference
- Having issues? See the Troubleshooting Guide
Author: Remy Gakwaya
Background: RLM-REPL was created after reading the MIT paper on Recursive Language Models. The initial approach used a REPL where LLMs would generate Python functions, but this proved challenging with smaller models.
After hundreds of iterations, Remy developed the RLM-REPL v8 concept: a human-like reading strategy optimized for local, smaller language models. The philosophy: if it works with small, weak models on limited compute, it will excel with leading LLMs.
The library evolved to use SQL-based retrieval with DuckDB, implementing the proven v8 reading strategy (overview → search → deep read → synthesize) in a more reliable way that works with models of all sizes.
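The v8 reading strategy can be illustrated in miniature with plain SQL retrieval. The following is a hypothetical sketch, not RLM-REPL's actual API: it uses Python's stdlib `sqlite3` as a stand-in for DuckDB, and the table name, chunking, and query flow are assumptions for illustration only.

```python
import sqlite3

# Stand-in corpus: in practice the input document would be chunked
# and loaded into a database table (RLM-REPL uses DuckDB).
chunks = [
    (1, "Introduction: this report covers quarterly revenue."),
    (2, "Revenue grew 12% in Q3, driven by subscriptions."),
    (3, "Appendix: methodology and data sources."),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chunks (id INTEGER, text TEXT)")
con.executemany("INSERT INTO chunks VALUES (?, ?)", chunks)

# 1. Overview: skim cheap metadata (here, just the chunk count)
#    instead of reading the whole document.
total = con.execute("SELECT COUNT(*) FROM chunks").fetchone()[0]

# 2. Search: a cheap keyword filter narrows the reading set.
#    (SQLite's LIKE is case-insensitive for ASCII, so 'Revenue' matches too.)
hits = con.execute(
    "SELECT id, text FROM chunks WHERE text LIKE ?", ("%revenue%",)
).fetchall()

# 3. Deep read: fetch only the matching chunks in full.
context = " ".join(text for _, text in hits)

# 4. Synthesize: in RLM-REPL, the LLM would now answer from this
#    narrowed context rather than the full document.
print(f"{len(hits)} of {total} chunks selected for deep reading")
```

The point of the design is that steps 1-3 are deterministic SQL the model only has to *request*, not generate as free-form Python, which is what makes the approach workable for small local models.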
Found an error or want to improve the documentation? Contributions are welcome!
- Check existing issues
- Open a new issue or pull request
- Follow the existing documentation style