Core library: scoring, selection, and caching for the Context Engine
Updated Feb 11, 2026 - Rust
RL + LLM pipeline for smart context selection and response generation. Train agents to choose the best chunks, generate answers with any LLM, and evaluate with RAGAs, BERTScore, BLEU, ROUGE, and cosine similarity.
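Of the evaluation metrics listed, cosine similarity is the simplest to state concretely. A minimal sketch of scoring two embedding vectors follows; the function name and plain-Python vectors are illustrative, not this pipeline's actual API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # define similarity with a zero vector as 0
    return dot / (norm_a * norm_b)
```

In practice the vectors would come from an embedding model; the metric itself is model-agnostic.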
CognitiveRAG backend for OpenClaw: multi-layer memory, evidence selection, and explainable context construction.
CLI for building, resolving, and inspecting context caches
Open-source platform for deterministic, token-aware context selection for AI agents and LLMs
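A deterministic, token-aware selector can be as simple as a greedy pass under a fixed token budget. The sketch below assumes pre-scored chunks with known token counts; the function name and tuple layout are hypothetical, not this platform's API:

```python
def select_chunks(chunks, budget):
    """Greedy, deterministic selection: take chunks by descending score
    (ties broken by id, so results are reproducible) until the token
    budget is exhausted. `chunks` is a list of (id, score, tokens) tuples.
    """
    selected, used = [], 0
    for cid, score, tokens in sorted(chunks, key=lambda c: (-c[1], c[0])):
        if used + tokens <= budget:
            selected.append(cid)
            used += tokens
    return selected
```

Breaking score ties on a stable key (the chunk id here) is what makes the selection deterministic across runs, which matters for caching and reproducible agent behavior.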