Agent Benchmarking & Chaos Engineering Framework
"Don't just trust your agent. Prove it works. Then break it."
Agents are fundamentally non-deterministic. They rely on external APIs, tool loops, and massive context windows. EvalMonkey is the ultimate, strictly local, open-source execution harness that enables developers to:
- 🎯 Benchmark Capabilities: Run standard Agent benchmark datasets against your agent endpoints natively!
- 🔥 Inject Chaos: Mutate headers, spike latency, and corrupt schemas dynamically to prove true resilience.
- 📈 Track Production Reliability: Locally store all scores to visualize a single Production Reliability metric over time!
- 🛠 Generate Improvement Evals: When scores are poor, automatically synthesise targeted test cases using your LLM — then hand them to Claude Code or Cursor to fix your agent.
EvalMonkey natively supports evaluating ANY LLM: AWS Bedrock, Azure, GCP, OpenAI, and Ollama.
Note on API Keys: If you have special setups that generate long-lived, static API keys for Bedrock, Azure, or GCP, simply supply them in the `.env`! EvalMonkey seamlessly supports both standard IAM / Service Account credential flows and long-lived static authentication strings.
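For orientation, here is a hypothetical `.env` sketch. `EVAL_MODEL` and the `LANGFUSE_*` keys are referenced elsewhere in this README; the provider key name shown is the standard OpenAI one and is an assumption about your particular setup:

```shell
# Hypothetical .env sketch; adjust variable names to your provider.
EVAL_MODEL=gpt-4o              # judge model
OPENAI_API_KEY=sk-...          # or your Bedrock / Azure / GCP credentials
# Optional, only if exporting evals to Langfuse:
# LANGFUSE_PUBLIC_KEY=pk-...
# LANGFUSE_SECRET_KEY=sk-...
```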
- 8 Agent Frameworks natively supported: CrewAI, LangChain, OpenAI Agents, Microsoft AutoGen, AWS Bedrock, Ollama, Strands, and custom HTTP endpoints.
- 20 Standard Benchmarks out-of-the-box: GSM8K, BIG-Bench Hard, HotpotQA, ToxiGen, MT-Bench, MBPP, and more — all categorised by the agent type they target.
- 23 Chaos Injections ready to run: 12 client-side payload mutations + 11 server-side middleware injections — all text-based, no GPU or vision dependencies.
- Automatic Eval Asset Generation: Poor benchmark scores automatically produce `traces.json`, `evals.json`, and `improvement_prompt.md` — one `cat` command away from Claude Code or Cursor.
```
git clone https://github.com/Corbell-AI/evalmonkey
cd evalmonkey
pip install -e .
```

Step 1 — Run this once inside your agent's project folder:
```
cd /your/crewai-project   # wherever your agent lives
evalmonkey init --framework crewai --name "My Research Crew" --port 8000
```

This auto-generates a pre-filled `evalmonkey.yaml` with the correct request/response format for your framework. Supported: `crewai`, `langchain`, `openai`, `bedrock`, `autogen`, `ollama`, `strands`, `custom`.
Step 2 — Edit the two settings that matter:
```yaml
# evalmonkey.yaml — generated for CrewAI
agent:
  name: "My Research Crew"
  framework: crewai
  url: http://localhost:8000/chat   # ← where your agent listens
  request_key: message
  response_path: reply

  # ← EvalMonkey will start this for you automatically!
  # It spawns the process, waits for it to start, benchmarks, then stops it.
  agent_command: "python src/agent.py"   # or: uvicorn src.agent:app --port 8000
  agent_startup_wait: 3                  # seconds to wait after launch

eval_model: "gpt-4o"   # ← the LLM used as the benchmark judge
```

Step 3 — Run everything. EvalMonkey starts your agent, benchmarks it, then stops it:
```
evalmonkey run-benchmark --scenario mmlu
evalmonkey run-chaos --scenario mmlu --chaos-profile client_prompt_injection
evalmonkey history --scenario mmlu
```

EvalMonkey discovers `evalmonkey.yaml` from the current working directory — the same convention used by `pytest`, `promptfoo`, and `docker-compose`. Run all commands from your agent's project folder.
EvalMonkey talks to your agent over plain HTTP. As long as your agent is running and has an endpoint URL, you're done. That's it.
```
# Point EvalMonkey at your existing running agent
evalmonkey run-benchmark --scenario mmlu --target-url http://localhost:8000/chat
```

Your agent returns a different JSON format? Use two flags to map any request/response shape:
| Flag | What it does | Example |
|---|---|---|
| `--request-key` | Which key to send the question under | `message`, `prompt`, `input` |
| `--response-path` | Dot-path to extract the answer from | `output.text`, `choices.0.message.content`, `result` |
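A `--response-path` dot-path can be pictured as a simple walk through the JSON response. Here is a minimal sketch of that idea (an assumption for illustration, not EvalMonkey's actual code: numeric segments index into lists, everything else into dicts):

```python
# Sketch of dot-path resolution: walk the parsed JSON one segment at a
# time, treating numeric segments as list indices.
def extract(response, path: str):
    node = response
    for part in path.split("."):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

resp = {"choices": [{"message": {"content": "Paris"}}]}
print(extract(resp, "choices.0.message.content"))  # prints: Paris
```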
```
# CrewAI agent that takes {"message":""} and returns {"reply":""}
evalmonkey run-benchmark --scenario mmlu \
  --target-url http://localhost:8000/chat \
  --request-key message \
  --response-path reply

# OpenAI-compatible endpoint returning {"choices":[{"message":{"content":""}}]}
evalmonkey run-benchmark --scenario arc \
  --target-url http://localhost:8000/v1/chat/completions \
  --request-key content \
  --response-path choices.0.message.content
```

| Framework | Notes |
|---|---|
| 🦜 LangChain | Any Chain, LCEL pipe, or AgentExecutor behind FastAPI |
| 🤖 CrewAI | Any Crew behind a /chat or custom endpoint |
| ✨ OpenAI Agents SDK | Native OpenAI Chat Completions format supported via --response-path |
| ☁️ AWS Bedrock / Agent Core | Any Bedrock endpoint, IAM or long-lived key |
| 🧩 Microsoft AutoGen | Any ConversableAgent behind HTTP |
| 🦙 Ollama | Running locally at http://localhost:11434 |
| 🧵 Strands SDK | Built-in sample apps included |
| 🌐 Any HTTP Agent | Flask, Express.js, Go — if it accepts POST it works |
📦 Don't have an HTTP endpoint yet? Use our ready-made thin adapters (click to expand)
Copy the relevant file from apps/framework_adapters/ next to your agent code, swap in your Crew/Chain/Agent, and run it. No changes needed to EvalMonkey.
- `langchain_adapter.py` — wraps any LangChain chain
- `crewai_adapter.py` — wraps any CrewAI Crew
- `openai_agents_adapter.py` — wraps OpenAI Agents SDK
- `bedrock_agentcore_adapter.py` — wraps AWS Bedrock Converse API
- `autogen_adapter.py` — wraps Microsoft AutoGen agents
Each adapter is ~40 lines and exposes a /solve endpoint on localhost.
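As a rough picture of what such an adapter does, here is a stdlib-only sketch (the shipped adapters use FastAPI; names here are illustrative, and `run_agent` is a placeholder for your actual Crew / Chain / Agent call):

```python
# Hypothetical adapter sketch: expose POST /solve on localhost and
# forward the payload to run_agent(). Swap in your own framework call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_agent(question: str) -> str:
    # Placeholder: replace with e.g. crew.kickoff(inputs={"question": question})
    return f"(stub answer for: {question})"

def solve(payload: dict) -> dict:
    """Core adapter logic: map the request payload to the agent and back."""
    return {"reply": run_agent(payload.get("question", ""))}

class SolveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/solve":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        result = solve(json.loads(self.rfile.read(length) or b"{}"))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8000), SolveHandler).serve_forever()
```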
EvalMonkey natively supports 20 off-the-shelf benchmark datasets pulled directly from HuggingFace. All benchmarks are text-only — no vision, audio, or multimodal agent required. List them anytime via the CLI:
```
evalmonkey list-benchmarks
```

| Scenario ID | Agent Category | Description |
|---|---|---|
| `gsm8k` | 🧠 Reasoning | Grade School Math word problems — multi-step arithmetic & logic. |
| `xlam` | 🔧 Tool Use | XLAM Function Calling 60k — tool execution & parameter structuring. |
| `swe-bench` | 💻 Coding | SWE-Bench — resolve real-world GitHub issues from a description only. |
| `gaia-benchmark` | 🔍 Research | GAIA — multi-step real-world tasks requiring web/tool chaining. |
| `webarena` | 🔍 Research | WebArena — complex browser & computer usage scenarios (stubbed). |
| `human-eval` | 💻 Coding | HumanEval — Python function synthesis from docstrings. |
| `mmlu` | 💬 Q&A | MMLU — general knowledge across 57 academic subjects. |
| `arc` | 🧠 Reasoning | ARC Challenge — hard grade-school science multiple-choice. |
| `truthfulqa` | 🛡️ Safety | TruthfulQA — detects hallucination and human-like falsehood mimicry. |
| `hella-swag` | 🧠 Reasoning | HellaSwag — commonsense sentence-completion inference. |
| `bbh` | 🧠 Reasoning | BIG-Bench Hard — 23 tasks where LLMs still fall below human baselines. |
| `winogrande` | 💬 Q&A | WinoGrande — pronoun disambiguation resistant to dataset shortcuts. |
| `drop` | 🔍 Research | DROP — reading comprehension with embedded numerical & date math. |
| `natural-questions` | 💬 Q&A | Natural Questions — real Google search queries with Wikipedia answers. |
| `hotpotqa` | 🔍 Research | HotpotQA — multi-hop reasoning across two Wikipedia documents. |
| `mbpp` | 💻 Coding | MBPP — entry-level Python function synthesis from plain English. |
| `apps` | 💻 Coding | APPS — competitive-programming & interview-style code challenges. |
| `mt-bench` | 📋 Instruction Following | MT-Bench — multi-turn dialogues across writing, roleplay, reasoning, STEM. |
| `alpacaeval` | 📋 Instruction Following | AlpacaEval — instruction quality judged by GPT-4 head-to-head. |
| `toxigen` | 🛡️ Safety | ToxiGen — detects toxic/hateful content generation across 13 demographic groups. |
🛠️ Build Your Own Custom Benchmarks (click to expand)
Yes, people absolutely bring their own datasets! The most powerful way to test an agent is to grab 10-50 real questions from your production logs, dump them into a CSV, and evaluate your agent against them.
EvalMonkey natively supports auto-parsing .yaml, .json, and .csv files!
You don't need any complex ETL pipelines. Just drop a file (e.g. evals.csv, evals.json, or custom_evals.yaml) in your execution directory and pass it to EvalMonkey!
If using a CSV, just make sure you have the columns `id` and `expected_behavior_rubric`. Any other column you add (like `question`, `topic`, `image_url`) will be automatically gathered and sent in the JSON payload directly to your agent!
| id | expected_behavior_rubric | question |
|---|---|---|
| get_benefits | Must return the URL linking to the company hr portal | Where do I sign up for medical benefits? |
| time_off | Provide the exact number of standard vacation days (15) | How many days of PTO do I get? |
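The payload-gathering behaviour described above can be sketched like this (an assumption about EvalMonkey's internals, shown for clarity: every column other than `id` and `expected_behavior_rubric` is forwarded as-is in the JSON body sent to your agent):

```python
# Sketch: fold extra CSV columns into the JSON payload for the agent.
import csv
import io
import json

csv_text = """id,expected_behavior_rubric,question
time_off,Provide the exact number of standard vacation days (15),How many days of PTO do I get?
"""

row = next(csv.DictReader(io.StringIO(csv_text)))
# Everything except the bookkeeping columns becomes the agent payload.
payload = {k: v for k, v in row.items()
           if k not in ("id", "expected_behavior_rubric")}
print(json.dumps(payload))
```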
```
evalmonkey run-benchmark --scenario get_benefits --eval-file evals.csv
```

If you use JSON or YAML, you must nest the agent payload keys explicitly under an `input_payload` dict object:
```json
[
  {
    "id": "onboarding_query",
    "description": "Test HR agent's ability to return the onboarding link.",
    "expected_behavior_rubric": "Must contain exactly the URL https://hr.example.com/benefits",
    "input_payload": {
      "question": "Where do I sign up for benefits?"
    }
  }
]
```

```
evalmonkey run-benchmark --scenario onboarding_query --eval-file evals.json
```

Easiest Experience: Test our built-in sample agents with a single command! EvalMonkey will spawn the sample agent in the background automatically and run the benchmark.
```
# Run against just the first 5 records
evalmonkey run-benchmark --scenario gsm8k --sample-agent rag_app

# Run a statistically more robust test against 50 different records!
evalmonkey run-benchmark --scenario gsm8k --sample-agent rag_app --limit 50
```

Metrics Output:
```
╭──────────────────────────────────────────────────────────╮
│                    Benchmark Results                     │
│ ──────────────────────────────────────────────────────── │
│  Scenario    gsm8k                                       │
│  Score       90/100  (Diff: +5)                          │
│  Previous    85/100                                      │
│  Reasoning   Agent correctly utilized calculator for ... │
╰──────────────────────────────────────────────────────────╯
```
Provide your own API target!

```
evalmonkey run-benchmark --scenario mmlu --target-url http://localhost:8000/my-custom-agent
```

Resiliency and Reliability are arguably the most crucial components of any highly distributed system. Multi-agent workflows—with their isolated contexts, recursive tool calls, and cascading API dependencies—behave much like microservice architectures! As your agents push logic out into the real world, you must benchmark against brutal realities: dropped schemas, corrupted responses, and malicious payload injections.
EvalMonkey goes far beyond standard network testing by deeply assessing your agent's Production Resilience! We support two distinct classes of Chaos injections depending on how deeply you wish to test:
You don't need to change a single line of your target agent's code for these tests! EvalMonkey intercepts the benchmark dataset payload before transmission and corrupts the HTTP body, so you can measure how your agent's LLM holds up against bad actors.
| Profile | Description |
|---|---|
| `client_prompt_injection` | Appends adversarial "IGNORE PREVIOUS INSTRUCTIONS" jailbreaks to test system-message robustness. |
| `client_typo_injection` | Heavily misspells words to test your LLM's semantic inference flexibility. |
| `client_schema_mutation` | Renames incoming JSON schema keys (e.g. `question` → `query`) to verify the API handles schema violations without crashing. |
| `client_language_shift` | Radically shifts the language of the request instructions to attempt safety bypasses. |
| `client_payload_bloat` | Floods the payload with thousands of characters to test token limits and prompt-truncation crash safety. |
| `client_empty_payload` | Sends entirely blank strings to verify graceful rejection handling. |
| `client_context_truncation` | Slices the request text exactly in half to simulate incomplete streaming. |
| `client_unicode_flood` | Injects invisible Unicode control characters and zero-width joiners between every character — a real-world tokeniser confusion attack. |
| `client_role_impersonation` | Prepends a fake `[SYSTEM OVERRIDE]` instruction to the user turn — tests whether system-prompt guardrails can be bypassed via user messages. |
| `client_repetition_loop` | Repeats the payload 50× to simulate a stuck retry loop — exercises token budget limits and rate-limit handling. |
| `client_negative_sentiment` | Wraps the request in angry, hostile emotional framing — tests agent professionalism under an abusive customer-support scenario. |
| `client_length_constraint_violation` | Appends a conflicting "respond in exactly 2 words" constraint to a complex task — simulates contradictory user instructions common in chatbots. |
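To make the idea concrete, here is an illustrative sketch of what a `client_unicode_flood`-style payload mutator might look like (an illustration, not EvalMonkey's actual implementation):

```python
# Interleave zero-width joiners (U+200D) between every character of the
# prompt: invisible to a human reader, disruptive to a tokeniser.
ZWJ = "\u200d"

def unicode_flood(prompt: str) -> str:
    return ZWJ.join(prompt)

mutated = unicode_flood("What is 2 + 2?")
# One ZWJ between each adjacent pair of characters:
assert len(mutated) == 2 * len("What is 2 + 2?") - 1
```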
```
# Test a single prompt injection against your agent without modifying your code!
evalmonkey run-chaos --scenario arc --chaos-profile client_prompt_injection

# Unicode tokeniser attack
evalmonkey run-chaos --scenario mmlu --chaos-profile client_unicode_flood

# 🌪️ INJECT ALL 12 CLIENT MUTATIONS SEQUENTIALLY
evalmonkey run-chaos-suite --scenario gsm8k --limit 3
```

To verify deeper failure modes (context truncation, multi-step LLM hallucination recovery, and tool back-offs), EvalMonkey attaches the `X-Chaos-Profile` header over HTTP. You add ~3 lines of logic to your FastAPI/Flask middleware to trigger each breakage. See `apps/rag_app/app.py` for a complete reference implementation.
| Profile | What it tests |
|---|---|
| `schema_error` | Internal tool returns a malformed/corrupt string instead of valid JSON — tests your agent's output-parsing resilience. |
| `latency_spike` | Agent sleeps 5 s before responding — verifies callers implement request timeouts and don't block forever. |
| `rate_limit_429` | Returns HTTP 429 to simulate LLM provider quota exhaustion mid-workflow — tests exponential back-off & retry logic. |
| `context_overflow` | Floods the prompt with 120 k repetitions — tests intelligent truncation before token-limit crashes. |
| `hallucinated_tool` | Injects fabricated data into the tool result — tests whether your agent validates / cross-checks tool output. |
| `empty_response` | Drops the response body entirely — tests graceful null-handling rather than silent failures. |
| `timeout_no_response` | Agent hangs for 120 s — validates that clients enforce read-timeouts and surface a proper error to the user. |
| `model_downgrade` | Silently swaps the configured model for the weakest available fallback — tests whether answer-quality degradation is detected. |
| `memory_amnesia` | Replaces the incoming message with a blank-slate notice — simulates session/Redis failure wiping conversation state. |
| `partial_response_truncation` | Returns only the first 20 characters of the answer — mimics an ALB/nginx proxy timeout cutting off long streaming responses mid-transmission. |
| `cascading_tool_failure` | Returns a structured tool-error response after the LLM call — simulates a downstream vector DB or search API crashing mid-chain and tests graceful degradation. |
Minimal middleware snippet (FastAPI), inside your request handler:

```python
chaos_profile = request.headers.get("X-Chaos-Profile")
if chaos_profile == "partial_response_truncation":
    return {"status": "success", "data": agent_answer[:20]}
elif chaos_profile == "cascading_tool_failure":
    return {"status": "tool_error", "error_message": "VectorDB connection refused", "data": None}
```

```
# Test proxy-timeout truncation on a research agent
evalmonkey run-chaos --scenario hotpotqa --sample-agent research_agent --chaos-profile partial_response_truncation

# Validate model-quality degradation detection
evalmonkey run-chaos --scenario mmlu --sample-agent rag_app --chaos-profile model_downgrade

# Classic server-side context overflow
evalmonkey run-chaos --scenario mmlu --sample-agent research_agent --chaos-profile context_overflow
```

Metrics Output:
```
╭──────────────────────────────────────────────────────────╮
│              🔥 Chaos Engineering Report 🔥              │
│ ──────────────────────────────────────────────────────── │
│  Scenario:                   xlam                        │
│  Chaos Profile:              schema_error                │
│  Baseline Capability Score:  90                          │
│  Post-Chaos Resilience:      30                          │
│  Status:                     DEGRADED CAPABILITY         │
╰──────────────────────────────────────────────────────────╯
```
EvalMonkey natively ships with a Model Context Protocol (MCP) server! This allows AI IDEs (like Cursor) or external agents (like Claude Desktop) to invoke EvalMonkey tools automatically while they build your agent.
Add the following to your MCP configuration file (e.g. claude_desktop_config.json):
```json
{
  "mcpServers": {
    "evalmonkey": {
      "command": "evalmonkey",
      "args": ["serve-mcp"]
    }
  }
}
```

Once connected, your AI assistant will gain the ability to list benchmarks, trigger full evaluation runs, inject chaos payload mutators, pull historical trends, and generate improvement eval assets — entirely autonomously while helping you build your agent!
| Tool | What it does |
|---|---|
| `run_benchmark` | Run a standard benchmark against any HTTP agent URL |
| `run_chaos` | Run a benchmark with a specific chaos profile injected |
| `get_benchmark_history` | Return chronological score history for a scenario |
| `generate_improvement_evals` | Run a benchmark, capture failures, synthesise targeted test cases, save to `output/` |
| `get_eval_assets` | Read saved `traces.json` / `evals.json` / `improvement_prompt.md` directly into context |
| `run_full_pipeline` | One-shot: baseline + chaos + eval generation + optional Langfuse export |
Example Claude Code / Cursor session:
```
# Ask Claude Code to run the full loop:
"Run the full EvalMonkey pipeline on my agent at http://localhost:8000/solve
 using the gsm8k scenario with prompt injection and payload bloat chaos tests.
 Then read the improvement prompt and fix my agent."

# Claude Code will call:
# 1. run_full_pipeline(scenario="gsm8k", target_url="...", chaos_profiles="client_prompt_injection,client_payload_bloat")
# 2. get_eval_assets(output_dir="output/gsm8k_...")  ← reads the improvement brief
# 3. Edits your agent code to fix the failures
# 4. run_benchmark(...)  ← verifies the fix
```
When a benchmark scores poorly (< 70/100 by default), EvalMonkey automatically:
- Saves all failing traces to `output/<scenario>_<ts>/traces.json`
- Asks the judge LLM to synthesise targeted improvement test cases → `evals.json`
- Generates a ready-to-paste coding-agent prompt → `improvement_prompt.md`
```
# After a failing benchmark run, EvalMonkey prints:
# ⚠️ 3 sample(s) scored below threshold — eval assets saved.
# Output → output/gsm8k_20260425_212530/
# 🛠 Next steps to improve your agent:
# 1. Regenerate evals anytime:
#    evalmonkey generate-evals --traces-file output/gsm8k_.../traces.json
# 2. Pass improvement brief to your coding agent:
#    cat output/gsm8k_.../improvement_prompt.md | pbcopy
# 3. Re-run after fixing:
#    evalmonkey run-benchmark --scenario gsm8k
```

```
# Re-generate evals from saved traces (without re-running the benchmark):
evalmonkey generate-evals --traces-file output/gsm8k_20260425_212530/traces.json

# Push evals to Langfuse for team sharing:
evalmonkey generate-evals \
  --traces-file output/gsm8k_20260425_212530/traces.json \
  --langfuse-dataset my_agent_failures
```

Langfuse is optional. EvalMonkey works completely without it. Only configure `LANGFUSE_PUBLIC_KEY` + `LANGFUSE_SECRET_KEY` in `.env` if you want to push generated evals to a Langfuse dataset for cloud storage or LLM-as-judge workflows.
Run the full benchmark + chaos + eval-generation pipeline against the built-in `rag_app` sample agent:

```
# First time setup:
cp .env.example .env   # fill in EVAL_MODEL + your LLM provider key
pip install -e .

# Run everything:
./demo_rag_app.sh
```

The script will:
- 🚀 Start `rag_app` in the background
- 📊 Run 3 baseline benchmarks (`gsm8k`, `mmlu`, `arc`)
- 🔥 Run 5 chaos profiles
- 🛠 Merge all failing traces → generate `output/demo_<ts>/evals.json` + `improvement_prompt.md`
- 💡 Print the exact `cat` command to paste into Claude Code or Cursor
- 📈 Show your historical Production Reliability trend
Output directory structure:
```
output/demo_20260425_212530/
  traces.json            ← all failing traces (input, output, score, reasoning)
  evals.json             ← LLM-synthesised targeted test cases (Langfuse-compatible)
  improvement_prompt.md  ← paste into Claude Code / Cursor to auto-fix your agent
```
Check your agent's reliability trends over time!
```
evalmonkey history --scenario gsm8k
```

Metrics Output:
```
📈 Historical Trend for: gsm8k 📈
╭──────────────────┬──────────┬───────╮
│ Date             │ Run Type │ Score │
├──────────────────┼──────────┼───────┤
│ 2026-04-16 18:32 │ BASELINE │ 85    │
│ 2026-04-16 18:33 │ BASELINE │ 90    │
│ 2026-04-16 18:35 │ CHAOS    │ 30    │
╰──────────────────┴──────────┴───────╯

🚀 Production Reliability Metric: 66.0 / 100.0
```

(Calculated as 60% of the most recent baseline capability score + 40% of the most recent chaos resilience score.)
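The stated weighting can be checked directly against the scores in the trend table (a sketch of the formula as described, not EvalMonkey's source):

```python
# Production Reliability = 60% most recent baseline + 40% most recent chaos.
def production_reliability(baseline: float, chaos: float) -> float:
    return 0.6 * baseline + 0.4 * chaos

# Most recent baseline = 90, most recent chaos = 30 (from the table above).
score = production_reliability(90, 30)
print(round(score, 1))  # prints: 66.0
```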
This project is licensed under Apache 2.0. See the LICENSE file for details.