276 changes: 109 additions & 167 deletions README.md
# Code-Swarm: SOTA Agent Swarm Architecture

> **NO tmux! NO worktrees! NO background dispatch!**
> Subagent delegation happens ONLY via opencode-native oh-my-opencode sub-sessions.

## 🤖 Main Agents

### SIN-Zeus — Supreme Fleet Commander
| Property | Value |
|----------|-------|
| Model | `fireworks-ai/minimax-m2.7` |
| Role | Fleet Commander |
| Mode | Primary |
| Reasoning | xhigh |

**Capabilities:**
- github-orchestration
- fleet-dispatch
- multi-agent-coordination
- planning
- research

**Hard Rules:**
- NEVER_IDLE_FLEET
- NEVER_DIRECT_CODING
- GITHUB_IS_SOURCE_OF_TRUTH
- MIN_2_PARALLEL_TOOLS

### SIN-Solo — Direct Single-Agent Executor (formerly Sin-Executor-Solo)
| Property | Value |
|----------|-------|
| Model | `vercel/deepseek-v4-pro` |
| Role | Direct Executor |
| Mode | Primary |

**Capabilities:**
- direct-coding
- single-agent-execution
- no-delegation
- minimal-invasive-changes

**Hard Rules:**
- WORK_ALONE
- MINIMAL_CHANGES
- NO_GOVERNANCE_EDITS
- VALIDATE_IMMEDIATELY

### Coder-SIN-Qwen
| Property | Value |
|----------|-------|
| Model | `vercel/deepseek-v4-flash` |
| Role | Alternative Coder |

### Stealth-Orchestrator
| Property | Value |
|----------|-------|
| Model | `vercel/deepseek-v4-flash` |
| Role | Browser Automation |
# Infra-SIN-OpenCode-Stack — Cognitive Assembly Line

**Version:** 1.0.0
**As of:** 2026-05-01
**Authors:** SIN-Zeus + Explore Agent

---

## 🧠 What is the Cognitive Assembly Line?

The Cognitive Assembly Line is a structured pipeline for automated, parallel agent workflows. It consists of 6 phases:

```
User Prompt → Descriptor → Router → Parallel Swarms → Validation Layer → Execution Layer → Aggregation → Final Output
```

---

## Simone-MCP Integration

**Source**: https://github.com/Delqhi/Simone-MCP

Every Code-Swarm agent uses **Simone-MCP** for AST-level code operations via the MCP 2.0 protocol.

### Simone-MCP Tools

| Tool | Type | Description |
|:---|:---|:---|
| `code.find_symbol` | Read | Locate symbol definitions across workspace |
| `code.find_references` | Read | Find textual references to a symbol |
| `code.replace_symbol_body` | Write | Replace the body of a Python function |
| `code.insert_after_symbol` | Write | Insert text immediately after a symbol block |
| `code.project_overview` | Read | Summarize workspace footprint and file types |
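As a rough illustration of the call contract, a Simone-MCP tool invocation is a tool name from the table above plus a JSON payload. The dispatcher below is a hypothetical stub, not the real Simone-MCP client:

```python
# Hypothetical sketch of dispatching a Simone-MCP tool call by name.
# Tool names match the table above; the handler is a stub that only
# echoes the request shape instead of talking to a real MCP server.

READ_TOOLS = {"code.find_symbol", "code.find_references", "code.project_overview"}
WRITE_TOOLS = {"code.replace_symbol_body", "code.insert_after_symbol"}

def call_tool(name: str, payload: dict) -> dict:
    """Route a tool call by name; reject unknown tools."""
    if name not in READ_TOOLS | WRITE_TOOLS:
        raise ValueError(f"unknown Simone-MCP tool: {name}")
    # A real client would send this over the MCP transport.
    return {"tool": name, "readonly": name in READ_TOOLS, "args": payload}

result = call_tool("code.find_symbol", {"symbol": "MyClass"})
print(result["readonly"])  # read tools never modify the workspace
```

The read/write split matters for swarm safety: read tools can be fanned out in parallel, while write tools need sequencing.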
### Deployment

**Local Development:**
```python
import asyncio
from simone_mcp.client import SimoneClient
from simone_mcp.bridge import SwarmSimoneBridge

async def main():
    bridge = SwarmSimoneBridge(local=True)
    await bridge.analyze_code("MyClass")

asyncio.run(main())
```

**Production (OCI VM):**
```
ubuntu@92.5.60.87:8234
```

---

## 📦 Included Components

### **Main Agents** (in `OpenSIN-documentation/.opencode/opencode.json`)
- `SIN-Zeus` — Control Plane Orchestrator
- `coder-sin-swarm` — Primary Coding Agent
- `Coder-SIN-Qwen` — Alternative Coding Agent
- `Stealth-Orchestrator` — Browser Automation
- `SIN-Solo` — Single-Agent Executor
- **New:** `explore` — Codebase Analysis (Step 3.5 Flash, reasoning high)
- **New:** `orchestrator` — Pipeline Coordination (Minimax M2.7)

### **Subagents** (in `oh-my-opencode.json`, this file)
21 specialized subagents, organized into 6 groups:

| Group | Subagents | Purpose |
|--------|-----------|-------|
| **Audio & Media** | `audio_agent`, `multimedia_looker` | TTS, STT, Vision, GUI |
| **Web Research** | `athena`, `argus`, `daedalus`, `hermes_scout` | Strategic, Multi-Source, Technical Research, Fast Retrieval |
| **Code Quality** | *placeholder* (code-checker, test-runner, security-scanner, performance-auditor) | Linting, Testing, Security, Perf |
| **Documentation** | *placeholder* (doc-writer, pr-generator, changelog-writer) | Docs, PRs, Changelog |
| **DevOps** | *placeholder* (ci-agent, env-manager, infra-provisioner, backup-agent) | CI/CD, Env, IaC, Backup |
| **Data Science** | *placeholder* (data-viz, data-analyzer, ml-trainer, ml-deployer) | Viz, EDA, ML Training/Deployment |

**Active subagents:** 13 of the 21 are configured as MCP servers in opencode.json (`enabled: true/false`). The remaining 8 exist as placeholders in opencode.json but still have to be implemented.
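The `enabled` tally can be computed in a few lines. The config shape below is a simplified assumption about the MCP section, not the full opencode.json schema:

```python
# Sketch: count which subagent MCP servers are switched on.
# The dict mirrors the assumed `enabled: true/false` shape described
# above; the server names are illustrative.
mcp_servers = {
    "audio_agent": {"enabled": True},
    "athena": {"enabled": True},
    "code-checker": {"enabled": False},  # placeholder, not yet implemented
}

active = [name for name, cfg in mcp_servers.items() if cfg.get("enabled")]
print(f"{len(active)} of {len(mcp_servers)} subagents active")
```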

---

## SOTA Implementation

### P0 - Production Critical
- [x] **Data Persistence**: PostgreSQL schema + Redis cache + S3 storage + pgvector
- [x] **Monitoring**: Prometheus metrics + OpenTelemetry tracing + Health checks
- [x] **Security**: OAuth2/JWT auth + RBAC permissions + bcrypt
- [x] **Testing**: Unit tests + Integration tests + Load tests (Locust)

### P1 - High Value
- [x] **API Gateway**: FastAPI REST + OpenAPI/Swagger + Rate limiting
- [x] **Kubernetes**: Helm chart + HPA auto-scaling + Istio
- [x] **WebSockets**: Real-time agent status + Live task updates
- [x] **CLI**: Rich output + Progress bars + Typer
- [x] **Documentation**: MkDocs + Swagger

### P2 - Optimization
- [x] **Self-Improvement**: RLHF feedback loops + Bayesian optimization

---

## 🔌 MCP-Server Integration

Each subagent is exposed as an **MCP server**. It is configured in:

- **OpenSIN-documentation/.opencode/opencode.json** (MCP section)
- **OpenSIN-documentation/.opencode/oh-my-opencode.json** (subagent definitions, this file)

**Example subagent (`audio_agent`):**

```json
{
"role": "Audio TTS/STT",
"model": "groq/whisper-large-v3",
"fallback_model": "nvidia-nim/whisper-large-v3",
"tools": ["whisper", "coqui-tts", "ffmpeg"],
"benchmarks": ["audio_transcription", "tts_quality"],
"responsibilities": ["speech-to-text", "text-to-speech", "audio-analysis"]
}
```
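A definition like this can be sanity-checked before it lands in oh-my-opencode.json. The required-key list below is an assumption derived from the example, not an official schema:

```python
# Sketch: check a subagent definition for missing keys.
# REQUIRED_KEYS is assumed from the example above, not an official schema.
REQUIRED_KEYS = {"role", "model", "tools", "responsibilities"}

def validate_subagent(defn: dict) -> list:
    """Return the required keys missing from a subagent definition."""
    return sorted(REQUIRED_KEYS - defn.keys())

audio_agent = {
    "role": "Audio TTS/STT",
    "model": "groq/whisper-large-v3",
    "fallback_model": "nvidia-nim/whisper-large-v3",
    "tools": ["whisper", "coqui-tts", "ffmpeg"],
    "responsibilities": ["speech-to-text", "text-to-speech", "audio-analysis"],
}
print(validate_subagent(audio_agent))    # []
print(validate_subagent({"role": "x"}))  # ['model', 'responsibilities', 'tools']
```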

---

## Quick Start

```bash
# Clone and install
gh repo clone OpenSIN-Code/Code-Swarm
cd Code-Swarm
pip install -r requirements.txt

# Copy configs to opencode
cp configs/opencode.json ~/.config/opencode/opencode.json
cp configs/oh-my-opencode.json ~/.config/opencode/oh-my-opencode.json

# Run tests
pytest tests/unit/

# Start API server
uvicorn api.main:app --reload

# CLI commands
python -m cli.main status
```

---

## 🚀 Using the Pipeline

### 1. Pipeline-Descriptor
Analyzes the user prompt and creates a pipeline template:

```bash
echo "Create a Python function..." | opencode run --command pipeline-descriptor
```

**Output:**
```json
{
  "pipeline": ["analyze_codebase_patterns", "determine_test_structure"],
  "subagents": ["explore"],
  "complexity": "low",
  "estimated_time": "2-5"
}
```

### 2. Pipeline-Router
Dispatches tasks to subagents based on the descriptor:

```bash
opencode run --command pipeline-router --args '{"subagents": ["explore", "librarian"]}'
```
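Conceptually, the router turns the descriptor JSON from step 1 into one dispatch per requested subagent. A minimal sketch; the command-line shape mirrors the examples above, while the per-agent mapping logic is an assumption:

```python
import json
import shlex

def build_dispatches(descriptor: dict) -> list:
    """One illustrative `opencode run` invocation per requested subagent."""
    cmds = []
    for agent in descriptor["subagents"]:
        args = json.dumps({"subagents": [agent]})
        # shlex.quote keeps the JSON payload safe for a shell command line
        cmds.append(f"opencode run --command pipeline-router --args {shlex.quote(args)}")
    return cmds

descriptor = {"pipeline": ["analyze_codebase_patterns"],
              "subagents": ["explore", "librarian"]}
for cmd in build_dispatches(descriptor):
    print(cmd)
```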

---

## Architecture

```
SIN-Zeus (Fleet Commander)
├── hermes (Dispatcher) → Fireworks AI + Simone-MCP
├── prometheus (System Planner) → Fireworks AI + Simone-MCP
├── zeus (Validation Superlayer) → Fireworks AI
├── atlas (Backend Engineer) → Fireworks AI + Simone-MCP
├── SIN-Solo (Direct Executor) → Vercel DeepSeek V4 Pro
├── multimedia_looker (Vision) → NVIDIA Nemotron 3 Nano Omni
└── LangGraph Pipeline (StateGraph + Simone-MCP + Feedback Loops)
```

### 3. Validation Layer
Runs code-quality checks (once the MCPs are enabled):

```bash
opencode run --command pipeline-validation --args '{"target": "src/"}'
```

### 4. Execution Layer
Runs the domain specialists:

```bash
opencode run --command pipeline-execution --args '{"agent": "coder-sin-swarm", "task": "..."}'
```
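All stage commands share one shape: `opencode run --command pipeline-<stage> --args <json>`. A helper that builds the argv without shell-quoting pitfalls; only the command shape is taken from the examples, the helper itself is hypothetical:

```python
import json

# Stage names assumed from the pipeline commands shown in this document.
STAGES = {"descriptor", "router", "validation", "execution", "aggregation", "full"}

def stage_argv(stage: str, args=None) -> list:
    """Build the argv for one pipeline stage invocation."""
    if stage not in STAGES:
        raise ValueError(f"unknown pipeline stage: {stage}")
    argv = ["opencode", "run", "--command", f"pipeline-{stage}"]
    if args is not None:
        argv += ["--args", json.dumps(args)]
    return argv

print(stage_argv("validation", {"target": "src/"}))
```

An argv list like this could be handed to `subprocess.run(...)` when the opencode CLI is installed.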

---

### 5. Aggregation
Collects the results and generates the PR/docs:

```bash
opencode run --command pipeline-aggregation --args '{"pr_title": "..."}'
```

### 6. Full Pipeline
All phases in one run:

```bash
echo "User prompt" | opencode run --command pipeline-full
```

---

## GitHub Issues

| # | Status | Description |
|---|--------|-------------|
| #15 | ✅ Epic | Simone-MCP Full Integration |
| #16 | 🔧 TODO | Deploy Simone-MCP on OCI VM |
| #17 | 🔗 TODO | Configure endpoint |
| #18 | 🧠 TODO | LangGraph integration |
| #19 | 💾 TODO | Hybrid memory |
| #20 | ⚙️ TODO | opencode.json MCP config |
| #21 | ✅ Epic | SIN-Zeus & SIN-Solo Fusion |

---

## Model Hierarchy

| Model | Agents | Provider |
|--------|---------|----------|
| `fireworks-ai/minimax-m2.7` | SIN-Zeus, coder-sin-swarm, hermes, prometheus, zeus, atlas, hephaestus | Fireworks AI |
| `vercel/deepseek-v4-flash` | Coder-SIN-Qwen, Stealth-Orchestrator, 10 subagents | Vercel |
| `vercel/deepseek-v4-pro` | SIN-Solo | Vercel |
| `nvidia/nemotron-3-nano-omni` | multimedia_looker | NVIDIA |
| `groq/whisper-large-v3` | audio_agent | Groq |
| **Simone-MCP (MCP 2.0)** | **All 22 agents** | **AST-Level Operations** |

---

## 📊 Benchmark Comparison (2026)

The latest coding benchmarks for models such as GLM-5.1, DeepSeek V4-Pro, Qwen 3.6 Max, MiniMax M2.5, Mistral Small 4, Codestral, and Step 3.5 Flash are documented in:

**OpenSIN-documentation:** `docs/guides/aktuelle-coding-benchmarks-2026.md`

## Tools

OpenCode provides various tools for different tasks:

- [`look_at`](./docs/tools/look_at.md) - Extract basic information from media files (PDFs, images, diagrams)

For full tool documentation, see the [Tools Reference](./docs/tools/README.md).

---

## 🛠️ Development & Extension

### Adding New Subagents

1. **Implement the MCP server** in `OpenSIN-backend/bin/` (e.g. `code-checker`, `test-runner`)
2. **Add the MCP configuration** to `OpenSIN-documentation/.opencode/opencode.json`
3. **Add the subagent definition** to `oh-my-opencode.json` (this file)
4. **Extend the pipeline commands** if needed
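Step 2 amounts to adding a block to the MCP section of opencode.json. A sketch that generates such an entry; the `command`/`enabled` field names are an assumption about the schema, not a documented contract:

```python
import json

def mcp_entry(name: str, binary_dir: str = "OpenSIN-backend/bin") -> dict:
    """Build an assumed MCP-server config block for a new subagent."""
    # Start disabled; flip `enabled` once the server is implemented.
    return {name: {"command": [f"{binary_dir}/{name}"], "enabled": False}}

print(json.dumps(mcp_entry("code-checker"), indent=2))
```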

---

### Customizing the Pipeline

The pipeline commands are defined in `OpenSIN-documentation/.opencode/opencode.json`. They can be extended as needed (e.g. `pipeline-<stage>` for specific workflows).

---

## Documentation

The monolithic `AGENTS.md` has been migrated to a modular documentation structure:
### Quick Links

| Category | Path | Description |
|----------|------|-------------|
| **Rules** | [`docs/rules/`](./docs/rules/README.md) | All operational rules |
| **Core** | [`docs/rules/core/`](./docs/rules/core/README.md) | Vision Gate, Anti-Loops |
| **Design** | [`docs/rules/design/`](./docs/rules/design/design-routing.md) | Design task routing |
| **LLM** | [`docs/rules/llm/`](./docs/rules/llm/llm-calls.md) | LLM call rules |
| **Browser** | [`docs/rules/browser/`](./docs/rules/browser/chrome-session.md) | Chrome & automation |
| **Configuration** | [`docs/configuration/`](./docs/configuration/README.md) | Models & plugins |
| **Migration** | [`docs/MIGRATION.md`](./docs/MIGRATION.md) | AGENTS.md → modular docs |

---

## 📚 Related Documentation

- **OpenSIN-documentation AGENTS.md** — Agent mandates and tech-stack rules
- **OpenCode Config Reference** — https://opencode.ai/config.json
- **Cognitive Assembly Line Blueprint** — `docs/guides/cognitive-assembly-line-subagent-blueprint.md`

### MkDocs

This repo has full MkDocs support. Run:

```bash
pip install mkdocs mkdocs-material
mkdocs serve
```

For full documentation site, see [mkdocs.yml](./mkdocs.yml).

---

## 🤝 Contributing

Changes to this configuration should be made via pull requests. See `OpenSIN-documentation/CONTRIBUTING.md`.

RecursiveMAS integration - see plan: https://github.com/OpenSIN-AI/OpenSIN-overview/blob/main/docs/03_ops/recursive-mas-integration.md

---

**Powered by OpenSIN-AI** — Enterprise AI Agents working autonomously.