
test: adapt fork-choice harness to refreshed fixture format #303

Open
MegaRedHand wants to merge 1 commit into pr-ssz-htr-xmss from pr-fixture-schema

Conversation


@MegaRedHand MegaRedHand commented Apr 17, 2026

Summary

Updates the fork-choice spec-test runner to consume the new fixture schema shipped by leanSpec e9ddede:

  1. Tick steps can use interval or time. interval encodes the absolute interval count since genesis; time is UNIX seconds. Derive the millisecond timestamp from whichever field is present and honor hasProposal.
  2. Tick checks carry a time field — the expected Store::time() value in intervals since genesis. Validate it instead of erroring out as "unsupported".
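The tick-step handling above can be sketched as follows. This is an illustrative sketch, not the harness's actual code: the function name `tick_time_ms` and the value of `MILLISECONDS_PER_INTERVAL` are assumptions.

```rust
// Illustrative sketch of the tick-step time derivation; the constant's
// value and the function name are assumptions, not the real harness code.
const MILLISECONDS_PER_INTERVAL: u64 = 1_000; // assumed value

/// Derive the millisecond timestamp from whichever field the fixture provides.
fn tick_time_ms(genesis_time: u64, interval: Option<u64>, time: Option<u64>) -> Result<u64, String> {
    match (interval, time) {
        // `interval` is the absolute interval count since genesis.
        (Some(i), _) => Ok(genesis_time * 1_000 + i * MILLISECONDS_PER_INTERVAL),
        // `time` is UNIX seconds.
        (None, Some(t)) => Ok(t * 1_000),
        (None, None) => Err("tick step missing both time and interval".to_string()),
    }
}
```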

Skipped for now

The new fixtures also add a proof.participants bitfield to gossipAggregatedAttestation steps, which changes the expected behavior of those step handlers. Rather than add another bypass helper here, this PR leaves the existing single-validator shortcut in place and adds three fixtures to SKIP_TESTS:

  • test_tick_interval_0_skips_acceptance_when_not_proposer
  • test_tick_interval_progression_through_full_slot
  • test_safe_target_uses_merged_pools_at_interval_3

The follow-up #304 swaps the bypass helpers for real verifying entry points and unblocks these three.

Fixes

test_on_tick_advances_across_multiple_empty_slots (was failing with 'time' check not supported).

Stack

Stacked on #302. Merge after it.

Test plan

  • cargo test -p ethlambda-blockchain --release --test forkchoice_spectests passes (67/70 plus 3 skipped)
  • cargo test --workspace --release passes (314 passed, 6 ignored)
  • CI is green

@github-actions

🤖 Kimi Code Review

Review of PR #303: Refactor attestation and block processing to separate verification from core logic

Overall Assessment: Good refactoring that properly separates cryptographic verification from state transition logic. Improves testability while maintaining security boundaries. One performance issue identified.


1. Performance: Redundant state lookup in attestation processing

File: crates/blockchain/src/store.rs
Lines: ~605 and ~737

target_state is fetched twice when processing aggregated attestations:

  1. First in on_gossip_aggregated_attestation for XMSS signature verification
  2. Again in on_gossip_aggregated_attestation_core for validator bounds checking

Suggestion: Pass the validators or the state to the core function to avoid the redundant store lookup:

// In on_gossip_aggregated_attestation:
let target_state = store.get_state(...)?;
// ... verify signatures ...
on_gossip_aggregated_attestation_core(store, aggregated, &target_state)?;

// In on_gossip_aggregated_attestation_core:
pub fn on_gossip_aggregated_attestation_core(
    store: &mut Store,
    aggregated: SignedAggregatedAttestation,
    target_state: &BeaconState,  // Add this parameter
) -> Result<(), StoreError> {
    // Remove the second get_state call
    let num_validators = target_state.validators.len() as u64;
    // ...
}

2. Code correctness: Duplicate block handling

File: crates/blockchain/src/store.rs
Lines: ~678-681

The early return for duplicate blocks uses store.has_state(&block_root), which relies on the invariant that states are only stored after successful block processing. This is correct for the current architecture, but consider documenting this assumption:

// Assumes post-block states are stored immediately after processing
if store.has_state(&block_root) {
    return Ok(());
}

3. Security: Validation ordering

File: crates/blockchain/src/store.rs
Lines: ~615-644 (new order)

The refactor moves validate_attestation_data (now in core) to execute after XMSS signature verification. While this is acceptable (signature verification requires the target state anyway), ensure validate_attestation_data performs only cheap checks. If it validates attestation timing rules (e.g., not from future epochs), consider whether early rejection is preferred over the redundant state lookup optimization mentioned above.


4. Test code safety

File: crates/blockchain/src/store.rs
Lines: ~694-718

on_gossip_attestation_without_verification remains pub despite being documented as test-only. Consider:

#[cfg(test)]
pub fn on_gossip_attestation_without_verification(...)

Or if used by integration tests in crates/blockchain/tests/, the current approach with explicit documentation is acceptable.


5. Positive observations

  1. Clean separation of concerns: Moving signature verification out of on_block_core and on_gossip_aggregated_attestation_core makes the code more testable while maintaining that production paths always verify.

  2. SSZ correctness: The change from HashedAttestationData::new(data.clone()) to hashing aggregated.data directly with hash_tree_root() drops a clone of the attestation data on the verification path.


Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt

@github-actions

🤖 Claude Code Review

Now I have enough context for a thorough review. Here are my findings:


PR #303 — Adapt fork-choice harness to refreshed fixture format

Overview

This PR adapts the fork-choice spec-test harness to consume the updated leanSpec e9ddede fixture schema, fixing 4 previously broken tests. It also refactors on_block_core and on_gossip_aggregated_attestation to cleanly separate signature verification from core processing logic.


Correctness

Tick time semantics are consistent.
The interval branch computes genesis_time * 1000 + interval * MILLISECONDS_PER_INTERVAL, which on_tick then converts back to (ms - genesis_ms) / MILLISECONDS_PER_INTERVAL = interval. The checks.time comparison against store.time() (also in intervals-since-genesis) is internally consistent.
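The round-trip claimed here can be checked with a minimal sketch (function names are illustrative and the `MILLISECONDS_PER_INTERVAL` value is assumed):

```rust
// Round-trip sketch for the interval <-> milliseconds conversion described
// above. The value of MILLISECONDS_PER_INTERVAL is an assumption.
const MILLISECONDS_PER_INTERVAL: u64 = 1_000;

fn interval_to_ms(genesis_time: u64, interval: u64) -> u64 {
    genesis_time * 1_000 + interval * MILLISECONDS_PER_INTERVAL
}

fn ms_to_interval(genesis_time: u64, ms: u64) -> u64 {
    (ms - genesis_time * 1_000) / MILLISECONDS_PER_INTERVAL
}
```

For every interval i, ms_to_interval(g, interval_to_ms(g, i)) == i, which is the internal consistency the review points out.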

Validation ordering change in on_gossip_aggregated_attestation.
Previously: validate_attestation_data → XMSS verification → store.
Now: XMSS verification → validate_attestation_data (inside _core) → store.

The expensive XMSS crypto now runs before the cheap slot/checkpoint validation. This is a minor performance regression: malformed attestations (wrong source/target, future slot, unknown block roots) will burn XMSS verification cycles before being rejected. Consider moving validate_attestation_data back to the top of on_gossip_aggregated_attestation, before the get_state + XMSS path.
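The cost argument can be illustrated with a toy model. All names here are hypothetical stand-ins for the real functions in store.rs; the point is only the ordering.

```rust
// Toy model of the suggested ordering: run the cheap attestation-data
// validation before the expensive XMSS verification, so malformed input
// is rejected without ever touching the crypto path. Names are hypothetical.
struct Costs {
    xmss_verifications: u32,
}

fn handle_attestation(data_ok: bool, sig_ok: bool, costs: &mut Costs) -> Result<(), &'static str> {
    // Stand-in for validate_attestation_data: cheap, purely local checks.
    if !data_ok {
        return Err("invalid attestation data");
    }
    // Stand-in for XMSS verification: only reached once the data is sane.
    costs.xmss_verifications += 1;
    if !sig_ok {
        return Err("bad aggregated signature");
    }
    Ok(())
}
```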

Double has_state check.
on_block now checks for duplicates before verifying signatures (good early-exit optimization), and on_block_core retains the same check (line 774). This is intentional defence-in-depth since on_block_core is now callable independently, but it's worth a comment on the on_block check explaining the early-exit purpose (which you have — fine).

sig_verification duration dropped from the block import log.
The ?sig_verification field was removed from the "Processed new block" log entry with no replacement. Signature verification is now done in on_block before the timed region, and its duration is not measured. This is a noticeable observability regression for diagnosing slow block imports. Consider measuring it in on_block and passing the duration as a field, or at minimum adding it back to on_block's span.


Security / API Surface

on_block_core and on_gossip_aggregated_attestation_core are now pub.

Making on_block_core fully public means any external crate can process blocks without signature verification — the same risk that on_block_without_verification carried, just with a different name. The doc comment says "tests that operate without valid signatures can call this directly", but pub grants that to production callers too.

Consider restricting visibility of both _core functions. Note, though, that the test harness in crates/blockchain/tests/ compiles as a separate crate (Cargo integration tests are not part of the library crate), so pub(crate) would hide the functions from it; keeping them callable there requires either pub with #[doc(hidden)] or a dedicated test-support feature.


Rust / Style

panic! vs error return in the tick arm.

(None, None) => panic!("tick step missing both time and interval"),

The rest of the harness uses return Err(...) for malformed fixture data (e.g., missing block, missing attestation). A panic here is inconsistent and would crash the whole test process rather than recording a clean failure for that fixture file. Prefer returning an error:

(None, None) => return Err(format!("Step {step_idx}: tick step missing both time and interval").into()),

hash_tree_root() vs HashedAttestationData::root() equivalence.
on_gossip_aggregated_attestation now uses aggregated.data.hash_tree_root() directly (line 633 area) for the XMSS-verification path, while on_gossip_aggregated_attestation_core still wraps in HashedAttestationData::new(aggregated.data.clone()). Confirm these two hash paths produce identical roots — if HashedAttestationData caches or overrides the root computation, a mismatch would cause silent double-hashing or a wrong data_root passed to the XMSS verifier.
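The equivalence worth asserting can be sketched with simplified types. The hash function and types below are stand-ins, not the real SSZ implementation: the caching wrapper's root() must equal the direct hash of the same data.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for SSZ hash_tree_root; the real code hashes the SSZ encoding.
fn hash_tree_root(data: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// Simplified stand-in for HashedAttestationData: computes the root once
// at construction and returns the cached value from root().
struct HashedAttestationData {
    root: u64,
}

impl HashedAttestationData {
    fn new(data: &str) -> Self {
        Self { root: hash_tree_root(data) }
    }
    fn root(&self) -> u64 {
        self.root
    }
}
```

If the wrapper ever overrode or transformed the root computation, the two paths would diverge silently, which is exactly the mismatch the review warns about.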

Minor: step.valid default.
In validate_checks and the step match arms, step.valid (a bool field in the fixture) governs expected pass/fail. No default is set in the ForkChoiceStep deserialisation; if a fixture omits valid, it will silently be false (Rust default). This pre-existed but is worth a #[serde(default = "bool_true")] or an explicit check to make fixtures with missing valid fail loudly.
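Why an omitted field silently becomes false: a plain #[serde(default)] (or #[derive(Default)]) falls back to bool's Default, which is false. A minimal illustration with a simplified struct name:

```rust
// bool::default() is false, so a fixture that omits `valid` would silently
// deserialize to a failing expectation unless an explicit default is wired in.
#[derive(Default)]
struct Step {
    valid: bool, // defaults to false when not set explicitly
}
```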


Test Coverage

The 70/70 pass rate is the main signal. The new AggregatedSignatureProof::empty(participants) construction in the harness correctly mirrors how build_signed_block fills attestation proofs, so the participants bitfield flows all the way into fork choice as intended.

The block step still auto-ticks to the block's slot time before processing: on_tick is called at line 84, immediately before the on_block_core call at line 85. Good.


Summary

The core logic is sound and the fixture compatibility fixes are correct. Three items worth addressing before merge:

  1. pub → pub(crate) on on_block_core and on_gossip_aggregated_attestation_core to contain the no-verification surface.
  2. Move validate_attestation_data back before XMSS verification in on_gossip_aggregated_attestation (performance, not correctness).
  3. panic! → return Err(...) in the tick (None, None) arm for test hygiene.

The sig_verification log loss is worth tracking as a follow-up if performance observability matters.


Automated review by Claude (Anthropic) · sonnet · custom prompt

@github-actions

🤖 Codex Code Review

Findings

  1. High: aggregated gossip messages now do expensive XMSS verification before the cheap semantic gate. In on_gossip_aggregated_attestation, the old upfront validate_attestation_data(...) call is gone, and the code now reaches verify_aggregated_signature(...) first; the same attestation-data checks only happen later, inside on_gossip_aggregated_attestation_core (both in store.rs). That means an attacker can send aggregated attestations with a known target.root but bad source/head topology or future-slot timing and still force full proof verification before rejection. For a consensus client, that is a real DoS regression on the gossip path.

  2. Medium: the forkchoice spectest harness now injects placeholder aggregated proofs that later code treats as real child proofs. Both the gossipAggregatedAttestation path and build_signed_block create AggregatedSignatureProof::empty(...) in forkchoice_spectests.rs. empty() writes zero-length proof_data (block.rs). Those proofs are then reused as aggregation children and in block attestation compaction (store.rs), where child proof deserialization requires real bytes and fails (lib.rs). So any fixture that later re-aggregates or compacts these attestations can silently lose participant coverage or diverge from spec behavior.

I did not run the test suite here: the sandbox blocks Cargo from writing its rustup/cargo state and fetching dependencies.


Automated review by OpenAI Codex · gpt-5.4 · custom prompt

@greptile-apps

greptile-apps bot commented Apr 17, 2026

Greptile Summary

Adapts the fork-choice spec-test harness to consume the refreshed leanSpec e9ddede fixture schema: tick steps now handle both interval (absolute count since genesis) and time (UNIX seconds), the hasProposal flag is forwarded to on_tick, checks.time is validated against Store::time(), and gossipAggregatedAttestation steps route through the new on_gossip_aggregated_attestation_core so the fixture's proof.participants bitfield populates the new payload buffer correctly. The PR also refactors on_block and on_gossip_aggregated_attestation in production code by extracting public *_core helpers and moving signature verification to the outer wrappers.

Confidence Score: 5/5

Safe to merge; all findings are P2 style/efficiency suggestions with no correctness impact on the fork-choice logic or test results.

All four comments are P2 (stale comment, redundant state lookup, validation-order change with no wrong-outcome risk, missing log field). The test changes correctly model the new fixture schema and pass 70/70 fork-choice spec tests. The production refactoring is logically sound.

crates/blockchain/src/store.rs — redundant target_state fetch and reordered validate_attestation_data in the on_gossip_aggregated_attestation production path.

Important Files Changed

Filename Overview
crates/blockchain/src/store.rs Refactors on_block and on_gossip_aggregated_attestation into public core helpers, makes on_block_core pub, and extracts validate_attestation_data into on_gossip_aggregated_attestation_core — introducing a redundant state lookup and a reordered validation step in the production path.
crates/blockchain/tests/forkchoice_spectests.rs Adapts the test harness to handle interval/time tick fields, hasProposal, gossipAggregatedAttestation with proof.participants, and validates the time store check; the TODO comment is now partially stale.
crates/blockchain/tests/types.rs Adds new fields (interval, has_proposal, proof) to ForkChoiceStep and AttestationStepData, and promotes the time field in StoreChecks from unsupported to validated; clean, no issues.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A["on_gossip_aggregated_attestation (production)"] --> B["fetch target_state\n(for pubkeys + bounds check)"]
    B --> C["bounds-check participants"]
    C --> D["collect pubkeys\nverify XMSS signature"]
    D --> E["on_gossip_aggregated_attestation_core"]
    E --> F["validate_attestation_data\n⚠ after sig verification"]
    F --> G["fetch target_state AGAIN\n⚠ redundant lookup"]
    G --> H["bounds-check AGAIN\n⚠ redundant"]
    H --> I["insert_new_aggregated_payload"]

    J["Tests / spec-test harness"] --> E

    K["on_block (production)"] --> L["duplicate-block check\nfetch parent_state\nverify_signatures"]
    L --> M["on_block_core (now pub)"]
    M --> N["state_transition\ninsert block+state\nupdate_head"]

    O["on_block_core (test harness)"] --> M

Comments Outside Diff (2)

  1. crates/blockchain/tests/forkchoice_spectests.rs, line 24-34 (link)

    P2 Stale TODO comment after gossipAggregatedAttestation fix

    The block comment above SKIP_TESTS still says "aggregated attestations with validator_id=0 into known (should use proof.participants into new). TODO: fix these" — but this PR fixes exactly that, routing through on_gossip_aggregated_attestation_core with the fixture's proof.participants bitfield. The gossipAggregatedAttestation clause of the TODO is now resolved; only the attestation-step part (individual attestations into known) remains relevant.


  2. crates/blockchain/src/store.rs, line 854-864 (link)

    P2 sig_verification timing removed from the "Processed new block" log

    Signature verification timing (?sig_verification) was dropped from the on_block_core log event when the verification step moved into on_block. The timing is now computed in on_block but never emitted, making it impossible to profile crypto overhead via logs.

    Consider capturing and logging it in on_block alongside the block-level summary, or passing it into on_block_core as an Option<Duration> so the existing log field can be restored.

    Suggested replacement for the comment block above SKIP_TESTS (from the stale-TODO comment):

        // We don't check signatures in spec-tests, so invalid-signature tests always pass.
        // Individual gossip attestation steps bypass the new→known promotion pipeline
        // (on_gossip_attestation_without_verification inserts directly into known payloads).
        // TODO: route attestation steps through the real promotion pipeline.
        const SKIP_TESTS: &[&str] = &[


    Comment thread crates/blockchain/src/store.rs Outdated
    Comment on lines +735 to +757
    pub fn on_gossip_aggregated_attestation_core(
        store: &mut Store,
        aggregated: SignedAggregatedAttestation,
    ) -> Result<(), StoreError> {
        validate_attestation_data(store, &aggregated.data)
            .inspect_err(|_| metrics::inc_attestations_invalid())?;

        let target_state = store
            .get_state(&aggregated.data.target.root)
            .ok_or(StoreError::MissingTargetState(aggregated.data.target.root))?;
        let num_validators = target_state.validators.len() as u64;
        if aggregated
            .proof
            .participant_indices()
            .any(|vid| vid >= num_validators)
        {
            return Err(StoreError::InvalidValidatorIndex);
        }

        let hashed = HashedAttestationData::new(aggregated.data.clone());
        store.insert_new_aggregated_payload(hashed, aggregated.proof);

        Ok(())
    }

    P2 Redundant state lookup and bounds check on the production call path

    When called from on_gossip_aggregated_attestation, on_gossip_aggregated_attestation_core re-fetches target_state (~line 742) and re-runs the participant bounds check (~lines 746-751) even though the outer function already did both at lines 613-621. Because get_state returns an owned State, this is two independent deserialization/copy operations for the same root.

    The check is still needed inside core for test-only callers, but the double work on the production path could be avoided by passing the pre-validated state through an internal helper.

    This is a P2 suggestion — no correctness impact, but worth a follow-up to avoid the extra state fetch on every gossiped aggregation.

    Prompt To Fix With AI
    This is a comment left during a code review.
    Path: crates/blockchain/src/store.rs
    Line: 735-757
    
    Comment:
    **Redundant state lookup and bounds check on the production call path**
    
    When called from `on_gossip_aggregated_attestation`, `on_gossip_aggregated_attestation_core` re-fetches `target_state` (~line 742) and re-runs the participant bounds check (~lines 746-751) even though the outer function already did both at lines 613-621. Because `get_state` returns an owned `State`, this is two independent deserialization/copy operations for the same root.
    
    The check is still needed inside core for test-only callers, but the double work on the production path could be avoided by passing the pre-validated state through an internal helper.
    
    This is a P2 suggestion — no correctness impact, but worth a follow-up to avoid the extra state fetch on every gossiped aggregation.
    
    How can I resolve this? If you propose a fix, please make it concise.

    Comment on lines 608 to 666
    pub fn on_gossip_aggregated_attestation(
        store: &mut Store,
        aggregated: SignedAggregatedAttestation,
    ) -> Result<(), StoreError> {
        validate_attestation_data(store, &aggregated.data)
            .inspect_err(|_| metrics::inc_attestations_invalid())?;

        // Verify aggregated proof signature
        // Resolve target state and validator pubkeys needed for signature verification.
        let target_state = store
            .get_state(&aggregated.data.target.root)
            .ok_or(StoreError::MissingTargetState(aggregated.data.target.root))?;
        let validators = &target_state.validators;
        let num_validators = validators.len() as u64;

        let participant_indices: Vec<u64> = aggregated.proof.participant_indices().collect();
        if participant_indices.iter().any(|&vid| vid >= num_validators) {
            return Err(StoreError::InvalidValidatorIndex);
        }

        let pubkeys: Vec<_> = participant_indices
            .iter()
            .map(|&vid| {
                validators[vid as usize]
                    .get_attestation_pubkey()
                    .map_err(|_| StoreError::PubkeyDecodingFailed(vid))
            })
            .collect::<Result<_, _>>()?;

        let hashed = HashedAttestationData::new(aggregated.data.clone());
        let data_root = hashed.root();
        let data_root = aggregated.data.hash_tree_root();
        let slot: u32 = aggregated.data.slot.try_into().expect("slot exceeds u32");

        {
            let _timing = metrics::time_pq_sig_aggregated_signatures_verification();
            ethlambda_crypto::verify_aggregated_signature(
                &aggregated.proof.proof_data,
                pubkeys,
                &data_root,
                slot,
            )
        }
        .map_err(StoreError::AggregateVerificationFailed)?;

        // Store one proof per attestation data (not per validator)
        store.insert_new_aggregated_payload(hashed, aggregated.proof.clone());
        let num_participants = aggregated.proof.participants.count_ones();
        metrics::update_latest_new_aggregated_payloads(store.new_aggregated_payloads_count());

        let slot = aggregated.data.slot;
        let target_slot = aggregated.data.target.slot;
        let target_root = aggregated.data.target.root;
        let source_slot = aggregated.data.source.slot;

        on_gossip_aggregated_attestation_core(store, aggregated)?;

        metrics::update_latest_new_aggregated_payloads(store.new_aggregated_payloads_count());
        info!(
            slot,
            num_participants,
            target_slot = aggregated.data.target.slot,
            target_root = %ShortRoot(&aggregated.data.target.root.0),
            source_slot = aggregated.data.source.slot,
            target_slot,
            target_root = %ShortRoot(&target_root.0),
            source_slot,
            "Aggregated attestation processed"
        );

        metrics::inc_attestations_valid(1);

        Ok(())
    }

    P2 validate_attestation_data now runs after signature verification

    The original on_gossip_aggregated_attestation ran validate_attestation_data (a cheap, purely local check) before the expensive XMSS signature verification. After this refactor, validation moves inside on_gossip_aggregated_attestation_core and executes only after the full signature check passes.

    Two consequences: an attestation with invalid data and an invalid signature now consumes the full signature-verification budget before being rejected, and inc_attestations_invalid is never incremented for it. A comment explaining the deliberate ordering would help future readers.


    The leanSpec bump to `e9ddede` regenerated fork-choice fixtures with a
    new schema that breaks the existing test runner:
    
    1. `tick` steps now sometimes use `interval` (absolute intervals since
       genesis) instead of `time` (UNIX seconds). Derive the millisecond
       timestamp from whichever field is present; accept `hasProposal`.
    2. Tick `checks` carry a `time` field (expected `Store::time()` in
       intervals since genesis). Validate it instead of erroring out.
    
    Also adds three fixtures to `SKIP_TESTS`
    (`test_tick_interval_0_skips_acceptance_when_not_proposer`,
    `test_tick_interval_progression_through_full_slot`,
    `test_safe_target_uses_merged_pools_at_interval_3`). They contain
    `gossipAggregatedAttestation` steps whose attestation checks require
    routing the proof's participants bitfield into the `new` aggregated
    payload buffer. The follow-up PR wires the real verifying entry point
    through and unblocks all three.
    
    Fixes `test_on_tick_advances_across_multiple_empty_slots`.