
fix: FIFO message queue — process queued messages sequentially after agent finishes#21891

Open
Mr-V1be wants to merge 2 commits into anomalyco:dev from Mr-V1be:fix/fifo-message-queue

Conversation


Mr-V1be commented Apr 10, 2026

Issue for this PR

Closes #2246
Closes #1476

Type of change

  • [x] Bug fix
  • [ ] New feature
  • [ ] Refactor / code improvement
  • [ ] Documentation

What does this PR do?

When a user sends multiple messages while the agent is busy, they are silently dropped — only the last one might get a response. This PR fixes that by implementing a proper FIFO queue.

Root cause (3 issues):

  1. Runner.ensureRunning() discards new work when state is Running — returns the current run's deferred, never starts the queued work
  2. runLoop breaks entirely after model finishes (outcome === "break") instead of checking for more queued messages
  3. TUI generates MessageID at keypress time, so queue-delayed messages sort at the wrong chronological position in chat
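The first root cause can be sketched as follows. This is a hypothetical reduction of the pattern described above (type and method names are assumptions, not the actual opencode source): when a run is already active, the new work function is never invoked, only the in-flight run's promise is returned.

```typescript
type State = { kind: "Idle" } | { kind: "Running"; done: Promise<void> }

class Runner {
  private state: State = { kind: "Idle" }

  ensureRunning(work: () => Promise<void>): Promise<void> {
    if (this.state.kind === "Running") {
      // BUG: returns the current run's promise; `work` is silently dropped
      // and the queued message never gets a turn.
      return this.state.done
    }
    const done = work().finally(() => {
      this.state = { kind: "Idle" }
    })
    this.state = { kind: "Running", done }
    return done
  }
}
```

With this shape, a second `ensureRunning` call made while the first is still running resolves when the first run finishes, but its work never executes, which matches the "silently dropped" symptom.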

Fix:

  • Added JS-level promise chain in the static prompt() function that serializes calls by session ID before they reach the Effect runtime. This guarantees FIFO order based on call order, not fiber scheduling order
  • createUserMessage now always generates a fresh MessageID.ascending() at insertion time instead of using the TUI-provided one, fixing the position bug
  • Added waitForIdle() to SessionRunState — blocks until the current agent run fully completes before creating the next message
  • Changed runLoop's break to continue after model finishes, so it re-checks activeUserID() for more queued messages
  • Added system-reminder filter to prevent queued messages from leaking into the current turn's LLM context
  • Cancel (Esc) sets a flag that loop() retry checks — prevents auto-restart after user cancellation
  • Status bar shows clickable "N queued" badge with queue count and message preview dialog
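The JS-level promise chain in the first bullet can be sketched like this (assumed shape; the real `prompt()` signature and state live elsewhere). Each session keeps a "tail" promise, and every new call appends to it, so execution order is exactly call order regardless of how Effect schedules fibers afterward:

```typescript
// Per-session tail of the FIFO chain (hypothetical module-level state).
const tails = new Map<string, Promise<unknown>>()

function serialized<T>(sessionID: string, task: () => Promise<T>): Promise<T> {
  const prev = tails.get(sessionID) ?? Promise.resolve()
  // Append after the previous call; swallow its error so one failed
  // prompt cannot poison the queue for later messages.
  const next = prev.catch(() => undefined).then(task)
  tails.set(sessionID, next.catch(() => undefined))
  return next
}
```

Because the chaining happens synchronously at call time (before any awaiting), two rapid `serialized(id, …)` calls can never race for the same tail.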

Why each change is needed: the serialization must happen at the JS Promise level (not inside Effect fibers) because Effect-ts fiber scheduling does not guarantee that execution order matches call order. The MessageID fix is needed because ULID-based IDs embed timestamps, and the TUI/sync layer sorts messages by ID.
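The ordering problem with timestamp-embedding IDs can be illustrated with a toy ID (this is not the real MessageID format, only a demonstration of the lexicographic-sort property):

```typescript
// Toy ULID-style ID: a base-36, zero-padded timestamp prefix plus a
// fixed suffix, so lexicographic order equals chronological order.
function toyId(timestampMs: number, suffix = "0000"): string {
  return timestampMs.toString(36).padStart(9, "0") + suffix
}

const keypressId = toyId(1_000)   // minted when the user hit Enter
const agentReplyId = toyId(5_000) // agent finished and replied, 4s later
const insertionId = toyId(6_000)  // fresh ID minted at insertion time

// Sorted by ID, keypressId lands *before* agentReplyId even though the
// queued message is processed afterward; insertionId sorts correctly
// after the reply, which is why createUserMessage now mints a fresh ID.
```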

How did you verify your code works?

Tested manually in the TUI:

  1. Started a long-running task, sent 5 messages while agent was busy
  2. Verified each message processed sequentially only after the previous one fully completed
  3. Verified messages appear at the END of chat (after the agent's response), not mid-conversation
  4. Verified "N queued" badge shows correct count in status bar
  5. Verified clicking the badge opens a dialog with queued message texts
  6. Verified Esc stops processing without auto-restarting queued messages
  7. Verified new messages after Esc work normally
  8. Checked logs — queue.activeUserID correctly identifies unhandled messages and processes them in order

Screenshots / recordings

N/A — terminal TUI changes, tested interactively

Checklist

  • [x] I have tested my changes locally
  • [x] I have not included unrelated changes in this PR

EvgeniyGospk and others added 2 commits April 10, 2026 16:45
…agent finishes

Fixes anomalyco#2246, anomalyco#1476 — queued messages now process one at a time after the
agent fully completes each turn, instead of being silently dropped or
processed out of order.

Root cause: Runner.ensureRunning() drops new work when state=Running,
and the TUI generates MessageIDs at keypress time (not processing time),
causing queue-delayed messages to appear at the wrong position in chat.

Changes:
- JS-level promise chain in static prompt() serializes calls in FIFO order
- waitForIdle() in run-state.ts blocks until current agent run completes
- createUserMessage always generates fresh MessageID at insertion time
- runLoop continues checking for queued messages instead of breaking
- System-reminder filter prevents queued messages from leaking into
  the current turn's LLM context
- Cancel (Esc) respects user intent — doesn't auto-restart queue
- Status bar shows "N queued" badge with click-to-view message list

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

The test expected the old behavior where user messages are created
immediately while the agent is busy. With FIFO queue, the second
message waits for the first run to complete before being created.

- Resolve gate before waiting (no deadlock)
- Check parentID against the actual user message ID (not pre-generated)
- Increase timeout to 10s for sequential processing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
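The "resolve gate before waiting" point can be illustrated with a minimal deferred helper (a hypothetical sketch; the repo's actual test utilities differ):

```typescript
// A deferred "gate" that holds the first agent run open until the test
// releases it (assumed helper, not the repo's actual one).
function gate<T = void>() {
  let resolve!: (value: T) => void
  const promise = new Promise<T>((r) => { resolve = r })
  return { promise, resolve }
}

async function demo(): Promise<string[]> {
  const g = gate()
  const log: string[] = []
  // run 1 blocks on the gate; run 2 is FIFO-queued behind run 1
  const run1 = g.promise.then(() => log.push("run1 done"))
  const run2 = run1.then(() => log.push("run2 done"))
  // Deadlock-safe order: release the gate BEFORE awaiting run 2.
  // Awaiting run2 first would hang: run2 waits on run1, which waits
  // on a gate nothing has resolved yet.
  g.resolve()
  await run2
  return log
}
```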


Development

Successfully merging this pull request may close these issues:

  • Queued message not running
  • Forgets queued messages
