
fix: preserve Anthropic thinking blocks and signatures in LiteLLM round-trip#4999

Open
giulio-leone wants to merge 5 commits into google:main from giulio-leone:fix/litellm-anthropic-thinking-roundtrip

Conversation

@giulio-leone
Contributor

Summary

Fixes #4801 — Adaptive thinking is broken when using Claude models through LiteLLM.

Root Cause

When Claude produces extended thinking with thinking_blocks (each containing a type, thinking text, and signature), the round-trip through ADK's LiteLLM integration silently loses them:

  1. _extract_reasoning_value() only read reasoning_content (a flattened string without signatures), ignoring the richer thinking_blocks field
  2. _content_to_message_param() set reasoning_content on the outgoing ChatCompletionAssistantMessage, but LiteLLM's anthropic_messages_pt() prompt template silently drops the reasoning_content field entirely
  3. Result: thinking blocks vanish from conversation history after turn 1; Claude stops producing them
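The data loss in step 1 can be sketched with plain dicts (illustrative values, not the actual LiteLLM types):

```python
# A Claude extended-thinking response carries typed blocks, each with a
# signature that Anthropic requires in order to replay the block verbatim.
thinking_blocks = [
    {
        "type": "thinking",
        "thinking": "The user asked about X, so I should...",
        "signature": "EqQBCkgIARABGAIiQL...",  # needed on the round-trip
    }
]

# Flattening to reasoning_content keeps only the text...
reasoning_content = "".join(b["thinking"] for b in thinking_blocks)

# ...so the signatures are gone, and the replayed history no longer
# contains valid thinking blocks.
print(reasoning_content)                 # "The user asked about X, so I should..."
print("signature" in reasoning_content)  # False
```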

Fix

Three coordinated changes in lite_llm.py:

  • _is_anthropic_provider() helper: detects the anthropic, bedrock, and vertex_ai providers
  • _extract_reasoning_value(): now prefers thinking_blocks (with per-block signatures) over reasoning_content
  • _convert_reasoning_value_to_parts(): handles ChatCompletionThinkingBlock dicts, preserving thought_signature
  • _content_to_message_param(): for Anthropic providers, embeds thinking blocks directly in the message content list as {"type": "thinking", ...} dicts; this format passes through LiteLLM's anthropic_messages_pt() correctly

For non-Anthropic providers (OpenAI, etc.), behavior is unchanged — reasoning_content is still used.

Verification

  • LiteLLM's anthropic_messages_pt() was tested to confirm:
    • reasoning_content field → DROPPED (existing LiteLLM bug)
    • content as list with {"type": "thinking", ...} → PRESERVED ✅
    • Signatures in thinking blocks → PRESERVED when in content list ✅

Tests

Added 7 targeted tests covering:

  • _is_anthropic_provider() — provider detection
  • _extract_reasoning_value() — prefers thinking_blocks over reasoning_content
  • _convert_reasoning_value_to_parts() — signature preservation from block dicts
  • _convert_reasoning_value_to_parts() — plain string fallback (no signature)
  • _content_to_message_param() — Anthropic: thinking blocks embedded in content list
  • _content_to_message_param() — OpenAI: reasoning_content field used (unchanged)
  • _content_to_message_param() — Anthropic thinking + tool calls combined

Full test suite: 4732 passed, 0 failures
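The extraction-preference behavior the second test covers can be sketched as follows (illustrative names and data, not the actual ADK code):

```python
def extract_reasoning_value(message: dict):
    # Prefer the structured thinking_blocks, which keep per-block
    # signatures; fall back to the legacy flattened string.
    blocks = message.get("thinking_blocks")
    if blocks:
        return blocks
    return message.get("reasoning_content")


message = {
    "reasoning_content": "flattened text",
    "thinking_blocks": [
        {"type": "thinking", "thinking": "flattened text", "signature": "abc"}
    ],
}

value = extract_reasoning_value(message)
print(value[0]["signature"])  # "abc"
```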

⚠️ This reopens #4811 which was accidentally closed due to fork deletion.

@adk-bot adk-bot added the models [Component] Issues related to model support label Mar 25, 2026
@rohityan rohityan self-assigned this Mar 26, 2026
@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from 4cc8546 to 10575cb Compare March 30, 2026 21:45
@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from 10575cb to b6bbf5c Compare April 8, 2026 22:48
…nd-trip

When using Claude models through LiteLLM, extended thinking blocks
(with signatures) were lost after the first turn because:

1. _extract_reasoning_value() only read reasoning_content (flattened
   string without signatures), ignoring thinking_blocks
2. _content_to_message_param() set reasoning_content on the outgoing
   message, which LiteLLM's anthropic_messages_pt() template silently
   drops

This fix:
- Adds _is_anthropic_provider() helper to detect anthropic/bedrock/
  vertex_ai providers
- Updates _extract_reasoning_value() to prefer thinking_blocks (with
  per-block signatures) over reasoning_content
- Updates _convert_reasoning_value_to_parts() to handle
  ChatCompletionThinkingBlock dicts, preserving thought_signature
- Updates _content_to_message_param() to embed thinking blocks
  directly in the message content list for Anthropic providers,
  bypassing the broken reasoning_content path

Fixes google#4801
Cover the model-driven LiteLLM Anthropic round-trip with a regression test.
@giulio-leone giulio-leone force-pushed the fix/litellm-anthropic-thinking-roundtrip branch from b6bbf5c to 3f28b1d Compare April 8, 2026 23:32
@rohityan rohityan added the needs review [Status] The PR/issue is awaiting review from the maintainer label Apr 13, 2026
@rohityan
Collaborator

Hi @giulio-leone, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@rohityan
Collaborator

Hi @wukath, can you please review this?
