Describe the bug
When using `AgentCoreMemorySessionManager` with `converter=OpenAIConverseConverter` and Bedrock models that have extended thinking enabled (e.g., `us.anthropic.claude-sonnet-4-20250514-v1:0`), multi-turn conversations fail with a `ValidationException` on the second turn.

The `_openai_to_bedrock()` function in `converters/openai.py` reconstructs content blocks in the wrong order — `reasoningContent` blocks are appended after `text` and `toolUse` blocks, but Bedrock requires that if an assistant message contains any thinking blocks, the first block must be a thinking block.

This causes:

```
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: If an assistant message contains any thinking blocks, the first block must be thinking. Found text
```
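For reference, the ordering constraint can be illustrated with placeholder content blocks. This is a sketch, not real model output; the `reasoningContent` shape follows the Bedrock Converse API, and the text/signature values are dummies:

```python
# Illustration of the ordering constraint (block values are placeholders).
# Bedrock accepts the first shape and rejects the second.
valid_content = [
    {"reasoningContent": {"reasoningText": {"text": "step 1...", "signature": "sig"}}},
    {"text": "The answer is 4."},
]
rejected_content = [
    {"text": "The answer is 4."},  # text first -> "Found text" in the error above
    {"reasoningContent": {"reasoningText": {"text": "step 1...", "signature": "sig"}}},
]

# The first block's key is what the validation inspects:
print(next(iter(valid_content[0])))     # reasoningContent
print(next(iter(rejected_content[0])))  # text
```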
To Reproduce
1. Create a Strands agent with extended thinking enabled on a Bedrock model:

```python
from strands import Agent
from strands.models.bedrock import BedrockModel
from bedrock_agentcore.memory.integrations.strands import AgentCoreMemorySessionManager
from bedrock_agentcore.memory.integrations.strands.converters.openai import OpenAIConverseConverter
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig

model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    streaming=True,
    additional_request_fields={
        "thinking": {"type": "enabled", "budget_tokens": 5000}
    },
)

config = AgentCoreMemoryConfig(
    memory_id="your-memory-id",
    session_id="test-session",
    namespace="default",
)

session_manager = AgentCoreMemorySessionManager(
    agentcore_memory_config=config,
    converter=OpenAIConverseConverter,
)

agent = Agent(model=model, session_manager=session_manager)
```

2. Send a first message (succeeds). Bedrock returns an assistant message with content blocks ordered as `[reasoningContent, text]`.

3. The session manager serializes this to OpenAI format via `_bedrock_to_openai()`, storing the reasoning blocks in `_strands_reasoning_content`.

4. On the next turn, the session manager restores the conversation via `_openai_to_bedrock()`, which reconstructs the content blocks as `[text, reasoningContent]` (wrong order).

5. Send a second message (fails):

```python
agent("Now multiply that by 3")  # ValidationException
```
Root cause in code (`converters/openai.py`, lines 76–126):

```python
def _openai_to_bedrock(openai_msg: dict) -> dict:
    content_items: list[dict[str, Any]] = []

    # Step 1: text goes first
    text_content = openai_msg.get("content")
    if text_content and isinstance(text_content, str):
        content_items.append({"text": text_content})  # index 0

    # Step 2: tool_calls go second
    for tc in openai_msg.get("tool_calls", []):
        content_items.append({"toolUse": ...})  # index 1+

    # Step 3: reasoning appended LAST  ← BUG
    for rc in openai_msg.get("_strands_reasoning_content", []):
        if isinstance(rc, dict) and "reasoningContent" in rc:
            content_items.append(rc)  # index N (should be 0)

    return {"role": bedrock_role, "content": content_items}
```
Reasoning blocks must be prepended, not appended.
Expected behavior
The `_openai_to_bedrock()` function should reconstruct content blocks with `reasoningContent` blocks first, matching the original ordering that Bedrock produced. The restored message should be `[reasoningContent, text]`, not `[text, reasoningContent]`.
Suggested fix
```diff
 def _openai_to_bedrock(openai_msg: dict) -> dict:
     content_items: list[dict[str, Any]] = []
+    reasoning_items: list[dict[str, Any]] = []
     # ... text and tool_call handling unchanged ...
     for rc in openai_msg.get("_strands_reasoning_content", []):
         if isinstance(rc, dict) and "reasoningContent" in rc:
-            content_items.append(rc)
+            reasoning_items.append(rc)
+    # Reasoning blocks must come first per Bedrock API contract
+    content_items = reasoning_items + content_items
     bedrock_role = "assistant" if role == "assistant" else "user"
     return {"role": bedrock_role, "content": content_items}
```
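Applied in full, the patched logic can be sketched as a standalone function. This is a simplified illustration only (tool_call handling omitted, names chosen for clarity rather than matching the library's internals):

```python
from typing import Any

def openai_to_bedrock_fixed(openai_msg: dict) -> dict:
    """Sketch of the corrected conversion: reasoning blocks are collected
    separately and prepended so they come first in the restored message."""
    content_items: list[dict[str, Any]] = []
    reasoning_items: list[dict[str, Any]] = []

    text_content = openai_msg.get("content")
    if text_content and isinstance(text_content, str):
        content_items.append({"text": text_content})

    for rc in openai_msg.get("_strands_reasoning_content", []):
        if isinstance(rc, dict) and "reasoningContent" in rc:
            reasoning_items.append(rc)

    # Reasoning blocks must come first per the Bedrock thinking contract
    content_items = reasoning_items + content_items
    role = "assistant" if openai_msg.get("role") == "assistant" else "user"
    return {"role": role, "content": content_items}

# Example: a stored OpenAI-format message with sidecar reasoning (dummy values)
msg = {
    "role": "assistant",
    "content": "2 + 2 = 4",
    "_strands_reasoning_content": [
        {"reasoningContent": {"reasoningText": {"text": "step 1...", "signature": "sig"}}}
    ],
}
restored = openai_to_bedrock_fixed(msg)
print([next(iter(block)) for block in restored["content"]])
# ['reasoningContent', 'text'] — reasoning first, as Bedrock requires
```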
Additional context
- The default `AgentCoreMemoryConverter` (native Strands format) does not have this bug — it roundtrips via `SessionMessage.to_dict()`/`from_dict()`, which preserves JSON array ordering. This bug is specific to `OpenAIConverseConverter`.
- Related upstream issue: strands-agents/sdk-python#1698
- The Strands SDK's built-in session managers (`FileSessionManager`, `S3SessionManager`) also do not have this problem — they preserve all content block types and ordering.
- The `SessionManager` interface in Strands operates on the full `Message` TypedDict, which includes `reasoningContent` — the interface contract is correct; this is an implementation bug in the OpenAI converter.
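A minimal sketch of why a plain dict-to-JSON roundtrip (the `to_dict()`/`from_dict()` style used by the native converter) cannot reorder blocks — JSON arrays preserve element order, so no ordering metadata needs to be stored out-of-band:

```python
import json

# A Bedrock-shaped message with dummy block values. Serializing and
# deserializing the whole message keeps the content array in its
# original order, unlike the lossy per-field OpenAI mapping.
message = {
    "role": "assistant",
    "content": [
        {"reasoningContent": {"reasoningText": {"text": "step 1...", "signature": "sig"}}},
        {"text": "The answer is 4."},
    ],
}
restored = json.loads(json.dumps(message))
print([next(iter(block)) for block in restored["content"]])
# ['reasoningContent', 'text'] — ordering survives the roundtrip
```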
Environment:
- `bedrock-agentcore` version: 1.6.2
- `strands-agents` version: latest
- Python: 3.10+
- OS: Linux (tested on Amazon Linux 2 / Ubuntu)
- Bedrock model: `us.anthropic.claude-sonnet-4-20250514-v1:0` with extended thinking enabled