[agentserver] Always include model field in response payload#46302
Conversation
Default `model` to the empty string when not provided in the request, ensuring the field is always present in the response payload. The OpenAI SDK requires `model` to be present to deserialize the response object.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Contributor
Pull request overview
Ensures the Responses hosting layer always stamps a model field into response payloads (even when omitted from the request), preventing downstream clients (notably the OpenAI SDK) from failing to deserialize responses when model is missing.
Changes:
- Default `model` to `""` when building the per-request execution context, so `apply_common_defaults()` will always include `model` in lifecycle snapshots.
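A minimal sketch of the interaction described above. The name `apply_common_defaults` comes from the PR; the surrounding function bodies and the request shape are assumptions for illustration:

```python
# Hypothetical sketch: defaulting model to "" so the None-guard in
# apply_common_defaults() no longer drops the field from the payload.

def apply_common_defaults(snapshot: dict, model) -> dict:
    # Existing guard (per the PR description): a None model is
    # silently omitted from the lifecycle snapshot.
    if model is not None:
        snapshot["model"] = model
    return snapshot

def build_execution_context(request: dict) -> dict:
    # Before the fix: model = request.get("model")  -> None when omitted.
    # After the fix: default to "" so the guard above always fires.
    model = request.get("model") or ""
    return apply_common_defaults({"object": "response"}, model)
```

With this defaulting in place, a request that omits `model` still yields a snapshot containing `"model": ""`, which is enough for the OpenAI SDK to deserialize the response.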
RaviPidaparthi
approved these changes
Apr 14, 2026
Member
Can we have an e2e test for this? Maybe update an existing one.
Address PR review feedback: add contract tests verifying the `model` field is present in the response payload when omitted from the request, for both sync (`stream=False`) and streaming (`stream=True`) modes.
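A self-contained sketch of the contract those tests pin down. `handle_responses_request` is a hypothetical stand-in for the real handler, not the project's actual fixture:

```python
# Sketch of the contract under test: the response payload must carry
# "model" even when the request omits it, in both sync and streaming modes.

def handle_responses_request(request: dict, stream: bool) -> dict:
    model = request.get("model") or ""  # the fix under test
    payload = {"id": "resp_1", "object": "response", "model": model}
    if stream:
        # Streaming mode wraps the same snapshot in a terminal event.
        return {"type": "response.completed", "response": payload}
    return payload

def test_model_present_sync():
    payload = handle_responses_request({"input": "hi"}, stream=False)
    assert payload["model"] == ""

def test_model_present_streaming():
    event = handle_responses_request({"input": "hi"}, stream=True)
    assert event["response"]["model"] == ""
```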
…eleted'
The OpenAI spec returns {id, object: 'response', deleted: true} for
DELETE /responses/{id}. Our handler was returning 'response.deleted'
which doesn't match. Fixed the handler and updated all 5 test
assertions.
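The corrected delete payload can be sketched as follows; the handler name is hypothetical, but the returned shape matches the spec behavior the commit describes:

```python
# Sketch of the fixed handler for DELETE /responses/{id}: the OpenAI
# spec uses object "response" with deleted: true, not "response.deleted".

def delete_response(response_id: str) -> dict:
    return {
        "id": response_id,
        "object": "response",  # was "response.deleted" before the fix
        "deleted": True,
    }
```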
ResponseExecution now carries agent_session_id and conversation_id so that _RuntimeState.to_snapshot can forcibly stamp them (S-038/S-040) on both the response.as_dict() path and the minimal fallback dict. All four orchestrator ResponseExecution creation sites pass both fields from the execution context.
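A hypothetical sketch of the stamping described above. The class and field names follow the commit message; the snapshot logic itself is an assumption about how the two paths converge:

```python
# Sketch: ResponseExecution carries both ids so to_snapshot can
# forcibly stamp them (S-038/S-040) on both the response.as_dict()
# path and the minimal fallback dict.
from dataclasses import dataclass

@dataclass
class ResponseExecution:
    agent_session_id: str
    conversation_id: str

def to_snapshot(execution: ResponseExecution, response=None) -> dict:
    # Start from response.as_dict() when available, else the fallback.
    snapshot = response.as_dict() if response is not None else {"object": "response"}
    # Stamp unconditionally on either path.
    snapshot["agent_session_id"] = execution.agent_session_id
    snapshot["conversation_id"] = execution.conversation_id
    return snapshot
```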
The manual `_patch.py` override of `ResponseObject.output` erased the element type (`list` instead of `list[OutputItem]`), preventing the model framework from deserializing nested dicts into `OutputItem` instances. This caused `get_history` to return plain dicts instead of typed models. Changes:
- Remove the `output: list` override; use the generated `list[OutputItem]`
- Remove the `ToolChoiceAllowed` override (the generated type is identical)
- Move the Sphinx docstring fixes into the `models_patch.py` shim so `make generate-models` preserves them instead of overwriting them
- Accept the emitter upgrade to `model_base.py` (XML refactor)
- Regenerate `_validators.py` from the current TypeSpec sources
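A toy illustration of the type-erasure bug, assuming a hint-driven deserializer like the model framework's (the `deserialize_list` helper is invented for this sketch):

```python
# When a field is annotated as plain `list`, a hint-driven deserializer
# has no element type to rehydrate nested dicts into; list[OutputItem]
# restores that information.
from dataclasses import dataclass
import typing

@dataclass
class OutputItem:
    type: str
    text: str

def deserialize_list(hint, raw: list) -> list:
    args = typing.get_args(hint)
    if not args:
        # Plain `list`: element type was erased, dicts stay dicts.
        return raw
    elem_type = args[0]
    return [elem_type(**item) for item in raw]

raw = [{"type": "message", "text": "hi"}]
assert isinstance(deserialize_list(list, raw)[0], dict)            # erased
assert isinstance(deserialize_list(list[OutputItem], raw)[0], OutputItem)
```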
Problem
When the request payload doesn't include a `model` field, the response payload omits `model` entirely. The OpenAI SDK requires `model` to be present to deserialize the response object, resulting in an empty object being returned to the caller.

Fix
Default `model` to the empty string (`""`) instead of `None` when not provided in the request. This ensures `model` is always stamped on the response payload via `apply_common_defaults()` (which guards with `if model is not None`). One-line change in `_endpoint_handler.py`.

Testing
All 678 responses tests pass.