**Is your feature request related to a problem? Please describe.**
Yes.
Currently, the library strictly enforces that any `thinking_config` (such as the thinking budget) must be configured via the `LlmAgent.planner` field (specifically using `BuiltInPlanner`). If a user attempts to set `thinking_config` directly within the `generate_content_config` of an `LlmAgent`, `LlmAgent.validate_generate_content_config` raises a `ValueError`.
This creates friction for two reasons:

1. **Boilerplate:** Users who simply want to enable thinking or adjust the budget (which is effectively a model hyperparameter) are forced to instantiate a full `BuiltInPlanner` object, adding unnecessary import overhead and complexity.
2. **Architectural Clarity:** It conflates "model parameters" (like `temperature`, `max_output_tokens`, and now `thinking_budget`) with "agent strategy" (the `Planner`). Users intuitively expect model-level settings to reside in `generate_content_config`.
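To illustrate the two usage styles side by side, here is a sketch using simplified stand-in dataclasses; the class names follow the ADK / google-genai API discussed in this issue, but the real classes have richer signatures:

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-ins for the classes named in this issue; the real
# ADK / google-genai classes have richer signatures.
@dataclass
class ThinkingConfig:
    thinking_budget: int = 0

@dataclass
class GenerateContentConfig:
    temperature: float = 1.0
    thinking_config: Optional[ThinkingConfig] = None

@dataclass
class BuiltInPlanner:
    thinking_config: ThinkingConfig = field(default_factory=ThinkingConfig)

@dataclass
class LlmAgent:
    planner: Optional[BuiltInPlanner] = None
    generate_content_config: Optional[GenerateContentConfig] = None

# Today: enabling thinking requires importing and wiring up a planner.
agent_today = LlmAgent(
    planner=BuiltInPlanner(
        thinking_config=ThinkingConfig(thinking_budget=1024)
    )
)

# Proposed: the budget sits next to the other model parameters.
agent_proposed = LlmAgent(
    generate_content_config=GenerateContentConfig(
        temperature=0.2,
        thinking_config=ThinkingConfig(thinking_budget=1024),
    )
)
```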
**Describe the solution you'd like**
I propose relaxing the validation logic in `LlmAgent` to allow `thinking_config` to be set directly in `generate_content_config`. To prevent ambiguity or silent failures, we should implement an "Allow but Warn" strategy:
- **Update Validation:** Modify `LlmAgent.validate_generate_content_config` to remove the `ValueError` for `thinking_config`.
- **Add Precedence Warning:** If both `self.planner` (with thinking enabled) AND `generate_content_config.thinking_config` are present, issue a `UserWarning`. This informs the user that the Planner's configuration will take precedence (due to the order of request processors).
- **Runtime Logging:** Ideally, update `BuiltInPlanner.apply_thinking_config` to log an INFO or WARNING message if it detects that it is overwriting an existing thinking configuration on the `LlmRequest`.
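The "Allow but Warn" check could look roughly like the following. This is a hypothetical sketch written as a free function; in ADK the real logic lives in the `LlmAgent.validate_generate_content_config` validator, whose exact signature differs:

```python
import warnings

# Hypothetical sketch of the relaxed check; not the real ADK validator.
def validate_generate_content_config(planner, generate_content_config):
    config_thinking = getattr(generate_content_config, "thinking_config", None)
    planner_thinking = getattr(planner, "thinking_config", None)
    # Old behavior: raise ValueError whenever config_thinking was set.
    # New behavior: allow it, and warn only when both sources are set.
    if config_thinking is not None and planner_thinking is not None:
        warnings.warn(
            "thinking_config is set on both the planner and "
            "generate_content_config; the planner's value will take "
            "precedence because its request processor runs later.",
            UserWarning,
        )
    return generate_content_config
```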
**Describe alternatives you've considered**

* **Status Quo:** Continue enforcing `Planner` usage. This maintains strict separation but makes the developer experience more cumbersome than necessary for simple thinking-model usage.
* **Silent Overwrite:** Remove the validation but add no warnings. This is risky because `_NlPlanningRequestProcessor` runs after the basic processor. A user might set a budget of 2000 in config, have a default planner with a budget of 1000, and be confused about why their setting isn't working. The warning is essential.
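The silent-overwrite hazard can be simulated with stand-in classes (these echo the issue's terminology, not the real ADK API):

```python
import copy
import warnings
from dataclasses import dataclass
from typing import Optional

# Stand-ins that mimic the processor ordering described above.
@dataclass
class ThinkingConfig:
    thinking_budget: int = 0

@dataclass
class LlmRequest:
    thinking_config: Optional[ThinkingConfig] = None

def basic_processor(request, user_config):
    # Runs first: copies the agent's generate_content_config onto the request.
    request.thinking_config = copy.deepcopy(user_config)

def planning_processor(request, planner_config):
    # Runs second: the planner overwrites whatever is already there.
    if request.thinking_config is not None:
        warnings.warn(
            "Planner thinking_config overrides the value set in "
            "generate_content_config.",
            UserWarning,
        )
    request.thinking_config = planner_config

request = LlmRequest()
basic_processor(request, ThinkingConfig(thinking_budget=2000))    # user's setting
planning_processor(request, ThinkingConfig(thinking_budget=1000))  # planner default

# The planner wins: the effective budget is 1000, not the user's 2000,
# which is exactly why the warning matters.
```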
**Additional context**

- **Internal Logic:** The underlying flow logic in `src/google/adk/flows/llm_flows/basic.py` (`_BasicLlmRequestProcessor`) already deep-copies the entire `generate_content_config` to the `LlmRequest`. Therefore, once the validation in `llm_agent.py` is removed, the parameter will correctly propagate to the model without further changes to the core flow.
- **Cross-Language Consistency:** A review of the Go implementation (`google/adk-go`) shows that it does not enforce this restriction. In `adk-go`, `ThinkingConfig` is allowed within the `GenerateContentConfig` struct and is passed through to the model without requiring a separate `Planner` abstraction. Bringing the Python implementation in line with Go would improve ecosystem consistency.
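The propagation described under "Internal Logic" can be sketched with stand-in classes (names mirror the issue's terminology, not the real ADK implementation): once the `ValueError` is gone, the existing deep copy carries `thinking_config` along for free.

```python
import copy
from dataclasses import dataclass, field
from typing import Optional

# Minimal stand-ins; the real copy happens in _BasicLlmRequestProcessor.
@dataclass
class ThinkingConfig:
    thinking_budget: int = 0

@dataclass
class GenerateContentConfig:
    thinking_config: Optional[ThinkingConfig] = None

@dataclass
class LlmRequest:
    config: GenerateContentConfig = field(default_factory=GenerateContentConfig)

agent_config = GenerateContentConfig(
    thinking_config=ThinkingConfig(thinking_budget=512)
)

# A deep copy of the agent's generate_content_config becomes the request's
# config, thinking_config included -- no extra plumbing required.
request = LlmRequest(config=copy.deepcopy(agent_config))
```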