Is your feature request related to a specific problem?
When building a multi-agent system with `LlmAgent` and `Gemini`, each agent may need a different model hosted in a different Vertex AI location. For example:
- Agent 1 uses `gemini-3.1-pro-preview`, which is only available in the `global` location
- Agent 2 uses `gemini-2.5-flash`, which is available in `us-central1`
Currently, `Gemini` creates its own `Client` internally via the `api_client` cached property (google_llm.py L273), which reads the project and location from environment variables (`GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`). Since environment variables are process-global, there is no way to configure different locations (or projects) for different agents within the same process.
Previously it was possible to construct a `google.genai.Client` with per-agent settings and pass it into `Gemini(client=client)`, but this is no longer supported: `Gemini` is a Pydantic `BaseModel` that only accepts `model`, `base_url`, `speech_config`, `use_interactions_api`, and `retry_options`.
This makes it impossible to run a multi-agent system where agents target different Vertex AI locations or projects.
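The conflict is easy to see in miniature: both agents would configure the same process-wide variable, so whichever value is set last wins for every client created afterwards:

```python
import os

# Both agents would have to configure through the same process-global variable:
os.environ["GOOGLE_CLOUD_LOCATION"] = "global"       # what Agent 1 needs
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"  # what Agent 2 needs

# Any client constructed from env vars from this point on sees only the
# last value written; Agent 1's "global" setting is gone.
assert os.environ["GOOGLE_CLOUD_LOCATION"] == "us-central1"
```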
Describe the Solution You'd Like
Allow `Gemini` to accept an optional `client: Optional[Client] = None` parameter. When provided, `Gemini` should use the supplied client instead of constructing its own in the `api_client` cached property.
Proposed API:
```python
from google.genai import Client
from google.adk.models.google_llm import Gemini
from google.adk.agents import LlmAgent

client_global = Client(vertexai=True, project="my-project", location="global")
client_us = Client(vertexai=True, project="my-project", location="us-central1")

agent_1 = LlmAgent(
    name="Agent1",
    model=Gemini(client=client_global, model="gemini-3.1-pro-preview"),
    instruction="...",
)
agent_2 = LlmAgent(
    name="Agent2",
    model=Gemini(client=client_us, model="gemini-2.5-flash"),
    instruction="...",
)
```
If `client` is not provided, behavior stays exactly as it is today (auto-create from environment variables).
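One possible shape for the change, sketched with stand-in stub classes rather than ADK's actual code (the real `Gemini` is a Pydantic model and its `api_client` lives in google_llm.py; the stub `Client` here only mimics the constructor arguments used above):

```python
from functools import cached_property
from typing import Optional


class Client:
    """Stand-in for google.genai.Client, for illustration only."""

    def __init__(self, vertexai: bool = False,
                 project: Optional[str] = None,
                 location: Optional[str] = None):
        self.vertexai = vertexai
        self.project = project
        self.location = location


class Gemini:
    """Sketch of the proposed fallback logic, not ADK's real implementation."""

    def __init__(self, model: str, client: Optional[Client] = None):
        self.model = model
        self._client = client  # user-supplied client, if any

    @cached_property
    def api_client(self) -> Client:
        # Prefer the caller's client; otherwise fall back to today's
        # behavior of building one driven by environment variables.
        if self._client is not None:
            return self._client
        return Client(vertexai=True)  # env-var driven, as today


llm = Gemini(
    model="gemini-2.5-flash",
    client=Client(vertexai=True, project="my-project", location="us-central1"),
)
assert llm.api_client.location == "us-central1"
```

The `cached_property` still guarantees the client is resolved once per `Gemini` instance; the only new branch is the early return of the injected client.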
Impact on your work
This is blocking our multi-agent deployment. We have a root coordinator agent that delegates to specialist sub-agents, and our models are available in different Vertex AI regions. Without per-agent client configuration, we cannot use ADK's native `Gemini` integration and would have to work around it by subclassing `Gemini` and overriding the `api_client` property, which is fragile and may break across releases.