fix: remove global add_function_to_prompt — breaks native tool calling (Groq, OpenAI)#4985
vitas wants to merge 2 commits into google:main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request. |
Response from ADK Triaging Agent Hello @vitas, thank you for creating this PR! Before we can merge this, could you please:
This information will help reviewers to review your PR more efficiently. Thanks! |
Setting `litellm.add_function_to_prompt = True` globally forces ALL
models through text-based tool calling, even models that support
native function calling (Groq, OpenAI, Anthropic).
When this flag is set, LiteLLM injects tool definitions into the
system prompt as text. Models then output XML-style function tags
(`<function=name {...} </function>`) instead of proper `tool_calls`
JSON. Providers like Groq reject this with `tool_use_failed`.
Proof: Direct `litellm.completion()` without this flag returns proper
`tool_calls` JSON with `finish_reason: "tool_calls"`. With the flag,
the same model fails.
The fix removes the global default. Models that need text-based tool
calling can opt in per-instance:
LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)
Models with native tool calling work without any flag:
LiteLlm(model="groq/llama-3.3-70b-versatile")
Fixes: kagent-dev/kagent#1532
Related: huggingface/smolagents#1119, BerriAI/litellm#11001
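To make the failure mode concrete, here is a small standalone sketch (illustrative names only, not ADK or LiteLLM code) contrasting the text-based output the global flag produces with the structured `tool_calls` shape native providers return:

```python
import json
import re

# Illustrative text a model emits when tool definitions were injected
# into the prompt (the add_function_to_prompt=True path):
text_style = '<function=get_weather {"city": "Berlin"} </function>'

# Simplified shape of what a native tool-calling model (Groq, OpenAI)
# returns instead: structured JSON, not text that must be parsed.
native_style = {
    "tool_calls": [
        {"function": {"name": "get_weather",
                      "arguments": '{"city": "Berlin"}'}}
    ],
    "finish_reason": "tool_calls",
}

# The text form can only be recovered by pattern matching, which is
# exactly what strict providers refuse to do (tool_use_failed):
match = re.match(r"<function=(\w+)\s+(\{.*\})\s*</function>", text_style)
name, args = match.group(1), json.loads(match.group(2))
```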
Install google-adk from vitas/adk-python@fix/groq-tool-calling which removes the global add_function_to_prompt=True that broke native tool calling for Groq/OpenAI/Anthropic. Verified: first LLM call now uses proper tool_calls JSON (9563 tokens used). Second call hits Groq free tier rate limit (12K TPM) but the tool calling format is correct. PR: google/adk-python#4985 Signed-off-by: Vitas <vitas@users.noreply.github.com>
Hi @vitas, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.
Hi @GWeale, can you please review this?
Problem
`_ensure_litellm_imported()` sets `litellm.add_function_to_prompt = True` globally at import time (line 188). This forces ALL models through LiteLLM's text-based tool calling path: tool definitions are injected into the system prompt as text instead of being passed as the `tools` parameter. Models that support native function calling (Groq, OpenAI, Anthropic) then output XML-style function tags instead of proper `tool_calls` JSON. Groq rejects with:

{"error":{"message":"Failed to call a function. See 'failed_generation'","code":"tool_use_failed"}}

Proof
Direct `litellm.completion()` inside the same environment, without the global flag, returns proper `tool_calls` JSON with `finish_reason: "tool_calls"`.

Fix
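A hedged sketch of what such a direct `litellm.completion()` call looks like; only the request kwargs are assembled here (no network call, no API key), and the `get_weather` tool schema is a made-up example:

```python
# Sketch of a direct litellm.completion() invocation with native tool
# passing. The get_weather tool is hypothetical; the kwargs are only
# built, not sent.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

kwargs = {
    "model": "groq/llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Weather in Berlin?"}],
    "tools": tools,  # passed natively, NOT injected into the prompt
    "tool_choice": "auto",
}
# With litellm.add_function_to_prompt left at its default (False),
# litellm.completion(**kwargs) returns finish_reason "tool_calls"
# on tool-capable models.
```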
Remove the global `litellm.add_function_to_prompt = True`. Models that need text-based tool calling (e.g., some Ollama models without native support) can opt in per-instance with `LiteLlm(model="ollama/qwen2", add_function_to_prompt=True)`. This kwarg flows through `_additional_args` into `acompletion()`, so per-model opt-in already works.

Impact

`LiteLlm` users whose models lack native tool calling must now set `add_function_to_prompt=True` explicitly.

References
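The per-instance opt-in described in the Fix section can be illustrated with a minimal sketch (`FakeLiteLlm` is a stand-in, not the real ADK class): extra constructor kwargs are stored and merged into each completion call, so the flag scopes to one model instance instead of the whole process.

```python
class FakeLiteLlm:
    """Stand-in illustrating the _additional_args -> acompletion() flow."""

    def __init__(self, model: str, **kwargs):
        self.model = model
        # Unrecognized kwargs are kept and forwarded on every request,
        # mirroring how a per-instance add_function_to_prompt=True
        # reaches the underlying completion call.
        self._additional_args = dict(kwargs)

    def completion_kwargs(self, messages):
        return {"model": self.model, "messages": messages,
                **self._additional_args}

# Text-based opt-in affects only this instance, not the process:
ollama = FakeLiteLlm("ollama/qwen2", add_function_to_prompt=True)
groq = FakeLiteLlm("groq/llama-3.3-70b-versatile")
```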