
fix(client): drop v1-incompat discriminator on InputItem (b48)#814

Merged
scale-ballen merged 1 commit into main from fix/drop-v1-incompat-discriminators
Apr 22, 2026

Conversation

scale-ballen (Contributor) commented Apr 22, 2026

Summary

Drop Field(discriminator="type") from InputItem.__root__ in gen/openai.py. Under pydantic v2 + v1 compat, the discriminator on this union triggers:

pydantic.v1.errors.ConfigError: Field 'type' is not the same for all submodels of 'Item'

at module-load time, breaking any consumer that imports llmengine (e.g. egp-api-backend).
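The failure mode can be sketched in plain Python. This is a schematic of v1-compat's import-time discriminator preparation, not pydantic source, and every class name below is an illustrative stand-in for the generated models:

```python
from typing import Literal, Union, get_args, get_type_hints

# Illustrative stand-ins for the generated models: each flat submodel carries
# a Literal[...] `type` field, as in gen/openai.py.
class EasyInputMessage:
    type: Literal["message"]

class ItemReferenceParam:
    type: Literal["item_reference"]

class InputMessage:
    type: Literal["message"]

class FileSearchToolCall:
    type: Literal["file_search_call"]

class Item:
    # __root__-style wrapper: a union of heterogeneous submodels, with no
    # `type` field of its own.
    __root__: Union[InputMessage, FileSearchToolCall]

def discriminator_values(member, key="type"):
    """Rough analogue of v1-compat's import-time preparation: collect the
    Literal values of `key` for one union member. For a __root__ union
    member, v1 insists every nested submodel carries the *same* values,
    which a heterogeneous union like Item cannot satisfy."""
    hints = get_type_hints(member)
    if "__root__" in hints:
        nested = [discriminator_values(m, key) for m in get_args(hints["__root__"])]
        if len(set(nested)) > 1:
            raise TypeError(
                f"Field {key!r} is not the same for all submodels of {member.__name__!r}"
            )
        return nested[0]
    return get_args(hints[key])  # the Literal values, e.g. ("message",)

# Flat members prepare fine; walking into Item raises, which mirrors the
# module-load ConfigError this PR removes.
assert discriminator_values(EasyInputMessage) == ("message",)
try:
    for member in (EasyInputMessage, Item, ItemReferenceParam):
        discriminator_values(member)
except TypeError as exc:
    print(exc)  # Field 'type' is not the same for all submodels of 'Item'
```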

Background

This is the exact fix @brandon.allen applied in #776 (1d50898 'feat: support openai 2 in python client'). A later commit (bd8a98f 'fix: address python client review feedback') ran codegen which wiped the manual fix. b47 (this repo's main) still carries the bug; b48 re-applies the patch.

Impact

  • JSON in/out: identical. Every submodel has a Literal[...] type field, so validation still selects the correct submodel by value.
  • Validation: O(n) try-each instead of O(1) jump. Negligible for typical usage.
  • Error messages on invalid input: generic union error instead of 'invalid discriminator value'.
  • Downstream: egp-api-backend (the immediate consumer) doesn't use InputItem at all — it only uses Completion and Model. Zero runtime impact there.
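The equivalence claimed above (same JSON outcome, O(n) try-each instead of an O(1) discriminator jump) can be sketched with toy validators in plain Python; nothing below is the real generated pydantic code:

```python
# Toy "validators": each accepts a dict only when its `type` matches the
# submodel's Literal value, loosely mirroring how pydantic rejects a payload
# whose `type` doesn't match a submodel's Literal[...] field. Names and
# required keys are illustrative.
def make_validator(type_value, required_keys):
    def validate(payload):
        if payload.get("type") != type_value or not required_keys <= payload.keys():
            raise ValueError(f"not a {type_value!r} payload")
        return (type_value, payload)
    return validate

SUBMODELS = [
    make_validator("message", {"role", "content"}),
    make_validator("item_reference", {"id"}),
]

def resolve_try_each(payload):
    # Post-fix behavior: plain Union, try each submodel in order, O(n).
    for validate in SUBMODELS:
        try:
            return validate(payload)
        except ValueError:
            pass
    raise ValueError("no submodel matched")  # generic union error

BY_TYPE = {"message": SUBMODELS[0], "item_reference": SUBMODELS[1]}

def resolve_discriminated(payload):
    # Pre-fix behavior: O(1) jump on the discriminator value.
    return BY_TYPE[payload["type"]](payload)

# Both strategies select the same submodel for valid input.
payload = {"type": "item_reference", "id": "msg_123"}
assert resolve_try_each(payload) == resolve_discriminated(payload)
```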

The other 9 discriminator="type" sites in gen/openai.py are left intact; they don't recurse into RootModel unions, so they don't hit the v1-compat failure.

Local verification

Built wheel, installed into a pydantic==2.11.9 venv (matches egp-api-backend's pin):

  • Before: from llmengine import Completion, Model raises ConfigError
  • After: imports cleanly; ChatCompletionV2Request/Response work; all 9 Item submodels import; InputItem/Item/ItemResource/OutputItem1 import; InputMessage instantiation + validation roundtrips

Follow-ups (separate tickets)

  • Fix OpenAPI schema or datamodel-code-generator config so next regen doesn't re-introduce this
  • Add import smoke test to llm-engine CI under pydantic 2 + v1 compat so regressions fail fast
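A minimal sketch of the proposed import smoke test, assuming a helper name of our own choosing; importlib is stdlib, and a stdlib module stands in for llmengine so the sketch runs anywhere:

```python
# Sketch of the proposed CI smoke test: importing the package at all is the
# regression signal, since the ConfigError fired at module-load time.
# "llmengine" under pydantic 2 + v1 compat is the real target; "json" stands
# in here so the sketch is self-contained.
import importlib

def import_smoke(module_name):
    """Return (ok, error) for a bare import of `module_name`."""
    try:
        importlib.import_module(module_name)
        return True, None
    except Exception as e:  # a ConfigError raised at import time lands here
        return False, repr(e)

# In CI this would be import_smoke("llmengine"), failing fast on regression.
ok, err = import_smoke("json")
assert ok and err is None
```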

Test plan

  • Local import smoke test under pydantic==2.11.9
  • Publish b48 to CodeArtifact
  • egp-api-backend CI passes with b48 pin (scaleapi/scaleapi#140646)

🤖 Generated with Claude Code

Greptile Summary

Re-applies the manual fix from #776 that was accidentally reverted by a codegen run: drops discriminator="type" from InputItem.__root__ because Item is itself a __root__-based union (RootModel) and does not expose a uniform type field, causing pydantic v1-compat to raise ConfigError at module-load time. The version is bumped to b48 accordingly. Validation correctness is unaffected; union resolution falls back to sequential try-each instead of O(1) discriminator lookup, which is negligible for typical payloads.

Confidence Score: 5/5

Safe to merge — the one-line fix correctly resolves a module-load crash with no JSON or validation regressions.

The change is a targeted, well-understood removal of a pydantic v1-compat incompatible option. The PR description documents root cause, local verification, and downstream impact. The only remaining feedback is a P2 style suggestion to add an explanatory comment/TODO ticket so the next codegen run is easier to catch.

No files require special attention; consider adding the inline comment with a ticket number before the next codegen run to avoid re-introducing the bug.

Important Files Changed

  • clients/python/llmengine/data_types/gen/openai.py: removes discriminator="type" from InputItem.__root__ to fix the pydantic v1-compat ConfigError; correct because Item is itself a __root__ union with no uniform type field
  • clients/python/pyproject.toml: version bumped from 0.0.0.beta47 to 0.0.0.beta48 to publish the fix; change is straightforward

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[InputItem.__root__] --> B{Union resolution}
    B -->|before fix| C[ConfigError at import\nItem has no uniform type field]
    B -->|after fix - sequential try-each| D[EasyInputMessage]
    B -->|after fix - sequential try-each| E[Item - __root__ union\nwith its own discriminator]
    B -->|after fix - sequential try-each| F[ItemReferenceParam]
    E --> G[InputMessage / OutputMessage / FileSearchToolCall / ...]

This is a comment left during a code review.
Path: clients/python/llmengine/data_types/gen/openai.py
Line: 11517-11520

Comment:
**Missing workaround comment and TODO ticket link**

This is an intentional manual fix to a generated file that will be silently reverted the next time codegen runs (as happened between #776 and bd8a98f). Without an inline comment explaining *why* the discriminator was removed, future reviewers (or the codegen follow-up ticket) have no breadcrumb. Per the repo's conventions, complex workarounds should carry an explanatory comment, and TODO comments should include a ticket number so they're searchable.

```suggestion
class InputItem(BaseModel):
    # TODO(<ticket>): Remove Field() workaround once codegen is fixed to omit
    # discriminator="type" here. Item is itself a __root__ union (RootModel) and
    # does not expose a uniform `type` field, so pydantic v1-compat raises
    # ConfigError: "Field 'type' is not the same for all submodels of 'Item'"
    # at module-load time when discriminator="type" is present. Validation still
    # works correctly via sequential union try-each.
    __root__: Annotated[
        Union[EasyInputMessage, Item, ItemReferenceParam], Field()
    ]
```

**Rule Used:** Add comments to explain complex or non-obvious log... ([source](https://app.greptile.com/review/custom-context?memory=928586f9-9432-435e-a385-026fa49318a2))

**Learned From**
[scaleapi/scaleapi#126958](https://github.com/scaleapi/scaleapi/pull/126958)


Reviews (1): Last reviewed commit: "fix(client): drop v1-incompat discrimina..."

Context used:

  • Rule used: Add comments to explain complex or non-obvious log... (learned from scaleapi/scaleapi#126958)
  • Rule used: Include ticket numbers in TODO comments to make cl... (learned from scaleapi/scaleapi#126926)

Commit message:

InputItem's __root__ union uses Field(discriminator="type"), but one of
its submodels (Item) is itself a RootModel union whose nested submodels
don't satisfy pydantic v1 compat's strict discriminator requirements.
Under pydantic v2 with v1 compat, class definition fails at import:

  pydantic.v1.errors.ConfigError: Field 'type' is not the same for all
  submodels of 'Item'

This is the same fix Brandon applied in 1d50898 (feat: support openai 2
in python client), which was wiped out by a subsequent codegen regen.

Dropping the discriminator on InputItem makes it a regular Union (try-
each validation). Valid data still deserializes into the correct submodel
because each has a Literal[...] type field — only the matching submodel
validates. JSON input/output is byte-identical.

Minimal patch. Other 9 discriminator="type" sites in gen/openai.py are
left intact; they don't recursively reference RootModel unions so they
don't hit the v1-compat failure.

Local verification:
- Before: `from llmengine import Completion, Model` raises ConfigError
- After: imports cleanly; ChatCompletionV2Request/Response work; all 9
  Item submodels import; InputItem/Item/ItemResource/OutputItem1 import;
  InputMessage instantiation + validation roundtrips

Follow-ups:
- Fix OpenAPI schema or datamodel-code-generator config so next regen
  doesn't re-introduce the bug
- Add import smoke test to llm-engine CI under pydantic 2 + v1 compat

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@scale-ballen scale-ballen merged commit 237b404 into main Apr 22, 2026
8 checks passed
@scale-ballen scale-ballen deleted the fix/drop-v1-incompat-discriminators branch April 22, 2026 23:03