
chore(deps): update loader dependencies major (major) #194

Open

dreadnode-renovate-bot[bot] wants to merge 1 commit into main from renovate/major-loader-deps-major

Conversation


dreadnode-renovate-bot[bot] commented Feb 24, 2026

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

| Package      | Change               |
| ------------ | -------------------- |
| psutil       | `==6.1.1` → `==7.2.2` |
| transformers | `==4.57.6` → `==5.7.0` |
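As a quick post-upgrade sanity check, a minimal Python sketch (this assumes both packages are installed in the active environment; it is not part of the update itself):

```python
# Confirm the environment actually picked up the new major versions.
from importlib.metadata import version

for pkg, expected in [("psutil", "7.2.2"), ("transformers", "5.7.0")]:
    print(pkg, version(pkg), "expected", expected)
```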

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

giampaolo/psutil (psutil)

v7.2.2

Compare Source

v7.2.1

Compare Source

v7.2.0

Compare Source

v7.1.3

Compare Source

v7.1.2

Compare Source

v7.1.1

Compare Source

v7.1.0

Compare Source

v7.0.0

Compare Source

huggingface/transformers (transformers)

v5.7.0

Compare Source

Release v5.7.0

New Model additions

Laguna

Laguna is Poolside's mixture-of-experts language model family. It extends standard SwiGLU MoE transformers with two key innovations: per-layer head counts, which let different decoder layers use different query-head counts while sharing the same KV cache shape, and a sigmoid MoE router with auxiliary-loss-free load balancing, which scores experts via an element-wise sigmoid of the gate logits plus a learned per-expert bias (sketched below).

Links: Documentation
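The routing scheme is compact enough to sketch. A minimal, hypothetical PyTorch illustration of sigmoid gating with a learned per-expert bias (module name, shapes, and the top-k choice are assumptions, not Laguna's implementation):

```python
import torch
import torch.nn as nn

class SigmoidRouter(nn.Module):
    """Illustrative sigmoid MoE router: element-wise sigmoid of gate logits,
    plus a learned per-expert bias used only for expert selection."""

    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        # Learned bias that steers load balancing without an auxiliary loss.
        self.expert_bias = nn.Parameter(torch.zeros(num_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        scores = torch.sigmoid(self.gate(x))                  # (tokens, experts)
        # The bias affects which experts are picked, not the mixing weights.
        _, top_idx = (scores + self.expert_bias).topk(self.top_k, dim=-1)
        weights = scores.gather(-1, top_idx)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        return top_idx, weights
```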

DEIMv2

DEIMv2 (DETR with Improved Matching v2) is a real-time object detection model that extends DEIM with DINOv3 features and spans eight model sizes from X to Atto for diverse deployment scenarios. It uses a Spatial Tuning Adapter (STA) for larger variants to convert DINOv3's single-scale output into multi-scale features, while ultra-lightweight models employ pruned HGNetv2 backbones. The unified design achieves superior performance-cost trade-offs, with DEIMv2-X reaching 57.8 AP with only 50.3M parameters and DEIMv2-S being the first sub-10M model to exceed 50 AP on COCO.

Links: Documentation | Paper
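The core idea of deriving multi-scale features from a single-scale backbone output can be illustrated with a toy PyTorch module (this is not the actual STA; the layer choices are assumptions made for illustration):

```python
import torch
import torch.nn as nn

class ToyMultiScaleAdapter(nn.Module):
    """Toy adapter: derive a 3-level pyramid from one single-scale map."""

    def __init__(self, channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, 2, stride=2)      # 2x upsample
        self.keep = nn.Conv2d(channels, channels, 3, padding=1)            # same scale
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)  # 2x downsample

    def forward(self, feat: torch.Tensor):
        return [self.up(feat), self.keep(feat), self.down(feat)]
```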

Attention

Several attention-related bugs were fixed across multiple models, including a cross-attention cache type error in T5Gemma2 for long inputs, incorrect cached forward behavior in Qwen3.5's gated-delta-net linear attention, and a crash in GraniteMoeHybrid when no Mamba layers are present. Attention function dispatch was also updated to align with the latest model implementations.

Tokenizers

A bug in AutoTokenizer caused the wrong tokenizer class to be initialized, which led to regressions in models such as DeepSeek R1.
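To confirm which concrete class AutoTokenizer resolves to for a given checkpoint, a quick check (the checkpoint id here is just an example):

```python
from transformers import AutoTokenizer

# Print the concrete tokenizer class that AutoTokenizer picked; on affected
# versions this could be the wrong class for some checkpoints.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1")
print(type(tok).__name__)
```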

Generation

Continuous batching generation received several fixes and improvements, including correcting KV deduplication and memory estimation for long sequences (16K+), and removing misleading warnings about num_return_sequences and other unsupported features that were incorrectly firing even when functionality worked correctly. Documentation for per-request sampling parameters was also added.

Kernels

Improved kernel support by fixing configuration reading and error handling for FP8 checkpoints (e.g., Qwen3.5-35B-A3B-FP8), enabling custom expert kernels registered from the HF Hub to be properly loaded, and resolving an incompatibility that prevented Gemma3n and Gemma4 from using the rotary kernel.

Bugfixes and improvements

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v5.6.2: Patch release v5.6.2

Compare Source

Patch release v5.6.2

Qwen 3.5 and 3.6 MoE (text-only) were broken when used with FP8. They should now work again with this 🫡

Full Changelog: huggingface/transformers@v5.6.1...v5.6.2

v5.6.1: Patch release v5.6.1

Compare Source

Patch release v5.6.1

The flash attention path was broken! Sorry everyone for this one 🤗

v5.6.0

Compare Source

Release v5.6.0

New Model additions

OpenAI Privacy Filter

OpenAI Privacy Filter is a bidirectional token-classification model for personally identifiable information (PII) detection and masking in text. It is intended for high-throughput data sanitization workflows where teams need a fast, context-aware, tunable model they can run on-premises. The model labels an input sequence in a single forward pass, predicting a probability distribution over 8 privacy-related output categories for each input token, then decodes coherent spans with a constrained Viterbi procedure.

Links: Documentation
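Constrained Viterbi decoding over per-token label distributions is a standard dynamic program; a minimal NumPy sketch (the label set and transition constraints are passed in as assumptions, this is not the model's actual decoder):

```python
import numpy as np

def constrained_viterbi(log_probs: np.ndarray, allowed: np.ndarray) -> list[int]:
    """Decode the best label path given per-token log-probabilities
    (seq_len, num_labels) and a boolean mask `allowed`
    (num_labels, num_labels) forbidding incoherent span transitions."""
    seq_len, _ = log_probs.shape
    trans = np.where(allowed, 0.0, -np.inf)      # disallowed transitions score -inf
    score = log_probs[0].copy()
    back = np.zeros_like(log_probs, dtype=int)
    for t in range(1, seq_len):
        cand = score[:, None] + trans            # (prev_label, next_label)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_probs[t]
    path = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):          # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```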

QianfanOCR

Qianfan-OCR is a 4B-parameter end-to-end document intelligence model developed by Baidu that performs direct image-to-text conversion without traditional multi-stage OCR pipelines. It supports a broad range of prompt-driven tasks, including structured document parsing, table extraction, chart understanding, document question answering, and key information extraction, all within one unified model. The model features a unique "Layout-as-Thought" capability that generates structured layout representations before producing final outputs, making it particularly effective for complex documents with mixed element types.

Links: Documentation | Paper

SAM3-LiteText

SAM3-LiteText is a lightweight variant of SAM3 that replaces the heavy SAM3 text encoder (353M parameters) with a compact MobileCLIP-based text encoder optimized through knowledge distillation, while keeping the SAM3 ViT-H image encoder intact. This reduces text encoder parameters by up to 88% while maintaining segmentation performance comparable to the original model. The model enables efficient vision-language segmentation by addressing the redundancy found in text prompting for segmentation tasks.

Links: Documentation | Paper
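The distillation setup can be illustrated with a toy objective; a hedged PyTorch sketch (the cosine-distance loss is an assumption for illustration, not necessarily the loss used for SAM3-LiteText):

```python
import torch
import torch.nn.functional as F

def text_encoder_distill_loss(student_emb: torch.Tensor,
                              teacher_emb: torch.Tensor) -> torch.Tensor:
    """Toy distillation objective: pull the compact student's text
    embeddings toward the frozen teacher's via cosine distance."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb.detach(), dim=-1)  # teacher stays frozen
    return (1.0 - (s * t).sum(dim=-1)).mean()
```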

SLANet

SLANet and SLANet_plus are lightweight models for table structure recognition in documents and natural scenes. They improve accuracy and inference speed by adopting PP-LCNet, a CPU-friendly lightweight backbone; CSP-PAN, a module that fuses high- and low-level features; and SLA Head, a feature decoder that aligns structural and positional information. SLANet was developed by the Baidu PaddlePaddle Vision Team as part of its table structure recognition solutions.

Links: Documentation

Breaking changes

The internal rotary_fn is no longer registered as a hidden kernel function, so any code referencing self.rotary_fn(...) within an Attention module will break and must be updated to call the function directly instead.
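A hypothetical before/after for the migration (the Llama-style `apply_rotary_pos_emb` helper is used as an example; check the module your model actually defines):

```python
import torch
from transformers.models.llama.modeling_llama import apply_rotary_pos_emb

def apply_rope(q: torch.Tensor, k: torch.Tensor,
               cos: torch.Tensor, sin: torch.Tensor):
    # Before (now broken): q, k = self.rotary_fn(q, k, cos, sin)
    # After: import the rotary function and call it directly.
    return apply_rotary_pos_emb(q, k, cos, sin)
```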

Serve

The transformers serve command received several enhancements, including a new /v1/completions endpoint for legacy text completion, multimodal support for audio and video inputs, improved tool-calling via parse_response, proper forwarding of tool_calls/tool_call_id fields, a 400 error on model mismatch when the server is pinned to a specific model, and fixes for the response API. Documentation was also updated to cover new serving options such as --compile and --model-timeout.
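A minimal request against the new legacy-completions endpoint, assuming a locally running `transformers serve` instance and an OpenAI-style payload (host, port, payload fields, and model id are assumptions, not confirmed specifics of this release):

```python
import requests

# Hit the legacy /v1/completions endpoint of a local `transformers serve`.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "Qwen/Qwen2.5-0.5B-Instruct",  # whichever model is served
        "prompt": "The capital of France is",
        "max_tokens": 16,
    },
    timeout=60,
)
print(resp.json())
```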

Vision

Several vision-related bug fixes were applied in this release, including correcting Qwen2.5-VL temporal RoPE scaling for still images, fixing missing/mismatched image processor backends for Emu3 and BLIP, resolving modular image processor class duplication, and preventing accelerate from incorrectly splitting vision encoders in PeVideo/PeAudioVideo models. Image loading performance was also improved by leveraging torchvision's native decode_image in the torchvision backend, yielding up to ~17% speedup over PIL-based loading.

Parallelization

Fixed several bugs affecting distributed training, including silently wrong results or NaN loss with Expert Parallelism, NaN weights on non-rank-0 FSDP processes, and a resize failure in PP-DocLayoutV3; additionally added support for loading adapters with Tensor Parallelism, added MoE to the Gemma4 TP plan, and published documentation for TP training.

Tokenization

Fixed a docstring typo in streamer classes, resolved a Kimi-K2.5 tokenizer regression and _patch_mistral_regex AttributeError, and patched a streaming generation crash for Qwen3VLProcessor caused by incorrect _tokenizer attribute access. Additional housekeeping included moving the GPT-SW3 instruct tokenizer to an internal testing repo and fixing a global state leak in the tokenizer registry during tests.

Cache

Cache handling was improved for Gemma4 and Gemma3n models by dissociating KV state sharing from the Cache class, ensuring KV states are always shared regardless of whether a Cache is used. Additionally, the image cache for Paddle models was updated to align with the latest API.

Audio

Audio models gained vLLM compatibility through targeted fixes across several model implementations. Reliability improvements include exponential back-off retries for audio file downloads, a crash fix in the text-to-speech pipeline when generation configs contain None values, and corrected test failures for Kyutai Speech-To-Text.
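Exponential back-off retries of the kind described are simple to sketch; a generic Python example (retry counts and delays are illustrative, not the values transformers uses):

```python
import time
import urllib.request

def download_with_backoff(url: str, retries: int = 4,
                          base_delay: float = 1.0) -> bytes:
    """Retry a flaky download, doubling the wait after each failure."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=30) as r:
                return r.read()
        except OSError:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("unreachable")
```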

Bugfixes and improvements

Note

PR body was truncated to here.


Configuration

📅 Schedule: (UTC)

  • Branch creation
    • At any time (no schedule defined)
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate.

dreadnode-renovate-bot[bot] added the type/digest (Dependency digest updates) label on Feb 24, 2026
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch 3 times, most recently from 07525d6 to 3ac3e72 on March 1, 2026 00:53
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch from 3ac3e72 to 4daa5d1 on March 8, 2026 00:48
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch 2 times, most recently from 3e0d62f to 4b95150 on April 1, 2026 00:57
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch from 4b95150 to 40a28f1 on April 8, 2026 00:52
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch 2 times, most recently from 85f7052 to c4f4579 on April 19, 2026 00:59
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch from c4f4579 to 37b26b9 on April 26, 2026 01:01
| datasource | package      | from   | to    |
| ---------- | ------------ | ------ | ----- |
| pypi       | psutil       | 6.1.1  | 7.2.2 |
| pypi       | transformers | 4.57.6 | 5.7.0 |
dreadnode-renovate-bot[bot] force-pushed the renovate/major-loader-deps-major branch from 37b26b9 to ca4e25e on May 3, 2026 01:07

Labels

type/digest Dependency digest updates
