
feat: Qwen Image Edit 2511 — full pipeline with LoRA, GGUF, quantization, and UI #131

Open

lstein wants to merge 60 commits into main from feat/qwen-image-edit-2511

Conversation

@lstein (Owner) commented Mar 24, 2026

Summary

Complete implementation of the Qwen Image Edit 2511 pipeline for InvokeAI, including text-to-image generation, image editing with reference images, LoRA support (including Lightning distillation), GGUF quantized transformers, and BitsAndBytes encoder quantization.

Key Features

  • Text-to-image and image editing via Qwen Image Edit 2511 model
  • GGUF quantized transformer support with separate Diffusers model as component source for VAE/text encoder
  • LoRA support including Lightning distillation LoRAs for fast 4/8-step generation
  • BitsAndBytes encoder quantization (int8/nf4) to reduce VRAM usage for the Qwen2.5-VL text encoder
  • Shift override for Lightning LoRA sigma schedules
  • Starter models: full Diffusers model, 4 GGUF quantization levels, 2 Lightning LoRAs, and a bundle
  • Frontend UI: Advanced panel with component source selector, encoder quantization dropdown, shift override control; scheduler hidden for this model type

Backend Changes

  • Denoise: FlowMatchEulerDiscreteScheduler integration, 2x2 patch packing/unpacking, reference latent concatenation along sequence dim, zero_cond_t modulation, LoRA application via LayerPatcher with sidecar patching for GGUF, shift override for Lightning
  • Text encoder: Edit-specific system prompt template, vision token expansion per reference image, hidden state extraction with token trimming, attention mask output, BitsAndBytes quantization support
  • VAE encode/decode: AutoencoderKLQwenImage with per-channel latents_mean/latents_std normalization, 5D frame dimension, pixel-space resize before encoding
  • Model loader: Component source field for GGUF models to provide VAE + text encoder from a separate Diffusers model
  • LoRA: Loader invocations (single + collection), config detection, conversion utils, prefix constants, registered in factory discriminated union
  • GGUF loader fixes: Correct base class (ModelLoader), zero_cond_t=True, correct in_channels
  • Starter models: Lightning LoRAs (4-step, 8-step bf16), model bundle
  • transformers requirement: Downgraded back to >=4.56.0 (the video processor fallback imports already handle this)
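To make the 2x2 patch packing concrete, here is a minimal sketch assuming latents shaped [B, C, H, W]; the function names are illustrative, not the actual InvokeAI code:

```python
import numpy as np

def pack_latents(latents: np.ndarray, patch: int = 2) -> np.ndarray:
    """Fold [B, C, H, W] latents into [B, (H/p)*(W/p), C*p*p] sequence tokens."""
    b, c, h, w = latents.shape
    x = latents.reshape(b, c, h // patch, patch, w // patch, patch)
    x = x.transpose(0, 2, 4, 1, 3, 5)  # -> B, H/p, W/p, C, p, p
    return x.reshape(b, (h // patch) * (w // patch), c * patch * patch)

def unpack_latents(tokens: np.ndarray, c: int, h: int, w: int, patch: int = 2) -> np.ndarray:
    """Invert pack_latents back to [B, C, H, W]."""
    b = tokens.shape[0]
    x = tokens.reshape(b, h // patch, w // patch, c, patch, patch)
    x = x.transpose(0, 3, 1, 4, 2, 5)  # -> B, C, H/p, p, W/p, p
    return x.reshape(b, c, h, w)
```

Reference latents packed the same way can then be concatenated along the token axis, which is what "reference latent concatenation along sequence dim" refers to.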

Frontend Changes

  • Advanced settings: Component source selector (Diffusers model for GGUF), encoder quantization dropdown (None/int8/nf4), shift override checkbox+slider
  • Generation settings: Scheduler hidden for Qwen Image Edit
  • Graph builder: LoRA wiring via collection loader, reference image VAE encoding with pixel-space resize, shift/quantization passthrough
  • State management: qwenImageEditComponentSource, qwenImageEditQuantization, qwenImageEditShift in params slice with persistence and model-switch cleanup

Functional Testing Guide

1. Text-to-Image Generation (Basic)

  1. Install the "Qwen Image Edit 2511" Diffusers model from Starter Models
  2. Select it as the main model
  3. Enter a prompt (e.g., "Persian cat sitting on a red velvet cushion")
  4. Set Steps=30, CFG=4
  5. Generate — should produce a coherent image matching the prompt

2. GGUF Quantized Transformer

  1. Install a GGUF variant (e.g., Q4_K_M) from Starter Models
  2. Also install the full Diffusers model (needed as Component Source)
  3. Select the GGUF model as the main model
  4. In Advanced settings, set "Component Source (Diffusers)" to the full Diffusers model
  5. Generate with the same prompt — quality should be close to the full model

3. BitsAndBytes Encoder Quantization

  1. With the full Diffusers model selected, open Advanced settings
  2. Set "Encoder Quantization" to "4-bit (nf4)"
  3. Generate — should produce similar quality with reduced VRAM for the text encoder
  4. Test "8-bit (int8)" as well

4. LoRA Support

  1. Install "Qwen Image Edit Lightning (4-step, bf16)" from Starter Models
  2. Enable the LoRA in the LoRA panel with weight=1.0
  3. Set Steps=4, CFG=1
  4. In Advanced, check "Shift Override" and set to 3.0
  5. Generate — should produce a coherent image in 4 steps (~10x faster)
  6. Test with "8-step" variant (Steps=8, CFG=1, Shift=3)
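For reference, the shift override in step 4 rescales the sigma schedule using the usual flow-match time-shift formula (a sketch of the standard FlowMatchEulerDiscreteScheduler behavior, not InvokeAI's exact code):

```python
def shift_sigmas(sigmas: list[float], shift: float = 3.0) -> list[float]:
    """Flow-match time shift: sigma' = shift*sigma / (1 + (shift-1)*sigma).

    Larger shift values push more of the schedule toward high noise levels,
    which Lightning-distilled LoRAs expect at very low step counts.
    """
    return [shift * s / (1 + (shift - 1) * s) for s in sigmas]
```

For example, with shift=3.0 a mid-schedule sigma of 0.5 becomes 0.75, compressing the low-noise tail into fewer steps.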

5. Image Editing with Reference Image

  1. Add a Qwen Image Edit reference image in the reference images panel
  2. Set the canvas dimensions to match the reference image's aspect ratio
  3. Enter an edit prompt (e.g., "Change the background to a beach scene")
  4. Generate with Steps=30, CFG=1 (matching the diffusers pipeline default)
  5. The output should preserve the reference image's content while applying the edit
  6. Note: Edit quality varies by seed and prompt complexity — try different seeds if results are unsatisfactory

6. Multiple Reference Images

  1. Add 2+ reference images
  2. All images are sent to the text encoder for vision-language conditioning
  3. The first reference image is VAE-encoded as spatial conditioning for the transformer

7. Model Switching Cleanup

  1. Switch from Qwen Image Edit to another model type (e.g., FLUX)
  2. The component source, quantization, and shift settings should be cleared automatically
  3. Switch back — the settings will need to be re-selected

🤖 Generated with Claude Code

@lstein (Owner, Author) commented Mar 25, 2026

@copilot When I attempt to run a generation with Qwen Image (Diffusers version), I get this stack trace:

[2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Error while invoking session 58d5a101-921c-4f77-a1e3-2df705e3e458, invocation 90bb0da8-c432-4197-9b93-42e9a8fea02f (qwen_image_edit_text_encoder): Received a NoneType for argument image_processor, but a ImageProcessingMixin was expected.
[2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 56, in invoke
    conditioning_data = ConditioningFieldData(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 94, in _encode
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/qwen2_vl/processing_qwen2_vl.py", line 91, in __init__
    super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 534, in __init__
    self.check_argument_for_proper_class(attribute_name, arg)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 614, in check_argument_for_proper_class
    raise TypeError(
TypeError: Received a NoneType for argument image_processor, but a ImageProcessingMixin was expected.

Copilot AI commented Mar 25, 2026

> @copilot When I attempt to run a generation with Qwen Image (Diffusers version), I get this stack trace:
>
> [2026-03-24 23:25:33,875]::[InvokeAI]::ERROR --> Error while invoking session 58d5a101-92...

The crash was caused by Qwen2VLProcessor(tokenizer=tokenizer, image_processor=None) in the fallback branch. The tokenizer submodel for this model is a Qwen2_5_VLProcessor (Qwen2.5-VL class), not Qwen2VLProcessor, so the isinstance check was failing and hitting the broken fallback.

Fixed in 666fdfa by replacing the broken isinstance check + manual constructor with AutoProcessor.from_pretrained() using the tokenizer submodel path — the same pattern used in llava_onevision_vllm.py. This correctly loads whichever processor class the model uses with all components (image_processor, tokenizer, video_processor) properly initialized.

@lstein (Owner, Author) commented Mar 25, 2026

@copilot Can you figure out where the WARNING in the fourth line of log output is coming from and remove it?
$ invokeai-web
[2026-03-25 09:49:24,865]::[InvokeAI]::INFO --> Using torch device: NVIDIA Graphics Device
[2026-03-25 09:49:24,869]::[InvokeAI]::INFO --> cuDNN version: 90701
[2026-03-25 09:49:25,315]::[InvokeAI]::INFO --> Patchmatch initialized
[2026-03-25 09:49:26,223]::[InvokeAI]::WARNING --> "default_factory" is not supported, calling it now to set "default"
[2026-03-25 09:49:26,692]::[InvokeAI]::INFO --> InvokeAI version 6.12.0
[2026-03-25 09:49:26,692]::[InvokeAI]::INFO --> Root directory = /home/lstein/invokeai-lstein
[2026-03-25 09:49:26,693]::[InvokeAI]::INFO --> Initializing in-memory database
[2026-03-25 09:49:26,694]::[InvokeAI]::INFO --> Database update needed
[2026-03-25 09:49:26,694]::[InvokeAI]::INFO --> Using in-memory database, no backup needed
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing models/.cache directory. Converted models will now be cached in .convert_cache.
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing legacy just-in-time models. Downloaded models will now be cached in .download_cache.
[2026-03-25 09:49:26,698]::[InvokeAI]::INFO --> Removing defunct core models.
[2026-03-25 09:49:26,701]::[InvokeAI]::INFO --> Removing UNIQUE(name, base, type) constraint from models table

Copilot AI commented Mar 25, 2026

> @copilot Can you figure out where the WARNING in the fourth line of log output is coming from and remove it?
>
> $ invokeai-web
> ...

The warning came from invokeai/app/invocations/fields.py:744: InputField() doesn't support default_factory and logs a warning when it receives one. The reference_images field in qwen_image_edit_text_encoder.py was using default_factory=list instead of default=[] (the pattern used everywhere else in the codebase). Fixed in 62aebab.

@lstein (Owner, Author) commented Mar 25, 2026

@copilot I'm getting the following error upon generating with the model:

ValueError: Unrecognized model in /home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, 
hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, 
speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, time_series_transformer, timesfm, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, vaultgemma, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, vjepa2, voxtral, voxtral_encoder, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xcodec, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xlstm, xmod, yolos, yoso, zamba, zamba2, zoedepth


Copilot AI commented Mar 25, 2026

> @copilot I'm getting the following error upon generating with the model:
>
> ValueError: Unrecognized model in /home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc414...

Fixed in 8bbb06d. The tokenizer subdirectory contains tokenizer/preprocessor files but no config.json with model_type, which AutoProcessor requires to identify the processor class via AutoConfig. Switched to Qwen2_5_VLProcessor.from_pretrained directly — it loads from the tokenizer subdirectory without needing that type-detection step.

@lstein (Owner, Author) commented Mar 25, 2026

@copilot I'm still getting tokenizer errors:


Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 53, in invoke
    prompt_embeds = self._encode(context, pil_images)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 81, in _encode
    processor = Qwen2_5_VLProcessor.from_pretrained(str(tokenizer_abs_path), local_files_only=True)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1394, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1453, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py", line 489, in from_pretrained
    raise initial_exception
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/auto/image_processing_auto.py", line 476, in from_pretrained
    config_dict, _ = ImageProcessingMixin.get_image_processor_dict(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/image_processing_base.py", line 361, in get_image_processor_dict
    raise OSError(
OSError: Can't load image processor for '/home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/lstein/invokeai-main/models/a28d6553-3e88-4d32-a006-24a5f3cc4144/tokenizer' is the correct path to a directory containing a preprocessor_config.json file

Copilot AI commented Mar 25, 2026

> @copilot I'm still getting tokenizer errors:
>
> ...

Fixed in ab7e1ec. The tokenizer/ subdirectory only holds tokenizer files — preprocessor_config.json lives at the model root. Removed the / "tokenizer" suffix so Qwen2_5_VLProcessor.from_pretrained is now called on the root directory. It finds preprocessor_config.json (image processor) at the root and tokenizer files inside tokenizer/ automatically, while still skipping the model_type detection that AutoProcessor requires.

@lstein (Owner, Author) commented Mar 25, 2026

@copilot
The next error involves the image processor. Note that I have not loaded any reference images because the user interface is not letting me add reference images for this model type. Could you modify the UI to allow reference images to be uploaded? Please see the FLUX2 model support for an example of how to do this.
Also, please handle the scenario in which the user invokes the model without providing any reference images; let's see if the model can perform txt2image without a reference image to work on.

[2026-03-25 11:13:46,158]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/image_processing_base.py", line 354, in get_image_processor_dict
    resolved_image_processor_file = resolved_image_processor_files[0]
                                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

Copilot AI commented Mar 25, 2026

> @copilot
> The next error involves the image processor. Note that I have not loaded any reference images because the user interface is not letting me add reference images for this mod...

Fixed in dec4a46.

Processor loading: Replaced Qwen2_5_VLProcessor.from_pretrained(root_dir) with a component-based approach that searches for preprocessor_config.json in three locations (tokenizer/, model root, image_processor/) and falls back to Qwen2_5_VLImageProcessor() defaults if not found. This works regardless of how the model was downloaded.

Reference images UI: Added qwen_image_edit_reference_image support following the FLUX2 pattern — clicking "Add Reference Image" when a Qwen model is selected now shows a reference image slot with just an image picker (no model selector needed). Reference images are collected and wired into the text encoder's reference_images input in the graph.

txt2img without reference images: No warning or block when no reference images are set — the text encoder works with an empty list, so txt2img works fine.

@lstein (Owner, Author) commented Mar 25, 2026

@copilot New error:

[2026-03-25 12:13:00,527]::[InvokeAI]::ERROR --> Error while invoking session aeec3262-439f-4507-9207-fbf879faae9d, invocation 46994740-f85e-4f29-bb6c-ededfd5c1e00 (qwen_image_edit_text_encoder): Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.
[2026-03-25 12:13:00,528]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 52, in invoke
    prompt_embeds = self._encode(context, pil_images)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/Projects/InvokeAI-lstein/invokeai/app/invocations/qwen_image_edit_text_encoder.py", line 101, in _encode
    processor = Qwen2_5_VLProcessor(tokenizer=tokenizer, image_processor=image_processor)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/processing_qwen2_5_vl.py", line 96, in __init__
    super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 534, in __init__
    self.check_argument_for_proper_class(attribute_name, arg)
  File "/home/lstein/invokeai-lstein/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 614, in check_argument_for_proper_class
    raise TypeError(
TypeError: Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.

lstein force-pushed the feat/qwen-image-edit-2511 branch from ba333f3 to e806505 on April 5, 2026 17:35
…raph builder

chore(backend): add test for heuristic detection of Qwen Image Edit GGUF model variant

chore(frontend): add regression test for ref images not added to qwen image in generate mode

fix(frontend): graph build handling of Qwen Image when CFG <=1

chore(frontend): add regression test for optimal dimension selection
lstein force-pushed the feat/qwen-image-edit-2511 branch from a9252c3 to c2d7b8b on April 5, 2026 17:56
lstein force-pushed the feat/qwen-image-edit-2511 branch from e2710c4 to b7de7d7 on April 5, 2026 22:55
lstein and others added 17 commits April 6, 2026 00:13
Replace the single-collector pattern (all images added as item edges to
one collect node — order not guaranteed) with collector chaining (each
image gets its own collect node, chained via collection → collection
edge). This matches the FLUX.2 Klein kontext conditioning pattern and
preserves insertion order of reference images.

Test: verify 2 reference images produce 2 chained collect nodes with
correct item, chain, and final output edges.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
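Conceptually, the collector chaining described in this commit looks like the following (a schematic sketch; node and edge names are illustrative, not the real graph API):

```python
def build_collect_chain(image_node_ids: list[str]) -> tuple[list[str], list[tuple[str, str, str]]]:
    """One collect node per image, each chained to the previous one so the
    final collection preserves the insertion order of reference images."""
    nodes: list[str] = []
    edges: list[tuple[str, str, str]] = []  # (source, field, destination)
    for i, image_node in enumerate(image_node_ids):
        collect = f"collect_{i}"
        nodes.append(collect)
        edges.append((image_node, "item", collect))  # this image's item edge
        if i > 0:
            # collection -> collection chain edge from the previous collector
            edges.append((nodes[i - 1], "collection", collect))
    return nodes, edges
```

For two reference images this yields two collect nodes, two item edges, and one chain edge; the last node's collection output then feeds the text encoder.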
Test that sample_to_lowres_estimated_image() with the Qwen-specific
latent RGB factors and bias produces correct preview images:

- Valid RGB PIL Image from synthetic 1x16x4x4 latent tensor
- Deterministic output for identical inputs
- Known-value test: all-ones tensor → hand-calculated RGB pixel
- Zero tensor produces uniform pixels matching bias values
- Qwen factors have correct shape (16x3 + 3 bias)
- 3D input (no batch dim) accepted

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
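The projection under test is essentially a per-channel linear map. A minimal sketch, assuming latent values that decode to a nominal [-1, 1] range (the real Qwen factors and bias values live in the InvokeAI codebase):

```python
import numpy as np

def lowres_preview(latent: np.ndarray, factors: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Project [C, H, W] latents to [H, W, 3] RGB: each output channel is a
    weighted sum over the C latent channels (factors: [C, 3]) plus a bias ([3])."""
    rgb = np.tensordot(latent, factors, axes=([0], [0])) + bias  # -> [H, W, 3]
    return np.clip((rgb + 1.0) / 2.0 * 255.0, 0.0, 255.0).astype(np.uint8)
```

This also shows why an all-zeros latent yields uniform pixels determined only by the bias, as the known-value test above checks.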
…obing

When a starter model provides variant in override_fields, it was passed
both via **override_fields and as an explicit kwarg to cls(), causing
TypeError("got multiple values for keyword argument 'variant'").

Use .pop() instead of .get() so variant and repo_variant are removed
from override_fields before spreading.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…wargs

The double-kwarg bug (TypeError: got multiple values for keyword argument)
could occur for ANY config class that reads a field from override_fields
with .get() and then passes it both via **override_fields AND as an
explicit kwarg to cls(). Changed all instances of .get() to .pop() for:
- variant (11 config classes)
- repo_variant (7 config classes)
- prediction_type (2 config classes)
- submodels (1 config class)

Also fixed the GGUF Qwen Image variant logic which was popping the
explicit variant but not preserving it when the filename heuristic
didn't apply.

Regression tests (3 new):
- Diffusers config with variant in override_fields doesn't crash
- Explicit variant override takes precedence over auto-detection
- GGUF config with variant in override_fields doesn't crash

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
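The double-kwarg bug and its fix reduce to a few lines (illustrative class and function names, not the actual config code):

```python
class Config:
    """Stand-in for a model config class that accepts a variant kwarg."""
    def __init__(self, variant=None, name=None):
        self.variant = variant
        self.name = name

def from_fields_buggy(override_fields: dict) -> Config:
    # BUG: .get() leaves "variant" in the dict, so Config() receives it
    # both explicitly and via **override_fields -> TypeError.
    variant = override_fields.get("variant")
    return Config(variant=variant, **override_fields)

def from_fields_fixed(override_fields: dict) -> Config:
    # FIX: .pop() removes the key before the dict is spread.
    variant = override_fields.pop("variant", None)
    return Config(variant=variant, **override_fields)
```

Calling from_fields_buggy({"variant": "bf16", "name": "m"}) raises TypeError: got multiple values for keyword argument 'variant', while the fixed version constructs normally.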
# Conflicts:
#	invokeai/app/api/dependencies.py
#	invokeai/app/invocations/fields.py
#	invokeai/app/invocations/metadata.py
#	invokeai/app/invocations/primitives.py
#	invokeai/backend/model_manager/configs/factory.py
#	invokeai/backend/model_manager/configs/lora.py
#	invokeai/backend/model_manager/configs/main.py
#	invokeai/backend/model_manager/load/model_loaders/lora.py
#	invokeai/backend/model_manager/starter_models.py
#	invokeai/backend/model_manager/taxonomy.py
#	invokeai/backend/stable_diffusion/diffusion/conditioning_data.py
#	invokeai/frontend/web/src/features/nodes/types/common.ts
#	invokeai/frontend/web/src/features/nodes/util/graph/generation/addImageToImage.ts
#	invokeai/frontend/web/src/features/nodes/util/graph/generation/addInpaint.ts
#	invokeai/frontend/web/src/features/nodes/util/graph/generation/addTextToImage.ts
#	invokeai/frontend/web/src/features/nodes/util/graph/graphBuilderUtils.ts
#	invokeai/frontend/web/src/features/nodes/util/graph/types.ts
#	invokeai/frontend/web/src/features/parameters/util/optimalDimension.ts
#	invokeai/frontend/web/src/features/settingsAccordions/components/AdvancedSettingsAccordion/AdvancedSettingsAccordion.tsx
#	invokeai/frontend/web/src/features/settingsAccordions/components/GenerationSettingsAccordion/GenerationSettingsAccordion.tsx
#	invokeai/frontend/web/src/services/api/schema.ts
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#	invokeai/frontend/web/src/features/metadata/parsing.tsx
Rewrite ParamQwenImageShift to use the same CompositeSlider +
CompositeNumberInput + reset-to-auto pattern as ParamZImageShift,
and move it from the Advanced Settings accordion to the Generation
Settings accordion alongside the Z-Image shift control.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…fusers model

When switching to a qwen-image base model, automatically set the
VAE/encoder component source to the first available Qwen Image
diffusers model if none is currently selected. Follows the same
pattern as Z-Image's auto-defaulting logic.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Non-edit Qwen GGUFs now get an explicit "generate" variant instead of
None, matching the new tensor-based variant detection semantics.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>


3 participants