Fix SDXL Refiner with Higher Order Schedulers #13453
Open
Beinsezii wants to merge 1 commit into huggingface:main from
Conversation
Beinsezii
commented
Apr 13, 2026
Comment on lines -670 to -677
    if self.scheduler.order == 2 and num_inference_steps % 2 == 0:
        # if the scheduler is a 2nd order scheduler we might have to do +1
        # because `num_inference_steps` might be even given that every timestep
        # (except the highest one) is duplicated. If `num_inference_steps` is even it would
        # mean that we cut the timesteps in the middle of the denoising step
        # (between 1st and 2nd derivative) which leads to incorrect results. By adding 1
        # we ensure that the denoising process always ends after the 2nd derivate step of the scheduler
        num_inference_steps = num_inference_steps + 1
Contributor
Author
Based on this comment, the check was previously hardcoded specifically for Heun's method, and anything else is 100% broken. The catch is that Heun appears to be the only higher-order singlestep solver in Diffusers, so I guess we can't add tests for other orders yet?
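For context, a rough sketch (toy code, not the actual diffusers implementation) of why the old `order == 2` bump existed: a Heun-style scheduler repeats every timestep except the highest, so slicing the list at an even step count lands between the two derivative evaluations of a single step.

```python
# Toy model of a 2nd-order (Heun-style) timestep list: every timestep
# except the highest appears twice, as described in the quoted comment.
# The function name and values here are illustrative, not diffusers API.
def heun_like_timesteps(base):
    return base[:1] + [t for t in base[1:] for _ in range(2)]

ts = heun_like_timesteps([800, 600, 400, 200])
# ts == [800, 600, 600, 400, 400, 200, 200]
# Cutting at an even index, e.g. ts[:4] == [800, 600, 600, 400], stops
# between the 1st and 2nd derivative evaluations at timestep 400 --
# which is why the old code bumped an even num_inference_steps by one.
```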
Beinsezii
commented
Apr 13, 2026
Comment on lines -1178 to +1181
-    num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
-    timesteps = timesteps[:num_inference_steps]
+    num_inference_steps = (
+        (torch.as_tensor(timesteps)[:: self.scheduler.order] >= discrete_timestep_cutoff).sum().item()
+    )
+    timesteps = timesteps[: num_inference_steps * self.scheduler.order]
Contributor
Author
Technically this might change the current results with Heun, but it's necessary: otherwise the cut lands in the wrong place for a Butcher tableau with non-sequential coefficients, like
RKZ.Butcher6
+0.0 |
+0.2764 | +0.2764
+0.7236 | -0.2236 +0.9472
+0.2764 | +0.0326 +0.309 -0.0652
+0.7236 | +0.0461 +0.0 +0.1667 +0.5109
+0.2764 | +0.1206 +0.0 -0.1817 +0.1667 +0.1708
+1.0 | +0.1667 +0.0 +0.0751 -3.3877 +0.5279 +3.618
-----------------------------------------------------------------
| +0.0833 +0.0 +0.0 +0.0 +0.4167 +0.4167 +0.0833
Here the old count could split at stage 3, but the following stages contain lesser timestep values, and since the refiner is not trained on earlier timesteps this leads to worse results.
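To make the difference concrete, a minimal sketch (plain Python standing in for the tensor code, with made-up timestep values) of the old vs. new counting on a hypothetical non-monotonic order-3 schedule:

```python
# Hypothetical order-3 schedule: each denoising step contributes 3
# stage timesteps, and stages within a step are not monotonically
# decreasing (as with the tableau above). Values are illustrative only.
timesteps = [900, 700, 800, 600, 400, 500, 300, 100, 200]
order, cutoff = 3, 450

# Old behaviour: count every stage timestep above the cutoff, then slice.
old_n = len([t for t in timesteps if t >= cutoff])  # 5
old_clip = timesteps[:old_n]  # [900, 700, 800, 600, 400] -- cuts inside step 2

# New behaviour: stride by `order` so only the first stage of each step
# is counted, then keep whole steps; the cut lands on a step boundary.
new_n = sum(1 for t in timesteps[::order] if t >= cutoff)  # 2
new_clip = timesteps[: new_n * order]  # two complete steps
```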
What does this PR do?
Fixes the SDXL refiner with higher-order schedulers by replacing the strange hardcoded `order == 2` check with a simple tensor stride.
Standalone script using a scheduler with `order=15`
Before submitting
documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@yiyixuxu @sayakpaul