[Feature request] Let the user provide their own randn data for samplers in sampling.py
Please add an option for samplers to accept an argument with random data and use that if it is provided.
The reason for this is as follows.
We use samplers in stable diffusion to generate pictures, and we use seeds to make it possible for other users to reproduce results.
In a batch of one image, everything works perfectly: set seed beforehand, generate noise, run sampler, and get the image everyone else will be able to get.
If the user produces a batch of multiple images (which is desirable because it is faster than multiple independent batches), the expectation is that each image gets its own seed and is reproducible individually, outside of the batch. I achieve that for the DDIM and PLMS samplers from stable diffusion by preparing the correct random noise according to the seeds beforehand; since those samplers have no internal randomness, this works well.
The samplers here call torch.randn inside the sampling loop, so images in a batch receive different random data than the same images generated individually, which results in different output.
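For the DDIM/PLMS case above, the per-seed noise preparation looks roughly like this (a minimal sketch; make_batch_noise and the seed values are illustrative, not part of sampling.py):

import torch

def make_batch_noise(seeds, shape):
    # One torch.Generator per image: each sample's initial noise depends
    # only on its own seed, so it can be reproduced outside the batch.
    noises = []
    for seed in seeds:
        generator = torch.Generator().manual_seed(seed)
        noises.append(torch.randn(shape, generator=generator))
    return torch.stack(noises)

# A batch of four images, each individually reproducible from its seed.
x = make_batch_noise([101, 102, 103, 104], (4, 64, 64))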
An example of what I want to have:
from
def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None):
    """Ancestral sampling with Euler method steps."""
    extra_args = {} if extra_args is None else extra_args
    s_in = x.new_ones([x.shape[0]])
    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        d = to_d(x, sigmas[i], denoised)
        # Euler method
        dt = sigma_down - sigmas[i]
        x = x + d * dt
        x = x + torch.randn_like(x) * sigma_up
    return x
to
def sample_euler_ancestral(model, x, sigmas, extra_args=None, callback=None, disable=None, user_random_data=None):
    """Ancestral sampling with Euler method steps."""
    extra_args = {} if extra_args is None else extra_args
    s_in = x.new_ones([x.shape[0]])
    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        d = to_d(x, sigmas[i], denoised)
        # Euler method
        dt = sigma_down - sigmas[i]
        x = x + d * dt
        x = x + (torch.randn_like(x) if user_random_data is None else user_random_data[i]) * sigma_up
    return x
(the only difference is the next-to-last line)
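With that argument, a caller could precompute the noise for every step from each image's seed and pass it in. A sketch (the per-step seed scheme is illustrative; model, x, sigmas, and seeds are assumed from the surrounding setup):

steps = len(sigmas) - 1
user_random_data = []
for i in range(steps):
    step_noise = []
    for seed in seeds:
        # Derive a per-image, per-step seed; any scheme works as long as
        # it is independent of the batch composition.
        generator = torch.Generator().manual_seed(seed * 10_000 + i)
        step_noise.append(torch.randn(x.shape[1:], generator=generator))
    user_random_data.append(torch.stack(step_noise).to(x.device))

samples = sample_euler_ancestral(model, x, sigmas, user_random_data=user_random_data)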
[Sample images omitted. Prompt: “New York City, oil on canvas”, guidance scale 5.5, all samples drawn with eta=1: 12 steps, DPM++ 2S; 24 steps, DPM++ 2S; 50 steps, Euler ancestral.]
I added the callable version in a branch: https://github.com/crowsonkb/k-diffusion/tree/noise-samplers. The ancestral samplers now take a noise_sampler= argument, a callable with two arguments, sigma and sigma_next, the interval to return the noise for. I also added a (non-default) noise sampler based on torchsde.BrownianTree, which produces more stable samples across different numbers of steps and different ancestral samplers (they should in fact converge to the same limiting image given enough steps). You use it as in the sketch below. I’d like people to try it out, it’s pretty neat!
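A minimal usage sketch, assuming the branch exposes the Brownian tree sampler as BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=...), with model and sigmas coming from the usual k-diffusion setup:

import torch
from k_diffusion.sampling import BrownianTreeNoiseSampler, sample_euler_ancestral

# The final sigma is zero, so take the smallest positive value as sigma_min.
sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
x = torch.randn(4, 3, 64, 64, device=sigmas.device) * sigma_max
noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=42)
samples = sample_euler_ancestral(model, x, sigmas, noise_sampler=noise_sampler)

With this, the same seed should yield approximately the same limiting image regardless of the step count or which ancestral sampler consumes the noise.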