
Allow safety checker pipeline configuration that returns boolean array but does not black out images


Is your feature request related to a problem? Please describe.
See this PR comment: https://github.com/huggingface/diffusers/pull/815#discussion_r994418216

TLDR: with the recent changes in #815, developers have the ability to disable the safety checker. Currently, the only options available to devs are to either have the safety checker or not have it at all. While this is useful, many applications of NSFW content require opt-in access from end users. For example, consider the Reddit NSFW model – the end user is shown an 'nsfw' overlay that they have to manually click through. Currently, the diffusers library does not make it easy to support such a use case.
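For reference, the all-or-nothing option that #815 enables looks roughly like the snippet below (the checkpoint name is just an example). Disabling the checker this way also drops the per-image NSFW booleans that an opt-in UX would need:

    from diffusers import StableDiffusionPipeline

    # Current behaviour: the checker can only be kept (flagged images come back blacked out)
    # or removed entirely. There is no middle ground that returns the boolean array
    # while leaving the pixels untouched.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        safety_checker=None,  # disables NSFW filtering altogether
    )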

Describe the solution you'd like
I think the best approach is to add a flag to the SafetyChecker class called black_out_images. This flag would then modify the if statement on this line: https://github.com/huggingface/diffusers/blob/797b290ed09a84091a4c23884b7c104f8e94b128/src/diffusers/pipelines/stable_diffusion/safety_checker.py#L74

        # with the proposed flag, flagged images are only blacked out when black_out_images is True;
        # the boolean array of NSFW concepts is still returned either way
        for idx, has_nsfw_concept in enumerate(has_nsfw_concepts):
            if has_nsfw_concept and black_out_images:
                images[idx] = np.zeros(images[idx].shape)  # black image

The flag would then be passed into the SafetyChecker from the top level pipeline config.
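For concreteness, here is one possible shape that top-level configuration could take from the caller's side. The black_out_images kwarg is the flag proposed in this issue and does not exist in diffusers today; the output fields shown are the existing ones:

    from diffusers import StableDiffusionPipeline

    # Hypothetical: `black_out_images` is the flag proposed in this issue. The pipeline
    # would forward it into its StableDiffusionSafetyChecker instance.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        black_out_images=False,  # keep the NSFW booleans, but return the original pixels
    )

    result = pipe("a prompt")
    images = result.images                     # un-blacked images
    nsfw_flags = result.nsfw_content_detected  # boolean array from the safety checker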

Describe alternatives you've considered
An alternative is to do this at the pipeline level. For example, we could pass in a flag to the Pipeline class called black_out_nsfw_images. This flag would then modify the safety_checker call here: https://github.com/huggingface/diffusers/blob/797b290ed09a84091a4c23884b7c104f8e94b128/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L335

        safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
        cleaned_image, has_nsfw_concept = self.safety_checker(
            images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype)
        )

        # only adopt the blacked-out copies when the pipeline is configured to do so
        if black_out_nsfw_images:
            image = cleaned_image

Additional context
In both cases, I believe the config can default to 'nsfw images will be blacked out'. Having the option is critical, however.
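As an example of why the un-blacked variant matters: given the original images plus the boolean array, an application could implement a Reddit-style click-through itself, e.g. by blurring flagged images until the user opts in. The helper below is purely illustrative:

    from PIL import ImageFilter

    def blur_flagged_images(pil_images, nsfw_flags, radius=30):
        """Blur flagged images client-side instead of blacking them out (illustrative only).

        `pil_images` is the list of PIL images from the pipeline output and `nsfw_flags`
        is the boolean array returned by the safety checker; the UI can reveal the
        original image once the user clicks through the overlay.
        """
        return [
            img.filter(ImageFilter.GaussianBlur(radius)) if flagged else img
            for img, flagged in zip(pil_images, nsfw_flags)
        ]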

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 30 (12 by maintainers)

Top GitHub Comments

2 reactions
meg-huggingface commented, Oct 18, 2022

Thanks so much for this discussion, and just popping in to say I completely agree that Hugging Face should create, and maintain, code for post-processing generated content (not limited to blurring, but that’s definitely one). IIRC, the current safety/censoring approach came from discussions with Stability AI – it definitely wasn’t one I had recommended.

From my perspective, there just haven’t been enough engineers around at HF with “free time” to do the more nuanced work needed here. We’re creating a job description for this task (and others), but it’s not approved yet + not up yet + not hired-for yet.

In the meantime, any community work on this would be beneficial for everyone (I think), and hopefully makes sense since Stable Diffusion is a community contribution. We'd want to have Stability AI agree to the approach too, since we'd be changing from what had been worked through earlier.

1 reaction
theahura commented, Oct 18, 2022

@meg-huggingface can you comment more on the discussion with Stability AI? I was under the impression that the Stable Diffusion model is tangentially related to Stability AI at best (they don't seem to be the maintainers on the original SD repo, nor are they authors on the paper), so I'm curious why Stability AI would be involved in any discussions around usage of the model.
