|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

## Overview

Attend-and-Excite for Stable Diffusion was proposed in [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://attendandexcite.github.io/Attend-and-Excite/). It steers the model's cross-attention maps during generation so that every subject token in the prompt is attended to, mitigating cases where a subject is missing from the generated image.

The abstract from the paper is:

*Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.*
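To make the objective concrete, here is a toy sketch of the core Attend-and-Excite idea described above. This is an assumption-level simplification, not the `diffusers` implementation: during the denoising loop, each subject token's maximum cross-attention value is collected, and the loss focuses on the most neglected token so its activations can be "excited" via a gradient step on the latents.

```python
# Toy sketch of the Attend-and-Excite loss (simplified; the real pipeline
# works on smoothed 2D cross-attention maps inside the UNet).
def attend_and_excite_loss(attention_maps, token_indices):
    """attention_maps: dict mapping a token index to its attention values
    over image patches. Returns a loss that is large when the least-attended
    subject token has a low maximum attention value."""
    max_per_token = [max(attention_maps[i]) for i in token_indices]
    # Penalize the weakest token: encourage every subject to reach
    # a high maximum attention somewhere in the image.
    return max(0.0, 1.0 - min(max_per_token))


# Hypothetical attention values for tokens 2 ("cat") and 5 ("frog").
maps = {2: [0.1, 0.7, 0.2], 5: [0.05, 0.1, 0.3]}
loss = attend_and_excite_loss(maps, [2, 5])
# Token 5 is the most neglected (max attention 0.3), so loss = 1 - 0.3 = 0.7.
```

In the actual method, the gradient of this loss with respect to the latents is used to update them between denoising steps, which is what the `max_iter_to_alter` argument in the pipeline below controls.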
| 22 | + |
### Resources

* [Project Page](https://attendandexcite.github.io/Attend-and-Excite/)
* [Paper](https://arxiv.org/abs/2301.13826)
* [Original Code](https://github.com/AttendAndExcite/Attend-and-Excite)
* [Demo](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite)


## Available Pipelines:

| Pipeline | Tasks | Colab | Demo |
|---|---|:---:|:---:|
| [pipeline_stable_diffusion_attend_and_excite.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py) | *Text-to-Image Generation* | - | - |


### Usage example

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "a cat and a frog"

# use the get_indices function to find the indices of the tokens you want to alter
pipe.get_indices(prompt)

token_indices = [2, 5]
seed = 6141
generator = torch.Generator("cuda").manual_seed(seed)

images = pipe(
    prompt=prompt,
    token_indices=token_indices,
    guidance_scale=7.5,
    generator=generator,
    num_inference_steps=50,
    max_iter_to_alter=25,
).images

image = images[0]
image.save(f"../images/{prompt}_{seed}.png")
```
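The `token_indices` value of `[2, 5]` selects `cat` and `frog` in the prompt above. As a rough illustration of the index layout that `get_indices` reports, here is a hypothetical helper using naive whitespace tokenization; the real pipeline uses the CLIP tokenizer, so words that split into multiple subword tokens will shift the indices, which is why you should always check `get_indices` output rather than counting words:

```python
# Hypothetical sketch, NOT the pipeline method: naive whitespace tokenization.
# CLIP's tokenizer places a start-of-text token at index 0, so the prompt's
# words begin at index 1.
def get_indices_sketch(prompt):
    tokens = ["<|startoftext|>"] + prompt.split() + ["<|endoftext|>"]
    return {i: tok for i, tok in enumerate(tokens)}


indices = get_indices_sketch("a cat and a frog")
# indices[2] is "cat" and indices[5] is "frog", matching token_indices = [2, 5].
```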


## StableDiffusionAttendAndExcitePipeline

[[autodoc]] StableDiffusionAttendAndExcitePipeline
	- all
	- __call__