src/diffusers/pipelines/paint_by_example
1 file changed, +3 -5 lines

@@ -136,18 +136,16 @@ def prepare_mask_and_masked_image(image, mask):
 
 class PaintByExamplePipeline(DiffusionPipeline):
     r"""
-    Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
+    Pipeline for image-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
 
     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
 
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`CLIPTextModel`]):
-            Frozen text-encoder. Stable Diffusion uses the text portion of
-            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-            the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+        image_encoder ([`PaintByExampleImageEncoder`]):
+            Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
         tokenizer (`CLIPTokenizer`):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
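For context on what the new `image_encoder` argument means in practice, here is a minimal usage sketch of the pipeline after this change. The `Fantasy-Studio/Paint-by-Example` checkpoint is the one published for this pipeline, but the image URLs below are placeholders, not part of this PR.

```python
import torch
from diffusers import PaintByExamplePipeline
from diffusers.utils import load_image

# Load the published Paint-by-Example weights (half precision on a CUDA device).
pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The pipeline repaints the masked region of `image` to match `example_image`;
# the example image conditions the UNet in place of a text prompt.
init_image = load_image("https://example.com/scene.png").resize((512, 512))         # placeholder URL
mask_image = load_image("https://example.com/scene_mask.png").resize((512, 512))    # placeholder URL
example_image = load_image("https://example.com/reference.png").resize((512, 512))  # placeholder URL

result = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
result.save("paint_by_example_result.png")
```

Note that there is no `prompt` argument: the `example_image` is encoded by `PaintByExampleImageEncoder` and plays the role the text embedding plays in the text-guided inpainting pipelines.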