
Commit 7fe638c

update paint by example docs (huggingface#2598)
1 parent c812d97 commit 7fe638c

File tree

1 file changed (+3, -5 lines)

src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py

Lines changed: 3 additions & 5 deletions
@@ -136,18 +136,16 @@ def prepare_mask_and_masked_image(image, mask):
 
 class PaintByExamplePipeline(DiffusionPipeline):
     r"""
-    Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
+    Pipeline for image-guided image inpainting using Stable Diffusion. *This is an experimental feature*.
 
     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
 
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`CLIPTextModel`]):
-            Frozen text-encoder. Stable Diffusion uses the text portion of
-            [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-            the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+        image_encoder ([`PaintByExampleImageEncoder`]):
+            Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
         tokenizer (`CLIPTokenizer`):
             Tokenizer of class
             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
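
For context, the updated docstring describes image-guided inpainting: an example image, rather than a text prompt, conditions the unet. A minimal usage sketch follows; the Fantasy-Studio/Paint-by-Example checkpoint and the local file names (init.png, mask.png, example.png) are assumptions for illustration.

# Minimal sketch of image-guided inpainting with PaintByExamplePipeline.
# Assumes the Fantasy-Studio/Paint-by-Example checkpoint and three local
# images: the image to edit, a binary mask, and an example reference image.
import torch
from PIL import Image
from diffusers import PaintByExamplePipeline

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

init_image = Image.open("init.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))
example_image = Image.open("example.png").convert("RGB").resize((512, 512))

# The example image replaces the text prompt: it is encoded by the
# image encoder and conditions the unet during denoising.
result = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
result.save("paint_by_example_result.png")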
