
Commit 3028089

Fix typos (huggingface#7411)

* Fix typos
* Fix typo in SVD.md

1 parent b536f39

26 files changed (+32 -32 lines)

docs/source/en/using-diffusers/svd.md
Lines changed: 2 additions & 2 deletions

@@ -21,7 +21,7 @@ This guide will show you how to use SVD to generate short videos from images.
 Before you begin, make sure you have the following libraries installed:
 
 ```py
-!pip install -q -U diffusers transformers accelerate
+!pip install -q -U diffusers transformers accelerate
 ```
 
 The are two variants of this model, [SVD](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid) and [SVD-XT](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt). The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames.
@@ -86,7 +86,7 @@ Video generation is very memory intensive because you're essentially generating
 + frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
 ```
 
-Using all these tricks togethere should lower the memory requirement to less than 8GB VRAM.
+Using all these tricks together should lower the memory requirement to less than 8GB VRAM.
 
 ## Micro-conditioning
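
For context, the memory-saving line shown in the second hunk normally closes a longer setup in that guide which loads the SVD-XT checkpoint and enables CPU offloading and feed-forward chunking. A minimal sketch of that surrounding setup, assuming the standard `StableVideoDiffusionPipeline` API (the seed, output filename, and conditioning-image URL are illustrative, not taken from this diff):

```py
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load the 25-frame SVD-XT checkpoint in half precision.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)

# Memory-saving tricks the guide refers to: offload idle submodels to CPU
# and chunk the UNet's feed-forward layers.
pipe.enable_model_cpu_offload()
pipe.unet.enable_forward_chunking()

# Illustrative conditioning image, resized to SVD's expected 1024x576.
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
# decode_chunk_size=2 decodes only two frames at a time to cap VRAM use.
frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```

Together, these are the tricks the corrected sentence says should bring peak memory under roughly 8GB of VRAM.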

examples/community/unclip_text_interpolation.py
Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
 Tokenizer of class
 [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
 prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 text_proj ([`UnCLIPTextProjModel`]):
 Utility class to prepare and combine the embeddings before they are passed to the decoder.
 decoder ([`UNet2DConditionModel`]):

src/diffusers/pipelines/kandinsky/pipeline_kandinsky_combined.py
Lines changed: 3 additions & 3 deletions

@@ -129,7 +129,7 @@ class KandinskyCombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -346,7 +346,7 @@ class KandinskyImg2ImgCombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -586,7 +586,7 @@ class KandinskyInpaintCombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):
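
The `prior_prior` docstring corrected in all three hunks belongs to the combined pipelines, which run that unCLIP prior internally and hand its image embedding to the decoder. As a hedged illustration of what that means in practice (the prompt, step count, and filename below are made up for the example), a single text-to-image call is all a combined pipeline needs:

```py
import torch
from diffusers import AutoPipelineForText2Image

# The combined Kandinsky pipeline chains the unCLIP prior (text -> image
# embedding) with the decoder, so no separate prior call is required.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

image = pipe("A lion reading a book, 4k photo", num_inference_steps=25).images[0]
image.save("lion.png")
```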

src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
Lines changed: 1 addition & 1 deletion

@@ -134,7 +134,7 @@ class KandinskyPriorPipeline(DiffusionPipeline):
 
 Args:
 prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 text_encoder ([`CLIPTextModelWithProjection`]):

src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_combined.py
Lines changed: 3 additions & 3 deletions

@@ -119,7 +119,7 @@ class KandinskyV22CombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -346,7 +346,7 @@ class KandinskyV22Img2ImgCombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):
@@ -594,7 +594,7 @@ class KandinskyV22InpaintCombinedPipeline(DiffusionPipeline):
 movq ([`VQModel`]):
 MoVQ Decoder to generate the image from the latents.
 prior_prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 prior_image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 prior_text_encoder ([`CLIPTextModelWithProjection`]):

src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py
Lines changed: 1 addition & 1 deletion

@@ -90,7 +90,7 @@ class KandinskyV22PriorPipeline(DiffusionPipeline):
 
 Args:
 prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 text_encoder ([`CLIPTextModelWithProjection`]):
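
For reference, the corrected sentence describes what this prior does in the two-stage Kandinsky 2.2 workflow: it maps a text prompt to an approximate CLIP image embedding, which the decoder pipeline then turns into pixels. A minimal sketch of that split, assuming the standard checkpoints (the prompt and output size are illustrative):

```py
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

# Stage 1: the prior approximates a CLIP image embedding from the text prompt.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_embeds, negative_image_embeds = prior(
    "A portrait of a corgi wearing a top hat"
).to_tuple()

# Stage 2: the decoder generates the image from those embeddings.
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
image.save("corgi.png")
```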

src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py
Lines changed: 1 addition & 1 deletion

@@ -108,7 +108,7 @@ class KandinskyV22PriorEmb2EmbPipeline(DiffusionPipeline):
 
 Args:
 prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 image_encoder ([`CLIPVisionModelWithProjection`]):
 Frozen image-encoder.
 text_encoder ([`CLIPTextModelWithProjection`]):

src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
Lines changed: 1 addition & 1 deletion

@@ -86,7 +86,7 @@ class ShapEImg2ImgPipeline(DiffusionPipeline):
 
 Args:
 prior ([`PriorTransformer`]):
-The canonincal unCLIP prior to approximate the image embedding from the text embedding.
+The canonical unCLIP prior to approximate the image embedding from the text embedding.
 image_encoder ([`~transformers.CLIPVisionModel`]):
 Frozen image-encoder.
 image_processor ([`~transformers.CLIPImageProcessor`]):
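
The same prior/decoder split applies to Shap-E image-to-3D: the prior documented here maps the CLIP embedding of an input picture to the parameters of a 3D asset. A hedged usage sketch, assuming the public `openai/shap-e-img2img` checkpoint (the input filename, guidance scale, and frame size are illustrative):

```py
import torch
from diffusers import ShapEImg2ImgPipeline
from diffusers.utils import export_to_gif, load_image

pipe = ShapEImg2ImgPipeline.from_pretrained(
    "openai/shap-e-img2img", torch_dtype=torch.float16
).to("cuda")

# Illustrative input: a photo of a single object on a plain background.
image = load_image("corgi.png")

# Each output element is a list of rendered frames around the object.
images = pipe(image, guidance_scale=3.0, num_inference_steps=64, frame_size=256).images
export_to_gif(images[0], "corgi_3d.gif")
```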

src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
Lines changed: 2 additions & 2 deletions

@@ -700,8 +700,8 @@ def __call__(
 >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
 >>> init_image = Image.open(requests.get(url, stream=True).raw)
 >>> prompt = "two tigers"
->>> n_propmt = "bad, deformed, ugly, bad anotomy"
->>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0]
+>>> n_prompt = "bad, deformed, ugly, bad anotomy"
+>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
 ```
 
 Returns:
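
The corrected docstring example assumes `pipe` has already been built as a depth-conditioned image-to-image pipeline; a minimal sketch of that setup, assuming the standard `stabilityai/stable-diffusion-2-depth` checkpoint:

```py
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# Depth-conditioned img2img: the pipeline estimates a depth map from the
# input image and uses it to preserve structure while restyling.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
image.save("tigers.png")
```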

src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
Lines changed: 1 addition & 1 deletion

@@ -194,7 +194,7 @@ def __call__(
 A higher guidance scale value encourages the model to generate images closely linked to the text
 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
 image_guidance_scale (`float`, *optional*, defaults to 1.5):
-Push the generated image towards the inital `image`. Image guidance scale is enabled by setting
+Push the generated image towards the initial `image`. Image guidance scale is enabled by setting
 `image_guidance_scale > 1`. Higher image guidance scale encourages generated images that are closely
 linked to the source `image`, usually at the expense of lower image quality. This pipeline requires a
 value of at least `1`.
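
To make the documented trade-off concrete, here is a hedged sketch of an InstructPix2Pix call that exercises `image_guidance_scale`, assuming the public `timbrooks/instruct-pix2pix` checkpoint (the instruction, input filename, and step count are illustrative):

```py
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("mountain.png")  # illustrative input image

edited = pipe(
    "make it snowy",
    image=image,
    num_inference_steps=20,
    guidance_scale=7.5,
    image_guidance_scale=1.5,  # > 1 keeps the result close to the source image
).images[0]
edited.save("mountain_snowy.png")
```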

0 commit comments
