Commit 76d492e

shirayu and anton-l authored
Fix typos and add Typo check GitHub Action (huggingface#483)
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to manual typo check currently. Ref: huggingface#483 (review)
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN

Co-authored-by: Anton Lozhkov <[email protected]>
1 parent c049372 commit 76d492e

38 files changed (+92, -66 lines)

.github/workflows/typos.yml

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
+name: Check typos
+
+on:
+  workflow_dispatch:
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+
+      - name: typos-action
+        uses: crate-ci/[email protected]
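The `workflow_dispatch` trigger above means the check only runs when started manually, matching the commit note "Changed to manual typo check currently". If the project later wants the check to run automatically, the trigger stanza could be extended along these lines (a hypothetical sketch, not part of this commit):

```yaml
# Hypothetical extension, NOT part of this commit: also run the
# typo check on pushes to main and on every pull request.
on:
  workflow_dispatch:
  push:
    branches: [main]
  pull_request:
```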

README.md

Lines changed: 4 additions & 4 deletions
@@ -21,7 +21,7 @@ as a modular toolbox for inference and training of diffusion models.
 More precisely, 🤗 Diffusers offers:
 
 - State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers.
-- Various noise schedulers that can be used interchangeably for the prefered speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
+- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
 - Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
 - Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).

@@ -297,7 +297,7 @@ with autocast("cuda"):
     image.save("ddpm_generated_image.png")
 ```
 - [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
-- [Unconditional Diffusion with continous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
+- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
 
 **Other Notebooks**:
 * [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) ![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg),

@@ -346,8 +346,8 @@ The class provides functionality to compute previous image according to alpha, b
 
 ## Philosophy
 
-- Readability and clarity is prefered over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
-- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continous outputs**, *e.g.* vision and audio.
+- Readability and clarity is preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
+- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
 - Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
 
 ## In the works

_typos.toml

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
+# Files for typos
+# Instruction: https://github.com/marketplace/actions/typos-action#getting-started
+
+[default.extend-identifiers]
+
+[default.extend-words]
+NIN_="NIN" # NIN is used in scripts/convert_ncsnpp_original_checkpoint_to_diffusers.py
+nd="np" # nd may be np (numpy)
+
+
+[files]
+extend-exclude = ["_typos.toml"]
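The `[default.extend-words]` table above maps a flagged token to the spelling the checker should accept or substitute (an identity mapping like `NIN_="NIN"` effectively whitelists an identifier). As a toy illustration of how such a typo-to-correction table drives whole-word replacement — this is only a sketch of the idea, not how the `typos` tool is actually implemented:

```python
import re

# Toy typo -> correction table, in the spirit of [default.extend-words].
# The entries are real typos fixed in this commit.
extend_words = {
    "prefered": "preferred",
    "continous": "continuous",
    "scipts": "scripts",
}

def fix_typos(text: str) -> str:
    # Replace whole-word occurrences only, so short identifiers
    # embedded in longer tokens are left untouched.
    for typo, fix in extend_words.items():
        text = re.sub(rf"\b{re.escape(typo)}\b", fix, text)
    return text

print(fix_typos("Before running the scipts, pick the prefered scheduler."))
# → Before running the scripts, pick the preferred scheduler.
```

In the real tool the table also suppresses false positives, which is why `_typos.toml` itself is excluded via `files.extend-exclude` — otherwise the checker would flag its own entries.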

docs/source/api/schedulers.mdx

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ To this end, the design of schedulers is such that:
 The core API for any new scheduler must follow a limited structure.
 - Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
 - Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
-- Schedulers should be framework-agonstic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
+- Schedulers should be framework-agnostic, but provide a simple functionality to convert the scheduler into a specific framework, such as PyTorch
 with a `set_format(...)` method.
 
 The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.

@@ -53,7 +53,7 @@ The base class [`SchedulerMixin`] implements low level utilities used by multipl
 [[autodoc]] SchedulerMixin
 
 ### SchedulerOutput
-The class [`SchedulerOutput`] contains the ouputs from any schedulers `step(...)` call.
+The class [`SchedulerOutput`] contains the outputs from any schedulers `step(...)` call.
 
 [[autodoc]] schedulers.scheduling_utils.SchedulerOutput
 

@@ -71,7 +71,7 @@ Original paper can be found [here](https://arxiv.org/abs/2010.02502).
 
 [[autodoc]] DDPMScheduler
 
-#### Varience exploding, stochastic sampling from Karras et. al
+#### Variance exploding, stochastic sampling from Karras et. al
 
 Original paper can be found [here](https://arxiv.org/abs/2006.11239).

docs/source/quicktour.mdx

Lines changed: 4 additions & 4 deletions
@@ -86,19 +86,19 @@ just like we did before only that now you need to pass your `AUTH_TOKEN`:
 >>> generator = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_auth_token=AUTH_TOKEN)
 ```
 
-If you do not pass your authentification token you will see that the diffusion system will not be correctly
-downloaded. Forcing the user to pass an authentification token ensures that it can be verified that the
+If you do not pass your authentication token you will see that the diffusion system will not be correctly
+downloaded. Forcing the user to pass an authentication token ensures that it can be verified that the
 user has indeed read and accepted the license, which also means that an internet connection is required.
 
-**Note**: If you do not want to be forced to pass an authentification token, you can also simply download
+**Note**: If you do not want to be forced to pass an authentication token, you can also simply download
 the weights locally via:
 
 ```
 git lfs install
 git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
 ```
 
-and then load locally saved weights into the pipeline. This way, you do not need to pass an authentification
+and then load locally saved weights into the pipeline. This way, you do not need to pass an authentication
 token. Assuming that `"./stable-diffusion-v1-4"` is the local path to the cloned stable-diffusion-v1-4 repo,
 you can also load the pipeline as follows:

docs/source/training/text_inversion.mdx

Lines changed: 2 additions & 2 deletions
@@ -49,7 +49,7 @@ The `textual_inversion.py` script [here](https://github.com/huggingface/diffuser
 
 ### Installing the dependencies
 
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 
 ```bash
 pip install diffusers[training] accelerate transformers

@@ -68,7 +68,7 @@ You need to accept the model license before downloading or using the weights. In
 
 You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
 
-Run the following command to autheticate your token
+Run the following command to authenticate your token
 
 ```bash
 huggingface-cli login

docs/source/training/unconditional_training.mdx

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ distribution.
 
 ## Installing the dependencies
 
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 
 ```bash
 pip install diffusers[training] accelerate datasets

@@ -117,7 +117,7 @@ from datasets import load_dataset
 # example 1: local folder
 dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
 
-# example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd)
+# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
 dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
 
 # example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)

examples/textual_inversion/README.md

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ Colab for inference
 ## Running locally
 ### Installing the dependencies
 
-Before running the scipts, make sure to install the library's training dependencies:
+Before running the scripts, make sure to install the library's training dependencies:
 
 ```bash
 pip install diffusers[training] accelerate transformers

@@ -33,7 +33,7 @@ You need to accept the model license before downloading or using the weights. In
 
 You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
 
-Run the following command to autheticate your token
+Run the following command to authenticate your token
 
 ```bash
 huggingface-cli login

examples/textual_inversion/textual_inversion.py

Lines changed: 1 addition & 1 deletion
@@ -422,7 +422,7 @@ def main():
         eps=args.adam_epsilon,
     )
 
-    # TODO (patil-suraj): laod scheduler using args
+    # TODO (patil-suraj): load scheduler using args
     noise_scheduler = DDPMScheduler(
         beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000, tensor_format="pt"
     )

examples/unconditional_image_generation/README.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ Creating a training image set is [described in a different document](https://hug
44

55
### Installing the dependencies
66

7-
Before running the scipts, make sure to install the library's training dependencies:
7+
Before running the scripts, make sure to install the library's training dependencies:
88

99
```bash
1010
pip install diffusers[training] accelerate datasets
@@ -102,7 +102,7 @@ from datasets import load_dataset
102102
# example 1: local folder
103103
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")
104104

105-
# example 2: local files (suppoted formats are tar, gzip, zip, xz, rar, zstd)
105+
# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd)
106106
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")
107107

108108
# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd)

0 commit comments