# Overview
🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in [diffusers/examples](https://github.com/huggingface/diffusers/tree/main/examples).

Each training script is:
- **Self-contained**: the training script does not depend on any local files, and all packages required to run the script are installed from the `requirements.txt` file.
- **Easy-to-tweak**: the training scripts are an example of how to train a diffusion model for a specific task and won't work out-of-the-box for every training scenario. You'll likely need to adapt the training script for your specific use case. To help you with that, we've fully exposed the data preprocessing code and the training loop so you can modify them for your own use.
- **Beginner-friendly**: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out.
- **Single-purpose**: each training script is expressly designed for only one task to keep it readable and understandable.

Our current collection of training scripts includes:

| Training | SDXL-support | LoRA-support | Flax-support |
|---|---|---|---|
|[unconditional image generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) ([Colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb))||||

These examples are **actively** maintained, so please feel free to open an issue if they aren't working as expected. If you feel like another training example should be included, you're more than welcome to start a [Feature Request](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=) to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose.

Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment.
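
A typical from-source install is sketched below (an assumed sequence, not verbatim from this guide; it presumes `git` and `pip` are available and a virtual environment is already activated):

```bash
# clone the repository and install diffusers from source
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```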
Then navigate to the folder of the training script (for example, [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)) and install the dependencies from its `requirements.txt` file. Some training scripts have a specific requirements file for SDXL, LoRA, or Flax. If you're using one of these scripts, make sure you install the corresponding requirements file.
```bash
cd examples/dreambooth
pip install -r requirements.txt
# to train SDXL with DreamBooth
pip install -r requirements_sdxl.txt
```
To speed up training and reduce memory usage, we recommend the following (a short sketch follows the list):
- using PyTorch 2.0 or higher to automatically use [scaled dot product attention](../optimization/torch2.0#scaled-dot-product-attention) during training (you don't need to make any changes to the training code)
- installing [xFormers](../optimization/xformers) to enable memory-efficient attention
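
As a rough sketch of both recommendations (assumed commands; the xFormers build must be compatible with your PyTorch and CUDA versions):

```bash
# check the PyTorch version; 2.0 or higher uses scaled dot product attention automatically
python -c "import torch; print(torch.__version__)"

# install xFormers for memory-efficient attention
pip install xformers
```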