README.md (15 additions & 11 deletions)
@@ -33,6 +33,21 @@ In order to get started, we recommend taking a look at two notebooks:
Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to get an understanding of each independent building block in the library.
- The [Training diffusers](https://colab.research.google.com/gist/anton-l/cde0c3643e991ad7dbc01939865acaf4/diffusers_training_example.ipynb) notebook summarizes diffuser model training methods. This notebook takes a step-by-step approach to training your
diffuser model on an image dataset, with explanatory graphics.
+
+## Examples
+
+If you want to run the code yourself 💻, you can try out:
+| Text-to-Image Latent Diffusion |[](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion)|
+| Faces generator |[](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion)|
+| DDPM with different schedulers |[](https://huggingface.co/spaces/fusing/celeba-diffusion)|

## Definitions
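As a side note on the pipeline abstraction mentioned in the notebook list above, here is a minimal sketch of what "takes care of everything for you" looks like in practice. It assumes a recent diffusers release (the API has evolved since the 0.1.2 pin referenced later in this diff) and uses the public `google/ddpm-celebahq-256` checkpoint purely as an illustrative model id.

```python
# Minimal sketch, assuming a recent diffusers release (not the 0.1.2 API
# referenced in this diff). The checkpoint id is an illustrative public example.
from diffusers import DDPMPipeline

# The pipeline bundles the model, the scheduler, and the denoising loop,
# so a single call goes from random noise to a finished image.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
image = pipeline().images[0]  # a PIL.Image
image.save("ddpm_generated_image.png")
```

The point of the abstraction is that model, scheduler, and noise handling sit behind one call, which is exactly what the notebook above then unpacks into its independent building blocks.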
@@ -77,18 +92,7 @@ The class provides functionality to compute previous image according to alpha, b
pip install diffusers # should install diffusers 0.1.2
```

-## Examples
-
-If you want to run the code yourself 💻, you can try out:
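The context line of the second hunk refers to a class that computes the previous image according to the alpha/beta values of the noise schedule; in diffusers this is the role of a scheduler's `step()` method. The sketch below shows the corresponding denoising loop, again assuming a recent diffusers API rather than the 0.1.2 pin above, with `google/ddpm-cat-256` as an illustrative checkpoint.

```python
# Hedged sketch of the scheduler's role, assuming a recent diffusers release;
# the checkpoint id is illustrative. At each timestep the model predicts the
# noise residual and scheduler.step() uses the alphas/betas of the noise
# schedule to compute the previous (less noisy) sample x_{t-1}.
import torch
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, model.config.sample_size, model.config.sample_size)
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predicted noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # x_t -> x_{t-1}
```

Swapping `DDPMScheduler` for another scheduler class is what the "DDPM with different schedulers" Space in the table above lets you try interactively.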