Commit 71faf34

Update README.md
1 parent 2f1f7b0 commit 71faf34

1 file changed: +22 −72 lines changed

README.md

Lines changed: 22 additions & 72 deletions
@@ -25,6 +25,15 @@ More precisely, 🤗 Diffusers offers:
 - Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
 - Training examples to show how to train the most popular diffusion models (see [examples](https://github.com/huggingface/diffusers/tree/main/examples)).
 
+## Quickstart
+
+In order to get started, we recommend taking a look at two notebooks:
+
+- The [Diffusers](https://github.com/patrickvonplaten/notebooks/blob/master/Diffusers.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers, and pipelines.
+  Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to get an understanding of each independent building block in the library.
+- The [Training diffusers](https://colab.research.google.com/gist/anton-l/cde0c3643e991ad7dbc01939865acaf4/diffusers_training_example.ipynb) notebook, which summarizes diffusion model training methods. This notebook takes a step-by-step approach to training your
+  diffusion model on an image dataset, with explanatory graphics.
+
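The pipeline abstraction described in the notebook bullet above wires three things together: a model that predicts noise, a scheduler that turns that prediction into a denoising step, and the noise handling around them. A toy sketch of that composition follows; `ToyModel`, `ToyScheduler`, and `ToyPipeline` are made-up stand-ins to show the structure, not the actual `diffusers` classes:

```python
import numpy as np

class ToyModel:
    """Stand-in for a denoising network: predicts the noise in its input."""
    def __call__(self, x, t):
        # A real UNet would predict the noise residual from x and t;
        # here we just pretend the whole input is noise.
        return x

class ToyScheduler:
    """Stand-in for a scheduler: removes part of the predicted noise each step."""
    def __init__(self, num_steps):
        self.timesteps = list(range(num_steps))[::-1]  # T-1, ..., 0

    def step(self, noise_pred, t, x):
        # Shrink the sample toward zero; a real scheduler uses the
        # alpha/beta schedule here.
        return x - noise_pred / (t + 2)

class ToyPipeline:
    """Stand-in for the pipeline abstraction: model + scheduler + noise handling."""
    def __init__(self, model, scheduler):
        self.model = model
        self.scheduler = scheduler

    def __call__(self, shape, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(shape)                 # start from pure noise
        for t in self.scheduler.timesteps:
            noise_pred = self.model(x, t)              # predict noise residual
            x = self.scheduler.step(noise_pred, t, x)  # x_t -> x_{t-1}
        return x

pipe = ToyPipeline(ToyModel(), ToyScheduler(num_steps=50))
sample = pipe((1, 3, 8, 8))
print(sample.shape)  # (1, 3, 8, 8)
```

The user-facing call mirrors the real library's shape: construct a pipeline once from a model and a scheduler, then call it to run the full denoising loop.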
 ## Definitions
 
 **Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
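For concreteness, in the DDPM parameterization (Ho et al., 2020) this learned transition is a Gaussian whose mean is computed from the network's noise prediction $\epsilon_\theta$:

```latex
p_\theta(\mathbf{x}_{t-1}\mid\mathbf{x}_t)
  = \mathcal{N}\!\left(\mathbf{x}_{t-1};\; \mu_\theta(\mathbf{x}_t, t),\; \sigma_t^2 \mathbf{I}\right),
\qquad
\mu_\theta(\mathbf{x}_t, t)
  = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t
      - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(\mathbf{x}_t, t)\right)
```

with $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$ following the $\beta_t$ noise schedule; this is the standard formulation, not text from the README itself.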
@@ -62,83 +71,24 @@ The class provides functionality to compute previous image according to alpha, b
 - Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
 - Diffusion models and schedulers are provided as concise, elementary building blocks, whereas diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation, and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
 
-## Quickstart
-
-In order to get started, we recommend taking a look at two notebooks:
-
-- The [Diffusers](https://colab.research.google.com/drive/1aEFVu0CvcIBzSNIQ7F71ujYYplAX4Bml?usp=sharing#scrollTo=PzW5ublpBuUt) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers, and pipelines.
-  Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to get an understanding of each independent building block in the library.
-- The [Training diffusers](https://colab.research.google.com/drive/1qqJmz7JJ04suJzEF4Hn4-Acb8rfL-eA3?usp=sharing) notebook, which summarizes diffusion model training methods. This notebook takes a step-by-step approach to training your
-  diffusion model on an image dataset, with explanatory graphics.
-
-### Installation
+## Installation
 
 ```
-pip install diffusers  # should install diffusers 0.0.4
+pip install diffusers  # should install diffusers 0.1.2
 ```
 
-### 1. `diffusers` as a toolbox for schedulers and models
-
-`diffusers` is more modularized than `transformers`. The idea is that researchers and engineers can easily use only parts of the library for their own use cases.
-It could become a central place for all kinds of models, schedulers, training utilities, and processors that one can mix and match for one's own use case.
-Both models and schedulers should be loadable and saveable from the Hub.
+## Examples
 
-For more examples see [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) and [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models).
-
-#### **Example for Unconditional Image generation with [DDPM](https://arxiv.org/abs/2006.11239):**
-
-```python
-import torch
-from diffusers import UNet2DModel, DDIMScheduler
-import PIL.Image
-import numpy as np
-import tqdm
-
-torch_device = "cuda" if torch.cuda.is_available() else "cpu"
-
-# 1. Load models
-scheduler = DDIMScheduler.from_config("fusing/ddpm-celeba-hq", tensor_format="pt")
-unet = UNet2DModel.from_pretrained("fusing/ddpm-celeba-hq", ddpm=True).to(torch_device)
-
-# 2. Sample Gaussian noise
-generator = torch.manual_seed(23)
-unet.image_size = unet.resolution
-image = torch.randn(
-    (1, unet.in_channels, unet.image_size, unet.image_size),
-    generator=generator,
-)
-image = image.to(torch_device)
-
-# 3. Denoise
-num_inference_steps = 50
-eta = 0.0  # <- deterministic sampling
-scheduler.set_timesteps(num_inference_steps)
-
-for t in tqdm.tqdm(scheduler.timesteps):
-    # predict the noise residual
-    with torch.no_grad():
-        residual = unet(image, t)["sample"]
-
-    # compute the previous noisy sample and set x_t -> x_{t-1}
-    prev_image = scheduler.step(residual, t, image, eta)["prev_sample"]
-    image = prev_image
-
-# 4. Process image to PIL
-image_processed = image.cpu().permute(0, 2, 3, 1)
-image_processed = (image_processed + 1.0) * 127.5
-image_processed = image_processed.numpy().astype(np.uint8)
-image_pil = PIL.Image.fromarray(image_processed[0])
-
-# 5. Save image
-image_pil.save("generated_image.png")
-```
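The `eta = 0.0` flag in the removed loop above selects deterministic DDIM sampling. What a DDIM-style `step` computes can be sketched in NumPy following the DDIM paper (Song et al., 2020); this is an illustration of the update equation, not the actual `diffusers` scheduler implementation:

```python
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev, eta, rng):
    """One DDIM update x_t -> x_{t-1}.

    eta scales the stochastic term: eta=0 is fully deterministic,
    eta=1 recovers DDPM-like stochastic sampling.
    """
    # predicted clean sample x_0 from the noise prediction
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    # standard deviation of the stochastic part, scaled by eta
    sigma = eta * np.sqrt((1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t)) \
                * np.sqrt(1.0 - alpha_bar_t / alpha_bar_prev)
    # direction pointing back toward x_t, plus optional fresh noise
    dir_xt = np.sqrt(1.0 - alpha_bar_prev - sigma**2) * eps_pred
    noise = sigma * rng.standard_normal(x_t.shape)
    return np.sqrt(alpha_bar_prev) * x0_pred + dir_xt + noise

rng = np.random.default_rng(0)
x_t = rng.standard_normal((1, 3, 4, 4))
eps = rng.standard_normal((1, 3, 4, 4))

# eta=0: the noise term vanishes, so different RNGs give identical results
det_a = ddim_step(x_t, eps, 0.5, 0.8, eta=0.0, rng=np.random.default_rng(1))
det_b = ddim_step(x_t, eps, 0.5, 0.8, eta=0.0, rng=np.random.default_rng(2))
print(np.allclose(det_a, det_b))

# eta=1: the noise term is active, so different RNGs give different results
sto_a = ddim_step(x_t, eps, 0.5, 0.8, eta=1.0, rng=np.random.default_rng(1))
sto_b = ddim_step(x_t, eps, 0.5, 0.8, eta=1.0, rng=np.random.default_rng(2))
print(np.allclose(sto_a, sto_b))
```

The `alpha_bar` values here are arbitrary placeholders; a real scheduler derives them from its beta schedule and the chosen timesteps.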
-
-#### **Example for Unconditional Image generation with [LDM](https://github.com/CompVis/latent-diffusion):**
-
-```python
-```
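The post-processing at the end of the removed DDPM example maps network outputs from [-1, 1] to 8-bit pixels before handing them to PIL. A minimal NumPy sketch of that mapping (with a `clip` added for safety, which the original snippet omitted):

```python
import numpy as np

def to_uint8(image):
    """Map a (N, C, H, W) array in [-1, 1] to (N, H, W, C) uint8 pixels."""
    image = np.transpose(image, (0, 2, 3, 1))  # channels-last for image libraries
    image = (image + 1.0) * 127.5              # [-1, 1] -> [0, 255]
    return np.clip(image, 0, 255).astype(np.uint8)

batch = np.array([[[[-1.0, 0.0], [0.5, 1.0]]]])  # shape (1, 1, 2, 2)
pixels = to_uint8(batch)
print(pixels[0, ..., 0])  # [[  0 127] [191 255]]
```

Note that `astype(np.uint8)` truncates rather than rounds (0.0 maps to 127, not 128), matching the arithmetic of the removed example.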
+If you want to run the code yourself 💻, you can try out:
+- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256#usage)
+- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256#inference-with-an-unrolled-loop)
+- [Unconditional Diffusion with a discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)
+- [Unconditional Diffusion with a continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
 
+If you just want to play around with some models, you can try out the following 🚀 Spaces:
+- [Text-to-Image Latent Diffusion](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion)
+- [Faces generator](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion)
+- [DDPM with different schedulers](https://huggingface.co/spaces/fusing/celeba-diffusion)
 
 ## In the works
 
@@ -166,4 +116,4 @@ This library concretizes previous work by many different authors and would not h
 - @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim).
 - @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch).
 
-We also want to thank @heejkoo for the very helpful overview of papers, code, and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models).
+We also want to thank @heejkoo for the very helpful overview of papers, code, and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models), as well as @crowsonkb and @rromb for useful discussions and insights.
