
Commit 459b8ca

Research folder (huggingface#1553)
* Research folder
* Update examples/research_projects/README.md
* up
1 parent bce65cd commit 459b8ca

File tree

6 files changed: +49 −26 lines changed

examples/dreambooth/README.md

+1 −25
@@ -312,30 +312,6 @@ python train_dreambooth_flax.py \
   --max_train_steps=800
 ```
 
-## Dreambooth for the inpainting model
-
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-inpainting"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth_inpaint.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --instance_data_dir=$INSTANCE_DIR \
-  --output_dir=$OUTPUT_DIR \
-  --instance_prompt="a photo of sks dog" \
-  --resolution=512 \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=1 \
-  --learning_rate=5e-6 \
-  --lr_scheduler="constant" \
-  --lr_warmup_steps=0 \
-  --max_train_steps=400
-```
-
-The script is also compatible with prior preservation loss and gradient checkpointing
-
 ### Training with prior-preservation loss
 
 Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data.
@@ -428,4 +404,4 @@ accelerate launch train_dreambooth_inpaint.py \
   --lr_warmup_steps=0 \
   --num_class_images=200 \
   --max_train_steps=800
-```
+```

examples/research_projects/README.md

+14
@@ -0,0 +1,14 @@
+# Research projects
+
+This folder contains various research projects using 🧨 Diffusers.
+They are not really maintained by the core maintainers of this library and often require a specific version of Diffusers that is indicated in the requirements file of each folder.
+Updating them to the most recent version of the library will require some work.
+
+To use any of them, just run the command
+
+```
+pip install -r requirements.txt
+```
+inside the folder of your choice.
+
+If you need help with any of those, please open an issue where you directly ping the author(s), as indicated at the top of the README of each folder.
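Concretely, using one of these projects means changing into its folder first and then installing its pinned requirements; a minimal sketch follows, with a placeholder folder name since each project lives in its own subfolder:

```bash
# Sketch: set up one research project. <project-folder> is a placeholder for
# whichever subfolder of examples/research_projects you want to run.
cd examples/research_projects/<project-folder>
pip install -r requirements.txt
```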
@@ -0,0 +1,26 @@
+# Dreambooth for the inpainting model
+
+This script was added by @thedarkzeno .
+
+Please note that this script is not actively maintained, you can open an issue and tag @thedarkzeno or @patil-suraj though.
+
+```bash
+export MODEL_NAME="runwayml/stable-diffusion-inpainting"
+export INSTANCE_DIR="path-to-instance-images"
+export OUTPUT_DIR="path-to-save-model"
+
+accelerate launch train_dreambooth_inpaint.py \
+  --pretrained_model_name_or_path=$MODEL_NAME \
+  --instance_data_dir=$INSTANCE_DIR \
+  --output_dir=$OUTPUT_DIR \
+  --instance_prompt="a photo of sks dog" \
+  --resolution=512 \
+  --train_batch_size=1 \
+  --gradient_accumulation_steps=1 \
+  --learning_rate=5e-6 \
+  --lr_scheduler="constant" \
+  --lr_warmup_steps=0 \
+  --max_train_steps=400
+```
+
+The script is also compatible with prior preservation loss and gradient checkpointing
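The README stops short of showing those two options; a minimal sketch of a prior-preservation run with gradient checkpointing is below, assuming `train_dreambooth_inpaint.py` accepts the same `--with_prior_preservation`, `--prior_loss_weight`, `--class_data_dir`, `--class_prompt`, `--num_class_images`, and `--gradient_checkpointing` flags as the main DreamBooth script (check the script's argument parser before relying on them):

```bash
# Sketch only: flag names assume parity with the main DreamBooth training script,
# and the class-image path and class prompt are placeholders.
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```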
@@ -0,0 +1,7 @@
+diffusers==0.9.0
+accelerate
+torchvision
+transformers>=4.21.0
+ftfy
+tensorboard
+modelcards

src/diffusers/pipelines/versatile_diffusion/modeling_text_unet.py

+1 −1
@@ -314,7 +314,7 @@ def set_attention_slice(self, slice_size):
 in several steps. This is useful to save some memory in exchange for a small speed decrease.
 
 Args:
-    slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
+    slice_size (`str` or `int` or `list(int)`, *optional*, defaults to `"auto"`):
         When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
         `"max"`, maxium amount of memory will be saved by running only one slice at a time. If a number is
         provided, uses as many slices as `attention_head_dim // slice_size`. In this case, `attention_head_dim`
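In practice this slicing is usually switched on from a pipeline rather than by calling `set_attention_slice` on the UNet directly; a minimal sketch follows, shown with `StableDiffusionPipeline` (the model id, dtype, and prompt are illustrative and not part of this commit), where `enable_attention_slicing` passes the slice size down to the UNet:

```python
# Minimal sketch (not part of this commit): enabling attention slicing to trade
# a small amount of speed for lower memory use during inference.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint and dtype; any Stable Diffusion checkpoint behaves the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# "auto" computes attention in two steps; "max" saves the most memory by running
# one slice at a time; an integer N uses attention_head_dim // N slices.
pipe.enable_attention_slicing("auto")

image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```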

0 commit comments
