Commit 73bf5a1

Update README_flux.md

1 parent 464374f commit 73bf5a1

1 file changed: examples/dreambooth/README_flux.md

Lines changed: 9 additions & 4 deletions
@@ -19,6 +19,7 @@ The `train_dreambooth_flux.py` script shows how to implement the training proced
> As the model is gated, before using it with diffusers you first need to go to the [FLUX.1 [dev] Hugging Face page](https://huggingface.co/black-forest-labs/FLUX.1-dev), fill in the form and accept the gate. Once you are in, you need to log in so that your system knows you’ve accepted the gate. Use the command below to log in:

```bash
+ git config --global credential.helper store
huggingface-cli login
```

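For reference, the same login step can also be performed from Python rather than the shell; a minimal sketch, assuming `huggingface_hub` is installed (illustrative, not part of this commit or the README):

```python
# Illustrative sketch: programmatic login with huggingface_hub,
# equivalent to running `huggingface-cli login` in a terminal.
from huggingface_hub import login

login()  # prompts for a User Access Token with read access to the gated repo
```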
@@ -72,6 +73,12 @@ Note also that we use PEFT library as backend for LoRA training, make sure to ha

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

+ ```bash
+ git clone https://huggingface.co/datasets/diffusers/dog-example dog
+ cd dog
+ rm -rf .git*
+ ```
+
Let's first download it locally:

```python
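For reference, the local download mentioned next ("Let's first download it locally") can be done with `huggingface_hub.snapshot_download`; a minimal sketch, assuming the package is installed (illustrative, not necessarily the exact block the README uses):

```python
# Illustrative sketch: download the dataset repo into ./dog with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = "./dog"  # assumed target directory for this example
snapshot_download(
    "diffusers/dog-example",
    repo_type="dataset",
    local_dir=local_dir,
    ignore_patterns=".gitattributes",  # skip repo metadata, mirroring `rm -rf .git*`
)
```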
@@ -106,14 +113,12 @@ accelerate launch train_dreambooth_flux.py \
--gradient_accumulation_steps=4 \
--optimizer="prodigy" \
--learning_rate=1. \
- --report_to="wandb" \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--validation_prompt="A photo of sks dog in a bucket" \
--validation_epochs=25 \
- --seed="0" \
- --push_to_hub
+ --seed="0"
```

To better track our training experiments, we're using the following flags in the command above:
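For context on the `--optimizer="prodigy"` / `--learning_rate=1.` pairing above: Prodigy adapts its effective step size during training, which is why a learning rate of 1.0 is the usual starting point. A minimal sketch, assuming the `prodigyopt` package is installed (illustrative only; the training script constructs the optimizer itself):

```python
import torch
from prodigyopt import Prodigy  # assumes `pip install prodigyopt`

# Toy model standing in for the LoRA parameters being trained.
model = torch.nn.Linear(16, 16)

# Prodigy estimates the step size on the fly, so lr=1.0 acts as a multiplier
# rather than an absolute learning rate.
optimizer = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01)

for _ in range(3):
    loss = model(torch.randn(4, 16)).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```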
@@ -244,4 +249,4 @@ By default, trained transformer layers are saved in the precision dtype in which
This reduces memory requirements significantly w/o a significant quality loss. Note that if you do wish to save the final layers in float32 at the expense of more memory usage, you can do so by passing `--upcast_before_saving`.

## Other notes
- Thanks to `bghira` and `ostris` for their help with reviewing & insight sharing ♥️
+ Thanks to `bghira` and `ostris` for their help with reviewing & insight sharing ♥️
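To make the memory trade-off behind `--upcast_before_saving` concrete, a tiny illustrative check of how much larger a tensor becomes when upcast from bfloat16 to float32 (illustrative numbers, not taken from this commit):

```python
import torch

# A stand-in tensor for a trained LoRA weight.
w_bf16 = torch.randn(64, 64, dtype=torch.bfloat16)
w_fp32 = w_bf16.float()

print(w_bf16.nelement() * w_bf16.element_size())  # 8192 bytes (2 bytes/element)
print(w_fp32.nelement() * w_fp32.element_size())  # 16384 bytes (4 bytes/element)
```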
