Prior preservation is used to avoid overfitting and language drift. Refer to the paper to learn more about it. For prior preservation, we first generate images using the model with a class prompt and then use those images during training along with our data.
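The training script can generate these class images for you when prior preservation is enabled. As a rough sketch of the idea (not the script's exact implementation), the class images are simply sampled from the base model with the class prompt; the model ID, prompt, and directory below are placeholders:

```python
from pathlib import Path

from diffusers import StableDiffusionPipeline

# Placeholders: use the base model and class prompt you are training with.
model_id = "CompVis/stable-diffusion-v1-4"
class_prompt = "a photo of a dog"
class_dir = Path("class_images")
class_dir.mkdir(exist_ok=True)

pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe.to("cuda")

# Sample class images that will be mixed into training for prior preservation.
for i in range(200):
    image = pipe(class_prompt).images[0]
    image.save(class_dir / f"{i:04d}.png")
```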
Or use the Flax implementation if you need a speedup.
### Inference
Once you have trained a model using the command above, inference can be done simply with the `StableDiffusionPipeline`. Make sure to include the `identifier` (e.g. `sks` in the example above) in your prompt.
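For example, a minimal inference sketch might look like the following; the model path and prompt are placeholders, so point `from_pretrained` at the `output_dir` you trained into:

```python
from diffusers import StableDiffusionPipeline

# Placeholder path: the `output_dir` used during training.
pipe = StableDiffusionPipeline.from_pretrained("path-to-your-trained-model")
pipe.to("cuda")

# The prompt contains the identifier (`sks`) the model was trained on.
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("dog-bucket.png")
```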
For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
___Note: The Flax example doesn't yet support features like gradient checkpointing or gradient accumulation, so to use Flax for faster training you will need a card with more than 30GB of memory.___
Before running the scripts, make sure to install the library's training dependencies:
examples/text_to_image/README.md

___Note___:
___This script is experimental. The script fine-tunes the whole model, and often the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.___
## Running locally with PyTorch
### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
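As a quick sanity check that your folder follows that format, you can load it directly with `datasets` (assuming, for illustration, images in `./train` plus a `metadata.jsonl` providing a `text` caption column):

```python
from datasets import load_dataset

# Assumes ./train contains the image files and a metadata.jsonl with a "text" column.
dataset = load_dataset("imagefolder", data_dir="./train", split="train")
print(dataset[0]["image"], dataset[0]["text"])
```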
If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
Or use the Flax implementation if you need a speedup.
For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
___Note: The Flax example doesn't yet support features like gradient checkpointing or gradient accumulation, so to use Flax for faster training you will need a card with more than 30GB of memory.___
Before running the scripts, make sure to install the library's training dependencies:
Once the training is finished, the model will be saved in the `output_dir` specified in the command. In this example it's `sd-pokemon-model`. To load the fine-tuned model for inference, just pass that path to `StableDiffusionPipeline`:
```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights saved in the training `output_dir`.
pipe = StableDiffusionPipeline.from_pretrained("sd-pokemon-model").to("cuda")
image = pipe(prompt="yoda").images[0]  # example prompt; use your own
image.save("yoda-pokemon.png")
```
examples/textual_inversion/README.md

Colab for training
Colab for inference
[Open in Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb)
## Running locally with PyTorch
### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
It should be at least 70% faster than the PyTorch script with the same configuration.
### Inference
Once you have trained a model using the command above, inference can be done simply with the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt.
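For instance, a minimal sketch (with a placeholder model path and `<cat-toy>` standing in for whatever `placeholder_token` you trained) could be:

```python
from diffusers import StableDiffusionPipeline

# Placeholder path: the output directory of the textual inversion run.
pipe = StableDiffusionPipeline.from_pretrained("path-to-your-trained-model")
pipe.to("cuda")

# Include the learned placeholder token in the prompt.
image = pipe("A <cat-toy> backpack").images[0]
image.save("cat-backpack.png")
```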
For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
Before running the scripts, make sure to install the library's training dependencies: