
Commit b83bdce

add openvino and onnx runtime SD XL documentation (huggingface#4285)
* add openvino SD XL documentation
* add onnx SD XL integration
* rephrase
* update doc
* add images
* update model
1 parent c6ae9b7 commit b83bdce

File tree: 2 files changed (+130 −18 lines)

docs/source/en/optimization/onnx.md

Lines changed: 53 additions & 10 deletions
Install 🤗 Optimum with the following command for ONNX Runtime support:

```
pip install optimum["onnxruntime"]
```

## Stable Diffusion

### Inference

To load an ONNX model and run inference with ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the ONNX format on the fly, set `export=True`.

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

If you want to export the pipeline in the ONNX format offline and later use it for inference, […]
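The elided lines cover the offline export step. A plausible sketch of it, assuming the `optimum-cli export onnx` command shown later for SD XL works the same way for SD v1.5 (the output folder name is taken from the inference snippet below):

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
```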
Then perform inference:
```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

Notice that we didn't have to specify `export=True` above.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/stable_diffusion_v1_5_ort_sail_boat.png">
</div>

You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/).

### Supported tasks

| Task             | Loading Class                        |
|------------------|--------------------------------------|
| `text-to-image`  | `ORTStableDiffusionPipeline`         |
| `image-to-image` | `ORTStableDiffusionImg2ImgPipeline`  |
| `inpaint`        | `ORTStableDiffusionInpaintPipeline`  |
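The image-to-image and inpainting classes follow the same pattern as the text-to-image example. A minimal sketch of `ORTStableDiffusionImg2ImgPipeline`, reusing the `sd_v15_onnx` export from above (the input image URL and `strength` value are illustrative, not from this commit):

```python
from io import BytesIO

import requests
from PIL import Image
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline

# Reuse the ONNX export created earlier
pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("sd_v15_onnx")

# Download an illustrative input image and resize it to the model resolution
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((512, 512))

prompt = "A fantasy landscape, trending on artstation"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
```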
## Stable Diffusion XL

### Export

To export your model to ONNX, you can use the [Optimum CLI](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) as follows:

```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```

### Inference

To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with `ORTStableDiffusionXLPipeline`:
```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipeline = ORTStableDiffusionXLPipeline.from_pretrained("sd_xl_onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

### Supported tasks

| Task             | Loading Class                          |
|------------------|----------------------------------------|
| `text-to-image`  | `ORTStableDiffusionXLPipeline`         |
| `image-to-image` | `ORTStableDiffusionXLImg2ImgPipeline`  |
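A minimal sketch of the image-to-image class, assuming the same `sd_xl_onnx` base export can be reused to refine a first text-to-image result (whether the base export supports image-to-image, and the `strength` value, are assumptions here):

```python
from optimum.onnxruntime import (
    ORTStableDiffusionXLImg2ImgPipeline,
    ORTStableDiffusionXLPipeline,
)

# Generate a first image with the text-to-image pipeline
text2img = ORTStableDiffusionXLPipeline.from_pretrained("sd_xl_onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
base_image = text2img(prompt).images[0]

# Run an image-to-image pass over the first result with the same export
img2img = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained("sd_xl_onnx")
image = img2img(prompt=prompt, image=base_image, strength=0.3).images[0]
```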
## Known Issues

- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.
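A minimal sketch of that workaround, looping over prompts one at a time instead of passing the whole list in a single call (the prompts are illustrative):

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained("sd_v15_onnx")
prompts = [
    "sailing ship in storm by Leonardo da Vinci",
    "sailing ship in storm by Rembrandt",
]

# Generate one image per call to keep memory usage bounded
images = [pipeline(prompt).images[0] for prompt in prompts]
```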

docs/source/en/optimization/open_vino.md

Lines changed: 77 additions & 8 deletions
# How to use OpenVINO for inference

🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors ([see](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) the full list of supported devices).

## Installation

Install 🤗 Optimum Intel with the following command:

```
pip install --upgrade-strategy eager optimum["openvino"]
```

The `--upgrade-strategy eager` option is needed to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is upgraded to its latest version.

## Stable Diffusion

### Inference

To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set `export=True`.

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```
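Mirroring the ONNX section, the saved export can presumably be reloaded later without `export=True`; a minimal sketch:

```python
from optimum.intel import OVStableDiffusionPipeline

# Load the OpenVINO export saved above; no on-the-fly conversion needed
pipeline = OVStableDiffusionPipeline.from_pretrained("openvino-sd-v1-5")
image = pipeline("sailing ship in storm by Rembrandt").images[0]
```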
To further speed up inference, the model can be statically reshaped:

```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512

# Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```
If you want to change any parameters such as the output height or width, you'll need to statically reshape your model once again.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/stable_diffusion_v1_5_sail_boat_rembrandt.png">
</div>

### Supported tasks

| Task             | Loading Class                       |
|------------------|-------------------------------------|
| `text-to-image`  | `OVStableDiffusionPipeline`         |
| `image-to-image` | `OVStableDiffusionImg2ImgPipeline`  |
| `inpaint`        | `OVStableDiffusionInpaintPipeline`  |

You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion).
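As a sketch of the inpainting class from the table, assuming an inpainting checkpoint such as `runwayml/stable-diffusion-inpainting` is exported (the model choice, image URLs, and prompt are illustrative, not from this commit):

```python
from io import BytesIO

import requests
from PIL import Image
from optimum.intel import OVStableDiffusionInpaintPipeline

# Export an inpainting checkpoint on the fly (illustrative model choice)
pipeline = OVStableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", export=True
)

def load_image(url):
    # Fetch an image and resize it to the model resolution
    return Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((512, 512))

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipeline(
    prompt=prompt,
    image=load_image(img_url),
    mask_image=load_image(mask_url),
).images[0]
```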
## Stable Diffusion XL

### Inference

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```
To further speed up inference, the model can be statically reshaped as shown above.
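A minimal sketch of that reshaping, continuing from the snippet above and assuming SD XL's default 1024×1024 resolution (the shape values are illustrative):

```python
# Fix the input shapes, then compile once before inference
batch_size, num_images, height, width = 1, 1, 1024, 1024

pipeline.reshape(batch_size, height, width, num_images)
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```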
You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl).

### Supported tasks

| Task             | Loading Class                         |
|------------------|---------------------------------------|
| `text-to-image`  | `OVStableDiffusionXLPipeline`         |
| `image-to-image` | `OVStableDiffusionXLImg2ImgPipeline`  |
