`docs/source/en/optimization/onnx.md` (53 additions, 10 deletions)

@@ -23,19 +23,20 @@ Install 🤗 Optimum with the following command for ONNX Runtime support:
 pip install optimum["onnxruntime"]
 ```
 
-## Stable Diffusion Inference
+## Stable Diffusion
 
-To load an ONNX model and run inference with the ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. In case you want to load
-a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
+### Inference
+
+To load an ONNX model and run inference with the ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
 
 ```python
 from optimum.onnxruntime import ORTStableDiffusionPipeline
[…]
To export your model to ONNX, you can use the [Optimum CLI](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) as follows:
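For orientation, here is a minimal sketch of what the resulting `ORTStableDiffusionPipeline` usage can look like; the checkpoint name, prompt, and output directory are illustrative assumptions rather than part of the diff.

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Illustrative checkpoint; any Stable Diffusion checkpoint on the Hub should work.
model_id = "runwayml/stable-diffusion-v1-5"

# export=True converts the PyTorch weights to ONNX on the fly; omit it when the
# repository already contains ONNX files.
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]

# Save the exported ONNX model so the conversion does not have to be repeated.
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

A directory produced this way (or by the Optimum CLI export linked above) can later be passed back to `from_pretrained` without `export=True`.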
`docs/source/en/optimization/open_vino.md` (77 additions, 8 deletions)

@@ -13,27 +13,96 @@ specific language governing permissions and limitations under the License.
 
 # How to use OpenVINO for inference
 
-🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides a Stable Diffusion pipeline compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors ([see](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) the full list of supported devices).
+🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors ([see](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) the full list of supported devices).
 
 ## Installation
 
 Install 🤗 Optimum Intel with the following command:
[…]
The `--upgrade-strategy eager` option is needed to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is upgraded to its latest version.
+
+
+## Stable Diffusion
+
+### Inference
 
 To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
 
 ```python
-from optimum.intel.openvino import OVStableDiffusionPipeline
+from optimum.intel import OVStableDiffusionPipeline
[…]
You can find more examples (such as static reshaping and model compilation) in [optimum documentation](https://huggingface.co/docs/optimum/intel/inference#export-and-inference-of-stable-diffusion-models).
+To further speed up inference, the model can be statically reshaped as shown above.
+You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl).
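As a rough illustration of the additions above, the sketch below loads a PyTorch checkpoint with on-the-fly OpenVINO export and then statically reshapes and recompiles the pipeline. The checkpoint name and image dimensions are assumptions, and the `reshape`/`compile` helpers are the ones described in the linked Optimum Intel documentation.

```python
from optimum.intel import OVStableDiffusionPipeline

# Illustrative checkpoint; export=True converts the PyTorch weights to the
# OpenVINO format on the fly.
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)

# Fix the input shapes and recompile: with static shapes OpenVINO can skip
# dynamic-shape handling, which is where the additional speed-up comes from.
batch_size, num_images, height, width = 1, 1, 512, 512
pipeline.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images)
pipeline.compile()

prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt, height=height, width=width, num_images_per_prompt=num_images).images[0]
```

After a static reshape, generation must use the fixed dimensions; reshape and compile again if a different resolution or batch size is needed.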