docs/useful_tools.md

**Note**: This tool is still experimental. Some customized operators are not supported for now.

### Evaluate ONNX model

We provide `tools/deploy_test.py` to evaluate ONNX models with different backends.

#### Prerequisite

- Install onnx and onnxruntime-gpu:

  ```bash
  pip install onnx onnxruntime-gpu
  ```

- Install TensorRT following [how-to-build-tensorrt-plugins-in-mmcv](https://mmcv.readthedocs.io/en/latest/tensorrt_plugin.html#how-to-build-tensorrt-plugins-in-mmcv) (optional)
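
To quickly confirm that the GPU build of onnxruntime is usable, you can list its execution providers (a minimal sanity check, assuming a CUDA-capable machine; `CUDAExecutionProvider` only appears when `onnxruntime-gpu` is installed correctly):

```bash
# List the execution providers this onnxruntime build supports.
# Expect 'CUDAExecutionProvider' in the output for a working GPU install.
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```
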
#### Usage

```bash
python tools/deploy_test.py \
    ${CONFIG_FILE} \
    ${MODEL_FILE} \
    ${BACKEND} \
    --out ${OUTPUT_FILE} \
    --eval ${EVALUATION_METRICS} \
    --show \
    ...
```

Description of all arguments:

- `config`: The path of a model config file.
- `model`: The path of a converted model file.
- `backend`: Backend of the inference, options: `onnxruntime`, `tensorrt`.
- `--out`: The path of the output result file in pickle format.
- `--format-only`: Format the output results without performing evaluation. It is useful when you want to format the results to a specific format and submit them to the test server. If not specified, it will be set to `False`. Note that this argument is **mutually exclusive** with `--eval`.
- `--eval`: Evaluation metrics, which depend on the dataset, e.g., "mIoU" for generic datasets, and "cityscapes" for Cityscapes. Note that this argument is **mutually exclusive** with `--format-only`.
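
For example, a run against the ONNXRuntime backend might look like the following (the config path, model file name, and metric are illustrative placeholders, not values taken from this change):

```bash
# Hypothetical invocation: evaluate a converted model with ONNXRuntime.
# Substitute your own config, model file, and evaluation metric.
python tools/deploy_test.py \
    configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    pspnet_cityscapes.onnx \
    onnxruntime \
    --out results.pkl \
    --eval mIoU
```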