
Commit e061888

Update unet readme
1 parent e94e4ed commit e061888

File tree

1 file changed (+25 -23 lines)


unet/README.md

Lines changed: 25 additions & 23 deletions
@@ -1,54 +1,57 @@
-# tensorrt-unet
-This is a TensorRT version Unet, inspired by [tensorrtx](https://github.com/wang-xinyu/tensorrtx) and [pytorch-unet](https://github.com/milesial/Pytorch-UNet).<br>
+# UNet
+This is a TensorRT version UNet, inspired by [tensorrtx](https://github.com/wang-xinyu/tensorrtx) and [pytorch-unet](https://github.com/milesial/Pytorch-UNet).<br>
You can generate TensorRT engine file using this script and customize some params and network structure based on network you trained (FP32/16 precision, input size, different conv, activation function...)<br>

-# requirements
+# Requirements

-TensorRT 7.0 (you need to install tensorrt first)<br>
-Cuda 10.2<br>
-Python3.7<br>
-opencv 4.4<br>
-cmake 3.18<br>
-# train .pth file and convert .wts
+TensorRT 7.x or 8.x (you need to install tensorrt first)<br>
+Python<br>
+opencv<br>
+cmake<br>

-## create env
+# Train .pth file and convert .wts
+
+## Create env

```
pip install -r requirements.txt
```

-## train .pth file
-
-train your dataset by following [pytorch-unet](https://github.com/milesial/Pytorch-UNet) and generate .pth file.<br>
+## Train .pth file

-## convert .wts
+Train your dataset by following [Pytorch-UNet](https://github.com/milesial/Pytorch-UNet) and generate .pth file.<br>

-run gen_wts from utils folder, and move it to project folder<br>
+Please set bilinear=False, i.e. `UNet(n_channels=3, n_classes=1, bilinear=False)`, because TensorRT doesn't support Upsample layer.

-# generate engine file and infer
+## Convert .pth to .wts

-## create build folder in project folder
```
-mkdir build
+cp tensorrtx/unet/gen_wts.py Pytorch-UNet/
+cd Pytorch-UNet/
+python gen_wts.py
```

-## make file, generate exec file
+# Generate engine file and infer
+
+Build:
```
+cd tensorrtx/unet/
+mkdir build
cd build
cmake ..
make
```

-## generate TensorRT engine file and infer image
+Generate TensorRT engine file:
```
unet -s
```
-then a unet exec file will generated, you can use unet -d to infer files in a folder<br>
+Inference on images in a folder:
```
unet -d ../samples
```

-# efficiency
+# Benchmark
the speed of tensorRT engine is much faster

pytorch | TensorRT FP32 | TensorRT FP16
@@ -61,4 +64,3 @@ the speed of tensorRT engine is much faster

1. add INT8 calibrator<br>
2. add custom plugin<br>
-etc
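
Note on the added `bilinear=False` requirement: in the linked Pytorch-UNet code, `bilinear=False` switches the decoder from `nn.Upsample` to `ConvTranspose2d`, which is why the README asks for it. A minimal sketch of setting the model up that way before training (import path and checkpoint name are illustrative, not taken from the repo):

```python
import torch
from unet import UNet  # model class from milesial/Pytorch-UNet (assumed import path)

# bilinear=False makes the up-sampling path use ConvTranspose2d instead of
# nn.Upsample, matching what the README requires for the TensorRT port.
net = UNet(n_channels=3, n_classes=1, bilinear=False)

# Train as in Pytorch-UNet, then save the weights so gen_wts.py can read them
# (checkpoint name is only an example).
torch.save(net.state_dict(), "unet.pth")
```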

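gen_wts.py serializes the PyTorch checkpoint into the plain-text .wts weight format used across the tensorrtx projects: the first line is the tensor count, then one line per tensor with its name, element count, and hex-encoded float32 values. A rough sketch of that conversion, assuming the checkpoint stores a state_dict and the standard tensorrtx layout:

```python
import struct

import torch
from unet import UNet  # assumed import path from Pytorch-UNet

# Rebuild the model and load the trained weights (example checkpoint name).
net = UNet(n_channels=3, n_classes=1, bilinear=False)
net.load_state_dict(torch.load("unet.pth", map_location="cpu"))
state = net.state_dict()

# Write the tensorrtx-style .wts file: "<name> <count> <hex float32 ...>" per tensor.
with open("unet.wts", "w") as f:
    f.write(f"{len(state)}\n")
    for name, tensor in state.items():
        values = tensor.reshape(-1).cpu().numpy()
        line = f"{name} {len(values)}"
        for v in values:
            line += " " + struct.pack(">f", float(v)).hex()
        f.write(line + "\n")
```

Each entry is then looked up by name on the C++ side when the network is built with `unet -s`.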