# UNet

This is a TensorRT version of UNet, inspired by [tensorrtx](https://github.com/wang-xinyu/tensorrtx) and [pytorch-unet](https://github.com/milesial/Pytorch-UNet).

You can generate a TensorRT engine file with this script and customize some parameters and the network structure based on the network you trained (FP32/FP16 precision, input size, different convolutions, activation functions, ...).
# Requirements

TensorRT 7.x or 8.x (you need to install TensorRT first)<br>
Python<br>
OpenCV<br>
CMake<br>
# Train .pth file and convert .wts

## Create env
```
pip install -r requirements.txt
```

## Train .pth file
Train your dataset by following [Pytorch-UNet](https://github.com/milesial/Pytorch-UNet) and generate a .pth file.

Please set bilinear=False, i.e. `UNet(n_channels=3, n_classes=1, bilinear=False)`, because TensorRT doesn't support the Upsample layer.
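For example (a minimal sketch, assuming the `unet` package from the Pytorch-UNet repo; the checkpoint name is just a placeholder):

```python
import torch
from unet import UNet  # model definition from the Pytorch-UNet repo

# bilinear=False makes the decoder upsample with transposed convolutions
# instead of the bilinear Upsample layer.
net = UNet(n_channels=3, n_classes=1, bilinear=False)

# ... train as usual, then save the weights ...
torch.save(net.state_dict(), "unet.pth")  # placeholder filename
```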
## Convert .pth to .wts
```
cp tensorrtx/unet/gen_wts.py Pytorch-UNet/
cd Pytorch-UNet/
python gen_wts.py
```
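gen_wts.py converts the PyTorch checkpoint into the plain-text .wts format that the C++ code parses. The script shipped in tensorrtx is authoritative; the sketch below only illustrates the general idea, and the checkpoint/output filenames are placeholders:

```python
import struct
import torch
from unet import UNet  # model definition from the Pytorch-UNet repo

# Rebuild the model and load the trained weights (placeholder filename).
net = UNet(n_channels=3, n_classes=1, bilinear=False)
net.load_state_dict(torch.load("unet.pth", map_location="cpu"))

# .wts is plain text: first the number of tensors, then one line per tensor
# with its name, element count, and each float32 value as a hex string.
with open("unet.wts", "w") as f:
    state = net.state_dict()
    f.write(f"{len(state)}\n")
    for name, tensor in state.items():
        values = tensor.reshape(-1).cpu().numpy()
        f.write(f"{name} {len(values)}")
        for v in values:
            f.write(" " + struct.pack(">f", float(v)).hex())
        f.write("\n")
```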
# Generate engine file and infer

Build:
```
cd tensorrtx/unet/
mkdir build
cd build
cmake ..
make
```
Generate TensorRT engine file:
```
unet -s
```
Inference on images in a folder:
```
unet -d ../samples
```
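`unet -s` builds and serializes the TensorRT engine, and `unet -d` runs inference with the C++ sample. If you prefer to consume the serialized engine from Python instead, a minimal sketch looks roughly like this (the engine filename, binding order, and dtypes are assumptions, check the repo's source for the actual values):

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("unet.engine", "rb") as f, trt.Runtime(logger) as runtime:  # assumed filename
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Assumes binding 0 is the FP32 input and binding 1 is the FP32 output.
inp = np.zeros(tuple(engine.get_binding_shape(0)), dtype=np.float32)  # put the preprocessed image here
out = np.empty(tuple(engine.get_binding_shape(1)), dtype=np.float32)

d_inp, d_out = cuda.mem_alloc(inp.nbytes), cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_inp, inp)
context.execute_v2([int(d_inp), int(d_out)])
cuda.memcpy_dtoh(out, d_out)
# out now holds the raw mask produced by the engine.
```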
# Benchmark
The TensorRT engine is much faster than the PyTorch model.

PyTorch | TensorRT FP32 | TensorRT FP16
---- | ---- | ----