README.md (2 additions, 2 deletions)
@@ -15,6 +15,7 @@ The basic workflow of TensorRTx is:

 ## News

+- `18 Dec 2022`. [YOLOv5](./yolov5) upgraded to support v7.0, including instance segmentation.
 - `12 Dec 2022`. [East-Face](https://github.com/East-Face): [UNet](./unet) upgraded to support v3.0 of [Pytorch-UNet](https://github.com/milesial/Pytorch-UNet).
 - `26 Oct 2022`. [ausk](https://github.com/ausk): YoloP (You Only Look Once for Panoptic Driving Perception).
 - `19 Sep 2022`. [QIANXUNZDL123](https://github.com/QIANXUNZDL123) and [lindsayshuo](https://github.com/lindsayshuo): YOLOv7.
@@ -29,7 +30,6 @@ The basic workflow of TensorRTx is:
 - `18 Oct 2021`. [xupengao](https://github.com/xupengao): YOLOv5 updated to v6.0, supporting n/s/m/l/x/n6/s6/m6/l6/x6.
 - `31 Aug 2021`. [FamousDirector](https://github.com/FamousDirector): update retinaface to support TensorRT 8.0.
 - `27 Aug 2021`. [HaiyangPeng](https://github.com/HaiyangPeng): add a Python wrapper for HRNet segmentation.
-- `1 Jul 2021`. [freedenS](https://github.com/freedenS): DE⫶TR: End-to-End Object Detection with Transformers. First Transformer model!

 ## Tutorials
@@ -75,7 +75,7 @@ Following models are implemented.
 |[yolov3](./yolov3)| darknet-53, weights and pytorch implementation from [ultralytics/yolov3](https://github.com/ultralytics/yolov3)|
 |[yolov3-spp](./yolov3-spp)| darknet-53, weights and pytorch implementation from [ultralytics/yolov3](https://github.com/ultralytics/yolov3)|
 |[yolov4](./yolov4)| CSPDarknet53, weights from [AlexeyAB/darknet](https://github.com/AlexeyAB/darknet#pre-trained-models), pytorch implementation from [ultralytics/yolov3](https://github.com/ultralytics/yolov3)|
-|[yolov5](./yolov5)| yolov5 v1.0-v6.2, pytorch implementation from [ultralytics/yolov5](https://github.com/ultralytics/yolov5)|
+|[yolov5](./yolov5)| yolov5 v1.0-v7.0 of [ultralytics/yolov5](https://github.com/ultralytics/yolov5), detection, classification and instance segmentation|
 |[yolov7](./yolov7)| yolov7 v0.1, pytorch implementation from [WongKinYiu/yolov7](https://github.com/WongKinYiu/yolov7)|
 |[yolop](./yolop)| yolop, pytorch implementation from [hustvl/YOLOP](https://github.com/hustvl/YOLOP)|
 |[retinaface](./retinaface)| resnet50 and mobilenet0.25, weights from [biubug6/Pytorch_Retinaface](https://github.com/biubug6/Pytorch_Retinaface)|
1. Prepare calibration images; you can randomly select about 1,000 images from your training set. For COCO, you can also download the ready-made calibration set `coco_calib` from [GoogleDrive](https://drive.google.com/drive/folders/1s7jE9DtOngZMzJC1uL307J2MiaGwdRSI?usp=sharing) or [BaiduPan](https://pan.baidu.com/s/1GOm_-JobpyLMAqZWCDUhKg) (password: a9wh).
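As an illustration of this calibration step, here is a minimal C++17 sketch that copies roughly 1,000 randomly chosen images into a `coco_calib` folder; the directory names and the `.jpg` filter are assumptions, not part of the repo:

```cpp
// Hypothetical helper (not from the repo): randomly copy ~1,000 images
// from a training-set directory into a calibration directory.
#include <algorithm>
#include <cstddef>
#include <filesystem>
#include <random>
#include <vector>

namespace fs = std::filesystem;

int main() {
    const fs::path train_dir = "train2017";   // assumed source directory
    const fs::path calib_dir = "coco_calib";  // calibration output directory
    fs::create_directories(calib_dir);

    // Collect candidate images (extension filter is an assumption).
    std::vector<fs::path> images;
    for (const auto& entry : fs::directory_iterator(train_dir))
        if (entry.path().extension() == ".jpg") images.push_back(entry.path());

    // Shuffle, then copy the first 1,000 into the calibration folder.
    std::mt19937 rng(std::random_device{}());
    std::shuffle(images.begin(), images.end(), rng);
    const std::size_t n = std::min<std::size_t>(1000, images.size());
    for (std::size_t i = 0; i < n; ++i)
        fs::copy_file(images[i], calib_dir / images[i].filename(),
                      fs::copy_options::overwrite_existing);
    return 0;
}
```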
yolov5/yololayer.cu (16 additions, 7 deletions)
@@ -25,12 +25,13 @@ using namespace Yolo;

 namespace nvinfer1
 {
-    YoloLayerPlugin::YoloLayerPlugin(int classCount, int netWidth, int netHeight, int maxOut, const std::vector<Yolo::YoloKernel>& vYoloKernel)
+    YoloLayerPlugin::YoloLayerPlugin(int classCount, int netWidth, int netHeight, int maxOut, bool is_segmentation, const std::vector<Yolo::YoloKernel>& vYoloKernel)
     {
         mClassCount = classCount;
         mYoloV5NetWidth = netWidth;
         mYoloV5NetHeight = netHeight;
         mMaxOutObject = maxOut;
+        is_segmentation_ = is_segmentation;
         mYoloKernel = vYoloKernel;
         mKernelCount = vYoloKernel.size();
@@ -63,6 +64,7 @@ namespace nvinfer1
         read(d, mYoloV5NetWidth);
         read(d, mYoloV5NetHeight);
         read(d, mMaxOutObject);
+        read(d, is_segmentation_);
         mYoloKernel.resize(mKernelCount);
         auto kernelSize = mKernelCount * sizeof(YoloKernel);
         memcpy(mYoloKernel.data(), d, kernelSize);

@@ -88,6 +90,7 @@ namespace nvinfer1
         write(d, mYoloV5NetWidth);
         write(d, mYoloV5NetHeight);
         write(d, mMaxOutObject);
+        write(d, is_segmentation_);
         auto kernelSize = mKernelCount * sizeof(YoloKernel);
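The matching read/write pair matters because TensorRT recreates the plugin from the serialized engine: every field, including the new `is_segmentation_` flag, must be written and read back in the same order (and counted in `getSerializationSize()`). Below is a self-contained sketch of that pattern; the helper templates mirror the repo's `read`/`write`, while the demo fields are purely illustrative:

```cpp
#include <cassert>
#include <cstring>

// Sketch of the field-by-field (de)serialization pattern: the helper
// templates mirror the repo's read/write; the fields below are demo
// values, not the real plugin state.
template <typename T>
void write(char*& buffer, const T& val) {
    std::memcpy(buffer, &val, sizeof(T));
    buffer += sizeof(T);
}

template <typename T>
void read(const char*& buffer, T& val) {
    std::memcpy(&val, buffer, sizeof(T));
    buffer += sizeof(T);
}

int main() {
    char storage[64];

    // Serialize: the write order defines the format, and each field must
    // also be counted in getSerializationSize().
    char* w = storage;
    int maxOutObject = 1000;
    bool is_segmentation = true;  // the newly added field
    write(w, maxOutObject);
    write(w, is_segmentation);

    // Deserialize in exactly the same order.
    const char* r = storage;
    int maxOut = 0;
    bool seg = false;
    read(r, maxOut);
    read(r, seg);
    assert(maxOut == 1000 && seg);
    return 0;
}
```

Further down in the same file, the `CalDetection` kernel signature gains the flag as well: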
     __global__ void CalDetection(const float *input, float *output, int noElements,
-        const int netwidth, const int netheight, int maxoutobject, int yoloWidth, int yoloHeight, const float anchors[CHECK_COUNT * 2], int classes, int outputElem)
+        const int netwidth, const int netheight, int maxoutobject, int yoloWidth, int yoloHeight, const float anchors[CHECK_COUNT * 2], int classes, int outputElem, bool is_segmentation)
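The kernel needs the flag because a v7.0 segmentation detection carries 32 prototype-mask coefficients in addition to the box, confidence, and class id, so the payload written per detection depends on whether segmentation is enabled. A self-contained sketch follows; the struct layout mirrors the repo's `Yolo::Detection`, but treat the exact fields here as an assumption:

```cpp
#include <cstddef>
#include <cstdio>

// Mirrors the shape of the repo's Yolo::Detection for v7.0 (an assumption
// here): box, confidence, class id, plus 32 mask coefficients that are
// only meaningful when the segmentation head is present.
struct Detection {
    float bbox[4];   // center x, center y, width, height
    float conf;      // objectness * class probability
    float class_id;
    float mask[32];  // prototype coefficients for instance masks
};

int main() {
    bool is_segmentation = true;
    // The kernel fills the mask coefficients only when segmenting, so the
    // meaningful payload per detection depends on the flag.
    std::size_t used = 4 + 1 + 1 + (is_segmentation ? 32 : 0);
    std::printf("floats used per detection: %zu\n", used);
    return 0;
}
```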