- We propose VAD, an end-to-end unified vectorized paradigm for autonomous driving. VAD models the driving scene as a fully vectorized representation, getting rid of computationally intensive dense rasterized representation and hand-designed post-processing steps.
- VAD implicitly and explicitly utilizes the vectorized scene information to improve planning safety, via query interaction and vectorized planning constraints.
- VAD achieves SOTA end-to-end planning performance, outperforming previous methods by a large margin. Moreover, thanks to the vectorized scene representation and our concise model design, VAD greatly improves inference speed, which is critical for real-world deployment of an autonomous driving system.
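To make the contrast with rasterized representations concrete, here is a minimal, hypothetical sketch (not VAD's actual code): map elements are stored as small polyline point sets instead of a dense BEV raster, and a planning constraint is evaluated directly against those vectors. All names and numbers below are illustrative assumptions.

```python
import numpy as np

# Illustrative only, not VAD's implementation: each map element (lane divider,
# road boundary, ...) is an (N, 2) polyline of BEV points, so the scene is a
# handful of point sets rather than an HxW semantic raster.
boundary = np.array([[0.0, 2.0], [10.0, 2.0], [20.0, 2.5]])  # road-boundary polyline
plan = np.array([[0.0, 0.0], [5.0, 0.5], [10.0, 1.0]])       # planned ego waypoints

def min_distance_to_polyline(points, polyline):
    """Smallest distance from any waypoint to any segment of the polyline."""
    best = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        # Project each waypoint onto the segment, clamped to its endpoints.
        t = np.clip(((points - a) @ ab) / (ab @ ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        best = min(best, np.linalg.norm(points - proj, axis=1).min())
    return best

# A toy vectorized planning constraint: penalize trajectories that come
# closer than a safety margin to the boundary polyline.
margin = 1.5  # meters; illustrative value
violation = max(0.0, margin - min_distance_to_polyline(plan, boundary))
print(violation)  # -> 0.5
```

Because the constraint operates on a few coordinate arrays instead of a dense grid, no rasterization or post-processing step is needed, which is the source of the efficiency the paper emphasizes.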
If you find VAD useful in your research or applications, please consider giving us a star 🌟 and citing it by the following BibTeX entry.
```BibTeX
@article{jiang2023vad,
  ...
}
```
All code in this repository is under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Acknowledgement
VAD is based on the following projects: [mmdet3d](https://github.com/open-mmlab/mmdetection3d), [detr3d](https://github.com/WangYueFt/detr3d), [BEVFormer](https://github.com/fundamentalvision/BEVFormer) and [MapTR](https://github.com/hustvl/MapTR). Many thanks for their excellent contributions to the community.