Learning Rich Features at High-Speed for Single-Shot Object Detection (ICCV 2019)

By Tiancai Wang†, Rao Muhammad Anwer†, Hisham Cholakkal, Fahad Shahbaz Khan, Yanwei Pang, Ling Shao

† denotes equal contribution

Introduction

Single-stage object detection methods have received significant attention recently due to their real-time capabilities and high detection accuracy. Most existing single-stage detectors follow two common practices: they employ a network backbone pre-trained on ImageNet for the classification task and use a top-down feature pyramid representation to handle scale variations. Contrary to the common pre-training strategy, recent works have demonstrated the benefits of training from scratch to reduce the task gap between classification and localization, especially at high overlap thresholds. However, detection models trained from scratch require significantly longer training time than their typical fine-tuning-based counterparts. We introduce a single-stage detection framework that combines the advantages of both fine-tuning pre-trained models and training from scratch. Our framework consists of a standard network that uses a pre-trained backbone and a parallel light-weight auxiliary network trained from scratch. Further, we argue that the commonly used top-down pyramid representation focuses only on passing high-level semantics from the top layers to the bottom layers. We introduce a bi-directional network that efficiently circulates both low-/mid-level and high-level semantic information in the detection framework. Experiments are performed on the MS COCO and UAVDT datasets. Compared to the baseline, our detector achieves absolute gains of 7.4% and 4.2% in average precision (AP) on MS COCO and UAVDT, respectively, using the VGG backbone. For a 300×300 input on the MS COCO test set, our detector with a ResNet backbone surpasses existing single-stage detection methods for single-scale inference, achieving 34.3 AP while operating at an inference time of 19 milliseconds on a single Titan X GPU.
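
The general design described above (a pre-trained backbone paired with a light-weight auxiliary branch trained from scratch, followed by bi-directional top-down and bottom-up feature fusion) can be illustrated with a small PyTorch sketch. This is only a hypothetical illustration of the idea, not the LRF-Net implementation; all module names, channel widths, and the three-level pyramid are illustrative assumptions.

```python
# Hypothetical sketch of the general idea (not the LRF-Net code):
# pre-trained backbone + scratch-trained auxiliary branch + bi-directional fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Backbone(nn.Module):
    """Stand-in for a pre-trained backbone (e.g. VGG/ResNet); in practice its
    weights would be loaded from an ImageNet checkpoint and fine-tuned."""
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(3, 64, stride=2)     # 1/2 resolution
        self.stage2 = conv_block(64, 128, stride=2)   # 1/4
        self.stage3 = conv_block(128, 256, stride=2)  # 1/8

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        return [c1, c2, self.stage3(c2)]


class AuxiliaryBranch(nn.Module):
    """Parallel light-weight branch, randomly initialised (trained from
    scratch), producing features at the same three resolutions."""
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(3, 32, stride=2)
        self.stage2 = conv_block(32, 64, stride=2)
        self.stage3 = conv_block(64, 128, stride=2)

    def forward(self, x):
        a1 = self.stage1(x)
        a2 = self.stage2(a1)
        return [a1, a2, self.stage3(a2)]


class BiDirectionalFusion(nn.Module):
    """Top-down pass (high-level semantics flow down) followed by a
    bottom-up pass (low-/mid-level details flow back up)."""
    def __init__(self, channels, out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_ch, 1) for c in channels])
        self.top_down = nn.ModuleList([conv_block(out_ch, out_ch) for _ in channels])
        self.bottom_up = nn.ModuleList([conv_block(out_ch, out_ch) for _ in channels])

    def forward(self, feats):
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down: upsample coarser maps and add them to finer ones.
        td = [None] * len(lat)
        td[-1] = lat[-1]
        for i in range(len(lat) - 2, -1, -1):
            up = F.interpolate(td[i + 1], size=lat[i].shape[-2:], mode="nearest")
            td[i] = self.top_down[i](lat[i] + up)
        # Bottom-up: downsample finer maps and add them to coarser ones.
        bu = [None] * len(td)
        bu[0] = td[0]
        for i in range(1, len(td)):
            down = F.adaptive_max_pool2d(bu[i - 1], td[i].shape[-2:])
            bu[i] = self.bottom_up[i](td[i] + down)
        return bu


class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        self.aux = AuxiliaryBranch()
        # Fused channels per level: backbone width + auxiliary width.
        self.fuse = BiDirectionalFusion(channels=(64 + 32, 128 + 64, 256 + 128))

    def forward(self, x):
        feats = [torch.cat([b, a], dim=1)
                 for b, a in zip(self.backbone(x), self.aux(x))]
        return self.fuse(feats)  # multi-scale features for SSD-style heads


if __name__ == "__main__":
    outs = Detector()(torch.randn(1, 3, 300, 300))
    print([o.shape for o in outs])  # three pyramid levels, 256 channels each
```

In this sketch the auxiliary branch is concatenated with the backbone at matching resolutions before fusion; how and where the two streams are combined, and how many pyramid levels are used, are design choices that differ in the actual paper.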
