A PyTorch implementation of FaceBoxes: A CPU Real-time Face Detector with High Accuracy. The official code in Caffe can be found here.

Changes in this fork (by Hongkai Zhang):
- Fix a Cython version bug when compiling the NMS.
- Add support for user-specific datasets.
- Add some useful tools such as visualization scripts, a `.gitignore`, etc.
Organize your data as follows:

```
data
└── dataset name
    ├── images
    ├── annotations
    ├── img_list.txt
    └── test_img_list.txt
```
Each line in `img_list.txt` and `test_img_list.txt` should look like this:

```
dirname/imagename.jpg dirname/imagename.xml
```

Here `dirname` does not contain `images/` or `annotations/`. In other words, the real image path is `images/dirname/imagename.jpg` and the annotation path is `annotations/dirname/imagename.xml`.

Note: the annotation path is also written into `test_img_list.txt` for evaluation. You may replace it with something else, since it is not required when testing.
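The list files described above can be generated automatically. Below is a minimal sketch of a hypothetical helper (not part of this repository) that walks `images/` and writes one `dirname/name.jpg dirname/name.xml` line per image:

```python
# Hypothetical helper: build img_list.txt from the layout described above.
# Assumes images/<dirname>/<name>.jpg with a matching annotations/<dirname>/<name>.xml.
import os

def build_img_list(dataset_root, out_file="img_list.txt"):
    images_dir = os.path.join(dataset_root, "images")
    lines = []
    for dirname in sorted(os.listdir(images_dir)):
        subdir = os.path.join(images_dir, dirname)
        if not os.path.isdir(subdir):
            continue
        for fname in sorted(os.listdir(subdir)):
            if not fname.lower().endswith(".jpg"):
                continue
            stem = os.path.splitext(fname)[0]
            # one line per image: "dirname/name.jpg dirname/name.xml"
            lines.append(f"{dirname}/{fname} {dirname}/{stem}.xml")
    with open(os.path.join(dataset_root, out_file), "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```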
Useful command-line options:
- `--pin_memory` to speed up data loading (needs large memory)
- `--resize ratio` to decide the resize ratio
- `-s` or `--show_image` to visualize the detection results
- `--vis_thres thres` to show only the results with scores >= `thres`
You can evaluate the performance as follows (only AP50 for now):

```
python tools/eval_voc_ap.py path/to/detfile path/to/img_list dataset_name
```

| Dataset | Original Caffe | PyTorch Implementation |
|---|---|---|
| AFW | 98.98% | 98.47% |
| PASCAL | 96.77% | 96.84% |
| FDDB | 95.90% | 95.44% |
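The AP50 numbers above follow the PASCAL VOC protocol: detections are sorted by score, each is matched to an unmatched ground-truth box at IoU >= 0.5, and the average precision is the area under the resulting precision-recall curve. A minimal sketch with all-point interpolation (the repository's `eval_voc_ap.py` may differ in details):

```python
import numpy as np

def voc_ap(scores, is_tp, num_gt):
    """Average precision from per-detection scores and true/false-positive flags.

    scores: confidence for each detection; is_tp: 1 if the detection matched an
    unmatched ground-truth box at IoU >= 0.5, else 0; num_gt: total ground truths.
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-12)
    # all-point interpolation: make precision monotonically decreasing,
    # then sum the area under the precision-recall curve
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```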
Please cite the paper in your publications if it helps your research:

```
@inproceedings{zhang2017faceboxes,
  title = {FaceBoxes: A CPU Real-time Face Detector with High Accuracy},
  author = {Zhang, Shifeng and Zhu, Xiangyu and Lei, Zhen and Shi, Hailin and Wang, Xiaobo and Li, Stan Z.},
  booktitle = {IJCB},
  year = {2017}
}
```
- Install PyTorch >= v1.0.0 following the official instructions.

- Clone this repository. We will call the cloned directory `$FaceBoxes_ROOT`:

```
git clone https://github.com/hkzhang95/FaceBoxes.PyTorch.git
```

- Compile the NMS:

```
./make.sh
```

Note: the code is based on Python 3+.
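The compiled NMS is a fast Cython version; for reference, the greedy algorithm it implements can be sketched in a few lines of NumPy (a minimal sketch, not the repository's implementation):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.3):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(-scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the highest-scoring box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop boxes overlapping the kept one above the threshold
        order = order[1:][iou <= iou_thres]
    return keep
```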
- Download the WIDER FACE dataset and place the images under:

```
$FaceBoxes_ROOT/data/WIDER_FACE/images
```

- Convert the WIDER FACE annotations to VOC format, or download our converted annotations, and place them under:

```
$FaceBoxes_ROOT/data/WIDER_FACE/annotations
```

- Train the model on WIDER FACE:

```
cd $FaceBoxes_ROOT/
python3 train.py
```

If you do not wish to train the model, you can download our pre-trained model and save it in `$FaceBoxes_ROOT/weights`.
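The VOC-format annotations referenced above are XML files. As a reference for what the converted annotations are assumed to contain, here is a minimal sketch that reads the bounding boxes from one file using the standard VOC field names:

```python
import xml.etree.ElementTree as ET

def parse_voc_boxes(xml_path):
    """Return a list of (xmin, ymin, xmax, ymax) boxes from a VOC-style XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        # standard VOC schema: <object><bndbox><xmin>...</xmin>...</bndbox></object>
        boxes.append(tuple(int(float(bb.find(k).text))
                           for k in ("xmin", "ymin", "xmax", "ymax")))
    return boxes
```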
- Download the images of AFW, PASCAL Face and FDDB to:

```
$FaceBoxes_ROOT/data/AFW/images/
$FaceBoxes_ROOT/data/PASCAL/images/
$FaceBoxes_ROOT/data/FDDB/images/
```

- Evaluate the trained model:

```
# dataset choices = ['AFW', 'PASCAL', 'FDDB']
python3 test.py --dataset FDDB

# evaluate using cpu
python3 test.py --cpu
```

- Download eval_tool to evaluate the performance.
A huge thank you to the SSD ports in PyTorch that have been helpful.
Note: If you cannot download the converted annotations, the provided images, or the trained model through the links above, you can download them via BaiduYun.