# SuperPoint

The PyTorch implementation is from [magicleap/SuperPointPretrainedNetwork](https://github.com/magicleap/SuperPointPretrainedNetwork).

The pretrained models are from [magicleap/SuperPointPretrainedNetwork](https://github.com/magicleap/SuperPointPretrainedNetwork).

## Config

- FP16/FP32 can be selected by the macro `USE_FP16` in supernet.cpp
- GPU id and batch size can be selected by the macros `DEVICE` and `BATCH_SIZE` in supernet.cpp

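These switches are plain compile-time macros near the top of supernet.cpp; a minimal sketch of what they look like (the values shown here are assumptions, check your copy of the file):

```cpp
// Compile-time configuration macros in supernet.cpp (illustrative values).
#define USE_FP16        // comment this line out to build the engine in FP32
#define DEVICE 0        // id of the GPU used to build and run the engine
#define BATCH_SIZE 1    // max batch size baked into the serialized engine
```

Because these are macros, the binary must be rebuilt after changing any of them.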
## How to Run
1. Generate a `.wts` file from the baseline PyTorch implementation of the pretrained model. The following example describes how to generate `superpoint_v1.wts` from the PyTorch implementation of SuperPoint.
```
git clone https://github.com/xiang-wuu/SuperPointPretrainedNetwork
cd SuperPointPretrainedNetwork
git checkout deploy
# copy tensorrtx/superpoint/gen_wts.py here (into SuperPointPretrainedNetwork)
# before running gen_wts.py, make sure you have cloned the fork above and checked out the deploy branch
python gen_wts.py
# a file 'superpoint_v1.wts' will be generated
```

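The `.wts` file produced by gen_wts.py uses the plain-text weight format common across tensorrtx: a tensor count on the first line, then one line per tensor holding the name, the element count, and the values as big-endian float hex. A round-trip sketch of that format, using NumPy arrays in place of a PyTorch state dict (the helper names `write_wts`/`read_wts` are illustrative, not from the repo):

```python
import struct

import numpy as np


def write_wts(path, weights):
    """Dump {name: flat float32 array} in the tensorrtx .wts text format."""
    with open(path, "w") as f:
        f.write(f"{len(weights)}\n")           # first line: number of tensors
        for name, arr in weights.items():
            flat = np.asarray(arr, dtype=np.float32).reshape(-1)
            f.write(f"{name} {len(flat)}")     # tensor name and element count
            for v in flat:
                # each value as big-endian 32-bit float, hex-encoded
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")


def read_wts(path):
    """Parse a .wts file back into {name: float32 array}."""
    weights = {}
    with open(path) as f:
        count = int(f.readline())
        for _ in range(count):
            parts = f.readline().split()
            name, n = parts[0], int(parts[1])
            vals = [struct.unpack(">f", bytes.fromhex(h))[0]
                    for h in parts[2:2 + n]]
            weights[name] = np.array(vals, dtype=np.float32)
    return weights
```

This is also the format the C++ side parses when `supernet -s` loads the weights to build the engine.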
2. Put the `.wts` file into tensorrtx/superpoint, then build and run:
```
cd tensorrtx/superpoint
mkdir build
cd build
cmake ..
make
./supernet -s SuperPointPretrainedNetwork/superpoint_v1.wts  # serialize the model to a plan file, i.e. 'supernet.engine'
```

## Run Demo using SuperPointPretrainedNetwork Python Script
The live demo can run inference from either the TensorRT-generated engine file or the pretrained PyTorch weight file; the `demo_superpoint.py` script is modified to pick TensorRT or PyTorch automatically based on the weight file it is given.
```
cd SuperPointPretrainedNetwork
# infer with the TensorRT engine (provide the absolute path to supernet.engine as the weight file)
python demo_superpoint.py assets/nyu_snippet.mp4 --cuda --weights_path tensorrtx/superpoint/build/supernet.engine
# infer with the PyTorch pretrained weights instead of the TensorRT engine
python demo_superpoint.py assets/nyu_snippet.mp4 --cuda --weights_path superpoint_v1.pth
```

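The automatic selection described above presumably keys off the weight file's extension; a hypothetical sketch of that dispatch (the function name and return values are illustrative, not the actual code in demo_superpoint.py):

```python
import os


def pick_backend(weights_path):
    """Choose an inference backend from the weight file extension (illustrative)."""
    ext = os.path.splitext(weights_path)[1].lower()
    if ext == ".engine":
        return "tensorrt"  # serialized TensorRT plan file
    if ext == ".pth":
        return "pytorch"   # original pretrained checkpoint
    raise ValueError(f"unsupported weights file: {weights_path}")
```

With a scheme like this, the two demo commands above differ only in the `--weights_path` argument.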
## Output
As the results below show, there is no significant difference between the two inferred outputs.
<table>
<tr>
<th>
PyTorch
</th>
<th>
TensorRT
</th>
</tr>
<tr>
<td>
<img src="https://user-images.githubusercontent.com/107029401/177322379-2782ca66-bcac-4cf6-b6d3-e1b4d4a8e171.gif"/>
</td>
<td>
<img src="https://user-images.githubusercontent.com/107029401/177322387-c945b903-f233-4a43-bfd3-530c46f4f4db.gif"/>
</td>
</tr>
</table>

## TODO
- [ ] Optimize post-processing using a custom TensorRT layer.
- [ ] Benchmark validation of the speed/accuracy tradeoff on the [hpatches](https://github.com/hpatches/hpatches-benchmark) dataset.