Our new Re:InterHand dataset has been released! It provides much more diverse image appearances and more stable 3D GT. Check it out here!
This repo is the official PyTorch implementation of Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild (CVPR 2023).
- Prepare the `human_model_files` folder following the `Directory` part below and place it at `common/utils/human_model_files`.
- Move to the `demo` folder.
- Download the pre-trained InterWild from here.
- Put input images at `images`. Each image should be a cropped image that contains a single human, for example obtained with a human detector (see the sketch after this list). We have a hand detection network, so no worry about the hand positions!
- Run `python demo.py --gpu $GPU_ID`.
- Boxes, meshes, MANO parameters, and renderings are saved at `boxes`, `meshes`, `params`, and `renders`, respectively.
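If your input photos contain more than one person or a lot of background, you can crop each person first with an off-the-shelf detector. Below is a minimal sketch (not part of this repo) using torchvision's Faster R-CNN; the file names `full_image.jpg` and `images/person_crop.jpg` are placeholders, and it assumes torchvision >= 0.13:

```python
# Sketch only: crop the highest-scoring person from a photo before putting it in `images`.
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights='DEFAULT').eval()
img = Image.open('full_image.jpg').convert('RGB')  # placeholder input file

with torch.no_grad():
    pred = model([to_tensor(img)])[0]

# keep confident 'person' detections (COCO label 1); boxes are sorted by score
keep = (pred['labels'] == 1) & (pred['scores'] > 0.8)
x1, y1, x2, y2 = pred['boxes'][keep][0].round().int().tolist()
img.crop((x1, y1, x2, y2)).save('images/person_crop.jpg')
```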
The `${ROOT}` is described as below.
${ROOT}
|-- data
|-- demo
|-- common
|-- main
|-- output
- `data` contains data loading codes and soft links to images and annotations directories.
- `demo` contains the demo code.
- `common` contains kernel code. You should put `MANO_RIGHT.pkl` and `MANO_LEFT.pkl` at `common/utils/human_model_files/mano`; they are available here. (A quick sanity check is sketched after this list.)
- `main` contains high-level codes for training or testing the network.
- `output` contains logs, trained models, visualized outputs, and test results.
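A quick way to confirm the MANO files are in the right place; this is a minimal sanity check based on the paths above and is not part of the repo:

```python
# Check that the MANO model files are where this README expects them.
import os.path as osp

mano_dir = 'common/utils/human_model_files/mano'
for fname in ('MANO_RIGHT.pkl', 'MANO_LEFT.pkl'):
    path = osp.join(mano_dir, fname)
    print(path, '-> found' if osp.isfile(path) else '-> MISSING')
```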
You need to follow the directory structure of the `data` folder as below.
${ROOT}
|-- data
| |-- InterHand26M
| | |-- annotations
| | | |-- train
| | | |-- test
| | |-- images
| |-- MSCOCO
| | |-- annotations
| | | |-- coco_wholebody_train_v1.0.json
| | | |-- coco_wholebody_val_v1.0.json
| | | |-- MSCOCO_train_MANO_NeuralAnnot.json
| | |-- images
| | | |-- train2017
| | | |-- val2017
| |-- HIC
| | |-- data
| | | |-- HIC.json
| |-- ReInterHand
| | |-- data
| | | |-- m--*
- Download InterHand2.6M [HOMEPAGE]. `images` contains images in 5 fps, and `annotations` contains the `H+M` subset.
- Download the whole-body version of MSCOCO [HOMEPAGE]. `MSCOCO_train_MANO_NeuralAnnot.json` can be downloaded from [here].
- Download HIC [HOMEPAGE] [annotations]. You need to download 1) all `Hand-Hand Interaction` sequences (`01.zip`-`14.zip`), 2) some of the `Hand-Object Interaction` sequences (`15.zip`-`21.zip`), and 3) MANO fits. Or you can simply run `python download.py` in the `data/HIC` folder.
- Download ReInterHand [HOMEPAGE] at `data/ReInterHand/data`.
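After downloading, a small script like the one below can verify that the expected paths exist. It is a minimal sketch based on the directory tree above (run from `${ROOT}`) and is not part of the repo; the ReInterHand capture folder pattern `m--*` is taken from the tree:

```python
# Verify the data directory layout described in this README.
import os.path as osp
from glob import glob

root = '.'  # ${ROOT}
expected = [
    'data/InterHand26M/annotations/train',
    'data/InterHand26M/annotations/test',
    'data/InterHand26M/images',
    'data/MSCOCO/annotations/coco_wholebody_train_v1.0.json',
    'data/MSCOCO/annotations/coco_wholebody_val_v1.0.json',
    'data/MSCOCO/annotations/MSCOCO_train_MANO_NeuralAnnot.json',
    'data/MSCOCO/images/train2017',
    'data/MSCOCO/images/val2017',
    'data/HIC/data/HIC.json',
]
for rel in expected:
    print(rel, '-> OK' if osp.exists(osp.join(root, rel)) else '-> MISSING')

# ReInterHand captures are folders named m--* (see the tree above)
print('ReInterHand captures found:', len(glob(osp.join(root, 'data/ReInterHand/data/m--*'))))
```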
You need to follow the directory structure of the `output` folder as below.
${ROOT}
|-- output
| |-- log
| |-- model_dump
| |-- result
| |-- vis
- `log` folder contains the training log file.
- `model_dump` folder contains saved checkpoints for each epoch.
- `result` folder contains final estimation files generated in the testing stage.
- `vis` folder contains visualized results.
- Prepare the `human_model_files` folder following the `Directory` part above and place it at `common/utils/human_model_files`.
In the `main` folder, run `python train.py --gpu 0-3` to train the network on GPUs 0, 1, 2, and 3. `--gpu 0,1,2,3` can be used instead of `--gpu 0-3`. If you want to continue an experiment, use `--continue`.
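For reference, the `--gpu` argument accepts either a range (`0-3`) or a comma-separated list (`0,1,2,3`). The sketch below only illustrates how such a spec can be expanded; the actual parsing in `train.py` may differ:

```python
# Illustration only: expand a --gpu spec such as "0-3" into "0,1,2,3"
# (the repo's own argument parsing may differ).
def expand_gpu_spec(spec: str) -> str:
    if '-' in spec:
        start, end = map(int, spec.split('-'))
        return ','.join(str(i) for i in range(start, end + 1))
    return spec

assert expand_gpu_spec('0-3') == '0,1,2,3'
assert expand_gpu_spec('0,1,2,3') == '0,1,2,3'
```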
- Checkpoint trained on IH26M (H+M) + MSCOCO. FYI, all experimental results of the paper are from a checkpoint trained on IH26M (H) + MSCOCO.
- Checkpoint trained on IH26M (H+M) + MSCOCO + ReInterHand (Mugsy_cameras).
- Checkpoint trained on IH26M (H+M) + MSCOCO + ReInterHand (Ego_cameras).
- Place the checkpoint at `output/model_dump`.
- Or, if you want to test with your own trained model, place your model at `output/model_dump`.
- For the evaluation on the InterHand2.6M dataset, we evaluated all methods in the paper on the `human_annot` subset of InterHand2.6M using `data/InterHand26M/aid_human_annot_test.txt`.
In the `main` folder, run `python test.py --gpu 0-3 --test_epoch 6` to test the network on GPUs 0, 1, 2, and 3 with `snapshot_6.pth`. `--gpu 0,1,2,3` can be used instead of `--gpu 0-3`.
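If you want to evaluate several checkpoints in a row, a small driver like the one below can help. It assumes checkpoints are named `snapshot_{epoch}.pth` in `output/model_dump` (as implied by the command above); the epoch numbers are placeholders, and the script is not part of the repo:

```python
# Convenience sketch: evaluate several checkpoints in sequence.
# Assumes output/model_dump/snapshot_{epoch}.pth exists for each epoch below.
import subprocess

for epoch in (4, 5, 6):  # placeholder epoch numbers
    subprocess.run(
        ['python', 'test.py', '--gpu', '0-3', '--test_epoch', str(epoch)],
        check=True,
    )
```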
@inproceedings{moon2023interwild,
author = {Moon, Gyeongsik},
title = {Bringing Inputs to Shared Domains for {3D} Interacting Hands Recovery in the Wild},
booktitle = {CVPR},
year = {2023}
}
@inproceedings{moon2023reinterhand,
title = {A Dataset of Relighted {3D} Interacting Hands},
author = {Moon, Gyeongsik and Saito, Shunsuke and Xu, Weipeng and Joshi, Rohan and Buffalini, Julia and Bellan, Harley and Rosen, Nicholas and Richardson, Jesse and Mize, Mallorie and Bree, Philippe and Simon, Tomas and Peng, Bo and Garg, Shubham and McPhail, Kevyn and Shiratori, Takaaki},
booktitle = {NeurIPS Track on Datasets and Benchmarks},
year = {2023},
}
This repo is CC-BY-NC 4.0 licensed, as found in the LICENSE file.






