This repo is a fork; the original README contains the full details about the package.
This README contains instructions for retraining the model on your own data.
Check out This Package for a wrapper to use this segmenter in ROS.
I retrained on the YCB Video dataset.
- Download the dataset
- Move (or create a symlink to) `YCB_Video_Dataset` in the `data` folder
- Prepare the dataset and training/test file lists by running `prepare_ycb_video_dataset.py` (a quick sanity check is sketched after this list)
- Make your config file in the `config` directory. Choose your model, dataset, hyperparams, etc.
- Run `python train.py --cfg [your config file] --gpus [your gpu(s)]`
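Before kicking off training, it can help to confirm the dataset ended up where the steps above put it. This is just a hedged sketch, not part of the repo: it assumes the symlink lands at `data/YCB_Video_Dataset` and that the file lists end up under `image_sets/` with names like `train.txt`/`val.txt` -- adjust to whatever `prepare_ycb_video_dataset.py` actually writes for you.

```python
from pathlib import Path

# Hedged sanity check (not part of this repo): verify the dataset symlink and
# the generated train/test file lists before starting a long training run.
# The image_sets/train.txt and val.txt names are assumptions.
root = Path("data/YCB_Video_Dataset")
if not root.exists():
    raise SystemExit("data/YCB_Video_Dataset not found -- create the symlink first")

for name in ("train.txt", "val.txt"):
    split_file = root / "image_sets" / name
    print(f"{split_file}: {'ok' if split_file.is_file() else 'MISSING'}")
```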
To tailor the segmenter to your custom environment, I suggest the following (though I have not tried it):
- Take pictures of your scene WITHOUT any YCB objects
- Overlay the YCB objects from the `data_syn` portion of the YCB_Video_Dataset. These are YCB objects on a transparent background. (A rough compositing sketch follows this list.)
- Include these at a 50/50 ratio with the original YCB Video dataset.
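Here is a minimal sketch of what that overlay step could look like with Pillow, assuming the `data_syn` images are RGBA PNGs (objects on a transparent background, as noted above) and that your own scene photos sit in a `backgrounds/` folder. The folder names, glob patterns, and output path are mine, not the repo's.

```python
import random
from pathlib import Path

from PIL import Image

# Illustrative compositing sketch -- not the repo's augmentation code.
# Assumes RGBA foregrounds in data_syn and JPEG photos of your scene in
# backgrounds/; adjust the paths and glob patterns to your setup.
SYN_DIR = Path("data/YCB_Video_Dataset/data_syn")
BG_DIR = Path("backgrounds")
OUT_DIR = Path("data/custom_composites")
OUT_DIR.mkdir(parents=True, exist_ok=True)

syn_images = sorted(SYN_DIR.glob("*.png"))
backgrounds = sorted(BG_DIR.glob("*.jpg"))

for i, bg_path in enumerate(backgrounds):
    bg = Image.open(bg_path).convert("RGBA")
    fg = Image.open(random.choice(syn_images)).convert("RGBA")
    fg = fg.resize(bg.size)              # crude: match sizes so alpha_composite works
    out = Image.alpha_composite(bg, fg)  # transparent pixels keep your scene visible
    out.convert("RGB").save(OUT_DIR / f"composite_{i:06d}.jpg")
```

You would also need to carry the segmentation labels for each pasted object over to the composites, and then sample composites and original YCB Video frames at roughly the 50/50 ratio mentioned above during training.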
Alternatively, you could create your own dataset by segmenting the YCB objects from a bunch of pictures you take. That seems like a lot of work though...