bsaund/semantic-segmentation-pytorch
Overview

This repo is a fork; the original README contains many more details about the package.

This README contains instructions for retraining a model on your own data.

Use this segmenter in ROS

Check out This Package for a wrapper that lets you use this segmenter in ROS.

Train from the YCB Video Dataset

I retrained on the YCB Video dataset.

  1. Download the YCB Video dataset.
  2. Move (or create a symlink to) YCB_Video_Dataset in the data folder.
  3. Prepare the dataset and the training/test file lists by running prepare_ycb_video_dataset.py (a sketch of what this step does is shown after this list).
  4. Create your config file in the config directory, choosing your model, dataset, hyperparameters, etc.
  5. Run python train.py --cfg [your config file] --gpus [your gpu(s)]
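The exact behavior of prepare_ycb_video_dataset.py depends on this fork, but conceptually step 3 has to pair every RGB frame with its per-pixel label image and split the pairs into training and test lists. Below is a minimal, hypothetical sketch of that idea, assuming the standard YCB Video layout (data/<sequence>/<frame>-color.png with a matching <frame>-label.png) and plain-text list files; the real script and the list format expected by the dataloader may differ.

```python
# Hypothetical sketch of dataset-list preparation; the real
# prepare_ycb_video_dataset.py in this repo may work differently.
import glob
import os
import random

DATASET_ROOT = "./data/YCB_Video_Dataset"  # symlink created in step 2
TRAIN_SPLIT = 0.9                          # assumed train/test ratio


def collect_pairs(root):
    """Pair every RGB frame with its per-pixel label image."""
    pairs = []
    for color in sorted(glob.glob(os.path.join(root, "data", "*", "*-color.png"))):
        label = color.replace("-color.png", "-label.png")
        if os.path.exists(label):
            pairs.append((color, label))
    return pairs


def write_list(path, pairs):
    """Write one 'color_path label_path' pair per line."""
    with open(path, "w") as f:
        for color, label in pairs:
            f.write(f"{color} {label}\n")


if __name__ == "__main__":
    pairs = collect_pairs(DATASET_ROOT)
    random.seed(0)
    random.shuffle(pairs)
    split = int(TRAIN_SPLIT * len(pairs))
    write_list("./data/train_list.txt", pairs[:split])
    write_list("./data/test_list.txt", pairs[split:])
    print(f"{split} training pairs, {len(pairs) - split} test pairs")
```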

Add your custom images

To tailor the segmenter to your custom environment, I suggest (but have not tried) the following:

  1. Take pictures of your scene WITHOUT any YCB objects.
  2. Overlay the YCB objects from the data_syn portion of the YCB_Video_Dataset onto those pictures; these are YCB objects rendered on a transparent background (see the sketch below).
  3. Mix these composites with the original YCB Video dataset at roughly a 50/50 ratio.

Alternatively, you could create your own dataset by segmenting the YCB objects out of a set of pictures you take yourself, but that seems like a lot of work.
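If you do try the overlay approach from the list above, here is a minimal sketch of the compositing step. It assumes, as noted above, that the data_syn color images carry an alpha (transparent) background, that your background photos sit in a backgrounds/ folder, and that the output goes to a new folder; all of these paths and names are assumptions, not part of this repo.

```python
# Hypothetical sketch of compositing data_syn objects over your own
# background photos; paths and naming conventions are assumptions, and
# the corresponding -label.png masks would need the same treatment.
import glob
import os
import random

from PIL import Image

BACKGROUND_DIR = "./backgrounds"                    # your scene, no YCB objects
DATA_SYN_DIR = "./data/YCB_Video_Dataset/data_syn"  # objects on transparent background
OUTPUT_DIR = "./data/custom_composites"


def composite(background_path, object_path, out_path):
    """Alpha-composite one synthetic object image over one background photo."""
    bg = Image.open(background_path).convert("RGBA")
    obj = Image.open(object_path).convert("RGBA")
    obj = obj.resize(bg.size)  # naive: match sizes; random offsets/scales would add variety
    Image.alpha_composite(bg, obj).convert("RGB").save(out_path)


if __name__ == "__main__":
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    backgrounds = glob.glob(os.path.join(BACKGROUND_DIR, "*.jpg"))
    objects = glob.glob(os.path.join(DATA_SYN_DIR, "*-color.png"))
    random.seed(0)
    for i, bg_path in enumerate(backgrounds):
        obj_path = random.choice(objects)
        composite(bg_path, obj_path, os.path.join(OUTPUT_DIR, f"{i:06d}-color.png"))
```

You would also need matching label masks for each composite (data_syn should ship per-frame label images as well), and would then add these frames to the training list at roughly a one-to-one ratio with the original YCB Video frames, per step 3 above.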

About

PyTorch implementation for semantic segmentation / scene parsing on the MIT ADE20K dataset
