CloPeMa segmentation (package)

GrabCut GMM Segmentation wrapper

The clopema_segmentation package contains a ROS wrapper for the library stored in the 'segmentation' package (which is part of clopema_cvut as well).
The wrapper provides a node for GMM learning and a service for image segmentation.

Learn GMM Model

To learn a model, create a folder (for example named 'templates_CVUT') that will contain images of the table desk. These images should contain only the colors that are supposed to be segmented out.
When the images are placed into the 'templates_CVUT' folder, type the following command; the result will be stored in the clopema_description calibration folder.

rosrun clopema_segmentation segmentation_gc_gmm_learn _templates:=templates_CVUT
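
For illustration, here is a minimal sketch of how such a colour model could be learned with OpenCV's GMM implementation (cv2.ml.EM). The actual learner lives in the 'segmentation' library; the folder layout, component count and output file name below are assumptions.

import glob
import cv2
import numpy as np

# Collect background (table) pixels from all template images.
samples = []
for path in glob.glob('templates_CVUT/*.png'):
    img = cv2.imread(path)
    if img is None:
        continue
    samples.append(img.reshape(-1, 3).astype(np.float32))
samples = np.vstack(samples)

# Fit a Gaussian Mixture Model to the background colours.
em = cv2.ml.EM_create()
em.setClustersNumber(5)      # number of Gaussian components (illustrative)
em.trainEM(samples)

# Persist the model; the real node stores its result in the
# clopema_description calibration folder instead.
em.save('gmm_background.yml')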

Segmentation

The service '/seg_gc_gmm' is provided by the node 'segmentation_gc_gmm_service'. An example of using this service for segmentation is shown in the node 'segmentation_gc_gmm_example'.

rosrun clopema_segmentation segmentation_gc_gmm_service
#Example:
roslaunch clopema_launch xtion1.launch
rosrun clopema_segmentation segmentation_gc_gmm_example
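
The exact service definition ships with the package; assuming the request and response both carry a sensor_msgs/Image, a Python client could look roughly like the sketch below. The SegGcGmm type name and the response field 'image' are hypothetical placeholders for the real .srv definition.

import rospy
import cv2
from cv_bridge import CvBridge
# Hypothetical service type; substitute the actual .srv shipped
# with clopema_segmentation.
from clopema_segmentation.srv import SegGcGmm

rospy.init_node('seg_gc_gmm_client')
rospy.wait_for_service('/seg_gc_gmm')
segment = rospy.ServiceProxy('/seg_gc_gmm', SegGcGmm)

bridge = CvBridge()
img = cv2.imread('input.png')                        # illustrative input
response = segment(bridge.cv2_to_imgmsg(img, encoding='bgr8'))
mask = bridge.imgmsg_to_cv2(response.image)          # field name assumed
cv2.imwrite('segmented.png', mask)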

User Documentation

Training

First, the model must be trained from different views of the table. For this we have prepared a simple script that does it automatically, assuming that the table is calibrated (see table calibration). In order to train the colour model for the second table (the one in front of the robot) using the first xtion (the one on the r1 arm), one can simply run the following command:

$ rosrun clopema_segmentation seg_train_auto.py -x 1 -t 2 

Alternatively, when the -x option is omitted, the script subscribes to the standard image and camera_info topics, which can be remapped using the classical ROS approach:

$ rosrun clopema_segmentation seg_train_auto.py -t 2 image:=/camera/image camera_info:=/camera/camera_info

Segmentation

Developer Documentation

Table cutting

In order to cut the table area from an image and fill the rest of the image with a predefined color, run the following command:

rosrun clopema_segmentation cut_image.py

It subscribes to two topics: camera_info and image_in.

It estimates the position of the table corners using tf and the camera model from camera_info, creates a polygonal mask using OpenCV, and fills the outside with a defined color. The resulting image is published on image_out.

The node requires two ROS parameters: /fill and /target_frames.
/fill contains the color used for filling, in RGB format. Example: to fill the area outside the table with black, use [0,0,0].
/target_frames contains the frames describing the table corners. Example: [t2_leg_1, t2_leg_2, t2_leg_3, t2_leg_4]
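
The following condensed sketch shows how such a node can be put together with tf, image_geometry and OpenCV. It mirrors the pipeline described above, but it is an illustration, not the actual cut_image.py; details such as encodings and queue sizes are assumptions.

import rospy
import cv2
import numpy as np
import tf
from image_geometry import PinholeCameraModel
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CameraInfo

rospy.init_node('cut_image_sketch')
fill = rospy.get_param('/fill', [0, 0, 0])        # RGB fill colour
frames = rospy.get_param('/target_frames')        # table corner frames
listener = tf.TransformListener()
model = PinholeCameraModel()
bridge = CvBridge()
pub = rospy.Publisher('image_out', Image, queue_size=1)

def on_info(msg):
    model.fromCameraInfo(msg)

def on_image(msg):
    if model.tf_frame is None:
        return
    try:
        # Table corner positions expressed in the camera frame,
        # projected to pixel coordinates.
        corners = []
        for frame in frames:
            (t, _) = listener.lookupTransform(model.tf_frame, frame,
                                              rospy.Time(0))
            corners.append(model.project3dToPixel(t))
    except tf.Exception:
        return
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.int32(corners)], 255)   # polygon over the table
    img[mask == 0] = fill[::-1]                    # RGB parameter -> BGR image
    pub.publish(bridge.cv2_to_imgmsg(img, encoding='bgr8'))

rospy.Subscriber('camera_info', CameraInfo, on_info)
rospy.Subscriber('image_in', Image, on_image)
rospy.spin()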

Segmentation

The segmentation process - distinguishing the garment from the table - consists of two parts: learning and classification. Before you run the following code, make sure that the camera is switched to manual mode to ensure the same white balance and other parameters throughout the whole process.

Learning

To obtain training images that will be used for the learning phase, launch:

roslaunch clopema_segmentation save_imgs.launch

The node connects to a predefined image topic, cuts away the area outside the table and displays the result. Hit 's' to save the image, 'q' to quit the script. Images are saved to a folder specified in the launch file using the naming format 00*_rgb.png. At most 9 training images can be stored.
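
The save-and-quit loop boils down to standard OpenCV key handling. A minimal sketch, assuming a placeholder topic name and output folder in place of the values configured in the launch file:

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
state = {'img': None, 'count': 1}

def on_image(msg):
    state['img'] = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')

rospy.init_node('save_imgs_sketch')
rospy.Subscriber('image_in', Image, on_image)  # topic name is a placeholder

while not rospy.is_shutdown():
    if state['img'] is not None:
        cv2.imshow('table', state['img'])
    key = cv2.waitKey(30) & 0xFF
    if key == ord('s') and state['img'] is not None and state['count'] <= 9:
        # Follows the 00*_rgb.png naming convention, e.g. 001_rgb.png.
        cv2.imwrite('img/%03d_rgb.png' % state['count'], state['img'])
        state['count'] += 1
    elif key == ord('q'):
        break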

Five default training images can be found in /clopema_segmentation/scripts/img.

Once you have saved the training images, launch:

roslaunch clopema_segmentation train.launch

The script loads a given number of training images (the path and number must be specified in the launch file) and saves the computed transformation matrix T to the parameter /clopema_segmentation/T.
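
Since ROS parameters store plain lists rather than numpy arrays, the matrix is typically flattened when saved and reshaped when loaded. A short sketch; the 3x3 shape of T here is an assumption:

import rospy
import numpy as np

rospy.init_node('train_sketch')

# Saving: flatten the learned matrix into a plain list.
T = np.eye(3)                                  # placeholder for the learned T
rospy.set_param('/clopema_segmentation/T', T.flatten().tolist())

# Loading (e.g. in the segmentation node): reshape back to a matrix.
T_loaded = np.array(rospy.get_param('/clopema_segmentation/T')).reshape(3, 3)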

Classification

When the transformation matrix T is computed and stored as a ROS parameter, launch:

roslaunch clopema_segmentation segmentation.launch 

The script connects to the image and camera_info topics and starts segmenting. The original, cut, and segmented images are displayed.

All files can be found in clopema_cvut/clopema_segmentation/scripts and clopema_cvut/clopema_segmentation/launch.