Robot head to robot transformation - experiment with Panda target

Source code and data

All source code needed to repeat the experiment can be found in the git repository Robot head stack, branch: integration-cvut.
It contains the ROS package rh_integration with the following folders:
  • src/ - all source code
  • output/ - calibration matrices, calibration target positions in base_link coordinates from the robot, and calibration target positions detected by the CTU Panda detector

The src/ folder contains these files:

1. ROS node - integration_getImages.cpp - captures a set of images with the stereo head while the robot arm, with a recognizable target attached to the gripper, positions the target in several tens of poses. In each pose, the left and right images are captured. The set of predefined positions can be modified in integration_getImages.h; the position settings are tightly tied to the Robot head position, or more precisely to the visible area of the cameras.

2. m-file integration_calibration.m

  1. detectPandaTarget.m - detection of Panda targets (CTDetect/ folder)
    • parameter pandaWindowSize = 20; -> default value; if no target is detected, the window size is automatically increased in steps of 10 up to 60, after which the point (0,0,0) is returned
    • loadDetected = true; -> skips detection of the Panda target and loads previously detected positions
  2. calibration of the cameras from the above image set and computation of the transformation between the Robot head and the robot
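The adaptive window behaviour of the detector described above can be sketched as follows (a minimal Python sketch of the assumed logic of detectPandaTarget.m; the function name detect_with_growing_window and the toy detectors are hypothetical):

```python
# Sketch of the growing-window detection loop: start at pandaWindowSize = 20,
# grow by 10 while nothing is found, give up above 60 and return the
# sentinel point (0, 0, 0).
def detect_with_growing_window(detect, start=20, step=10, max_size=60):
    """detect(window_size) returns an (x, y, z) point or None."""
    size = start
    while size <= max_size:
        point = detect(size)
        if point is not None:
            return point
        size += step
    return (0.0, 0.0, 0.0)  # sentinel: no target detected

# A toy detector that only succeeds once the window is large enough.
found = detect_with_growing_window(lambda s: (1.0, 2.0, 3.0) if s >= 40 else None)
# A toy detector that never succeeds -> sentinel (0, 0, 0).
missed = detect_with_growing_window(lambda s: None)
```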

Results of the experiment in Prague

First, we uploaded the captured images to the repository, folder /files/misc/Prague_robothead_integration.
On these data, the following experiments were performed.

1. Data were captured using the ROS node; the result is a set of known target poses (via robot positions)

2. Target positions were detected in the stereo images using the Panda detector

3. The stereo head was calibrated using the UG procedure with a known target (a chessboard in the robot hand). The results are projection matrices of both cameras.

4. The cameras were calibrated via the standard procedure from the detected target positions. The reprojection errors were up to 2 pixels, which corresponds to a few tenths
of a millimeter, very close to the accuracy of the robot.
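One way to realize this calibration step, sketched here under the assumption that the "standard procedure" is a direct linear transform (DLT) from 3-D/2-D correspondences (the actual m-file may differ; all numbers below are synthetic, not the experiment's data):

```python
# DLT-style estimation of a 3x4 projection matrix P from known 3-D points
# and their image projections, plus the reprojection error in pixels.
import numpy as np

def estimate_projection(X, x):
    """X: Nx3 world points, x: Nx2 image points; returns 3x4 P (up to scale)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    # Null vector of A = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def reprojection_error(P, X, x):
    """Per-point pixel distance between projected and measured image points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - x, axis=1)

# Synthetic check with a known camera (illustrative intrinsics).
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0., 0., 1., 1.]])
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (33, 3)) + [0, 0, 3]   # 33 points, as in the experiment
xh = (P_true @ np.hstack([X, np.ones((33, 1))]).T).T
x = xh[:, :2] / xh[:, 2:3]
P = estimate_projection(X, x)
err = reprojection_error(P, X, x)  # near zero on noise-free synthetic data
```

On real detections the error is nonzero; the 2-pixel figure quoted above is the observed upper bound on such residuals.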

5. 3-D positions of the Panda targets were reconstructed using the stereo head

Note: residual arrows in the following images lead from the measured positions (robot positions) to the reconstructed positions (with the given scaling).

  • using projection matrices obtained from the UG chessboard method (residuals have scale 2)

    mean of residuals (x,y,z) = -0.0079, -0.0162, -0.0278
    std of residuals (x,y,z) = 0.0040, 0.0031, 0.0021
  • using projection matrices obtained by the Panda method (residuals have scale 150)

    mean of residuals (x,y,z) = 1.0e-04 * (-0.0270, 0.2198, 0.1336)
    std of residuals (x,y,z) = 1.0e-03 * (0.1009, 0.1565, 0.3419)
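The reconstruction step can be sketched as standard linear triangulation from the two projection matrices (an assumption about the m-file's method; the cameras and point below are synthetic and illustrative):

```python
# Linear (DLT) triangulation of one 3-D point from its projections in the
# left and right cameras; the residual against the robot-reported position
# is then simply reconstructed minus measured.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from two views with projection matrices P1, P2."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                 # homogeneous solution
    return Xh[:3] / Xh[3]

def project(P, X):
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Synthetic stereo pair: shared intrinsics, 10 cm baseline along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.1], [0.], [0.]]])
X_true = np.array([0.2, -0.1, 2.0])

X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
residual = X_rec - X_true  # in the experiment: reconstructed minus robot position
```

The means and standard deviations quoted above are computed over such per-point residual vectors for the whole set of target positions.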

6. Both sets of reconstructed points were compared with the target positions reported by robot direct kinematics. The Panda calibration data seem quite accurate:
the residuals in space were on the order of tenths of a millimeter, which is close to the accuracy of the robot. The points stereo-reconstructed using the projection
matrices from the chessboard agree much less with the robot data; the errors are on the order of a few centimeters, and the main issue seems to be the magnification.

The possible drawbacks of the experiment performed:

  • The same Panda-position data were used both for camera calibration and for accuracy evaluation. In our defense, 33 points were
    captured, offering 66 equations (two per point) to determine the 11 free parameters of the projection matrix.
  • We are not sure about the quality of the chessboard target - its dimensions, orthogonality, and flatness. We will investigate it and
    perhaps make a new one, a plastic foil on glass.

base_link_positions.png (13.8 KB) Karel Krehnac, 01/22/2014 10:53

left_reproj_residuals.png (12.1 KB) Karel Krehnac, 01/22/2014 10:53

3d_panda_residuals.png (15.1 KB) Karel Krehnac, 01/22/2014 16:33

3d_stereo_residuals.png (15.4 KB) Karel Krehnac, 01/22/2014 16:33