Robotic bin picking is a process in which a robot acquires a part from a bin, orients it properly, and places it at a target location. In the field of robotics, this has traditionally been a challenging problem: parts can overlap with each other, objects need to be recognized in 3D space, and the whole process needs to be fast and accurate. Bin picking has enormous implications in industries where sorting parts to feed a robot is mostly done by people. Here, we have tried to use a single Kinect sensor as a 3D imaging system and use the data to control and operate a robotic arm. The Kinect sensor is able to record an image along with the depth value of every pixel. By fixing the position of the Kinect and calibrating the robot, it is possible to create a transformation matrix which allows us to transform Kinect coordinates into robot coordinates. Using the transformation matrix, we then use the image recorded by the Kinect to create a 3D model of the given environment. The 3D model can subsequently be used for pattern matching of a 3D object and to develop an efficient path for the robot to take. Using our system, we have been able to accurately control the robot by interacting with it through the Kinect data. We have also been able to create a basic 3D model.
Before we can start giving coordinates to the robot, we need a way to transform coordinates from a Kinect image (pixel row, column, and depth value) into the robot's coordinates. We achieve this by computing a transformation matrix, which allows us to transform coordinates easily from one coordinate space to another. To construct the transformation matrix, we first tell the robot to go to predetermined points. We then perform pattern matching on the Kinect image to get the location of the arm in our image. Finally, we use a MATLAB script to construct and store the transformation matrix.
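The report does not show the matrix construction itself; below is a minimal sketch of one way to fit such a transformation in Python, assuming an affine model solved by least squares over corresponding point pairs. The point pairs here are purely illustrative, and the actual MATLAB script may use a different formulation.

```python
import numpy as np

# Illustrative corresponding points: Kinect (row, col, depth) observed by
# pattern matching at each predetermined robot position (x, y, z).
kinect_pts = np.array([
    [120.0, 200.0, 850.0],
    [130.0, 340.0, 870.0],
    [250.0, 210.0, 900.0],
    [260.0, 350.0, 880.0],
    [180.0, 275.0, 860.0],
])
robot_pts = np.array([
    [220.0, -50.0, 10.0],
    [230.0,  20.0, 12.0],
    [350.0, -45.0, 15.0],
    [360.0,  25.0, 13.0],
    [280.0, -12.5, 11.0],
])

def fit_affine(src, dst):
    """Least-squares fit of a 3x4 affine matrix T so that T @ [src; 1] ~ dst."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    T, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return T.T                                      # shape (3, 4)

def kinect_to_robot(T, pt):
    """Transform one Kinect (row, col, depth) point into robot coordinates."""
    return T @ np.append(pt, 1.0)

T = fit_affine(kinect_pts, robot_pts)
```

Once `T` is stored, any pixel with a valid depth reading can be mapped into the robot's coordinate frame with a single matrix multiply.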
To make the process of quantifying the calibration error easier, we use a simple LabVIEW VI. This program is very similar to the calibration program: we give predetermined coordinates to the robot and detect where the arm is through pattern recognition. We then transform the Kinect image coordinates into the robot's coordinates and compare the values. The image on the right shows the current errors in our calibration.
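The error-quantification step described above amounts to comparing transformed Kinect coordinates against the commanded robot positions. As a hedged sketch (the helper names and sample values are illustrative, not from the report), the residuals and a per-axis RMS error could be computed like this:

```python
import numpy as np

def calibration_errors(T, detected_kinect, commanded_robot):
    """Transform detected Kinect coordinates with the 3x4 affine matrix T
    and return per-point error vectors against the commanded positions."""
    n = detected_kinect.shape[0]
    homog = np.hstack([detected_kinect, np.ones((n, 1))])
    predicted = homog @ T.T
    return predicted - commanded_robot

# Illustrative data: an identity-like transform and small detection errors.
T = np.hstack([np.eye(3), np.zeros((3, 1))])
detected = np.array([[10.0, 20.0, 30.0],
                     [11.0, 21.0, 29.0]])
commanded = np.array([[10.5, 20.0, 30.0],
                      [11.0, 20.5, 29.5]])

errors = calibration_errors(T, detected, commanded)
rms = np.sqrt((errors ** 2).mean(axis=0))  # per-axis RMS error (x, y, z)
```

Plotting `errors` against the commanded positions gives exactly the kind of calibration-error comparison shown in the figure.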
The next step is to construct a 3D model of the environment based on a Kinect image. We achieve this by taking a snapshot of the current image and transforming every pixel into the robot's coordinates. We then plot only the range of coordinates which corresponds to the area of interest. Below, we have two images of the same marker at two different positions. Notice that they have the same height in our model, although in the Kinect image they would have different sizes since they were placed at different distances from the Kinect.
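The per-pixel transformation and area-of-interest filtering above can be sketched as follows; this is a minimal assumed implementation (the function name and the box-shaped region of interest are illustrative), reusing a 3x4 affine matrix of the kind produced during calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, T, roi):
    """Transform every pixel of a depth image into robot coordinates with
    the 3x4 affine matrix T, keeping only points inside an axis-aligned
    box (lo, hi) in robot space -- the area of interest."""
    rows, cols = np.indices(depth.shape)
    pts = np.stack([rows.ravel(), cols.ravel(), depth.ravel(),
                    np.ones(depth.size)], axis=1)
    robot = pts @ T.T                      # (N, 3) robot coordinates
    lo, hi = roi
    mask = np.all((robot >= lo) & (robot <= hi), axis=1)
    return robot[mask]

# Illustrative: identity-like transform and a flat 4x4 depth image.
T = np.hstack([np.eye(3), np.zeros((3, 1))])
depth = np.full((4, 4), 5.0)
cloud = depth_to_point_cloud(depth, T, (np.array([0.0, 0.0, 0.0]),
                                        np.array([2.0, 2.0, 10.0])))
```

Because every point is expressed in robot coordinates, an object's height in the model stays constant regardless of its distance from the Kinect, which is exactly the behavior shown by the two marker images.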