cs:fusion:start [2019/04/16 21:54] (current), Andrew Rink
$ rosrun robosub fusion
</code>
 +
 +===== How it Works =====
 +The basic concept of the node is simple: take the bounding box points provided by the SLAM system and attach the labels from the object detection system so we get an identified object in our map. Both SLAM and object detection treat the cameras as 2D images when finding objects, so by calculating the closest x,y coordinate pairs within a single image we can reasonably determine which labels belong to which bounding boxes. Since SLAM provides both a 3D vector to the bounding box and the x,y coordinate within the image, the end result is a 3D vector to a labeled object in the map. The final step of the fusion node is to transform the object from its current frame to the "​world"​ frame and publish that to /tf.