In robot navigation, a SLAM algorithm is used to construct a map of the robot's environment while simultaneously locating the robot within that map. There are many different SLAM algorithms, but we are currently using a vision-based system that uses the sub's right and left cameras. This allows us to link the system to Object Detection.
The specific system we use is [[https://github.com/raulmur/ORB_SLAM2|ORB-SLAM2]], an open-source, feature-based visual SLAM system that we modified for the sub. The paper describing the system can be found [[https://arxiv.org/pdf/1610.06475.pdf|here]].
The algorithm works by detecting features (such as edges and corners) in an image and locating them in space through triangulation with other known map points.
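As a rough sketch of that idea (not the sub's actual code, which builds on ORB-SLAM2's C++ implementation), the snippet below uses OpenCV to detect ORB features in a stereo pair and triangulate the matched keypoints into 3D points. The projection matrices and image paths are placeholders; real values would come from stereo calibration of the sub's cameras.

<code python>
import cv2
import numpy as np

# Placeholder 3x4 projection matrices (intrinsics @ [R|t]) for the
# left and right cameras; real ones come from stereo calibration.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe features (ORB = FAST corners + BRIEF descriptors)
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Match descriptors between the two views
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_l, des_r)

# Triangulate matched keypoints into 3D map points
pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T  # 2xN
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T
points_4d = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
points_3d = (points_4d[:3] / points_4d[3]).T  # Nx3 Euclidean coordinates
</code>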
==== Tracking ====
- Tracking localizes the camera by matching the current frame's features against a local map.
- Detects features using the [[https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_fast/py_fast.html|FAST Algorithm]] (see the sketch below).
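The OpenCV tutorial linked above reduces to a few lines; this minimal sketch (the image path is a placeholder) detects FAST corners and writes out a visualization:

<code python>
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# FAST corner detection, as in the linked tutorial
fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
keypoints = fast.detect(img, None)

# Draw the detected corners for inspection
vis = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("fast_keypoints.png", vis)
</code>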
The tracking part localizes the camera and decides when to insert a new keyframe. Features are matched against the previous frame, and the pose is optimized using motion-only bundle adjustment. The extracted features are FAST corners (for resolutions up to 752x480, 1000 corners should be sufficient; for higher resolutions such as KITTI's 1241x376, 2000 corners work well). Multiple scale levels (scale factor 1.2) are used, and each level is divided into a grid from which up to 5 corners per cell are extracted. These FAST corners are then described using ORB. The initial pose is estimated with a constant-velocity motion model; if tracking is lost, the place-recognition module kicks in and tries to re-localize the camera.

Once a pose estimate and feature matches are available, the co-visibility graph of keyframes maintained by the system is used to obtain a local visible map. This local map consists of the keyframes that share map points with the current frame, the neighbors of those keyframes, and a reference keyframe that shares the most map points with the current frame. Matches against the local map are searched by re-projecting its points onto the frame, and the camera pose is optimized using these matches. Finally, the system decides whether a new keyframe should be created; keyframes are inserted frequently to make tracking more robust. A new keyframe is created when at least 20 frames have passed since both the last keyframe and the last global re-localization, and the current frame tracks at least 50 points, of which fewer than 90% come from the reference keyframe. A sketch of this decision rule follows below.
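As a rough illustration, the sketch below encodes the keyframe-insertion criteria just described as a standalone Python function. The function name and arguments are illustrative, not ORB-SLAM2's actual API.

<code python>
def should_insert_keyframe(frames_since_keyframe, frames_since_reloc,
                           tracked_points, points_from_ref_keyframe):
    """Hypothetical helper encoding the keyframe policy described above."""
    # At least 20 frames must have passed since the last keyframe
    # and since the last global re-localization.
    if frames_since_keyframe < 20 or frames_since_reloc < 20:
        return False
    # The current frame must track at least 50 map points...
    if tracked_points < 50:
        return False
    # ...of which fewer than 90% are shared with the reference keyframe,
    # so each new keyframe adds genuinely new visual information.
    return points_from_ref_keyframe < 0.9 * tracked_points
</code>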
==== Local Mapping ====