====== SLAM ======
====== Simultaneous Localization and Mapping ======
  
In robot navigation, a SLAM algorithm is used to construct a map of the robot's environment while simultaneously locating the robot within that map. There are many different SLAM algorithms, but we are currently using a visual system based on the sub's right and left cameras. This allows us to link the system to Object Detection.
  
The specific system we are using is [[https://github.com/raulmur/ORB_SLAM2|ORB-SLAM2]], an open-source, feature-based visual SLAM system which we modified for the sub. The paper describing the system can be found [[https://arxiv.org/pdf/1610.06475.pdf|here]].
  
The algorithm works by detecting features (such as edges and corners) in an image and locating them in space using triangulation with other known map points.
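For example, a matched feature can be triangulated from the left and right cameras in a few lines of OpenCV. This is only a minimal sketch, assuming an ideal rectified stereo pair; the focal length, principal point, baseline, and pixel coordinates below are made-up values, not the sub's calibration.

<code python>
import numpy as np
import cv2

# Hypothetical rectified stereo pair: focal length 700 px,
# principal point (640, 360), baseline 0.12 m (illustrative numbers only).
f, cx, cy, baseline = 700.0, 640.0, 360.0, 0.12
P_left = np.array([[f, 0, cx, 0],
                   [0, f, cy, 0],
                   [0, 0,  1, 0]])
P_right = np.array([[f, 0, cx, -f * baseline],
                    [0, f, cy, 0],
                    [0, 0,  1, 0]])

# One matched feature, as pixel coordinates in each camera (2xN arrays).
pts_left = np.array([[650.0], [400.0]])
pts_right = np.array([[610.0], [400.0]])

# Triangulate to homogeneous coordinates, then normalize to get (X, Y, Z).
point_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Map point position:", point_3d)
</code>

With these example numbers, the 40-pixel disparity places the point about 2.1 m in front of the cameras (Z = f * baseline / disparity).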
==== Tracking ====
  
-Tracking localizes the camera by matching the current frame's features against a local map.
  
-Detects features using the [[https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_fast/py_fast.html|FAST Algorithm]].
  
-Describes features using the [[https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_orb/py_orb.html|ORB Algorithm]] (see the sketch after this list).
  
-Selects a new keyframe.
  
-If localization is lost, uses the Place Recognition module to relocalize.
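
A minimal sketch of the detect-and-describe steps, using OpenCV's FAST detector and ORB descriptor (the same algorithms as the tutorials linked above); the file name and FAST threshold here are placeholders.

<code python>
import cv2

# Placeholder image path; in practice this would be a frame from a sub camera.
img = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)

# Detect corner-like features with FAST (threshold is illustrative).
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(img, None)

# Describe each feature with a 256-bit ORB descriptor so it can be
# matched against features already in the local map.
orb = cv2.ORB_create()
keypoints, descriptors = orb.compute(img, keypoints)

print(f"{len(keypoints)} features, descriptor array shape {descriptors.shape}")
</code>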
==== Local Mapping ====

-Keyframes are added to the co-visibility graph and its spanning tree.

-New map points are created by triangulating matching ORB features from different keyframes.

-The validity of a map point is checked by testing whether it appears in the other keyframes where it is predicted to be; it must be seen by at least 3 other keyframes (sketched after this list).
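
The check can be pictured as reprojecting the candidate point into each keyframe that should see it and counting how many actually observe it nearby. This is an illustrative sketch, not ORB-SLAM2's actual code; the function names, pixel tolerance, and input format are our own.

<code python>
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X with a 3x4 projection matrix P."""
    x = P @ X
    return x[:2] / x[2]

def is_valid_map_point(X, keyframes, pixel_tol=2.0, min_observations=3):
    """keyframes: list of (P, observed_uv) pairs, where observed_uv is the
    matched pixel location in that keyframe, or None if no match was found.
    Thresholds are illustrative, not ORB-SLAM2's."""
    observations = 0
    for P, observed_uv in keyframes:
        if observed_uv is None:
            continue
        predicted_uv = project(P, X)
        # Count the keyframe only if the point is seen where predicted.
        if np.linalg.norm(predicted_uv - observed_uv) < pixel_tol:
            observations += 1
    return observations >= min_observations
</code>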
==== Loop Closing ====

-Loop closing is when the sub recognizes that it has returned to a previous location and adjusts the map points to accommodate.

-To detect possible loops, the bag-of-words vectors in the Place Recognition module are checked for the current keyframe and its neighbors in the co-visibility graph (a toy scoring example follows this list).

-If a loop candidate is found, a similarity transform is performed.

-Map points are fused and a bundle adjustment is performed.
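
A toy version of that comparison is shown below. ORB-SLAM2 itself uses the DBoW2 library; here we simply assume each keyframe carries a precomputed, L2-normalized bag-of-words vector, and the score threshold is made up.

<code python>
import numpy as np

def bow_score(v1, v2):
    """Similarity of two L2-normalized bag-of-words vectors."""
    return float(np.dot(v1, v2))

def find_loop_candidates(current_bow, keyframe_bows, covisible_ids,
                         min_score=0.75):
    """Return IDs of keyframes whose BoW vector resembles the current one,
    skipping co-visibility neighbors (they are trivially similar).
    keyframe_bows: dict mapping keyframe ID -> normalized BoW vector."""
    candidates = []
    for kf_id, bow in keyframe_bows.items():
        if kf_id in covisible_ids:
            continue
        if bow_score(current_bow, bow) >= min_score:
            candidates.append(kf_id)
    return candidates
</code>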
==== Map ====

Each map point stores (see the sketch below):
  * Its 3D position in the world coordinate system.
  * Its ORB descriptor.
  * The maximum (d<sub>max</sub>) and minimum (d<sub>min</sub>) distances at which the point can be observed, according to the scale-invariance limits of the ORB features.
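
Written as a hypothetical Python dataclass, the record looks like this; the field names are ours, not ORB-SLAM2's.

<code python>
from dataclasses import dataclass
import numpy as np

@dataclass
class MapPoint:
    position: np.ndarray    # 3D position in world coordinates, shape (3,)
    descriptor: np.ndarray  # 256-bit ORB descriptor, 32 uint8 values
    d_min: float            # minimum observable distance (scale-invariance limit)
    d_max: float            # maximum observable distance (scale-invariance limit)
</code>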
==== Place Recognition ====