cs:slam:start, last edited 2019/08/18 16:36 by Steve Hemm
====== SLAM ======
====== Simultaneous Localization and Mapping ======
In robot navigation, a SLAM algorithm is used to construct a map of the robot's environment while simultaneously locating the robot within that map. There are many different SLAM algorithms, but we are currently using a visual SLAM system based on the sub's left and right cameras. This allows us to link the system to Object Detection.
The specific system we are using is [[https://github.com/raulmur/ORB_SLAM2|ORB-SLAM2]], an open-source, feature-based visual SLAM system which we modified for the sub. The paper describing the system can be found [[https://arxiv.org/pdf/1610.06475.pdf|here]].
The algorithm works by detecting features (such as edges and corners) in an image and locating them in space using triangulation with other known map points.
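The triangulation step can be illustrated with a minimal two-view linear (DLT) sketch in NumPy. This is not ORB-SLAM2's actual code; the camera intrinsics and the 3D point below are made-up values, and real systems refine this linear estimate with bundle adjustment.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Builds the homogeneous system A X = 0 and solves it by SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Toy stereo pair: identical intrinsics, 10 cm baseline along x.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])          # made-up map point
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)          # recovers X_true (noise-free)
```

With noise-free projections the SVD solution recovers the point exactly; with real feature matches the result is only an initial estimate.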
==== Tracking ====
  * Tracking localizes the camera by matching features against a local map.
  * Detects features using the [[https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_fast/py_fast.html|FAST Algorithm]].
  * Describes features using the [[https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_orb/py_orb.html|ORB Algorithm]].
  * Selects a new keyframe.
  * If localization is lost, uses the Place Recognition module to relocalize.
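The FAST segment test behind the detection step can be sketched in pure NumPy. This is a simplified illustration, not OpenCV's implementation: a pixel is a corner if at least n contiguous pixels on a radius-3 Bresenham circle are all brighter than p + t or all darker than p - t (n = 9 here, the common FAST-9 variant).

```python
import numpy as np

# The 16 offsets of the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corners(img, t=50, n=9):
    """Return (row, col) of pixels passing the FAST-n segment test."""
    img = img.astype(np.int32)
    h, w = img.shape
    corners = []
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            p = img[r, c]
            ring = np.array([img[r + dr, c + dc] for dr, dc in CIRCLE])
            for mask in (ring > p + t, ring < p - t):
                # Duplicate the ring so a contiguous run can wrap around.
                run = best = 0
                for v in np.concatenate([mask, mask]):
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    corners.append((r, c))
                    break
    return corners

# Synthetic test image: a bright square whose top-left corner is at (8, 8).
img = np.zeros((20, 20))
img[8:, 8:] = 255
corners = fast_corners(img)    # includes (8, 8); a flat image yields none
```

Straight edges are rejected because their dark arc spans only about half the circle, shorter than the required run of 9.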
+ | |||
+ | |||
+ | |||
==== Local Mapping ====
Each map point stores:
  * Its 3D position in the world coordinate system.
  * Its ORB descriptor.
  * The maximum (d_max) and minimum (d_min) distances at which the point can be observed, according to the scale invariance limits of the ORB features.
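The fields listed above can be mirrored in a small data structure. The field and method names below are illustrative, not ORB-SLAM2's actual member names, and the distance check is a simplified stand-in for the full visibility test.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MapPoint:
    """One map entry: 3D position, ORB descriptor, and the distance
    interval inside which the ORB features remain scale-invariant."""
    position: np.ndarray    # 3D position in world coordinates
    descriptor: np.ndarray  # 256-bit ORB descriptor, stored as 32 bytes
    d_min: float            # closest observable distance
    d_max: float            # farthest observable distance

    def observable_from(self, cam_center: np.ndarray) -> bool:
        """True if the camera-to-point distance lies in [d_min, d_max]."""
        d = float(np.linalg.norm(self.position - cam_center))
        return self.d_min <= d <= self.d_max

# Hypothetical map point 4 m in front of the world origin.
mp = MapPoint(position=np.array([0.0, 0.0, 4.0]),
              descriptor=np.zeros(32, dtype=np.uint8),
              d_min=1.0, d_max=10.0)
```

Tracking uses d_min/d_max to skip points that are too close or too far for their descriptor's pyramid level to match reliably.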
==== Place Recognition ====