In robot navigation, a SLAM algorithm is used to construct a map of the robot's environment while simultaneously locating the robot within that map. There are many different SLAM algorithms, but we currently use a visual system based on the sub's left and right cameras. This allows us to link the system to Object Detection.
The specific system we are using is [[https://github.com/raulmur/ORB_SLAM2|ORB-SLAM2]], an open-source, feature-based visual SLAM system which we modified for the sub. The paper describing the system can be found [[https://arxiv.org/pdf/1610.06475.pdf|here]].
The algorithm works by detecting features (such as edges and corners) in an image and locating them in space using triangulation with other known map points.
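To give a feel for the triangulation step, here is a minimal sketch of how a single matched feature can be located in 3D from a rectified stereo pair. This is not ORB-SLAM2's actual code; the focal length, baseline, and pixel coordinates below are hypothetical values chosen for illustration.

```python
def triangulate(f, baseline, u_left, u_right, v, cx, cy):
    """Recover a 3D point (in the left camera's frame) from one stereo match.

    f        -- focal length in pixels
    baseline -- distance between the two cameras, in metres
    u_left, u_right -- horizontal pixel coordinate of the feature in each image
    v        -- vertical pixel coordinate (same in both images once rectified)
    cx, cy   -- principal point (optical centre) in pixels
    """
    disparity = u_left - u_right      # horizontal shift between the two views
    z = f * baseline / disparity      # depth, from similar triangles
    x = (u_left - cx) * z / f         # back-project the pixel to the camera frame
    y = (v - cy) * z / f
    return (x, y, z)

# Hypothetical example: a corner feature appears 40 px further left
# in the right image than in the left image.
point = triangulate(f=700.0, baseline=0.12,
                    u_left=420.0, u_right=380.0, v=260.0,
                    cx=320.0, cy=240.0)
print(point)  # (0.3, 0.06, 2.1) -- metres in the left camera's frame
```

The key intuition is that nearer features shift more between the two camera views (larger disparity), so depth falls out of the disparity by similar triangles; ORB-SLAM2 applies the same geometric idea, but to ORB features matched across frames and refined jointly with the map.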