cs:vision:start — revised 2017/09/11 15:53 by James Irwin (previous revision 2017/01/14 22:37 by Sean Kallaher).
====== Vision ======
===== Core Functionality =====
  * The vision system on the sub is made up of multiple vision nodes. Each node currently contains a number of processors, described below, which each perform a task on the latest images captured from the [[cs:cameras:start|cameras]]. These images are synchronized with each other in time.
  * One vision node is spawned for each task that needs to be performed, using the [[https://github.com/PalouseRobosub/robosub/blob/master/launch/vision.launch|vision.launch]] file. For more information on using this launch file, see [[cs:vision:start#Running the Vision System in ROS|Running the Vision System in ROS]] below.
  * The processors currently used by the vision nodes are:
    * [[cs:vision:start#ColorProcessor|ColorProcessor]]
    * [[cs:vision:start#StereoProcessor|StereoProcessor]]
    * [[cs:vision:start#FeatureProcessor|FeatureProcessor]]
===== Processors =====

----

==== ColorProcessor ====

=== Purpose ===
Performs color masking according to the parameter file for the node.

=== Structure ===
The ColorProcessor contains a FilterSet. The FilterSet sequentially applies each Filter it contains; each Filter is a single OpenCV operation on an image.

=== Parameters ===
The parameters for the ColorProcessor part of a vision node should be formatted as follows:

  filters:
    - [filter name]: {[filter parameters]}
    - [filter name]: {[filter parameters]}

Here [filter name] must be a name known to the ColorProcessor, so that it can create the proper filter, and [filter parameters] are parsed by the filter itself. An unlimited number of filters is allowed; each filter in the list operates on the output of the previous filter.
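The pipe-and-filter behavior described above can be sketched as follows. This is a minimal Python illustration only: the Filter and FilterSet names come from this page, but the real FilterSet applies OpenCV operations, while the stand-in operations and the toy "image" (a list of intensities) here are purely hypothetical so the example is self-contained.

```python
class Filter:
    """One step in the pipeline: wraps a single image operation."""
    def __init__(self, name, op):
        self.name = name   # e.g. the [filter name] key from the parameter file
        self.op = op       # the single operation this filter performs

    def apply(self, image):
        return self.op(image)

class FilterSet:
    """Applies its filters in order, feeding each the previous output."""
    def __init__(self, filters):
        self.filters = filters

    def apply(self, image):
        for f in self.filters:
            image = f.apply(image)
        return image

# Hypothetical stand-ins for OpenCV operations, acting on a 1-D "image".
blur = Filter("blur", lambda img: [(a + b) // 2
                                   for a, b in zip(img, img[1:] + img[-1:])])
mask = Filter("inRange", lambda img: [1 if 50 <= p <= 160 else 0 for p in img])

pipeline = FilterSet([blur, mask])
print(pipeline.apply([0, 100, 255, 60]))  # → [1, 0, 1, 1]
```

As in the parameter file, the order of the list is the order of application: the mask operates on the blurred image, not the original.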
+ | |||
+ | ---- | ||
+ | |||
+ | ==== StereoProcessor ==== | ||
+ | |||
+ | === Purpose === | ||
+ | Utilizes the two forward facing cameras to determine the distance an object is away from the cameras. This requires the cameras to be [[cs:camera_calibration:start|calibrated]]. | ||
+ | |||
+ | === Structure === | ||
+ | This processor is currently under construction and is not finalized. This section will be changed in the future. | ||
+ | |||
+ | === Parameters === | ||
+ | The StereoProcessor currently does not use any parameters. This may change in the future, however. | ||
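Since the processor is not finalized, the following is only a sketch of the standard rectified-stereo relationship such a processor would likely rely on: depth equals focal length times camera baseline divided by pixel disparity, which is why calibration is required. All numbers below are made-up example values, not the sub's actual calibration.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified, calibrated stereo pair.

    focal_px     -- focal length in pixels (from calibration)
    baseline_m   -- distance between the two cameras, in metres
    disparity_px -- horizontal pixel offset of the object between images
    """
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 40 px disparity.
print(stereo_depth(800.0, 0.1, 40.0))  # → 2.0 (metres)
```

Note that depth is inversely proportional to disparity, so range accuracy degrades for distant objects.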
+ | |||
+ | ---- | ||
+ | |||
==== FeatureProcessor ====

=== Purpose ===
More accurately identifies contours in the image as belonging to a particular object.

=== Structure ===
The FeatureProcessor contains a single ObstacleDetector, an abstract class that the detectors for each obstacle in the course inherit from. Each detector must also provide a static init function that takes an ObstacleDetector pointer and initializes it to a new instance of that detector.

=== Parameters ===
The parameters used by the FeatureProcessor of each vision node are formatted as follows:

  features:
    detector: [detector name]
    params: {[detector parameters]}

There should be only one of these blocks in each parameter file, where [detector name] is the string mapped to the init function for the desired detector and [detector parameters] is the set of parameters required by that detector.
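The string-to-init-function mapping described above can be sketched like this. The ObstacleDetector name comes from this page; the concrete detector, its logic, and the registry name are hypothetical, and where the C++ version uses a static init function taking an ObstacleDetector pointer, a factory returning the new instance is the closest Python analogue.

```python
class ObstacleDetector:
    """Abstract base class that each obstacle's detector inherits from."""
    def process(self, contours):
        raise NotImplementedError

class BuoyDetector(ObstacleDetector):  # hypothetical concrete detector
    @staticmethod
    def init():
        # Plays the role of the static init function: hands back a
        # freshly constructed instance of this detector.
        return BuoyDetector()

    def process(self, contours):
        # Stand-in logic: keep only "large enough" contours.
        return [c for c in contours if c > 10]

# Registry mapping the [detector name] string from the parameter file
# to the init function for that detector (name is illustrative).
DETECTOR_INITS = {"buoy": BuoyDetector.init}

detector = DETECTOR_INITS["buoy"]()
print(detector.process([5, 42, 12]))  # → [42, 12]
```

This registry pattern is what lets the parameter file select a detector by name without the FeatureProcessor hard-coding every concrete type.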
+ | |||
+ | ---- | ||
===== Cameras =====
===== Running the Vision System in ROS =====

After making sure that the [[cs:cameras:start|cameras]] are running (if needed), use the following command to start the vision system:

  roslaunch robosub vision.launch
  rightImage:=[newTopic]
(bottom camera to be implemented)
The topics for the simulator are /camera/(left|right|bottom).

To use simulator color parameters and listen to simulator topics by default (they can still be overridden using the leftImage and rightImage remaps above), append

  simulated:=true

(This feature could change in the near future.)
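For example, a launch against the simulator that also remaps the right camera could look like the following. This is an illustrative invocation built from the arguments described above; the topic name is one of the simulator topics listed on this page.

```shell
# Use simulator color parameters, and explicitly point the right image
# at the simulator's right camera topic (illustrative example).
roslaunch robosub vision.launch simulated:=true rightImage:=/camera/right
```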
If you would like to see the images that the system is using, you can run the following command:

  rosparam set /[vision node name]/processing/doImShow true

You will then see many windows open, each with a unique image.
+ | |||
+ | ===== UML ===== | ||
+ | |||
+ | {{ :cs:vision:visionuml.png?direct&1000 | Vision UML}} | ||
+ | |||
+ | {{ :cs:vision:visionuml.pdf | Vision UML }} |