===== Core Functionality =====
  
  * Vision on the sub is performed by multiple vision nodes. Each node currently contains a number of different processors, explained later, which each perform a task on the latest images captured from the [[cs:cameras:start|cameras]]. These images are synchronized with each other in time.
  * One vision node is spawned for each task that needs to be performed, using the [[https://github.com/PalouseRobosub/robosub/blob/master/launch/vision.launch|vision.launch]] file. For more information on using this launch file, see [[cs:vision:start#Running the Vision System in ROS|Running the Vision System in ROS]] below.
  * The processors currently used by the vision nodes are as follows and are described below:
    * [[cs:vision:start#ColorProcessor|ColorProcessor]]
    * [[cs:vision:start#StereoProcessor|StereoProcessor]]
    * [[cs:vision:start#FeatureProcessor|FeatureProcessor]]
  
===== Processors =====

----

==== ColorProcessor ====
  
=== Purpose ===
Performs color masking according to the parameter file for the node.
  
=== Structure ===
The ColorProcessor contains a FilterSet. The FilterSet sequentially applies each Filter it contains; each Filter is a single OpenCV operation on an image.
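
The sketch below illustrates this structure; it is a minimal, hypothetical rendering of the idea (the class and method names are assumptions, not the actual robosub API), so refer to the repository source for the real implementation.

  #include <opencv2/opencv.hpp>
  #include <memory>
  #include <vector>
  
  // A Filter wraps a single OpenCV operation on an image.
  class Filter
  {
  public:
      virtual ~Filter() = default;
      virtual cv::Mat apply(const cv::Mat &input) = 0;
  };
  
  // Example filter: color masking with cv::inRange.
  class InRangeFilter : public Filter
  {
  public:
      InRangeFilter(cv::Scalar low, cv::Scalar high) : low_(low), high_(high) {}
      cv::Mat apply(const cv::Mat &input) override
      {
          cv::Mat mask;
          cv::inRange(input, low_, high_, mask);
          return mask;
      }
  private:
      cv::Scalar low_, high_;
  };
  
  // A FilterSet applies each of its Filters in sequence, feeding the
  // output of one filter into the next.
  class FilterSet
  {
  public:
      void add(std::unique_ptr<Filter> filter)
      {
          filters_.push_back(std::move(filter));
      }
      cv::Mat apply(cv::Mat image) const
      {
          for (const auto &filter : filters_)
          {
              image = filter->apply(image);
          }
          return image;
      }
  private:
      std::vector<std::unique_ptr<Filter>> filters_;
  };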

=== Parameters ===
The parameters for the ColorProcessor part of a vision node should be formatted as follows:
  filters:
      - [filter name]: {[filter parameters]}
      - [filter name]: {[filter parameters]}
Where [filter name] is a name known by the ColorProcessor, allowing it to create the proper filter, and [filter parameters] are parsed by the filter itself. An unlimited number of filters is allowed. Each filter in the list operates on the output of the previous filter.
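
For example, a hypothetical parameter block might look like the following; the filter names and parameter values here are made up for illustration, so check the actual parameter files in the robosub repository for valid names:

  filters:
      - medianBlur: {size: 5}
      - inRange: {lower: [0, 120, 120], upper: [20, 255, 255]}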

----

==== StereoProcessor ====

=== Purpose ===
Utilizes the two forward-facing cameras to determine the distance of an object from the cameras. This requires the cameras to be [[cs:camera_calibration:start|calibrated]].
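
The usual technique is to block-match a rectified stereo pair into a disparity map, from which depth can be computed. The snippet below is a generic OpenCV illustration of that idea only, not the robosub implementation (which is still under construction):

  #include <opencv2/opencv.hpp>
  
  // Generic stereo-distance illustration (not the robosub code).
  // Assumes left and right are rectified grayscale images from the
  // calibrated forward-facing cameras.
  cv::Mat disparityFrom(const cv::Mat &left, const cv::Mat &right)
  {
      // 64 disparity levels and a 15x15 matching block; example values.
      cv::Ptr<cv::StereoBM> matcher = cv::StereoBM::create(64, 15);
      cv::Mat disparity;
      matcher->compute(left, right, disparity);
      // Depth at a pixel is (focal length * baseline) / disparity.
      return disparity;
  }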

=== Structure ===
This processor is currently under construction and is not finalized. This section will be changed in the future.

=== Parameters ===
The StereoProcessor currently does not use any parameters. This may change in the future, however.

----

==== FeatureProcessor ====

=== Purpose ===
More accurately identifies contours in the image as being a particular object.

=== Structure ===
The FeatureProcessor contains a single ObstacleDetector, an abstract class that the detectors for each obstacle in the course inherit from. Each detector must also contain a static init function which takes an ObstacleDetector pointer and initializes it to a new instance of that detector.
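
A minimal sketch of this pattern is shown below; the detector name and function signatures are assumptions for illustration, so refer to the robosub source for the real interfaces.

  #include <opencv2/opencv.hpp>
  
  // Abstract base class that each obstacle's detector inherits from.
  class ObstacleDetector
  {
  public:
      virtual ~ObstacleDetector() = default;
      virtual void process(const cv::Mat &image) = 0;
  };
  
  // Hypothetical detector for one obstacle in the course.
  class BuoyDetector : public ObstacleDetector
  {
  public:
      // Static init function: points the given ObstacleDetector
      // pointer at a new instance of this detector.
      static void init(ObstacleDetector *&detector)
      {
          detector = new BuoyDetector();
      }
      void process(const cv::Mat &image) override
      {
          // Identify contours belonging to the buoy here.
      }
  };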

=== Parameters ===
The parameters used by the FeatureProcessor of each vision node are formatted as follows:
  features:
      detector: [detector name]
      params: {[detector parameters]}
There should be only one of these blocks in each parameter file, where [detector name] is the string mapped to the init function for the desired detector and [detector parameters] is the set of parameters required by the detector.
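
For example, a hypothetical features block might look like this (the detector name and parameters are made up for illustration):

  features:
      detector: buoy
      params: {minContourArea: 100}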

----
  
===== Cameras =====
  
See the [[cs:cameras:start|Cameras]] page for more information on the details and use of the cameras.


===== Running the Vision System in ROS =====

After making sure that the [[cs:cameras:start|cameras]] are running (if needed), use the following command to start the vision system:

  roslaunch robosub vision.launch

To remap the left and right camera topics, append
  leftImage:=[newTopic]
and/or
  rightImage:=[newTopic]
(a bottom camera remap has yet to be implemented). The topics for the simulator are
  /camera/(left|right|bottom)
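
For example, to run the vision system against the simulator's left and right camera topics (using the simulator topic names above), one could run:

  roslaunch robosub vision.launch leftImage:=/camera/left rightImage:=/camera/right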

To use simulator color parameters and listen to simulator topics by default (these can still be overridden using the leftImage and rightImage remaps above), append
  simulated:=true
(This feature could change in the near future.)

After this, you will see the launch file spin up multiple nodes. The vision system is running!

If you would like to see the images that the system is using, you can run the following command:
  rosparam set /[vision node name]/processing/doImShow true
You will then see many windows open, each with a unique image.

===== UML =====
  
 +{{ :​cs:​vision:​visionuml.png?​direct&​1000 | Vision UML}}
  
 +{{ :​cs:​vision:​visionuml.pdf | Vision UML }}