The streaming fix below is derived from the Point Grey "Linux Streaming Fix" knowledge base article.
CAUSE: When streaming images from a GigE Vision camera on Ubuntu 8.04 Linux systems, a high number of lost data packets may be observed. In FlyCapture SDK applications, dropped packets result in IMAGE_CONSISTENCY_ERRORS being returned.
ANSWER:
To fix, try one or both of the following:
1. Increase the packet delay time using the FlyCapture2 API or the FlyCap2 program.
2. Increase the amount of memory Linux uses for receive buffers via the sysctl interface. The system standard (default) and maximum values for this buffer default to 128 KB and 120 KB respectively; increasing both of these parameters to 1 MB significantly improves image streaming results. Note: on some ARM boards, you may need to increase the receive buffer size beyond 1 MB before streaming improves. A larger buffer improves receive performance, but it also uses more memory.
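Before making any changes, you can check the current receive buffer values:
sysctl net.core.rmem_default net.core.rmem_max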
The following sysctl command updates the receive buffer memory settings:
sudo sysctl -w net.core.rmem_max=1048576 net.core.rmem_default=1048576
Note: In order for these changes to persist after system reboots, the following lines must be manually added to the bottom of the /etc/sysctl.conf file:
net.core.rmem_max=1048576
net.core.rmem_default=1048576
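If you would rather append these lines from the shell, one way to do it (assuming a bash shell and sudo access) is:
printf 'net.core.rmem_max=1048576\nnet.core.rmem_default=1048576\n' | sudo tee -a /etc/sysctl.conf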
Once the changes are persisted, they can be reloaded at any time by running the following sysctl command:
sudo sysctl -p
Point Grey supplies drivers for our GigE cameras. You will need to have them installed:
sudo aptitude install ros-indigo-pointgrey-camera-driver
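To confirm that ROS can find the installed package, you can run:
rospack find pointgrey_camera_driver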
If you would like to simply enumerate the cameras connected to the network, run:
rosrun pointgrey_camera_driver list_cameras
Sometimes the cameras are a little fiddly, and you may need to cycle power to them before they show up in this list. After you have confirmed that all of the cameras you would like to use are connected, you can manually begin publishing images from each camera:
roslaunch pointgrey_camera_driver camera.launch
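With more than one camera connected, you will usually want to tell the launch file which camera to open. In the driver versions we have used, camera.launch accepts a serial-number argument (named camera_serial here; check the launch file if yours differs):
roslaunch pointgrey_camera_driver camera.launch camera_serial:=12345678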
Launching each camera individually can become a little tedious, so the current Robosub repository contains a launch file of its own. It can be used as follows:
roslaunch robosub cameras.launch
The cameras should begin publishing on the /camera/[left|right|bottom]/image topics. If you wish to change the serial number used for the left or right camera, simply override it when launching:
roslaunch robosub cameras.launch left_serial:=12345678 right_serial:=12345679
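To confirm that a camera is actually publishing, you can check the frame rate on its image topic (using the left camera as an example):
rostopic hz /camera/left/image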
A downward-facing camera is also planned for the future; the steps to add it will be largely the same.
One more note: when using the Point Grey drivers, the cameras publish a WFOVImage message, so many standard ROS nodes may not be able to use the data without it being republished. A republisher has been implemented in the Robosub repository and can be run with the following command:
rosrun robosub camera_repub
This republishes the WFOVImage messages as sensor_msgs/Image messages on the /camera/[left|right|bottom]/undistorted topics. These topic names match the output of the undistortion nodes, so downstream nodes are agnostic as to whether or not undistortion is being performed.
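To verify that the republisher is working, you can check that the republished topic carries sensor_msgs/Image messages (again using the left camera as an example):
rostopic info /camera/left/undistorted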