Visual Pose Estimation
A system for global localization of the rover from stereo camera or LIDAR data and overhead images is being developed and will be field-tested for accuracy. Several milestones toward this goal have been reached.
As shown in the video, the P3 rover is now powered up and controllable via a cell phone. This gives the team an initial platform for testing the visual pose estimation system.
The stereo cameras (Prosilica GC1290C) are powered up and can be accessed both through the AVT PvAPI SDK and through the Prosilica driver in ROS. Some images from the stereo cameras are given below.
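As a rough sketch of how the ROS side of this access works, the node below subscribes to one of the camera image streams and converts each frame to an OpenCV image via cv_bridge. The topic name is an assumption; the actual name depends on how the Prosilica driver is namespaced.

```cpp
// Minimal sketch: view frames from one stereo camera over ROS.
// The topic "/stereo/left/image_raw" is a placeholder, not the
// project's actual configuration.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Convert the ROS image message to an OpenCV BGR image (no copy if
  // the encoding already matches).
  cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg, "bgr8");
  cv::imshow("left", cv_ptr->image);
  cv::waitKey(1);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_viewer");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub =
      it.subscribe("/stereo/left/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}
```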
Panoramas were generated from eight slightly overlapping images from the stereo cameras using OpenCV's Stitcher class. A panorama of the Gates-Hillman Highbay is given below.
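A minimal sketch of the stitching step is given below, using the OpenCV 2.4-era Stitcher API that the team's work is based on; the input file names are placeholders.

```cpp
// Sketch: stitch eight overlapping frames into a panorama with
// cv::Stitcher. File names are placeholders, not the project's data.
#include <cstdio>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/stitching/stitcher.hpp>

int main()
{
  std::vector<cv::Mat> images;
  for (int i = 0; i < 8; ++i)
  {
    char name[32];
    std::sprintf(name, "pano_%d.png", i);  // placeholder file names
    images.push_back(cv::imread(name));
  }

  cv::Mat panorama;
  cv::Stitcher stitcher = cv::Stitcher::createDefault(/*try_use_gpu=*/false);
  cv::Stitcher::Status status = stitcher.stitch(images, panorama);
  if (status != cv::Stitcher::OK)
    return 1;  // stitching failed, e.g. too little overlap between frames

  cv::imwrite("highbay_panorama.png", panorama);
  return 0;
}
```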
Next steps for the team include generating 3D point clouds from the disparity images and completing the translation of the localization simulation code from MATLAB to a robust C++ implementation.
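As a sketch of the first of these steps, assuming rectified stereo imagery, a disparity image from a block matcher such as cv::StereoBM, and the 4x4 reprojection matrix Q produced by cv::stereoRectify, OpenCV's reprojectImageTo3D converts disparities into per-pixel 3D points:

```cpp
// Sketch of the planned disparity-to-point-cloud step. The disparity
// image and Q matrix are assumed inputs from stereo rectification and
// block matching, not artifacts from the project itself.
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

cv::Mat disparityToPointCloud(const cv::Mat& disparity, const cv::Mat& Q)
{
  cv::Mat xyz;  // CV_32FC3: one (X, Y, Z) point per pixel
  // handleMissingValues=true pushes pixels with unknown disparity out
  // to a very large Z so they are easy to filter out afterwards.
  cv::reprojectImageTo3D(disparity, xyz, Q, /*handleMissingValues=*/true);
  return xyz;
}
```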