On March 30th, 2013 we conducted our third skylight rigging field test. The goals of this test were:
A) To determine how much payload our rocket could carry 100 meters, and whether this agreed with our simulation predictions; and
B) To determine the effectiveness of our anchor in the Robot City environment.
Following several field tests (Hummer, Zipline, and static hazard detection), the next stage of this project was field testing in flight. After weighing several options, we decided to drive out to Virginia Tech and use their RC helicopter to fly our sensor package. However, because of the new mounting frame and new cameras, we needed to run several system integration tests to make sure we could collect accurate data.
The first main task the software team had to tackle was finding a good image fusion method. We explored four different methods of image fusion.
1. Image Analysis
1.1 High Dynamic Range (HDR)
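The post does not detail how the HDR fusion was implemented, but the common approach is exposure fusion: blend several differently exposed frames of the same scene, weighting each pixel by how well exposed it is. A minimal sketch of that idea, assuming small grayscale frames with values in [0, 1] (the frame data and function names here are hypothetical, not our actual pipeline):

```python
# Sketch of exposure-fusion-style HDR merging (Mertens-style weighting).
# Hypothetical standalone example, not the project's actual code.
import math

def well_exposedness(p, sigma=0.2):
    """Weight pixels near mid-gray (0.5) highest, via a Gaussian falloff."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    """Fuse same-size grayscale frames into one image by a per-pixel
    weighted average, favoring whichever exposure is best at each pixel."""
    rows, cols = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            weights = [well_exposedness(f[r][c]) for f in frames]
            total = sum(weights) or 1.0
            fused[r][c] = sum(w * f[r][c]
                              for w, f in zip(weights, frames)) / total
    return fused

# Two toy "exposures" of the same 2x2 scene: one under-, one over-exposed.
under = [[0.05, 0.10], [0.20, 0.40]]
over = [[0.60, 0.90], [0.95, 0.99]]
result = fuse_exposures([under, over])
```

In each fused pixel the better-exposed frame dominates: the dark corner comes mostly from the over-exposed frame, while the blown-out corner comes mostly from the under-exposed one. Real pipelines (e.g. OpenCV's MergeMertens) add contrast and saturation weights and blend across a Laplacian pyramid to avoid seams.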
While the future goal of our project is to develop a system that allows a robot to traverse a line across a lunar skylight and lower itself safely to the bottom, we are currently designing a system capable of 3D mapping of the skylight cave walls. Quarry tests are expected to take place February 23rd. We are working to accomplish these objectives by:
Since our last blog, we have made progress on many fronts. This is the first in a three-week blog series in which we will update you on our advances in the Mechanics, Electronics, and Software approaches for the Firefleye project.
Our launcher went through quite a few iterations, starting with a flywheel launcher:
Image: Flywheel Launcher