Sunday, March 1, 2015

Sense and Avoid

Introduction

    The most reliable sensor presently developed to meet the Federal Aviation Administration's sense and avoid requirement for Unmanned Aircraft Systems (UAS) is radar. Radar systems are ideal in situations where normal optical vision is occluded, such as inclement weather (Barnhart, Shappee, & Marshall, 2011). Radar also offers advantages at night, when there is less visible light to illuminate hazards. A sense and avoid system is necessary in the airspace below 18,000 feet MSL; however, the majority of aircraft that operate under see and avoid fly below 10,000 feet. The majority of midair collisions occur within 3 miles of an airport, with 50% occurring below 1,000 feet (Narinder and Wiegmann, 2001).

Capability

Haze is a common visibility restriction encountered by aircraft operating below 10,000 feet. Haze can significantly reduce visible light and obstruct hazards from timely optical acquisition. Radar, however, can penetrate hazy environments, giving radar-equipped sense and avoid aircraft a hazard-acquisition advantage over unaided aircraft that must visually acquire flight hazards.

Radar can also look through dust. Radar sensors offer transmission through dust, snow, fog, and spindrift (Pagels, Hagelen, Briese, & Tessmann, 2009). Hovering aircraft such as helicopters and tiltrotors typically brown out when transitioning to an in-ground-effect hover in dry, unimproved desert environments. These brownouts can occur at altitudes as high as 125 feet during the final approach to landing. Hazards quickly become occluded by the brownout, and pilots must rely on drift indicators, groundspeed, radar altitude, experience, and estimation to avoid obstacles once inside the dust cloud. Radar, however, can penetrate the dirt and dust encountered in these environments. Radar could provide a look-through capability, making obstacle avoidance more of a science and less of a calculated risk.


Capabilities

The limiting factor with radar in the past has been its size. Radar systems are typically too heavy for practical use on small and medium sized UAS. However, Northrop Grumman has developed a small, lightweight radar, the AN/ZPY-1 STARLite Small Tactical Radar – Lightweight, which at 39-65 lb is light enough to be carried on small and medium UAS (AN/ZPY-1 STARLite Small Tactical Radar – Lightweight, n.d.). STARLite is designed as a tactical sensor for Intelligence, Surveillance and Reconnaissance. Although STARLite was not designed specifically as a sense and avoid system, its small size and Synthetic Aperture Radar (SAR) capability make it an ideal candidate for sense and avoid integration.

The radar is mounted on a rotating mechanical gimbal with a 360-degree field of regard; the antenna itself has a 110-degree field of view. In addition to SAR, it has a dismount moving target indicator mode that can track a person walking on the ground from a range of 4.3 nm. STARLite occupies 1.2 cubic feet and requires less than 750 W of power (AN/ZPY-1 STARLite Small Tactical Radar – Lightweight, n.d.).
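To put that detection range in perspective, the short sketch below converts range into warning time at a few assumed head-on closure speeds. Only the 4.3 nm figure comes from the paragraph above; the closure speeds are illustrative assumptions, not STARLite specifications.

```python
# Rough, illustrative arithmetic only: how much warning time a given detection
# range provides at a given head-on closure speed. The 4.3 nm figure is the
# published dismount-tracking range quoted above; the closure speeds below are
# assumptions for illustration.
def warning_time_s(detection_range_nm: float, closure_speed_kts: float) -> float:
    """Seconds between first detection and a potential collision."""
    return detection_range_nm / closure_speed_kts * 3600.0

for closure_kts in (120, 250, 400):          # assumed head-on closure speeds in knots
    print(f"{closure_kts} kt closure: {warning_time_s(4.3, closure_kts):.0f} s of warning")
```

Even at a 400-knot closure rate this kind of range would, in principle, leave on the order of half a minute to maneuver, which is the sort of margin a sense and avoid system needs.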

Conclusion

    Although STARLite was designed as an Intelligence, Surveillance and Reconnaissance sensor, it has the potential to meet the sense and avoid capability requirement for unmanned systems. STARLite can provide look-through hazard identification when environmental factors would limit optical acquisition of those hazards. This capability does not come cheap: STARLite costs about $400,000 per unit (ER/MP Gray Eagle: Enhanced MQ-1C Predators for the Army, 2014). Because the synthetic aperture radar is carried onboard the UAS, it would enable the aircraft to maintain a sense and avoid capability even if the link to a ground control station or similar system is lost. The fact that a UAS would still be able to sense and avoid independently of external systems makes STARLite a viable present-day solution to the sense and avoid requirement.


 

References

AN/ZPY-1 STARLite Small Tactical Radar – Lightweight. (n.d.). Retrieved from http://www.northropgrumman.com/Capabilities/starlite/Pages/default.aspx

Barnhart, R. K., Shappee, E., & Marshall, D. M. (2011). Introduction to Unmanned Aircraft Systems. London, GBR: CRC Press. Retrieved from http://www.ebrary.com

ER/MP Gray Eagle: Enhanced MQ-1C Predators for the Army. (2014). Defense Industry Daily.

Narinder, T., & Wiegmann, D. (2001). Analysis of mid-air collisions in civil aviation. Proceedings of the 45th Annual Meeting of the Human Factors and Ergonomics Society.

Pagels, A., Hagelen, M., Briese, G., & Tessmann, A. (2009). Helicopter assisted landing system – millimeter-wave against brown-out. Paper presented at the German Microwave Conference (GeMiC), 1-3. doi:10.1109/GEMIC.2009.4815878

Sunday, February 22, 2015

The Inner Voice of Unmanned System Control

Introduction
SPEAR, an innovative technology from Think-A-Move (TAM), consists of an in-ear microphone that captures speech inside the ear canal, which eliminates most background noise from the voice commands it captures. SPEAR is designed to work alongside a joystick-operated control unit and a portable laptop, and it is currently undergoing field testing.
    Traditionally, Unmanned Ground Vehicles (UGV) are tele-operated through a joystick-based Operator Control Unit (OCU), which tends to be big and bulky, requiring the operator to be confined to a military vehicle or some other stationary position. Moreover, joystick control of the robot requires the operator to employ both of his hands, leaving him defenseless in case of an attack (Chhatpar, Blanco, Czerniak, Hoffman, Juneja, Pruthi, Liu, Karlsen, & Brown, 2009). Traditional ground station control systems require the operator to be secured when operating in the hostile environments typically encountered in combat scenarios. Furthermore, the operator's mobility and situational awareness are degraded, and in combat that degradation can be fatal. Augmenting traditional joystick-actuated control units with SPEAR increases the operator's mobility and situational awareness.

 

How it works
    The system design provides 30 dB of passive noise reduction, enabling it to work well in high-noise environments, where the performance of traditional speech systems, using external microphones, degrades significantly; it also utilizes a proprietary speech recognition engine to process the detected signal and additional algorithms enabling it to maintain a high level of accuracy in high noise environments… The performance of traditional speech recognition systems, which utilize an external microphone, degrades significantly when ambient noise levels increase. This is because of the decrease in signal-to-noise ratio which results from an increase in ambient noise levels. Given the loud noise levels typically experienced in a military operational environment, this significantly limits the practical application of traditional speech recognition systems (Chhatpar et al., 2009).
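To make the signal-to-noise point concrete, here is a small illustrative calculation. The power values are invented for the example and are not SPEAR measurements; only the 30 dB passive noise reduction figure comes from the passage above.

```python
import math

# Illustrative only: how a 30 dB reduction in ambient noise changes the
# signal-to-noise ratio (SNR). The power values below are made-up examples.
def snr_db(signal_power_w: float, noise_power_w: float) -> float:
    """SNR in decibels from signal and noise power."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

speech_power = 1e-6          # assumed speech signal power at the microphone (W)
ambient_noise = 1e-5         # assumed ambient noise power at an open microphone (W)

open_mic_snr = snr_db(speech_power, ambient_noise)           # about -10 dB
attenuated_noise = ambient_noise / (10 ** (30 / 10))          # 30 dB passive reduction
in_ear_snr = snr_db(speech_power, attenuated_noise)           # about +20 dB

print(f"open microphone SNR: {open_mic_snr:.1f} dB")
print(f"in-ear microphone SNR: {in_ear_snr:.1f} dB")
```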
    Once SPEAR receives voice commands, those commands are processed on a portable notebook that the operator can carry in a backpack. A wired connection carries the signal from the TAM earpiece and ends in a standard 3.5 mm audio jack that can be plugged into a computer soundcard (Brown, Blanco, Czerniak, Hoffman, Hoffman, Juneja, Ngia, Pruthi, & Liu, 2010). The commands are then processed and relayed to the UGV, which can execute voice commands from SPEAR and commands from the traditional joystick-based Operator Control Unit at the same time.

 

Challenges
    Research conducted in 2010 demonstrated how important it is to tailor speech commands to the target audience. Before training the Soldiers in the experiment, less than 10% of the commands the Soldiers thought should be used were the commands actually programmed into the speech-control system. Even after training and using many of the commands during a simulation task, only 34% of the Soldiers remembered the commands that the system designers had programmed. Commands that were initially intuitive ("Take picture" and "Label alpha") were correctly used by 72% and 83%, respectively, of the Soldiers after training. Conversely, less intuitive phrases such as "Activate exclusion zone" were not remembered by any of the Soldiers, even after training (Redden, Carstens, & Pettitt, 2010).
    Another area of concern was differentiating between casual speech and speech commands. Many commands, like "take picture," could be used in conversation with other soldiers, and having the UGV execute unintended commands during missions could be inconvenient or detrimental depending on the situation. For instance, mistakenly giving the command to "cut the wire" while explaining the details of the mission to a superior could have dire consequences during an explosive ordnance disposal operation.

 

Recommended Improvements
    Improvements to the command database are already in progress. Other recommended improvements are the use of a "begin command" key word and an "end command" key word. An uncommon word that would not occur in everyday speech would be most suitable, so that commands are not activated unintentionally. The SPEAR software could also ask for confirmation before executing sensitive commands, as in the "cut the wire" example above.
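A minimal sketch of those two ideas follows, assuming hypothetical key words and a hypothetical command list; it is not TAM's software, only an illustration of how a begin/end key word and a confirmation step could keep casual speech from triggering the UGV.

```python
# Hypothetical command gate: a "begin" key word opens a command window, an
# "end" key word closes it, and sensitive commands require confirmation.
BEGIN_WORD = "execute"        # assumed uncommon word unlikely to occur in casual speech
END_WORD = "over"
SENSITIVE = {"cut the wire"}  # assumed list of commands needing confirmation

class CommandGate:
    def __init__(self):
        self.listening = False
        self.pending = None      # sensitive command awaiting confirmation

    def handle(self, phrase: str) -> str:
        phrase = phrase.lower().strip()
        if self.pending:
            result = f"EXECUTE: {self.pending}" if phrase == "confirm" else "CANCELLED"
            self.pending = None
            return result
        if phrase == BEGIN_WORD:
            self.listening = True
            return "READY"
        if not self.listening:
            return "IGNORED (casual speech)"
        if phrase == END_WORD:
            self.listening = False
            return "CLOSED"
        if phrase in SENSITIVE:
            self.pending = phrase
            return f"CONFIRM '{phrase}'?"
        return f"EXECUTE: {phrase}"

gate = CommandGate()
for spoken in ["take picture", "execute", "take picture", "cut the wire", "confirm", "over"]:
    print(spoken, "->", gate.handle(spoken))
```

In this toy run, "take picture" spoken before the begin word is ignored, the same phrase inside the command window is executed, and "cut the wire" is held until the operator confirms it.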
    The inclusion of additional user interfaces such as a Tactile Situational Awareness System, head tracking, gesture recognition sensors, and see-through or augmented reality glasses like Google Glass or Microsoft HoloLens would free up a soldier's hands for personal defense and increase the soldier's situational awareness. The soldier would be able to see through his own eyes and through the camera of the UGV by using a see-through display. Object recognition software could highlight and/or magnify objects of concern, seen by either the UGV or the soldier, and display them on the glasses. For instance, a database of objects commonly used to hide Improvised Explosive Devices (IED) could scan the background and identify likely threats at the discretion of the operator. The operator could command the UGV camera to track his head movements during intricate operations like defusing an IED. Tactile feedback could alert the operator when the UGV is approaching obstacles or dangerous rollover angles. Gesture recognition would allow the operator to use hand signals to control the UGV when the tactical situation dictates silence. Each of these technologies is still in its infancy, and complementary integration strategies for each control mode should be considered as they continue to develop.


    
References
Brown, J., Blanco, C., Czerniak, J., Hoffman, B., Hoffman, O., Juneja, A., Ngia, L., Pruthi, T., & Liu, D. (2010). Soldier experiments and assessments using SPEAR speech control system for UGVs. Proceedings of SPIE, 7664(1). doi:10.1117/12.852507

Chhatpar, S. R., Blanco, C., Czerniak, J., Hoffman, O., Juneja, A., Pruthi, T., Liu, D., Karlsen, R., & Brown, J. (2009). Field experiments using SPEAR: A speech control system for UGVs. Proceedings of SPIE, 7332(1). doi:10.1117/12.818854

Glass. Retrieved February 22, 2015 from http://www.google.com/glass/start/

Microsoft HoloLens. Retrieved February 22, 2015 from http://www.microsoft.com/microsoft-hololens/en-us

Redden, E. S., Carstens, C. B., & Pettitt, R. A. (2010). Intuitive speech-based robotic control. Aberdeen Proving Ground, MD: Army Research Laboratory, Human Research and Engineering Directorate.

Sunday, February 8, 2015

NASA’s Global Hawk Pacific Data Delivery

NASA's Global Hawk Pacific (GloPac) mission was the first mission to use the Global Hawk aircraft to study trace gases, aerosols, and the dynamics of the upper troposphere and lower stratosphere (Dunbar), using ten different exteroceptive instrument packages.

Instrument Packages
Airborne Compact Atmospheric Mapper – Two spectrometers to measure how sunlight is absorbed and scattered throughout the atmosphere, and two high-definition cameras to identify cloud types and features on Earth's surface.

High Definition Video System – Forward-looking, time-lapse imagery to identify cloud types and provide situational awareness for the plane, allowing the mission team to change altitude and course to investigate interesting atmospheric phenomena.

Microwave Temperature Profiler – This radiometer detects naturally occurring emissions of microwaves from oxygen molecules in the atmosphere, a measurement that is translated into a picture of the temperature field above, at, and below the flight path of the plane.

Focused Cavity Aerosol Spectrometer and Nuclei-Mode Aerosol Size Spectrometer – These spectrometers measure the size distribution and abundance of aerosols. Aerosols play an important but incompletely understood role in climate change and atmospheric dynamics.

Ultra-High Sensitivity Aerosol Spectrometer – This spectrometer looks down a column of air from the plane to Earth's surface and measures the properties of light in the atmosphere to determine the concentration and size of aerosol particles.

Unmanned Aerial System Hygrometer – This advanced form of hygrometer uses a continuous beam of laser light and two mirrors to sense the amount of water present in the air. Water vapor is a potent greenhouse gas.

UAS Ozone – NOAA Unmanned Aerial System Ozone Instrument – This instrument directly samples ozone in the atmosphere, taking air from outside the aircraft and passing it between a lamp that emits ultraviolet (UV) radiation and a UV detector.

Unmanned Aerial System Chromatograph for Atmospheric Trace Species – This instrument collects air samples and uses two chromatographs to separate out different molecules and detect the presence and amount of greenhouse gases, ozone-depleting gases, carbon monoxide, and hydrogen.

Cloud Physics Lidar – This instrument pulses laser light into the atmosphere and observes the reflections – a process known as light detection and ranging, or LIDAR – to reveal the structure and brightness of clouds and aerosols.

Meteorological Measurement System – This package of instruments measures atmospheric temperature, pressure, air turbulence, and the direction and speed of the winds (both horizontally and vertically) immediately around the plane (Dunbar).

Global Hawk's Flight Characteristics
Global Hawk's maximum endurance of 42 hours, on-station endurance of 24 hours, range of 3,000 NM, and maximum altitude of 65,000 feet, which is above significant weather occurrences (Ivancic and Sullivan), enable it to monitor the development of weather patterns over the Pacific Ocean. Electrical sensors are powered by a 28 VDC, 186.3 A (5.2 kW) engine-driven generator and a 115 VAC, 3-phase, 400 Hz, 71.8 A/phase (8.3 kVA) hydraulically powered generator (NASA Armstrong…). The electricity created by these two generators powers all of the electrical sensors on the Global Hawk.

Data Delivery
The GloPac mission used a 2 Mbps bidirectional Ku-band satellite link. The system was capable of 50 Mbps, but the cost to operate at such rates was prohibitive (Ivancic and Sullivan). The Global Hawk used a method called store and forward to deliver data to ground stations. The aircraft network uses a standard Ethernet TCP/IP LAN with airworthy switches using 10/100T ports. The Link Module system acts as a router between the aircraft and Global Hawk Operations Center networks; it also acts as an on-board file server and database as well as a wide-band router (Sorenson). The data collected from the instrument packages is stored internally within the aircraft and transferred to a ground station when Ku-band signal coverage supports the transfer. This way scientists can evaluate the data as close to real time as possible, and the data remains backed up onboard in case of signal failure or corruption.
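A simplified sketch of that store-and-forward pattern might look like the following. The file layout, function names, and link check are assumptions for illustration, not the Link Module's actual interface.

```python
# Store-and-forward sketch: sensor records are written to onboard storage first,
# then forwarded whenever the satellite link is up; nothing is deleted until the
# ground station acknowledges it. All names and paths here are illustrative.
import json
import os
import time

STORE = "payload_store"
os.makedirs(STORE, exist_ok=True)

def record(sample: dict) -> str:
    """Persist a sample to onboard storage before any transmission attempt."""
    path = os.path.join(STORE, f"{time.time_ns()}.json")
    with open(path, "w") as f:
        json.dump(sample, f)
    return path

def forward_pending(link_up: bool, send, acknowledge) -> int:
    """Push stored files over the link; delete only after acknowledgement."""
    sent = 0
    if not link_up:
        return sent
    for name in sorted(os.listdir(STORE)):
        path = os.path.join(STORE, name)
        send(path)                    # caller-supplied transmit function
        if acknowledge(path):         # caller-supplied ground acknowledgement check
            os.remove(path)           # safe to free space once a ground copy exists
            sent += 1
    return sent
```

As a rough sense of scale, forwarding 1 GB of stored instrument data over the quoted 2 Mbps link would take about 8 x 10^9 bits / 2 x 10^6 bits per second, or roughly 67 minutes, which is one reason the onboard store must be sized for long gaps in coverage.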

Recommended Improvements
The data relay operation of the GloPac Global Hawk is heavily reliant on satellite coverage to transfer data to ground stations throughout its flight. The Global Hawk actually loses satellite signal coverage around 75 degrees north latitude and during satellite handoffs. Because of the store and forward protocol, the Global Hawk is able to retain data and transfer it once satellite communications are restored. However, if the Global Hawk were lost during this time, the data might not be recoverable.

One possible solution is the use of smaller, high-endurance Unmanned Aerial Systems (UAS) that can act as network nodes with ground stations. These small UASs could also act as data system backups by using the same store and forward protocols that the Global Hawk uses. If one of the UASs is lost, the remaining UASs would still be able to retain and transfer valuable data. Utilizing UAS platforms would decrease or eliminate the reliance on satellite network coverage, thereby decreasing high operational data transfer costs.

References:
Dunbar, B. (2013, August). GloPac 2010. Retrieved February 7, 2015 from http://www.nasa.gov/externalflash/Glopac/

Ivancic, W. D., & Sullivan, D. V. (2011, January). Delivery of Unmanned Aerial Vehicle Data.

Sorenson, C. (2008, November). Global Hawk Payload Network Communications Guide.

NASA Armstrong Fact Sheet: Global Hawk High-altitude, long-endurance science aircraft. (2014, February 28). Retrieved February 7, 2015 from http://www.nasa.gov/centers/armstrong/news/FactSheets/FS-098-DFRC.html


Sunday, February 1, 2015

Unmanned System Sensor Integration and Placement

Sensor placement is a critical design decision that is based on the mission the unmanned system will be tasked to perform.
Blade Nano QX

First person view (FPV) racing is intense. A remotely piloted unmanned aerial system that can zip through obstacles at 100 mph, the Blade Nano QX is small, capable, durable, and affordable.
The heart of the Blade Nano QX is a multisensor 4-in-1 module that receives radio signals from a handheld transmitter. It provides electronic speed control, attitude gyros, and control mixing that together create a stable flying platform for the user. When the user makes a control input, the mixing control references the UAS's current attitude via the internal gyros and then adjusts the motor outputs needed to accomplish the desired input. The mixing control also limits control inputs from the user: in stability flight mode, it limits inputs that would reach pitch attitudes and bank angles that could lead to an unusual attitude and cause the UAS to depart controlled flight. The mixing control can instead be placed into agility mode, which removes these limits so the user can fly aerobatic flight profiles and achieve high-gain control response.
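A rough sketch of that limiting behavior, using assumed angle limits rather than Blade's actual firmware values, could look like this:

```python
# Toy model of the mixer's stability/agility behavior: in stability mode the
# commanded pitch and bank are clamped to an assumed safe envelope, while in
# agility mode the raw stick command passes through. Angles are in degrees;
# the limits are illustrative assumptions, not Blade specifications.
STABILITY_LIMITS = {"pitch": 30.0, "bank": 45.0}   # assumed safe envelope

def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def mix_command(stick_pitch: float, stick_bank: float, agility_mode: bool):
    """Return the pitch/bank targets the mixer would pass along."""
    if agility_mode:
        return stick_pitch, stick_bank            # full authority for aerobatics
    return (clamp(stick_pitch, STABILITY_LIMITS["pitch"]),
            clamp(stick_bank, STABILITY_LIMITS["bank"]))

print(mix_command(60.0, -80.0, agility_mode=False))   # (30.0, -45.0)
print(mix_command(60.0, -80.0, agility_mode=True))    # (60.0, -80.0)
```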
The eyes of the Blade Nano QX are the Spektrum™ ultra micro FPV camera. The FPV camera is the main sensor that enables the user to fly in FPV. This camera is capable of 720p HD video transmitted without lag to the user's headset via the SpiroNET circularly polarized antenna system. The Fat Shark Teleporter V4 5.8 GHz headset with digital head tracking adjusts the camera, via a gimbal, in reference to the user's head movement. The panning of the FPV camera allows the user to scan ahead of the Blade Nano QX for obstacles along the desired flight path. The camera is placed inside the nose of the UAS. The FPV camera's placement along the center chordline of the airframe makes it possible for the user to fly at such a high rate of speed and avoid smashing into obstacles, because a centered camera lets the user estimate obstacle clearances and fly the best path.
The Blade 4-channel 2.4 GHz transmitter enables the user to remotely control the Blade Nano QX. An LED on the transmitter indicates when the signal between the transmitter and the receiver is lost, and the Blade Nano QX also has a low-battery sensor/indicator on the receiver. All of these sensors integrated into the Blade Nano QX enable the user to safely and effectively fly racing circuits at outrageous speeds through obstacles from a first-person perspective. But it's the camera placement that makes it possible.

DJI Phantom 2 Vision+
"The most important thing, of course, if you are flying to shoot, is to see what your composition is," -filmmaker Philip Bloom (Hansen)
When shooting aerial video and stills below 400 feet above ground level (AGL), the most important thing is stability. Not only is the Phantom 2 Vision+ stable, but it is also one of the easiest remotely piloted UASs to fly. The Phantom 2 is so easy to use that everyone, not just the hardcore hobbyist, can use it to capture stunning 1080p video and 14-megapixel still photographs.
The Phantom 2 Vision+ uses an f/2.8 lens paired with a 14-megapixel 1/2.3-inch CMOS sensor that can capture Adobe DNG raw and JPEG images, and video at up to 1080p at 30 fps and 720p at 60 fps. The user can also control ISO, exposure compensation, and white balance, and choose from a 140-, 120-, or 90-degree field of view (Top-notch). Videos and photographs are stored on a 4 GB removable SD card, and the camera is stabilized by a 3-axis gimbal. The placement of the camera underneath the UAS on the gimbal enables the Phantom 2 Vision+ to create professional videos and pictures, and its location prevents the frame of the UAS from interfering with the shot.
The key piece that makes the Vision+ easy to fly for beginners is the built-in DJI Naza-M flight control system. It's made up of an inertial sensor, barometric altimeter, a compass, GPS, LED flight [indicator sensors] and a controller that gets them all to work together (Top-notch). The user can set up to 16 GPS waypoints via the Vision app, and the Phantom 2 Vision+ can be tracked on a smartphone via GPS and a digital map; the map also displays no-fly areas such as airports. If the Phantom 2 Vision+ loses the signal from the handheld transmitter while operating in mission mode with GPS waypoints, it will continue the mission. If the transmitter signal is lost while the Phantom 2 Vision+ is being operated manually, it will trigger 'Return-to-Home', meaning the aircraft will automatically fly back to its takeoff point and land safely (Phantom 2 Vision+).
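That loss-of-signal behavior can be restated as a small sketch; this is a plain-language illustration of the logic described above, not DJI's Naza-M implementation.

```python
# Illustrative failsafe logic: in mission mode a link loss lets the waypoint
# mission continue, while in manual mode it triggers return-to-home.
from enum import Enum

class Mode(Enum):
    MANUAL = 1
    MISSION = 2          # flying uploaded GPS waypoints (up to 16)

def on_link_status(link_ok: bool, mode: Mode) -> str:
    if link_ok:
        return "follow pilot / waypoint commands"
    if mode is Mode.MISSION:
        return "continue waypoint mission"
    return "return to home and land at takeoff point"

print(on_link_status(False, Mode.MANUAL))    # return to home and land at takeoff point
print(on_link_status(False, Mode.MISSION))   # continue waypoint mission
```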
The Phantom 2 Vision+ is not the most expensive system, nor does it have all of the top-of-the-line features on the market. It is, however, an extremely easy-to-use platform, and that usability enables new users to fly and create stunning, high-quality videos and pictures.
References: 
Hansen, E. (2015, January 8). The Best Drones. Retrieved January 31, 2015 from http://thewirecutter.com/reviews/best-drones/

Nano QX FPV RTF with SAFE® Technology. (n.d.) Retrieved January 31, 2015 from http://www.horizonhobby.com/nano-qx-fpv-rtf-with-safe-technology-blh7200 
Nano QX Quad-Copter Manual. (January 7, 2013). Retrieved January 31, 2015 from www.horizonhobby.com/pdf/BLH7600-Manual_EN.pdf

Phantom 2 Vision+. (n.d.). Retrieved January 31, 2015 from http://www.dji.com/product/phantom-2-vision-plus/feature 
Phantom 2 Vision+ User Manual (EN) v1.8. (January 30, 2015). Retrieved January 31, 2015 from http://www.dji.com/product/phantom-2-vision-plus/download

Top-notch eye in the sky. (October 14, 2014). Retrieved January 31, 2015 from http://www.cnet.com/products/dji-phantom-2-vision-plus/
   


Sunday, January 25, 2015

Unmanned Maritime Search and Rescue, ROAZ II


 

Overview


On my first deployment as a United States Marine Corps aviator with the 26th Marine Expeditionary Unit, I was immediately inculcated with respect for the necessity of risk management while operating at sea. Within the first month of deployment, during a long-range airlift operation from ship to shore, I lost an engine on my dual-engine CH-46E helicopter. The engine had failed after takeoff. The first thing I thought of as I looked down at the expanse of ocean between me and salvation back on the boat was, "thank God I can swim." But in hindsight I realize how lucky we were to have been able to limp back to the USS Iwo Jima with one engine not only failed but smoldering. We were barely able to land within inches of the edge of the flight deck, and only by severely overtorquing the transmission of our overencumbered aircraft.
Later in the deployment, my appreciation for my good fortune increased when three sailors fell into the Gulf of Aden while trying to deploy a raft from the deck of the USS San Antonio. They fell a distance of about 15 feet. Two of the sailors were immediately rescued; unfortunately, the third was never recovered, despite the entire episode being watched by several other sailors. After that I had a great respect for the expansiveness of the seas.
 

Search and Rescue (SAR) in the maritime environment involves considerable risk to SAR teams; survival times are short, and minimum response time is crucial for the recovery and survival of victims (see Table 1). Unmanned systems are being developed to augment SAR because they can operate in adverse environments where low visibility and high sea states would otherwise place human rescuers at unnecessary risk. This article analyzes the operation of the ROAZ II, which is being integrated into a search and rescue system developed under the European Union's ICARUS project.
Table 1. Cold water survival times (Cold Water Survival)

Water Temperature      Exhaustion or Unconsciousness In    Expected Survival Time
70–80° F (21–27° C)    3–12 hours                          3 hours – indefinitely
60–70° F (16–21° C)    2–7 hours                           2–40 hours
50–60° F (10–16° C)    1–2 hours                           1–6 hours
40–50° F (4–10° C)     30–60 minutes                       1–3 hours
32.5–40° F (0–4° C)    15–30 minutes                       30–90 minutes
<32° F (<0° C)         Under 15 minutes                    Under 15–45 minutes

ROAZ II

As part of the ICARUS project, particularly in the maritime scenario, multiple heterogeneous unmanned platforms (air and surface) will cooperate in order to detect and assist castaways (Matos et al., 2013). One of those systems is the ROAZ II. ROAZ is a twin-hull autonomous surface vehicle developed for oceanic robotics research and monitoring. It is based on a 4.2 m long, high-density polyethylene catamaran capable of operating in the ocean environment and equipped with electric propulsion (two 2 kW thrusters). With an endurance of 8 to 10 hours, depending on battery configuration and speed, it can reach a maximum velocity of 10 knots (Matos et al., 2013). The ROAZ II was used as the delivery platform for an Unmanned Capsule, which is capable of delivering and deploying a life raft with survival items to victims stranded at sea.

Internal Systems

The internal systems are monitored by the supervising module, which is responsible for monitoring a set of internal values, such as temperature, power consumption, or available energy, and for sending alert messages or triggering emergency behaviors whenever faults are detected. It also monitors the communication link (Matos et al., 2013). This system comprises proprioceptive sensors that are all networked to the supervising module.
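An illustrative health-monitoring loop in the spirit of that supervising module might look like the sketch below; the thresholds, sensor names, and responses are assumptions for illustration, not ROAZ II values.

```python
# Hypothetical supervising loop: compare a few internal readings against limits,
# raise alerts, and watch the age of the last communication-link message.
THRESHOLDS = {"temperature_c": 70.0, "power_draw_w": 4000.0, "battery_pct_min": 20.0}

def supervise(readings: dict, last_comm_age_s: float) -> list:
    alerts = []
    if readings["temperature_c"] > THRESHOLDS["temperature_c"]:
        alerts.append("ALERT: over-temperature, shed non-essential loads")
    if readings["power_draw_w"] > THRESHOLDS["power_draw_w"]:
        alerts.append("ALERT: power draw above thruster budget")
    if readings["battery_pct"] < THRESHOLDS["battery_pct_min"]:
        alerts.append("ALERT: low energy, trigger return behavior")
    if last_comm_age_s > 30.0:
        alerts.append("ALERT: communication link silent, enter safe loiter")
    return alerts

print(supervise({"temperature_c": 45.0, "power_draw_w": 4200.0, "battery_pct": 15.0},
                last_comm_age_s=5.0))
```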

Navigation and Rescue

With the exception of the GPS receiver antenna, autonomous navigation is primarily accomplished through the use of proprioceptive sensors: a GPS unit for absolute positioning (Novatel SmartAntenna, Superstar II) and an IMU coupled with a magnetometer providing orientation, attitude, velocities, and accelerations. The IMU used is a Microstrain 3DM-GX1 module combining three angular rate gyros with three orthogonal accelerometers and three orthogonal magnetometers, and it outputs orientation, angular rate, and acceleration at a rate of more than 50 Hz (Martins et al.).
The ROAZ II also has the capability of using exteroceptive sensors for vision-based target acquisition and tracking. The ROAZ II is equipped with conventional cameras (both for onboard image processing and for video transmission) as well as a thermographic infrared camera capable of resolving temperature differences as small as 0.1ºC. The vision system processes images in real time, with edge detection and object identification, extracting target image characteristics such as position and orientation (Martins et al.). Target position can be calculated in both 3D and 2D coordinate frames, and a four-state Kalman filter is used to estimate target position and velocities from the raw vision data (Martins et al.). From this target estimate the ROAZ II is able to maintain a fixed distance from the target.
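For readers unfamiliar with the technique, a minimal constant-velocity Kalman filter with the four states mentioned above (target x and y position plus x and y velocity) is sketched below. The time step and noise values are assumed; this is a generic textbook filter fed by noisy position fixes, not the ROAZ II code.

```python
import numpy as np

dt = 0.1                                        # assumed vision update period (s)
F = np.array([[1, 0, dt, 0],                    # state transition: x += vx*dt, y += vy*dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                     # the vision system measures position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                            # assumed process noise
R = np.eye(2) * 0.5                             # assumed measurement noise

x = np.zeros(4)                                 # state: [x, y, vx, vy]
P = np.eye(4)                                   # state covariance

def kalman_step(z):
    """One predict/update cycle given a 2-D position measurement z."""
    global x, P
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y                               # update state with measurement
    P = (np.eye(4) - K @ H) @ P
    return x

for z in ([10.0, 5.0], [10.4, 5.2], [10.9, 5.4]):   # made-up noisy target fixes
    print(kalman_step(np.array(z)))
```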
Two other exteroceptive sensors are a side-scan sonar, which provides bottom imaging capabilities, and an external IEEE 802.11 a/b/g Ethernet modem with an external antenna, which allows the ROAZ II to be operated remotely (Martins et al.).
Once the ROAZ is within a nominal distance from its target under the ICARUS system, it is able to deploy a smaller Unmanned Capsule (UCAP), which is responsible for deploying a life raft and survival items within a minimal distance of the stranded victim. The ROAZ is capable of holding multiple UCAPs and aiding several victims.

Unmanned Aerial System (UAS) Integration

The ICARUS project is an emerging endeavor. As further tests are conducted, other systems such as UAS are to be integrated into the unmanned SAR system. UAS can travel faster and cover distance more efficiently than surface and underwater systems. For example, a UAS that operates below 10,000 feet can typically fly between 45 kts and 200 kts depending on aircraft design, compared to a maritime surface system that typically operates between 15 kts and 35 kts, also depending on the system's design. Integrating a UAS into the ICARUS system with the ROAZ and UCAP will add an additional dimension and improve launch-to-rescue times, which are critical for survivability.
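A simple worked example shows why that matters. The 50 nm distance to the casualty below is an invented scenario figure; the speeds are the representative values quoted in the paragraph above.

```python
# Illustrative transit-time comparison for an assumed 50 nm distance to a victim.
distance_nm = 50.0

for label, speed_kts in [("fixed-wing UAS", 150.0), ("ROAZ-class surface vehicle", 25.0)]:
    hours = distance_nm / speed_kts
    print(f"{label}: {hours:.1f} h ({hours * 60:.0f} min) to reach the victim")
```

Measured against the survival times in Table 1, the difference between roughly 20 minutes and 2 hours of transit can be decisive.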

Improvements

Because the ROAZ is a twin-hull design, it would be rendered ineffective if it capsized. A counterbalance system of weights and self-inflating bags, integrated into the platform and designed to pivot it back into an upright position, would allow the ROAZ to continue rescue missions after capsizing.
An additional improvement would be the development and integration of a modular coordination system. This system could be designed so that multiple unmanned systems could take advantage of the full spectrum of space (underwater, surface, and air). Incorporating multiple systems would further improve efficiency and response times, as long as those systems are integrated in a complementary fashion.

Conclusion

Both manned and unmanned systems can be equipped with similar proprioceptive and exteroceptive sensors. However, the real advantage of using unmanned systems to augment manned SAR operations is the reduction of operating cost in both people and materiel. The use of unmanned systems in adverse environments reduces the overall risk to the human rescuer while decreasing time to rescue for the victims. In short, unmanned systems increase survivability.

 

References
Matos, A., Silva, E., Cruz, N., Alves, J. C., Almeida, D., Pinto, M., Martins, A., Almeida, J., & Machado, D. (2013). Development of an Unmanned Capsule for Large-Scale Maritime Search and Rescue. Retrieved on January 25, 2015 from http://oceansys.fe.up.pt/publications/2013_MatosSilvaCruzAlvesAlmeida.pdf

Cold Water Survival. (n.d.). Retrieved on January 25, 2015 from http://www.ussartf.org/cold_water_survival.htm

Martins, A., Ferreira, H., Almeida, C., Silva, H., Almeida, J. M., & Silva, E. (n.d.). ROAZ and ROAZ II Autonomous Surface Vehicle Design and Implementation. Retrieved on January 25, 2015 from www.researchgate.net/.../224982209_ROAZ_and_ROAZ_II_



Sunday, January 18, 2015

Bluefin Inertial Navigation


 
     Inertial Navigation Systems (INS) use the accelerations and rotation rates detected by sensors such as accelerometers and laser gyroscopes, together with algorithmic equations, to calculate their position relative to the Earth's reference frame. The underwater environment increases the complexity of calibrating an INS when compared to systems used in aviation, since aviation INS drift errors are typically corrected by an onboard Global Positioning System (GPS). INS is not dependent on GPS in order to function properly; GPS merely helps eliminate the drift error that is inherent to modern systems. In contrast, INSs that operate in underwater environments are only able to utilize GPS corrections when the vehicle is at the surface. The article Achieving High Navigation Accuracy Using Inertial Navigation Systems in Autonomous Underwater Vehicles, by Robert Panish and Mikell Taylor of the Bluefin Robotics Corporation, demonstrates the calibration methods of two different INS systems, the T-24 Ring Laser Gyro (RLG) and the PHINS III Fiber Optic Gyro (FOG), both of which are used on Bluefin Autonomous Underwater Vehicles. Judged by the evaluation of each calibration method presented in the article, the operational differences between the two systems are minimal.
     The T-24 uses RLGs, which means the beam path is created by a set of mirrors redirecting the laser into a loop (Panish and Taylor), while in the PHINS III the laser beam travels through a long coil of optical fiber to create the beam path (Panish). In both cases, rotation of the vehicle produces a measurable frequency (or phase) difference between beams traveling in opposite directions around the loop, which allows the gyros to measure the system's angular rate, while dedicated accelerometers measure its linear acceleration. When these measurements are referenced against the Earth's rotation and gravity, the INS can determine its orientation and propagate its position over time. Each of these systems pairs the INS with a Doppler Velocity Log (DVL), a depth sensor, and a sound speed sensor (Panish) in order to calculate its position over time and limit drift error, and both systems use GPS corrections during calibration to minimize the initial drift error. However, each system's calibration method is unique.
    During the calibration of the PHINS III INS, inertial sensors and GPS are used to determine its motion, and the DVL velocities are recorded. The difference between the known motion from the INS and GPS and the DVL motion determines its calibration parameters. This is done by sending the vehicle on a 5 km track line with continuous GPS contact and monitoring the convergence of roll, pitch, and heading misalignment angles (Panish). Dissimilarly, during the calibration procedure of the T-24 INS, the vehicle is submerged and follows a box-shaped pattern, surfacing at each of the corners; each side takes no less than fifteen minutes. Using the GPS fixes at each corner, it determines its internal biases, scale factors, and misalignment angles (Panish). Roll and pitch are determined from a simple measurement of the direction of the gravity vector from the accelerometer measurements, while heading is determined by using the time derivative of the gravity vector (which points east) together with the gravity vector itself to calculate north. While the PHINS III took less time to calibrate than the T-24, the PHINS III was more susceptible to sea state and required more monitoring because of the possibility of collision with other surface ships during calibration.
     Although the two systems measure motion differently, both calibration methods brought the INS drift error to less than 0.1% of distance traveled, a performance that exceeds the design specifications for the systems. The minimization of drift error during calibration is important when evaluating these systems, since the vehicle cannot use GPS to correct for drift while it is submerged for long durations and distances. Since the most notable difference from an operational point of view is the calibration method, each of these systems provides exceptional navigation accuracy that can be used to collect high quality oceanographic data (Panish).
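One way to see why drift is quoted as a percentage of distance traveled is to consider a small residual heading misalignment left over after calibration: the cross-track error it produces grows linearly with distance. The misalignment angle below is an assumed value chosen only to illustrate how a sub-0.1% figure comes about.

```python
import math

# Illustrative only: cross-track error caused by an assumed residual heading
# misalignment, expressed as a percentage of distance traveled.
misalignment_deg = 0.05          # assumed residual heading error after calibration

for distance_km in (1, 5, 10, 50):
    cross_track_m = distance_km * 1000 * math.sin(math.radians(misalignment_deg))
    pct = 100 * cross_track_m / (distance_km * 1000)
    print(f"{distance_km:>2} km run: {cross_track_m:6.1f} m error ({pct:.3f}% of distance)")
```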
References

Panish, R., & Taylor, M. (2011). Achieving High Navigation Accuracy Using Inertial Navigation Systems in Autonomous Underwater Vehicles. Retrieved January 18, 2015, from http://www.bluefinrobotics.com/news-and-downloads/papers-and-articles/