Active imaging systems to perform the strategic surveillance of an
aircraft environment in bad weather conditions
Nicolas Riviere*a, Laurent Hespela, Romain Ceolatoa, Florence Droueta
aOnera, The French Aerospace Lab, F31000 Toulouse, France
ABSTRACT
Onera, the French Aerospace Lab, develops and models active imaging systems to understand the relevant physical
phenomena impacting their performance. Consequently, efforts have been made both on the propagation of a pulse
through the atmosphere (scintillation and turbulence effects) and on target geometries and their surface properties
(radiometric and speckle effects). But these imaging systems must operate at night, in all ambient illumination and
weather conditions, in order to perform the strategic surveillance of the environment for various worldwide operations or
the enhanced navigation of an aircraft. Onera has implemented codes for 2D and 3D laser imaging systems.
As we aim to image a scene even in the presence of rain, snow, fog or haze, Onera introduces such meteorological
effects into these numerical models and compares simulated images with measurements provided by commercial imaging
systems.
Keywords: Active imaging, bad weather conditions, surveillance, defense, security, aeronautics
1. INTRODUCTION
1.1 General context
Weather conditions influence the incidence of aircraft (A/C) accidents in a number of ways. As an example, one can
recall 11 April 2011, when an Air France A380, taxiing along the runway of JFK Airport in New York, clipped
the wing of a smaller Comair CRJ jet, sending it into a spin. Air France said 495 passengers and 25 crew members were on
the Airbus A380 bound for Paris, while the Comair regional jet, which had just landed, was carrying 62 passengers and
four crew members. There were no reports of injuries, but both aircraft were grounded pending an investigation.
This accident was mainly due to bad weather conditions (strong rain) and low visibility at night.
Onera identified different sensor technologies to detect, recognize and localize objects in the scene in all
weather conditions (rain, snow, fog, haze, dust wind…) and to provide enhanced vision to the crew. One distinguishes two
kinds of sensors to fulfill this goal: large field-of-view (FOV) sensors (passive visible/IR cameras, radar) and high-spatial-resolution, small-FOV optical laser sensor prototypes.
On the one hand, large-FOV sensor technologies are often chosen to offer the best solution for the pilot to maneuver on
the taxiway. On the other hand, the small-FOV sensor study is dedicated to recognizing and localizing fixed and mobile
objects on the taxiway. Onera will improve the detection of obstacles, the localization of obstacles previously detected and
the recognition of these objects. Bad weather conditions (rain, snow, fog, haze, dust wind…) impact the visibility and
also the contrast of the scene. Several technologies can be considered, such as 2D flash ladar or 3D ladar (laser scanner, new
focal plane arrays…).
1.2 Main objective
Onera develops new laser imaging systems and simulates the images obtained with such systems by modeling all physical
phenomena, including meteorological ones. For small-FOV applications, we distinguish 2D and 3D active imaging
devices, as the physical phenomena can differ in these two cases. In this paper, we first explain the meteorological
events impacting the environment under bad weather conditions. Then, we describe two codes to model 2D and 3D
active imaging systems. First experimental results are presented in section 4: we focus the study on 3D applications.
*riviere@onera.fr; phone 33 562 252 624; fax 33 562 252 588; onera.fr
Electro-Optical Remote Sensing, Photonic Technologies, and Applications V, edited by Gary W. Kamerman,
Ove Steinvall, Keith L. Lewis, Gary J. Bishop, John D. Gonglewski, Proc. of SPIE Vol. 8186
81860H · © 2011 SPIE · CCC code: 0277-786X/11/$18 · doi: 10.1117/12.897994
2. BAD WEATHER CONDITIONS
2.1 Introduction
Current vision systems are designed to perform in clear weather. In any outdoor application, there is no escape from
“bad” weather. Weather conditions differ mainly in the types and sizes of the particles involved and their concentrations
in space. Efforts have gone into measuring particle sizes and concentrations for a variety of conditions. Given the small
size of air molecules relative to the wavelength of visible light, scattering due to air is rather minimal. We will refer to
the event of pure air scattering as a clear day (or night). Larger particles produce a variety of weather conditions which
we will briefly describe below.
The most critical bad weather conditions are linked to disruptive events that reduce visibility and disorient an
A/C on a taxiway. Snow can cause a loss of visibility. Moreover, lights are dimmed, the surroundings are
turned white and, with associated wind, the pilot loses his bearings. Fog with direct sunlight turns the
surroundings white or blurred. Strong rain induces specular reflections and whiteout: this phenomenon is generally limited to less than
30 minutes but induces traffic delays at an airport. Sandstorms and smoke reduce the perception of the surroundings. Rain and
snow can increase stopping distances, which pilots need to anticipate. As a consequence, the pilot workload increases
under such bad weather conditions.
2.2 Meteorological events
Haze consists of aerosol, a dispersed system of small particles suspended in a gas. Haze has a diverse set of
sources including volcanic ash, foliage exudation, combustion products and sea salt. Particles produced by these
sources respond quickly to changes in relative humidity and act as nuclei of small water droplets when the humidity is
high. Haze particles are larger than air molecules but smaller than fog droplets. Haze tends to produce a distinctive gray
or bluish hue and is certain to affect visibility.
Fog evolves when the relative humidity of an air parcel reaches saturation. Then, some of the nuclei grow by
condensation into water droplets. Fog and certain types of haze have similar origins and an increase in humidity is
sufficient to turn haze into fog. This transition is quite gradual and an intermediate state is referred to as mist. While
perceptible haze extends to an altitude of several kilometers, fog is typically just a few hundred feet thick. A practical
distinction between fog and haze lies in the greatly reduced visibility induced by the former. There are many types of fog
(e.g. radiation or advection fog), which differ from each other in their formation processes.
We also consider rain and snow as typical meteorological events. The process by which cloud droplets turn to rain is a
complex one. Rain causes random spatial and temporal variations in passive images and must be dealt with differently
from the more static weather conditions mentioned above. For active images, rain introduces bad pixel returns. Similar
arguments apply to snow, where the flakes are rough and have more complex shapes and optical properties.
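The visibility reductions discussed above can be tied to an extinction coefficient through Koschmieder's relation, a standard empirical rule (not taken from this paper). A minimal sketch, with hypothetical numbers for a dense-fog taxiway scenario:

```python
import math

def extinction_from_visibility(visibility_m: float, contrast_threshold: float = 0.05) -> float:
    """Koschmieder relation: extinction coefficient (1/m) from meteorological visibility."""
    return -math.log(contrast_threshold) / visibility_m

def transmission(visibility_m: float, path_m: float) -> float:
    """Beer-Lambert atmospheric transmission over a horizontal path."""
    return math.exp(-extinction_from_visibility(visibility_m) * path_m)

# Dense fog (100 m visibility) over a 200 m taxiway path: almost no direct signal
print(round(transmission(100.0, 200.0), 4))  # → 0.0025
```

The 5 % contrast threshold is the conventional choice; some references use 2 %, which changes the numerator to about 3.912.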
3. ACTIVE IMAGING MODELS
3.1 2D active imaging models
3D imaging can be done with both scanning and non-scanning systems. “3D gated viewing laser radar” or “2D flash
ladar” is a non-scanning laser radar system that applies the so-called gated viewing technique, combining a pulsed
laser with a fast-gated camera. There are ongoing research programs on 3D gated viewing
imaging at ranges of several kilometers with a range resolution and accuracy better than ten centimeters.
Laser range-gated imaging is also an active night-vision technique which utilizes a high-power pulsed light source for
illumination and imaging. Range gating is a technique which controls the laser pulses in conjunction with the shutter
speed of the camera's detectors. Gated imaging technology can be divided into single-shot, where the detector captures
the image from a single light pulse, and multi-shot, where the detector integrates the light pulses from multiple shots to
form an image. One of the key advantages of this technique is the ability to perform target recognition, as opposed to
the detection offered by thermal imaging.
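The range-gating principle above reduces to simple timing: the gate must open one round-trip time after the pulse leaves, and stay open for the round-trip time across the depth of the range slice. A minimal sketch (the 1 km range and 15 m slice are illustrative values, not GIBI settings):

```python
C = 299_792_458.0  # speed of light (m/s)

def gate_timing(range_m: float, depth_m: float):
    """Return (delay_s, width_s) so the camera gate only captures returns
    from the range slice [range_m, range_m + depth_m]."""
    delay = 2.0 * range_m / C   # round-trip time before the gate opens
    width = 2.0 * depth_m / C   # gate stays open while light crosses the slice
    return delay, width

delay, width = gate_timing(1000.0, 15.0)  # 1 km target, 15 m deep slice
print(f"{delay * 1e6:.2f} us delay, {width * 1e9:.1f} ns gate")
```

Everything closer than the gated range is rejected, which is precisely how backscatter from intervening rain or fog is suppressed.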
Onera has developed an active laser illumination system with a time-gating camera: GIBI (Gated Imager with Burst
Illumination) [01]. It is an active burst illumination imaging system generating short pulses to illuminate a target; it
makes long-range night vision available in complete darkness as well as under degraded weather conditions such as rain,
fog or haze. We model some of the most important physical phenomena which can influence our system: PIAF is the
French acronym for a physical model in active imaging.
3.2 PIAF code: a 2D active imaging tool
The general architecture of Piaf code is structured around several modules in order to estimate separately the influence of
various parameters such as the shape of the laser beam, the turbulence effect and the interaction with a simple target. As shown in
figure 1a, input parameters are extracted from a general database. Data are adapted from measurements or generated by
physical models. Piaf code can be used to understand the influence of each parameter introduced: it is possible to directly
access several parameters during a run. Moreover, the modular concept allows us to modify or adapt the structure of
our code to another experimental device instead of Gibi. We consider a segmented approach where each step is presented
in figure 1b and corresponds to a physical model associated with a separate module in Piaf code [02].
[Figure 1a: block diagram of Piaf — a pre-processing stage (scenario I/O and a database built from measurements, data extracted from papers and data generated by physical models), a kernel of internal and external modules with simplified models, and a post-processing stage delivering pictures and intermediate outputs. Figure 1b: the five-step approach linking target, laser and sensor device — 1. decomposition on a Hermite-Gaussian basis; 2. propagation through the turbulence; 3. interaction between the EM field and the scene; 4. return propagation through the turbulence; 5. detection.]
Figure 1. a) General architecture of Piaf code and b) general approach adopted to study a 2D active imaging system.
The first step is to read a scenario, verify the input parameters and decompose the initial wavefront of the source on a
Hermite-Gaussian basis. This mathematical pre-processing approach decomposes a non-Gaussian transverse laser beam
on a Hermite-Gaussian basis where the modes are mutually independent and incoherent [03] [04] [05]. During the
laser beam propagation, the transverse spatial profile evolves and changes along with the diameter of the beam.
There is a category of laser beams whose transverse spatial profile is preserved during propagation, despite the effects of
diffraction [06] [07] [08]. Indeed, the transverse spatial profile remains unchanged under the Huygens-Fresnel transform for this
kind of beam [01].
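The decomposition step above can be illustrated in one transverse dimension: each coefficient is the overlap integral of the beam profile with an orthonormal Hermite-Gaussian mode. A minimal numerical sketch (the grid, unit waist and flat-top test profile are illustrative assumptions, not Piaf parameters):

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) by recurrence."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg_mode(n, x, w0=1.0):
    """Orthonormal 1D Hermite-Gaussian mode of order n (beam waist w0)."""
    norm = (2.0 / math.pi) ** 0.25 / math.sqrt(2.0 ** n * math.factorial(n) * w0)
    return norm * hermite(n, math.sqrt(2.0) * x / w0) * math.exp(-x * x / w0 ** 2)

def decompose(profile, orders, xs, dx):
    """Overlap-integral coefficients c_n = <u_n | profile> on a discrete grid."""
    return [sum(hg_mode(n, x) * profile(x) for x in xs) * dx for n in orders]

# Project a flat-top (non-Gaussian) profile onto the first four HG modes;
# odd-order coefficients vanish by symmetry.
xs = [-4.0 + 0.01 * i for i in range(801)]
flat_top = lambda x: 1.0 if abs(x) < 1.0 else 0.0
coeffs = decompose(flat_top, range(4), xs, 0.01)
print([round(c, 3) for c in coeffs])
```

Each mode can then be propagated independently and the intensities summed, since the modes are mutually incoherent.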
Each mode is then propagated along the line of sight up to the focal plane before the modes are summed incoherently. We introduce
physical and simplified models to compute the propagation through the atmosphere. For short propagation distances,
the beam illumination depends mainly on the spatial properties of the laser beam. However, when we
consider longer distances and NIR laser propagation close to the ground, the intensity distribution is no longer uniform.
It presents a “speckle pattern” due to atmospheric turbulence effects. Different ways of modeling can be investigated.
The first idea is to consider an EM model where the propagation of plane waves through an extended 3D random medium
is done using a multiple phase screen technique [09]. This end-to-end propagation model is implemented at Onera
(PILOT code) but is considered as an external tool. It is a reference model and we use it both for the outgoing and the
return paths. The second idea is to introduce simplified models providing instantaneous laser illuminations according to
the Cn² profile and to use them whenever possible. In this way, we intend to reduce the CPU time and the memory size
needed by a run. Simplified models developed at Onera are validated using PILOT code [10]. Some results and
comparisons are displayed in figure 2. An experimental validation of the propagation models was also performed [14].
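The multiple phase screen technique mentioned above can be sketched as alternating free-space angular-spectrum steps with thin random screens. This is a toy illustration, not the PILOT code: the white-noise screen below stands in for a properly Cn²-shaped Kolmogorov screen, and all grid and path values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def fresnel_step(field, dx, wavelength, dz):
    """Angular-spectrum propagation of a 1D complex field over distance dz."""
    fx = np.fft.fftfreq(field.size, d=dx)
    kernel = np.exp(-1j * np.pi * wavelength * dz * fx**2)  # Fresnel transfer function
    return np.fft.ifft(np.fft.fft(field) * kernel)

def phase_screen(n, strength):
    """Toy thin phase screen: white random phases (a realistic Kolmogorov
    screen would be spectrally shaped according to the Cn2 profile)."""
    return np.exp(1j * strength * rng.standard_normal(n))

# Multiple phase screen technique: alternate free-space steps and thin screens
n, dx, wavelength = 256, 1e-3, 1.55e-6           # grid size, 1 mm sampling, NIR
x = (np.arange(n) - n / 2) * dx
field = np.exp(-x**2 / 0.02**2).astype(complex)  # Gaussian beam, 2 cm waist
for _ in range(5):                               # five screens over a 500 m path
    field = fresnel_step(field, dx, wavelength, 100.0)
    field *= phase_screen(n, 0.3)
intensity = np.abs(field)**2                     # speckled irradiance pattern
```

Both operations are unitary, so total beam power is conserved while the irradiance develops the speckle structure described in the text.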
[Figure 2: mean amplitude and PSD P(χ) computed with the simplified approach (solid) against PILOT code, in the weak (σ²χ < 0.3) and strong (σ²χ = 0.5) turbulence regimes, with an application to the real GIBI case.]
Figure 2. Comparison between PILOT code and a simplified approach.
Once the laser beam has been propagated through the atmosphere on a single path, we estimate the interaction with the
scene. Coherent and incoherent contributions influence the luminance detected by a device. We take the
speckle and the target reflectance into account in different ways. On the one hand, we introduce physical models
demanding long CPU times. On the other hand, we decided to simplify the calculation by introducing basic models.
Coherent light scattering is more affected by variations of the microscopic parameters of the target. Incoherent illuminance
corresponds to the laser irradiance scattered by the atmosphere and to the direct or scattered solar illuminance. We focus
our attention on the estimation of the light reflected by different targets. We use an Onera database to describe 3D
object geometries. Each facet is defined by its spatial position and by the optical properties used to determine the scattered light
(e.g. used to model a BRDF). Figure 3 shows a typical 3D object introduced in both the Piaf and Matlis codes.
Figure 3. 3D model of a plane.
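As a minimal illustration of the facet radiometry above, a Lambertian facet scatters the incident irradiance with radiance proportional to the cosine of the incidence angle. The function and the numbers below are illustrative, not the BRDF models of the Onera database:

```python
import math

def lambertian_facet_return(albedo, normal, to_sensor, irradiance):
    """Backscattered radiance (W/m^2/sr) of a Lambertian facet under
    monostatic illumination: L = rho * E * cos(theta) / pi."""
    cos_t = max(0.0, sum(n * s for n, s in zip(normal, to_sensor)))  # clamp backfacing
    return albedo * irradiance * cos_t / math.pi

# Facet normal tilted 30 degrees from the line of sight, albedo 0.4
normal = (0.0, 0.5, math.sqrt(3) / 2)   # unit normal
to_sensor = (0.0, 0.0, 1.0)             # unit vector toward the laser/sensor
print(round(lambertian_facet_return(0.4, normal, to_sensor, 100.0), 3))
```

Summing this contribution over all visible facets gives the incoherent radiometric balance; the coherent (speckle) term is handled separately.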
The backscattered coherent radiance calculation considers speckle and BRDF (exact or radiative models [11] and simplified
He-Torrance or Stam models are implemented). These effects are linked to the coating characterization (roughness and
facet orientation), the laser wavelength, the spectral bandwidth, the laser beam size and the device pupil. SIPHOS
code [12] is an Onera tool which models the speckle and generates databases. An EM approach cannot be generalized to
the entire surface of the plane target. Even if our simplified approach considers coherent effects as a radiometric balance,
two simple cases are also adopted:
- rough surfaces, where we consider Lambertian surfaces with random phases, or surfaces cancelling coherent
effects (phase set to zero),
- mirror surfaces with phase conservation.
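The two simplified cases above can be sketched as rules for the complex field reflected by a facet. The `albedo` parameter and the function shape are hypothetical, not the Piaf implementation:

```python
import cmath, math, random

random.seed(42)

def reflect(field, surface, albedo=0.5):
    """Apply one of the simplified facet models to an incident complex field:
    'rough'  -> Lambertian amplitude with a uniform random phase (speckle),
    'zero'   -> rough surface cancelling coherent effects (phase set to zero),
    'mirror' -> specular reflection with phase conservation."""
    amp = math.sqrt(albedo) * abs(field)
    if surface == "rough":
        return amp * cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
    if surface == "zero":
        return complex(amp, 0.0)
    if surface == "mirror":
        return math.sqrt(albedo) * field   # keep the incident phase
    raise ValueError(surface)

incident = cmath.exp(1j * 0.7)             # unit-amplitude field, phase 0.7 rad
print(cmath.phase(reflect(incident, "mirror")))
```

Summing many 'rough' returns produces the fully developed speckle statistics, while 'mirror' preserves the wavefront for specular glints.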
We treat multiple scattering between different facets as incoherent scattering. For the typical range of optical thickness
representing paint coatings (from 0 to 9), Yang et al. [13] demonstrate that the degree of spatial coherence is
significantly higher in single scattering than in multiple scattering. After a large number of scattering events, a low
coherent contribution remains: the degree of coherence tends to a constant value.
An imaging module is dedicated to obtaining the active image from the field backscattered by the scene. This module takes
anisoplanatism into account. The sampling by the detector is then carried out and we apply the radiometric balance.
Artifacts associated with an imaging device are modeled and we add a gain factor between the number of detected
electrons and the number of charge carriers in the substrate. Piaf code models different noises added to the average
current: photon noise, Schottky noise, quantization distortion, dark current, fixed spatial noise... Considering a mosaic of
sensors, each sensor has its own characteristics, which implies a local variation of the parameters over the entire detection
matrix. This gives a spatially fluctuating aspect to the image. This local variation is usually minimized by certain
techniques during a measurement, but cannot be completely cancelled over the measurement domain. In addition, we
can model a random number of dead pixels. Finally, we add quantization noise. Despite its name, this
term refers to the application of a deterministic process of digital sampling of analogue data.
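The detection chain above (gain, shot and dark noise, then deterministic quantization) can be sketched as follows. All parameter values are hypothetical, and the Poisson photon noise is approximated as Gaussian, which is valid for large counts:

```python
import math, random

random.seed(1)

def detect(photons, qe=0.6, gain=4.0, dark_e=5.0, bits=12, full_well=20000.0):
    """Toy detection chain: photon (shot) noise, dark current, gain and
    deterministic quantization to a finite number of bits."""
    mean_e = photons * qe + dark_e                                   # mean electrons
    electrons = max(0.0, random.gauss(mean_e, math.sqrt(mean_e)))    # shot + dark noise
    carriers = electrons * gain                                      # substrate charge carriers
    levels = 2 ** bits - 1
    dn = round(carriers / (full_well * gain) * levels)               # ADC quantization
    return min(levels, max(0, dn))

samples = [detect(10000.0) for _ in range(200)]
print(min(samples), max(samples))
```

Per-pixel variations of `qe`, `gain` or `dark_e` would reproduce the fixed spatial noise of a sensor mosaic; dead pixels are simply pixels forced to a constant value.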
Figure 4 illustrates one result of the global validation that we have performed. We compare numerical results with
images obtained with our Gibi system. We consider two different targets with bar patterns: three and seven
cycles. The distance between the scene and the laser source is set to 870 m. The experimental value of the integrated Cn²
is near 3×10⁻¹³ m⁻²/³ (saturation regime). Results obtained with Piaf code are close to the measurements and the mean profiles
derived from the images are similar, except for the aliasing effect.
Figure 4. Comparison of target images considering a) one instantaneous image or b) a mean image of a “10 images”
sequence.
Once Piaf code has been validated in clear weather, we aim to introduce bad weather conditions. A general
study is currently under way to determine how adverse conditions (such as fog, rain or haze) could be introduced as an additional
module in Piaf code [15].
3.3 Introduction to 3D active imaging models
The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of an object. Then,
these points can be used to extrapolate the shape of the object (a process called reconstruction). 3D scanners are very
analogous to cameras. Like cameras, they have a cone-like FOV, and like cameras, they can only collect information
about surfaces that are not obscured. While a camera collects colour information about surfaces within its FOV, a 3D
scanner collects distance information about surfaces within its FOV, together with the flux retro-diffused by the surface to the
detector device. The “picture” produced by a 3D scanner describes the distance to a surface at each point in the picture.
If a spherical coordinate system is defined in which the scanner is the origin and the vector out from the front of the
scanner is φ=0 and θ=0, then each point in the picture is associated with a φ and θ. Together with distance, which
corresponds to the r component, these spherical coordinates fully describe the three dimensional position of each point in
the picture, in a local coordinate system relative to the scanner. The intensity backscattered by the scene mainly depends
on the type of material (backscattered reflectance at a fixed wavelength). Its value can be stored and associated with each
point in the picture: this technology is often called “4D active imaging”.
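The spherical-to-Cartesian mapping described above can be sketched as follows; the axis convention (boresight along +z, θ as elevation, φ as azimuth) is one possible choice, not necessarily that of a given scanner:

```python
import math

def scan_point_to_xyz(r, theta, phi, intensity=None):
    """Convert one scanner return (range r, elevation theta, azimuth phi,
    both in radians, boresight at theta = phi = 0) to local Cartesian
    coordinates; the backscattered intensity tags the point ('4D' imaging)."""
    x = r * math.cos(theta) * math.sin(phi)
    y = r * math.sin(theta)
    z = r * math.cos(theta) * math.cos(phi)
    return (x, y, z) if intensity is None else (x, y, z, intensity)

# A return on the boresight at 50 m lands on the +z axis
print(scan_point_to_xyz(50.0, 0.0, 0.0, intensity=0.82))  # → (0.0, 0.0, 50.0, 0.82)
```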
For most situations, a single scan will not produce a complete model of the scene. Multiple scans from different
directions are usually required to obtain information about all sides of the scene. These scans have to be brought into a
common reference system, a process usually called alignment or registration, and then merged to create a complete model.
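The alignment (registration) step described above amounts to applying a rigid transform to each scan once its pose is known; estimating that pose (e.g. with an iterative closest point algorithm) is the hard part, which this minimal sketch assumes solved:

```python
import math

def apply_rigid(points, yaw, t):
    """Bring one scan into a common frame: rotate about the vertical axis
    by `yaw` radians, then translate by t = (tx, ty, tz)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + t[0], s * x + c * y + t[1], z + t[2])
            for (x, y, z) in points]

# A scan taken after a 90-degree turn, expressed in the common frame
scan_b = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
aligned = apply_rigid(scan_b, math.pi / 2, (0.0, 0.0, 0.0))
print(aligned)
```

Merging then reduces to concatenating the aligned point lists, optionally followed by resampling to remove duplicated surface points.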