Radar: WaveSense’s ultra-wideband radar works by sending a pulse of electromagnetic radiation into the ground and measuring reflections that originate from scattering points below the surface. Reflections occur at the interface between objects that have different electromagnetic properties, such as pipes, roots, and rocks in the surrounding “dirt.” However, it is not these discrete objects but rather the natural inhomogeneity in subterranean geology that often dominates WaveSense’s radar reflection profiles. Soil layers and variations in moisture content cause reflections in the data. As a result, WaveSense paints a complete picture of the subsurface environment. With few exceptions, nearly every discrete object and soil feature is captured, provided that it is not significantly smaller than a wavelength and that it has sufficient contrast with the surrounding soil. The premise of subsurface imaging for localization is that these underground features are sufficiently unique and static to permit their use as identifiers of the precise location at which their reflections were collected.
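The strength of such a reflection can be illustrated with the textbook normal-incidence reflection coefficient between two lossless dielectrics. This is standard electromagnetics, not WaveSense’s proprietary processing; the permittivity values in the example are rough, illustrative figures for dry soil and wet clay:

```python
import math

def reflection_coefficient(eps1: float, eps2: float) -> float:
    """Normal-incidence reflection coefficient at the boundary between two
    lossless dielectrics, with relative permittivity eps1 above the
    interface and eps2 below it. Larger magnitude = stronger reflection."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

# Illustrative: dry soil (eps_r ~ 4) over a wetter clay layer (eps_r ~ 25)
# produces a strong reflector; identical media produce none.
print(round(reflection_coefficient(4.0, 25.0), 3))  # -0.429
print(reflection_coefficient(9.0, 9.0))             # 0.0
```

The sign flip simply indicates a phase inversion of the reflected pulse; it is the contrast in permittivity, here driven largely by moisture content, that makes a soil boundary visible to the radar.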
Mapping: The first step in the WaveSense process is to develop a map of the environment below the road. In this step, radar data of subterranean “objects” are simply collected along with GPS tags to form the initial database of subsurface features. This subsurface map is then used as a reference dataset to estimate vehicle location on subsequent visits.
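Conceptually, the map is a store of radar traces keyed by the GPS position at which each was collected. The following is a minimal sketch of that idea; the class, field names, and distance-based query are illustrative assumptions, not WaveSense’s actual schema:

```python
import numpy as np

class SubsurfaceMap:
    """Toy reference database: GPS-tagged radar traces from the mapping pass.
    Illustrative only -- not WaveSense's data model."""

    def __init__(self):
        self.positions = []  # (easting, northing) GPS tags, in metres
        self.traces = []     # depth profiles of reflection amplitude

    def add(self, easting, northing, trace):
        """Record one radar trace together with its GPS tag."""
        self.positions.append((easting, northing))
        self.traces.append(np.asarray(trace, dtype=float))

    def nearby(self, easting, northing, radius):
        """Fetch reference traces within `radius` metres of a query point,
        i.e. the local patch of baseline data needed during tracking."""
        p = np.array(self.positions)
        d = np.hypot(p[:, 0] - easting, p[:, 1] - northing)
        return [self.traces[i] for i in np.flatnonzero(d <= radius)]
```

A query like `nearby(e, n, 25.0)` would return the reference traces surrounding a rough position estimate, which is exactly the kind of fetch the tracking step performs while the vehicle is moving.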
Tracking: Next, online localization is performed in several steps. When the vehicle is in motion, data are periodically fetched from the database for matching. A local grid of baseline data is always maintained. A search region surrounding the initial location estimate contains “particles” (points on the grid) representing candidate locations and orientations. An algorithm iteratively evaluates the particles to narrow the search for the maximum correlation within the vehicle’s five-dimensional space (easting, northing, height, roll, and heading). After several iterations, the highest-correlation particle is chosen as the most likely estimate of the vehicle’s current location and orientation. The search region is updated and either expanded or shrunk to reflect this new estimate.
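The particle-search loop above can be sketched in a few lines. This is a simplified stand-in, not WaveSense’s algorithm: `render(pose)` is a hypothetical callback that returns the reference patch the map predicts at a candidate pose, and the shrink factor is an arbitrary choice standing in for the adaptive search-region update:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate(a, b):
    """Normalized cross-correlation between two radar image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel()) / denom if denom else 0.0

def refine(live_scan, render, particles, n_iters=5, shrink=0.5):
    """Iteratively narrow a cloud of 5-D pose particles
    (easting, northing, height, roll, heading) toward the pose whose
    predicted map patch best correlates with the live scan."""
    n = len(particles)
    spread = particles.std(axis=0) + 1e-9
    best = particles[0]
    for _ in range(n_iters):
        scores = [correlate(live_scan, render(p)) for p in particles]
        best = particles[int(np.argmax(scores))]
        spread = spread * shrink  # contract the search region around the winner
        # Resample around the best particle, keeping it in the new cloud so
        # the best score never regresses between iterations.
        particles = np.vstack([best, best + rng.normal(scale=spread, size=(n - 1, 5))])
    return best
```

Because the best particle is carried forward each round, the top correlation score is monotonically non-decreasing as the cloud contracts, which mirrors the shrink-and-re-evaluate behavior described above; a real system would also re-expand the region when correlation quality drops.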
See in 4D: Existing autonomous vehicle and ADAS technologies seek to recreate the perfect human driver by emulating human vision and cognition. But autonomous and driver-assisted vehicles can and must become safer than human drivers. Instead of imitating visual human driving, WaveSense’s ground-penetrating radar helps autonomous and driver-assisted vehicles see what humans cannot: that which lies below the ground.
WaveSense’s subterranean radar images enable a whole new dimension of sight. With the addition of subsurface data to above-ground camera and LIDAR sensor information, self-driving and ADAS-enabled cars now have a complete toolkit to work with when making driving decisions.
Safety is everyone’s #1 priority in the industry. Fusing several independent approaches (cameras, LIDAR, GPS/INS, and GPR) is the best way to ensure robustness, so that no single technology can cause a significant (and deadly) error.