Simcenter SCAPTOR: What is Lidar?


Lidar is a device that measures the distances to objects using light.  An example of how data gathered from a lidar can be visualized with a computer is shown in Figure 1.
 
Figure 1: Example of a lidar measurement at a traffic intersection.  
 
In the intersection, there are two stationary lidars located at the centers of the black areas on the ground. The black areas are the blind spots of the lidars.

The rings around the black areas indicate distance from each lidar. Shadows form behind objects (both moving and stationary) where the light does not penetrate.

This article covers the following aspects of lidar:
1. What is Lidar?
2. Lidar History and Types
   2.1 Flash versus Scanning
   2.2 Multi-Beam or Discrete-Flash
   2.3 Solid State
3. Practical Aspects of Lidar
   3.1 Data Storage and Point Cloud
   3.2 Interface
   3.3 Beam Divergence
   3.4 Reflection Strength
   3.5 Multi-Bounce
   3.6 Motion and Timing Effects
4. Applications of Lidar
   4.1 Automotive
   4.2 Astronomy
   4.3 Agriculture
   4.4 Video Games
5. Lidar versus Other Sensors
   5.1 Radar
   5.2 Ultrasonic
   5.3 Camera
   5.4 Fusion


1. What is Lidar?

Lidar is an acronym for Light Detection and Ranging, a distance measurement method that uses light in the form of a pulsed or scanned laser to illuminate a target area. The reflections collected can be used to create a three-dimensional (3D) representation of the target area.

Lidar is essentially an actively illuminating camera that takes a picture of distance rather than color.  It is based on the transmission of light in the invisible spectrum and the subsequent sensing of that light reflected by objects in the field of view (FOV), as shown in Figure 2.
 
Figure 2: A lidar projects light rays into a Field of View (FOV) to determine the distance (depth) and amplitude of objects.
 
Similar to radar (Radio Detection and Ranging), the time delay between the transmission and reflected return (Time Of Flight or TOF) allows the calculation of the distance to a given geometrical point. 

While radar employs radio waves in the 77 gigahertz frequency range, lidar uses light waves in the near-infrared region around 200 terahertz (typically wavelengths of 905 or 1550 nanometers).
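Since range follows directly from the round-trip time, the core time-of-flight calculation is simple. Below is a minimal sketch in Python with illustrative numbers (the 500-nanosecond return time is invented for the example, not taken from any specific sensor):

```python
# Minimal time-of-flight range calculation.
C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_time_s: float) -> float:
    """Range = c * t / 2; the light travels to the target and back."""
    return C * round_trip_time_s / 2.0

# A return detected 500 nanoseconds after the pulse left the sensor:
print(f"range: {tof_to_range(500e-9):.2f} m")      # ~74.95 m

# Frequency of a 1550 nm near-infrared laser: f = c / wavelength
print(f"1550 nm -> {C / 1550e-9 / 1e12:.0f} THz")  # ~193 THz
```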

2. Lidar History and Types

Laser-based remote sensing was developed and used by NASA researchers in the 1970s, with a focus primarily on gauging the properties of ocean water and the atmosphere.  Commercial systems in the 1990s were capable of 2000 to 25000 pulses per second and were generally used for topographical map surveying.  Modern lidar hardware can scan over one million points per second with an accuracy of 5 mm.

2.1 Flash versus Scanning

The light source for lidar can be either a single laser pulse (flash) or a continuously scanning beam that moves in one or more directions.  Flash lidar utilizes a single large laser spot to illuminate the entire measurement Field of View (FOV) at once, so the detected reflections generate an image of the entire FOV at a single instant in time.  The disadvantage of flash lidar systems is that the diffusion of the laser light source reduces the transmitted power as well as the energy reflected back to the sensor.  This results in a low Signal-to-Noise Ratio (SNR) and reduces the measurement range.

Scanning lidar focuses the light energy into multiple individual laser beams with a narrow focus, which increases the SNR and accuracy.  However, as the beam must progress through multiple points within the FOV, the individual positions are not all measured at the same instant in time.  This requires additional computation to generate an image of the objects in the FOV relative to a single reference point.  Scanning lidar systems are better suited for medium and long-distance ranges, and typically fall into one of three categories: pulsed lidar, Amplitude Modulated Continuous Wave (AMCW), or Frequency Modulated Continuous Wave (FMCW).
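As a concrete illustration of the FMCW category: range is recovered from the beat frequency between the transmitted chirp and its delayed echo, rather than from a directly timed pulse. The sketch below uses assumed chirp parameters (1 GHz of bandwidth over 20 microseconds), not values from any particular device:

```python
# FMCW range from beat frequency: a linear chirp of bandwidth B over
# duration T has slope S = B / T; a target at range R delays the echo
# by tau = 2R / c, producing a beat frequency f_b = S * tau.
C = 299_792_458.0   # speed of light, m/s
B = 1.0e9           # assumed chirp bandwidth: 1 GHz
T = 20e-6           # assumed chirp duration: 20 microseconds
S = B / T           # chirp slope, Hz/s

def beat_to_range(f_beat_hz: float) -> float:
    """Invert f_b = S * (2R / c) for R."""
    return C * f_beat_hz / (2.0 * S)

print(f"{beat_to_range(33.3e6):.1f} m")  # a 33.3 MHz beat -> ~99.9 m
```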

An early version of scanning lidar was developed by teams for use in self-driving vehicle competitions sponsored by the Defense Advanced Research Projects Agency (DARPA). An example of this type of system is shown in Figure 3.
 
Figure 3: Scanning lidar system mounted on vehicle rooftop.
 
This approach relied upon a mechanically driven system that continuously rotates a set of lasers, each sweeping out a cone, essentially creating multiple lidars operating simultaneously.

Though the scanning FOV and overall range are very good for these types of systems, the moving parts are prone to failure, and the overall size of the transducer assembly is generally too large to package in many applications.

2.2 Multi-Beam or Discrete-Flash

The multi-beam lidar concept is a crossover between scanning and flash lidar. It attempts to combine the advantages of both: all measurements captured at once (global shutter) and individual focused laser beams. It is also referred to as 'discrete-flash'.

2.3 Solid State

Current trends in scanning lidar have brought to market much more compact devices, relying on solid state approaches that eliminate the need for mechanical drive systems (Figure 4).  
 
Figure 4: Examples of solid state lidar.
 
These ‘single chip’ designs promise greatly reduced packaging volumes and lower overall cost when produced in large scale.

3. Practical Aspects of Lidar

There are many practical aspects to using lidar systems for measuring distances.  A few are discussed here:

3.1 Data Storage and Point Cloud

The grid of points generated by a lidar system is typically referred to as a 'Point Cloud'.  A single point cloud is a snapshot in time and can consist of millions of individual geometric locations. An example point cloud is shown in Figure 5.
 
Figure 5: Point cloud image of a torus, Wikipedia Commons (https://commons.wikimedia.org/w/index.php?curid=19541285)
 
Interpretation of point cloud data can be done with the Point Cloud Library (PCL), an open source C++ software interface.  Example programs are available at pointclouds.org.
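Whatever the library, a point cloud is commonly held as an N×3 array of coordinates (or N×4 with intensity). Here is a minimal NumPy sketch, in Python rather than PCL's C++ for brevity, that builds and inspects a synthetic torus cloud like the one in Figure 5:

```python
import numpy as np

# Build a synthetic point cloud of a torus (like Figure 5):
# major radius 2 m, minor radius 0.5 m, sampled at random angles.
rng = np.random.default_rng(0)
u = rng.uniform(0, 2 * np.pi, 5000)   # angle around the tube
v = rng.uniform(0, 2 * np.pi, 5000)   # angle around the torus axis
R, r = 2.0, 0.5
points = np.column_stack([
    (R + r * np.cos(u)) * np.cos(v),
    (R + r * np.cos(u)) * np.sin(v),
    r * np.sin(u),
])                                     # shape (5000, 3): x, y, z in meters

# Typical point cloud queries: extent and distance from the sensor origin.
print("bounding box:", points.min(axis=0), points.max(axis=0))
print("mean range:", np.linalg.norm(points, axis=1).mean())
```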

3.2 Interface

Digital standards for interfacing to lidar vary. For most research lidar units, a Gigabit Ethernet interface is used to record the point cloud data to a computer.  For some low-resolution lidars or smart lidars, a CAN bus interface can be used. The actual digital data is composed of a grid of points, with values for distance and amplitude (reflectivity of objects).
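The wire format differs per vendor, but conceptually each point arrives as a (distance, amplitude) pair at a known beam angle. A hypothetical decoding sketch follows; the packet layout is invented for illustration and does not match any vendor's actual format:

```python
import math
import struct

# Hypothetical packet: repeated little-endian records of
# (azimuth in centidegrees, distance in millimeters, amplitude 0-255).
RECORD = struct.Struct("<HIB")

def decode_packet(payload: bytes):
    """Yield (x, y, amplitude) for a single-plane scan."""
    for offset in range(0, len(payload) - RECORD.size + 1, RECORD.size):
        centideg, dist_mm, amp = RECORD.unpack_from(payload, offset)
        angle = math.radians(centideg / 100.0)
        d = dist_mm / 1000.0  # millimeters -> meters
        yield d * math.cos(angle), d * math.sin(angle), amp

packet = RECORD.pack(4500, 12500, 200) + RECORD.pack(9000, 3200, 35)
for x, y, amp in decode_packet(packet):
    print(f"x={x:6.2f} m  y={y:6.2f} m  amplitude={amp}")
```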

3.3 Beam Divergence

As a light beam travels from a lidar, it grows wider, or more dispersed, as shown in Figure 6.
 
Figure 6: Light beam dispersion and response versus distance.
 
The return response of the two beams is different due to the dispersion. Lidar processing algorithms need to account for this dispersion to create accurate point clouds.
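The beam footprint grows roughly linearly with range, which is one reason the returned energy weakens with distance. A quick sketch, assuming a 10 mm exit aperture and a 3 milliradian full divergence angle (both invented, plausible values; the real numbers are sensor-specific):

```python
import math

def spot_diameter(range_m: float, exit_diameter_m: float,
                  divergence_rad: float) -> float:
    """Approximate beam footprint: exit aperture plus linear spread.

    For a full divergence angle theta, the diameter grows by
    2 * R * tan(theta / 2), which is ~R * theta for small angles.
    """
    return exit_diameter_m + 2.0 * range_m * math.tan(divergence_rad / 2.0)

# Assumed values: 10 mm exit aperture, 3 mrad full divergence.
for r in (10, 50, 100, 200):
    print(f"{r:4d} m -> spot {spot_diameter(r, 0.010, 3e-3)*100:.1f} cm")
```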

3.4 Reflection Strength

Not only can light beams disperse, but the strength of the reflected beam can also vary depending on the color of the object.

For example, black vehicles are notoriously difficult for lidars to detect. Black car paint absorbs a lot of energy, so not much light is reflected.  Red car paint, on the other hand, is very reflective and produces high-energy returns.

Figure 7 shows the difference in reflected energy between a black vehicle and a red vehicle.
 
Figure 7: The reflected strength (indicated by color) of the light beam energy can vary with the color of the object.  In this case, a black colored object reflects less light energy (left, mostly purple dots) at the same distance as a red colored object (right, mostly yellow dots). 
The image shows the comparison between the energy returned from the two vehicles, indicated by the color of the reflections. Each dot is one reflection, with purple being the least energy and yellow corresponding to the highest reflected energy.

The car with black paint (left) generally produces less reflection energy than the red-painted car (right), which reflects a high amount of energy.  Note that in both cases the windows show purple dots, because glass does not reflect well back to the sensor, while the tires show yellow dots due to the high reflectivity of the wheel covering.
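Renderings like Figure 7 simply map each point's reflected amplitude onto a color scale. A minimal sketch with made-up amplitude values (a purple-to-yellow colormap such as matplotlib's 'viridis' matches the figure's convention):

```python
import numpy as np

# Made-up per-point amplitudes: black paint reflects little, red paint a lot.
amplitudes = np.array([12, 15, 9, 180, 210, 195], dtype=float)

# Normalize to [0, 1] so a colormap can be applied per point.
lo, hi = amplitudes.min(), amplitudes.max()
normalized = (amplitudes - lo) / (hi - lo)
print(normalized)  # low values -> purple end, high values -> yellow end

# With matplotlib installed, per-point colors would follow as:
#   from matplotlib import cm; colors = cm.viridis(normalized)
```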

3.5 Multi-Bounce

When a lidar beam interacts with an object it generates a reflection of energy. This energy can continue to interact with multiple different objects, generating a new reflection of energy for each object. 

It is important to understand these multiple energy returns in order to be as accurate as possible. Otherwise, they can cause the appearance of points in the generated point cloud that do not exist in real life, as shown in Figure 8.
 
Figure 8: Arrows indicate where the reflection (left) of two vehicles in a water film create corresponding reflected images below the actual vehicle (right).
 
The lidar light beam reflects off the original car onto the water film, which then reflects back to the lidar. This is an example of the multi-bounce phenomenon.  Ideally, the lidar processing algorithm should recognize the reflected images and remove them from the resulting point cloud.

Addressing these types of imperfections is critical to ensure that autonomous systems behave in a safe manner.  It is necessary to be able to interpret lidar data that includes noise.
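One simple heuristic for the mirror-image ghosts of Figure 8 is worth sketching: a real surface cannot lie below the ground plane, so points that do can be culled. The example assumes the ground is already estimated at z = 0 (real pipelines first fit the plane, for example with RANSAC):

```python
import numpy as np

def remove_below_ground(points: np.ndarray,
                        ground_z: float = 0.0,
                        tolerance_m: float = 0.05) -> np.ndarray:
    """Drop mirror-image ghost points that lie below the ground plane.

    points: (N, 3) array of x, y, z; assumes the z axis points up and
    the ground plane height is already known (here, z = ground_z).
    """
    keep = points[:, 2] >= ground_z - tolerance_m
    return points[keep]

cloud = np.array([[5.0, 1.0, 0.8],    # real point on a car body
                  [5.0, 1.0, -0.8]])  # its mirror image in a water film
print(remove_below_ground(cloud))     # only the real point survives
```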

3.6 Motion and Timing Effects

Most current lidar systems launch each beam in a scan pattern at a slightly different time. While this scanning pattern typically happens very fast, it means that objects moving in the environment will move slightly between each beam shot. Effectively, this means that each beam “sees” the object in a slightly different position as shown in Figure 9.
Figure 9: Point images with timing considerations (left side) and without timing considerations (right side). Note these are for illustrative purposes, not actual scans.
 
This can make objects appear distorted in the resulting point cloud.
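A standard correction, often called deskewing, uses each point's timestamp to shift it into a common reference time given the sensor's own motion. A minimal sketch for a constant-velocity ego vehicle follows (straight-line motion assumed; real systems also correct for rotation):

```python
import numpy as np

def deskew(points: np.ndarray, timestamps: np.ndarray,
           ego_velocity: np.ndarray, reference_time: float) -> np.ndarray:
    """Shift each return to where it would appear at reference_time.

    A static object seen at time t sits at p in the sensor frame; by
    reference_time the sensor has advanced by v * (reference_time - t),
    so in the reference frame the point moves back by that amount.
    """
    dt = (reference_time - timestamps)[:, None]  # (N, 1) seconds
    return points - ego_velocity[None, :] * dt   # (N, 3)

# Two returns from the same wall, 50 ms apart, ego moving 20 m/s in +x.
pts = np.array([[30.0, 0.0, 0.0], [29.0, 0.0, 0.0]])
ts = np.array([0.00, 0.05])
print(deskew(pts, ts, np.array([20.0, 0.0, 0.0]), reference_time=0.05))
# Both points now land at x = 29.0: the apparent smear is removed.
```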

Not only is the timing within the lidar important, but the timing of the lidar with respect to any other measurement devices in a self-driving car is also important.  File formats like the Simcenter SCAPTOR (*.brec) file format are optimized to handle this large volume of data as well as the synchronization of multiple sensor types.  This is done with data compression algorithms that can handle widely varying incoming data rates from lidar, radar, cameras, ultrasonics, etc., while ensuring proper synchronization during playback or analysis.

4. Applications of Lidar

Lidar is used in a wide range of applications:

4.1 Automotive

Lidar is commonly used in the development of self-driving or autonomous vehicles. As of 2020, only one production vehicle utilizes a scanning lidar as shown in Figure 10.
 
Figure 10: In 2020, one production vehicle contained a lidar sensor.
 
Oftentimes lidar is used in prototype vehicles to get data that can be used to improve the camera algorithms that will end up in production cars.  This is the case even if the production car will not have a lidar sensor.  

Lidars should not be confused with ultrasonic sensors. Ultrasonic sensors are commonly used for driver assistance and crash avoidance systems in production cars. Unlike lidar, ultrasonic sensors use sound waves and work over a shorter distance.

4.2 Astronomy

Building on the pioneering work of NASA in the 1970s, lidar measurement technology is increasingly used in astronomy for atmospheric and land surface sensing.  Some examples include:
  • A worldwide network of observatories uses lidars to measure the distance to reflectors placed on the moon, allowing the position of the moon to be measured with millimeter precision (see the short calculation after this list).
  • MOLA, the Mars Orbiter Laser Altimeter, a lidar instrument on NASA's Mars Global Surveyor, produced a global topographic survey of the planet.
  • Aeolus, the first European lidar mission and the first spaceborne lidar to measure global winds, was launched by the European Space Agency (ESA). 
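Lunar laser ranging is the same time-of-flight formula on an enormous scale, and a two-line calculation shows why millimeter precision demands picosecond-level timing (a mean Earth-Moon distance of roughly 384,400 km is assumed):

```python
C = 299_792_458.0             # speed of light, m/s
moon_distance_m = 384_400e3   # approximate mean Earth-Moon distance

print(f"round trip: {2 * moon_distance_m / C:.2f} s")  # ~2.56 s
print(f"1 mm of range: {2 * 1e-3 / C:.1e} s")          # ~6.7e-12 s
```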

4.3 Agriculture 

Lidar is being used increasingly in agriculture to help with crop planning and field management.  For example, three-dimensional models of farmland can be used to determine natural resources, soil types, and soil erosion in an area.  This helps identify the best crops and times to plant for greater yields.

4.4 Video Games

Lidar surveys of race tracks are used in racing video games to replicate actual tracks with centimeter to millimeter precision.

There are many other uses for lidar measurements; only a few application areas are mentioned here.

5. Lidar versus Other Sensors

There are several other types of sensors used for measuring in three dimensions, as shown in Figure 11 below:
 
Figure 11: Radar, lidar, camera devices mounted on vehicle.
 
Each device has differences when compared to lidar:

5.1 Radar

Radar utilizes radio waves for the transmission and measurement of distance, and hence performs well regardless of the amount of ambient light or adverse weather conditions.  However, it has difficulty detecting small objects and is prone to interference.  Lidar is not prone to electromagnetic interference and can detect very small objects, though due to light beam divergence it is not as effective over long distances as radar.

5.2 Ultrasonic

Ultrasonic sensors operate over short distances and are most often used in parking assist systems or short-range detection of vehicles surrounding a car traveling on the road (Figure 12).
 
Figure 12: Ultrasonic sensors are often seen in bumpers of vehicles for short-range detection of obstacles.
 
Lidar can measure over longer distances than ultrasonic devices.  This is because ultrasonic sensors use sound waves while lidar uses light waves.
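The range gap follows directly from wave speed: sound in air travels roughly a million times slower than light, so echo times over any long distance become impractical. A quick comparison, assuming a speed of sound of about 343 m/s at room temperature:

```python
C_LIGHT = 299_792_458.0  # m/s
C_SOUND = 343.0          # m/s in air at ~20 C (assumed)

for distance_m in (2, 50, 200):
    t_light = 2 * distance_m / C_LIGHT
    t_sound = 2 * distance_m / C_SOUND
    print(f"{distance_m:4d} m: light {t_light*1e6:8.2f} us, "
          f"sound {t_sound*1e3:8.1f} ms")
# At 200 m an ultrasonic echo takes over a second, which is one reason
# ultrasonic sensors are limited to short-range tasks such as parking.
```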

5.3 Camera

Cameras (Figure 13) are less expensive than lidars.  However, lidars have some advantages over cameras.  For example, active illumination allows lidars to perform much better than a camera at night.
 
Figure 13: Camera
 
Lidar outputs geometrical, measured data requiring less computation to interpret than camera images. Both rain and snow adversely affect the performance of camera and lidar systems, reducing their effective range.

More about using cameras for autonomous vehicle applications can be found in the knowledge article: Camera Recording Tips for Autonomous and Advanced Driving Assistance Systems

5.4 Fusion

As no single camera or sensor is perfectly suited to all conditions, most autonomous vehicle control systems rely on the 'fusion' of multiple devices in order to maximize their effectiveness.

Lidar is understood to fill the gap between camera (high resolution, short range) and radar (low resolution, long range) as shown in the spider charts in Figure 14.
Figure 14: Spider charts of the functional areas of three different devices.
 
A spider chart that plots the overlap of radar, cameras, and lidar is shown in Figure 15. A greater functional area is covered by the fusion of all of these devices.
Figure 15: A combination, or fusion, of all three devices (radar, camera, lidar) covers the functional attributes more completely.
 
More about sensors used in autonomous applications can be found in the knowledge article: AV and ADAS Sensors

Hope this introduction to lidar is useful.  Questions?  Email peter.schaldenbrand@siemens.com.

KB Article ID# KB000045168_EN_US


Associated Components

SCAPTOR Hardware, SCAPTOR Software