Simcenter SCAPTOR AV and ADAS Sensors

2021-11-23T03:58:45.000-0500
Simcenter SCAPTOR

Summary

A variety of sensors are used in Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS). This article compares GNSS/GPS, radar, ultrasonic, lidar, and camera sensors and summarizes their relative strengths.

Details

A variety of sensors are commonly used in Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS).

Sensors include radar, lidar, cameras, and ultrasonic devices as shown in Figure 1.
 
Figure 1: Sensors for use in self-driving and advanced driver safety systems.
 
Each sensor has advantages and disadvantages for sensing the surroundings.  Attributes like day/night operation, distance, scan details, and update rates are all important considerations.

This article covers the following topics:
    1.    GNSS/GPS
    2.    Radar
    3.    Ultrasonic
    4.    Lidar
    5.    Camera
    6.    Summary


1. GNSS/GPS

Global Navigation Satellite System (GNSS) devices are used to determine the speed and location of a vehicle.  GNSS is not used to identify objects around the vehicle.

GPS (Global Positioning System) and GNSS both refer to satellite-based navigation systems.  GNSS is the more generic name, while GPS refers specifically to the United States' satellite system. GNSS systems from various countries include:
  • GPS: United States
  • Galileo:  European Union
  • GLONASS: Russia
  • Beidou:  China
  • QZSS:  Japan
  • IRNSS:  India
The main instrumentation consists of an antenna (Figure 2) attached to a GNSS/GPS receiver.
 
Figure 2: The antenna attaches to the GNSS/GPS receiver and must remain in line of sight to the satellites.
 
The antenna needs a clear view of the sky to pick up signals from the satellites. Each satellite broadcasts its precise position and time; the receiver uses the time delay of these signals from multiple satellites in orbit around the earth (Figure 3) to determine the distance to each satellite.
 
Figure 3: GNSS/GPS system relies on multiple satellites to determine the location and speed of the receiver unit (red dot).  
 
Using the information from multiple satellites (at least four for a three-dimensional fix), the receiver determines its location and speed on the earth via trilateration.
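As an illustration of the principle, the sketch below solves for a receiver position from a few range measurements using a least-squares fit. The satellite positions and ranges are hypothetical placeholder values, and the example is reduced to two dimensions for clarity; a real receiver also solves for its own clock offset, which is why a fourth satellite is needed in practice.

```python
# Minimal trilateration sketch (hypothetical values, 2D for clarity).
# Each satellite broadcasts its position; the receiver measures its
# range from the signal time delay and solves for its own position.
import numpy as np
from scipy.optimize import least_squares

sat_positions = np.array([[0.0, 20200.0],      # km, hypothetical
                          [15000.0, 18000.0],
                          [-12000.0, 19000.0]])
measured_ranges = np.array([20200.5, 23430.1, 22470.8])  # km, hypothetical

def residuals(receiver_pos):
    # Difference between predicted and measured ranges
    return np.linalg.norm(sat_positions - receiver_pos, axis=1) - measured_ranges

solution = least_squares(residuals, x0=np.array([0.0, 0.0]))
print("Estimated receiver position (km):", solution.x)
```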

The output of a global positioning instrument is shown in Figure 4.
 
Figure 4: GNSS/GPS location on map. 
 
The GNSS/GPS information is used to determine the location and speed of the autonomous vehicle itself.  In the next sections, sensors for detecting objects in the surroundings of an autonomous vehicle are covered.

2. Radar

Radar uses radio waves to detect objects and measure their distance. The name is an acronym for RAdio Detection And Ranging.

Radar sends out radio waves and measures the signal that is reflected as shown in Figure 5.
 
Figure 5: Vehicle equipped with radar emits radio waves (incident signal) and uses the reflected signal to determine the distance to an object.
 
Radar can be used to identify objects at long distances and determine their speed.  It performs well regardless of ambient light levels and in adverse weather conditions.  However, it has difficulty detecting small objects and is prone to interference.
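The distance measurement itself is simple: the range is the round-trip travel time of the reflected wave multiplied by the speed of light, divided by two. A minimal sketch with an illustrative round-trip time:

```python
# Range from the round-trip time of a reflected radio pulse (illustrative values).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_delay(round_trip_seconds):
    # The wave travels to the object and back, so divide by two.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(range_from_delay(1.0e-6))  # 1 microsecond round trip -> ~150 m
```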

Using the Doppler effect, radar is one of the few sensors that can directly measure the relative speed of objects around it.  It does this with a two-dimensional Fourier transform, which produces a range-Doppler map.
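The sketch below outlines this processing for an assumed data cube of chirps by samples, using synthetic data in place of real radar ADC samples. It only illustrates the two-dimensional FFT idea, not any specific radar's processing chain.

```python
# Sketch of range-Doppler processing for a radar data cube
# (assumed shape: chirps x samples-per-chirp; synthetic random data here).
import numpy as np

num_chirps, num_samples = 128, 256
data_cube = np.random.randn(num_chirps, num_samples)  # placeholder for real ADC data

# FFT across the samples within a chirp gives range bins ("fast time");
# FFT across successive chirps gives Doppler (relative velocity) bins ("slow time").
range_fft = np.fft.fft(data_cube, axis=1)
range_doppler_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

magnitude_db = 20 * np.log10(np.abs(range_doppler_map) + 1e-12)
print(magnitude_db.shape)  # (Doppler bins, range bins)
```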

There are a variety of algorithms for processing radar data (Figure 6).
 
Figure 6: A single range-Doppler radar (left) and a four-receiver array radar (right), shown in a polar plot.
 
One advantage of the radio waves used in radar is that they readily bounce off surfaces such as the road.  This property allows radar to effectively “see around” objects.

Take the example shown in the movie below.  A vehicle equipped with radar and a collision avoidance system slows down for an accident involving another vehicle that is not directly visible.



Direct YouTube link: https://youtu.be/G_sGQJh4A08


Because radar waves bounce off the road, the accident that occurs ahead of the forward vehicle is detected.  The radar signal reflects off the road surface underneath the vehicle immediately in front and “sees” what is happening beyond it.  This allows the vehicle to slow down before the crash is even visible to the forward camera.

When recording data from a radar system, the potential outputs include the raw radar signals, range-Doppler maps, and object lists.  The exact output to record depends on the intended use of the recorded data.

Radar compared to other sensors:
  • Lidar: Sensors like lidar cannot “see around” objects because light waves do not “bounce” off surfaces as readily as radar waves. Lidar is not prone to electromagnetic interference and can detect very small objects, though, due to laser beam divergence, it is not as effective over long distances as radar.
  • Ultrasonic sensors: Radar can detect faraway objects, whereas ultrasonic sensors using sound waves can only detect close objects.  For example, ultrasonics can be used to detect vehicles in the adjacent lane.
  • Cameras: Radar is an active sensor – it sends out a signal and measures a response.  By comparison, cameras are passive sensors: they rely entirely on ambient light to function.  This means that cameras are less effective at night than during the day.  Radar, on the other hand, is equally effective at night and during the day.

A spider chart of the different radar attributes is shown in Figure 7.  
 
Figure 7: Radar attribute ratings for autonomous and self-driving applications.
 
The ratings cover three areas:

In-Vehicle Usability
  • Ruggedness:  Is the sensor prone to break down?  Does it have moving parts, or is it solid state?
  • Data and processing: How much computing power is needed to process the data?  The higher the rating, the less processing required.
  • Cost: Is the product mass produced to the point that the unit cost is very low?
Reliability
  • Robustness to interference: Can electromagnetic fields interfere with operation of the sensor?  Can it be thrown off by the presence of water?
  • Day-night independence: Will the sensor operate equally well in daytime and nighttime?
  • Weather independence: Does snow or rain greatly interfere with sensor operation?
Sensing Capabilities
  • Depth: The accuracy with which the sensor detects distance to an obstacle.  The higher the rating, the more accurate.
  • Resolution: How fine are the details that the sensor can distinguish?
  • Classification: Does the sensor easily allow classifying a detected object?  Car, bicycle, pedestrian?
  • Velocity: How accurately and robustly can the sensor determine the speed of a detected object?
  • Frame-Rate: How quickly does sensor information update?  

The further from the center of the spider chart, the higher/better the attribute is rated.  For example, radar works well in all types of weather and equally well at night as during the day.  On the other hand, radar has poor resolution for identifying objects clearly.
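For readers who want to reproduce this kind of chart, the sketch below draws a spider chart with matplotlib. The attribute ratings used here are illustrative placeholder scores, not the exact values shown in Figure 7.

```python
# Minimal sketch of drawing a sensor-attribute spider chart with matplotlib.
# The ratings below are illustrative placeholders on a 1-5 scale.
import numpy as np
import matplotlib.pyplot as plt

attributes = ["Ruggedness", "Data/processing", "Cost", "Interference robustness",
              "Day-night", "Weather", "Depth", "Resolution", "Classification",
              "Velocity", "Frame rate"]
radar_ratings = [5, 4, 4, 3, 5, 5, 5, 2, 2, 5, 4]   # placeholder scores

angles = np.linspace(0, 2 * np.pi, len(attributes), endpoint=False)
# Close the polygon by repeating the first point
angles = np.concatenate((angles, angles[:1]))
values = radar_ratings + radar_ratings[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(attributes, fontsize=7)
plt.show()
```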

3. Ultrasonic

Ultrasonic sensors use sound waves to identify objects surrounding a vehicle.  Ultrasonic sensors operate over short distances and are most often used in parking assist systems or short-range detection of vehicles surrounding a car.

When recording data from an ultrasonic device, potential outputs include a one-dimensional distance in front of the sensor, a combined two-dimensional map around the vehicle, or an object list.
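The underlying distance measurement is analogous to radar, but it uses the speed of sound rather than the speed of light. A minimal sketch with illustrative values:

```python
# Ultrasonic ranging sketch: distance from the echo's round-trip time
# (speed of sound in air at roughly 20 C; illustrative echo time).
SPEED_OF_SOUND = 343.0  # m/s

def echo_distance(round_trip_seconds):
    # The pulse travels to the obstacle and back, so divide by two.
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

print(echo_distance(0.006))  # 6 ms round trip -> about 1 m, typical parking range
```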

An example of an ultrasonic sensor is shown in Figure 8.
 
Figure 8: Ultrasonic sensors can typically be seen in the bumpers of vehicles.
 
Ultrasonic devices are commonly found in today’s vehicles.  They are used to detect nearby vehicles for parking assistance, blind spot avoidance, and collision avoidance.

Radar, lidar, and cameras can “see” over much greater distances. However, ultrasonic sensors are less expensive, compact, and easily concealed in the bumpers of vehicles.

A spider chart of the different ultrasonic attributes is shown in Figure 9.  
 
Figure 9: Ultrasonic sensor ratings for autonomous and self-driving applications.
 
The further from the center of the spider chart, the higher/better the attribute is rated.  For example, ultrasonic sensors are cost effective and work the same independent of daylight or nighttime conditions.  On the other hand, ultrasonic sensors do not work well over long distances.

4. Lidar

Lidar is a promising technology for self-driving vehicles.  A lot of research and development is going into lidar systems to make them cheaper and more reliable (e.g., the transition to solid state).

Lidars are not currently found in many production vehicles.  This is due to the generally higher cost of lidar compared to the other sensors currently on the market.

By sending out light waves and receiving their reflections, lidar can see far into the distance and does not rely on ambient light. However, precipitation adversely affects lidar.

An example of a scanning lidar is shown in Figure 10.
 
Figure 10: Scanning lidar bounces light off objects in the Field of View (FOV).
 
The light used in lidar systems is not visible to the human eye.  The lidar produces images as a point cloud, as shown in Figure 11.
 
Figure 11: Point cloud produced by lidar on top of vehicle.
 
Lidars produce more detailed images than radar.
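Each point in the cloud is obtained by converting a measured range and the beam's scan angles into Cartesian coordinates. The sketch below illustrates that conversion with synthetic placeholder values for a single horizontal scan ring.

```python
# Sketch: converting a lidar scan (angles + measured ranges) into
# Cartesian point-cloud coordinates. Values are synthetic placeholders.
import numpy as np

azimuth = np.radians(np.linspace(0, 359, 360))        # horizontal scan angles
elevation = np.radians(np.full_like(azimuth, 2.0))    # single 2-degree beam
ranges = np.full_like(azimuth, 25.0)                  # 25 m returns (placeholder)

x = ranges * np.cos(elevation) * np.cos(azimuth)
y = ranges * np.cos(elevation) * np.sin(azimuth)
z = ranges * np.sin(elevation)

point_cloud = np.column_stack((x, y, z))
print(point_cloud.shape)  # (360, 3): one XYZ point per return
```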

Early generations of lidar used mechanical scanners to send out the light.  Newer lidars are solid state and less expensive (Figure 12).
 
Figure 12: Examples of solid state lidar.
 
A spider chart of the different lidar attributes is shown in Figure 13.  
 
Figure 13: Lidar attribute ratings for autonomous and self-driving applications.
 
The further from the center of the spider chart, the higher/better the attribute is rated.  For example, lidars identify objects with high resolution and perform independent of ambient light conditions.  On the other hand, lidars are expensive compared to other sensors and are prone to interference from weather conditions like rain and snow.  Scanning lidars are also prone to breaking when subjected to continuous vibrations.

More information is available in the knowledge article: What is Lidar?

5. Camera

Cameras are commonly used in self-driving and advanced driver assistance systems.  Cameras are passive – they do not send out a signal like radar, lidar, and ultrasonic devices.  Cameras depend on ambient light, so they work better in daylight than at night.

Traditional cameras produce video from which objects can be identified.  New generation “smart” cameras also output identified objects automatically (Figure 14).
 
Figure 14: Smart cameras (right) output images and identified objects.  Traditional cameras (left) output only images.
 
Objects that have been identified are referred to as perception data.  An example of perception data overlaid with the original image is shown in Figure 15.
 
Figure 15: Perception data overlaid with video image.
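As a rough illustration, a perception record for a single detected object might contain fields like the ones below. The field names and values are hypothetical and do not correspond to any specific camera's output format.

```python
# Sketch of what a perception (object-list) record from a smart camera
# might contain; field names here are illustrative, not a specific API.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_id: int
    classification: str        # e.g. "car", "bicycle", "pedestrian"
    confidence: float          # 0.0 - 1.0
    bounding_box: tuple        # (x, y, width, height) in image pixels
    timestamp_us: int          # acquisition time in microseconds

frame_objects = [
    DetectedObject(1, "car", 0.93, (412, 280, 160, 120), 1637657925000000),
    DetectedObject(2, "pedestrian", 0.81, (820, 300, 40, 110), 1637657925000000),
]
```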
 
Both rain and snow adversely affect performance of camera systems, reducing their effective range.  

Cameras used in Autonomous Vehicle (AV) and Advanced Driver Assistance System (ADAS) development testing produce raw video.  Raw video, not compressed video like *.mp4 files, is needed to test and develop sensor systems.
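Raw video also drives the storage and bandwidth requirements of a recording system. The sketch below estimates the data rate for an assumed camera configuration; the resolution, bit depth, and frame rate are illustrative assumptions, not specifications of a particular camera.

```python
# Rough raw-video data-rate estimate (assumed camera parameters, for illustration).
width, height = 1920, 1080        # pixels (assumption)
bits_per_pixel = 24               # e.g. 8-bit RGB (assumption)
frames_per_second = 30            # assumption

bits_per_second = width * height * bits_per_pixel * frames_per_second
gigabytes_per_minute = bits_per_second * 60 / 8 / 1e9
print(f"{gigabytes_per_minute:.1f} GB per minute of uncompressed video")
```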

A spider chart of the different camera attributes is shown in Figure 16.
 
Figure 16: Camera attribute ratings for autonomous and self-driving applications.
 
The further from the center of the spider chart, the higher/better the attribute is rated.  For example, cameras identify objects with high resolution.  On the other hand, cameras are affected by ambient lighting conditions and perform worse at night than in daylight.

Raw video output from cameras is much larger than the corresponding output of lidars, radars, and ultrasonic devices. More information about acquiring video is available in the knowledge article: Camera Recording Tips for Autonomous and Advanced Driving Assistance Systems.

6. Summary

No single sensor is the “best” in all categories.  Autonomous vehicle control systems need to rely on the ‘fusion’ of multiple sensors to maximize the overall effectiveness of sensing objects in the vehicle’s surroundings.

The attribute ratings of multiple sensors are overlaid in the spider chart in Figure 17.
 
Figure 17: Attribute ratings overlaid from camera, radar, lidar, and ultrasonic sensors.
 
The combination of sensors ensures “dis-similar” redundancy. The sensors, all being used for the same ultimate purpose, employ entirely different physics for sensing their surroundings. The different (or dis-similar) physics helps ensure that one sensor succeeds even if others fail in any specific situation.

For example, while one sensor may have difficulty in one situation (camera at night), other sensors do not (lidar and radar).  Or one sensor may directly determine the speed of a surrounding object (radar), while another has higher uncertainty (cameras “guess” speed).  If it is raining, some sensors are less affected (radar) while others are more affected (lidar and camera).  There are many examples where this redundancy between sensors improves the overall effectiveness of identifying objects in the surroundings of an autonomous vehicle.
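As a toy illustration of the fusion idea, the sketch below combines distance estimates from several sensors by weighting each with an assumed measurement variance. The numbers are hypothetical and only show the inverse-variance weighting principle; real fusion systems use considerably more sophisticated methods such as Kalman filtering.

```python
# Toy sensor-fusion illustration: combine distance estimates from several
# sensors, weighting each by its (assumed) measurement variance.
estimates = {            # sensor: (distance in m, variance in m^2) - hypothetical
    "radar":  (48.2, 0.5),
    "lidar":  (47.9, 0.1),
    "camera": (46.5, 4.0),
}

weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
fused = sum(w * estimates[name][0] for name, w in weights.items()) / sum(weights.values())
print(f"Fused distance estimate: {fused:.2f} m")
```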

KB Article ID# KB000048455_EN_US


Associated Components

SCAPTOR Hardware, SCAPTOR Software