The Main Hardware Challenges Facing Self-Driving Vehicles

June 16, 2020 by Emmanuel Ikimi

Despite tremendous progress in autonomous vehicle development over the years, hardware limitations remain among the most significant concerns facing the industry.

This article sheds light on the most pertinent hardware challenges affecting the development of driverless vehicles.

 

Sensory Elements and Limitations in Self-Driving Vehicles

Autonomous cars and trucks require hardware components that continuously sense operational and environmental conditions. The most critical sensory elements in driverless vehicles include the following.

 

Light Detection and Ranging (LIDAR)

LIDAR sensors in self-driving cars and trucks generate 3D representations of objects in their surroundings by beaming millions of laser pulses in multiple directions and measuring the reflected pulses and the time each takes to return.

Automotive LIDAR sensors employ microelectromechanical-system (MEMS) mirrors, which combine electrical and mechanical structures on a single substrate. A MEMS-mirror-based LIDAR contains a laser module that beams laser pulses in multiple directions around the vehicle. Scanning mirrors capture the reflected pulses, and a data acquisition unit calculates the distance to each object from the time elapsed between transmission and detection.
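
To make the time-of-flight calculation concrete, here is a minimal Python sketch that converts a pulse's round-trip time into a distance. The constant and function names are illustrative, not part of any real sensor's interface.

    SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

    def distance_from_time_of_flight(elapsed_s):
        """One-way distance in metres, given a pulse's round-trip time."""
        # The pulse travels to the object and back, so halve the total path.
        return SPEED_OF_LIGHT_M_S * elapsed_s / 2

    # A pulse returning after ~667 nanoseconds implies an object ~100 m away.
    print(f"{distance_from_time_of_flight(667e-9):.1f} m")  # ~100.0 m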

The most significant challenge with integrating LIDAR into autonomous vehicles is cost. Fitting a sizeable number of sensors into the multiple sections of a vehicle can easily add several thousand dollars to production costs, making mass production too demanding for smaller manufacturers.

LIDAR also faces its fair share of technical challenges: consider that mechanical shock and vibration (both frequent during regular automobile use) can impair the technology’s accuracy and reliability. Moreover, a report by MIT revealed that even the best 3D image mapping technology can only resolve objects of “a few centimetres at distances of more than 100 metres”.

 

Image: four self-driving cars travel on an open road, surrounded by communication graphics representing their automotive radar functionality and annotated with statistics from their mapped journey.

 

Radio-Frequency Ranging Sensors 

RF ranging sensors utilise pulses at millimetre-wave frequencies to detect the direction and speed of objects and long-range obstacles (such as people, trees, and city infrastructure) within the vehicle’s vicinity. RF sensors support forward collision warning systems, which can minimise the risk of road accidents.

RF ranging sensors come in two main types: diode-detector-based and heat-based. The former uses special diodes, e.g. Schottky and gallium-arsenide diodes, as rectifiers to produce a signal input; heat-based sensors rely on thermistors or thermocouples.

Mid- and long-range radar (used for emergency braking and adaptive cruise control) and short- and ultra-short-range radar (used for rear collision avoidance and blind-spot/pedestrian detection) both lack the resolution to capture fine details (such as smaller objects) in a vehicle’s surroundings.

The problem lies mostly in the limited sweep bandwidth available at the commonly used 24 GHz band. A move to 77 GHz, which depends on the development of more powerful self-driving chips, will improve the resolution and accuracy of autonomous vehicle radar.
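
The sketch below shows why that move matters, using the standard FMCW range-resolution relation: resolution equals the speed of light divided by twice the sweep bandwidth. The bandwidth figures are typical values assumed for illustration, not specifications of any particular sensor.

    SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

    def range_resolution_m(bandwidth_hz):
        """Smallest range separation resolvable with a given sweep bandwidth."""
        return SPEED_OF_LIGHT_M_S / (2 * bandwidth_hz)

    # Assumed typical sweep bandwidths: ~250 MHz in the 24 GHz ISM band
    # versus up to ~4 GHz in the 77 GHz automotive band.
    print(f"24 GHz band: {range_resolution_m(250e6) * 100:.0f} cm")  # ~60 cm
    print(f"77 GHz band: {range_resolution_m(4e9) * 100:.1f} cm")    # ~3.7 cm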

 

Ultrasound Sensors

These sensors transmit short ultrasonic impulses from the front and rear of self-driving vehicles and analyse the echo signals that bounce off various surrounding objects.

Ultrasonic sensors consist of an aluminium structure with a diaphragm that houses a piezoceramic element. In transmission mode, the sensor receives an input signal from the electronic control unit (or ECU), which causes the diaphragm to vibrate, producing ultrasonic impulses in multiple directions.

In reception mode, the diaphragm picks up the echoes that bounce off objects around the vehicle and passes the vibrations to the piezoceramic element, which converts them into an analogue signal. A processing system then converts that signal into a digital one for the vehicle’s software.
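
As a minimal sketch of the ranging step, the snippet below converts an echo delay into a distance using the speed of sound; the constant and function names are illustrative assumptions.

    SPEED_OF_SOUND_M_S = 343.0  # in dry air at roughly 20 degrees Celsius

    def echo_distance_m(echo_delay_s):
        """Distance to an obstacle, given the round-trip echo delay."""
        # Sound travels out and back, so the one-way distance is half the path.
        return SPEED_OF_SOUND_M_S * echo_delay_s / 2

    # An echo arriving 11.7 ms after the pulse implies an obstacle ~2 m away.
    print(f"{echo_distance_m(11.7e-3):.2f} m")  # ~2.01 m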

The main drawback of ultrasonic sensors in self-driving vehicles is their limited resolution. Unlike LIDAR sensors, ultrasonic sensors cannot recognise fine detail, small objects, or objects moving at high speed. They also have a narrower field of view and a much shorter range, and cannot distinguish between colours.

 

Visual Spectrum Cameras

Self-driving vehicles contain high-resolution, omnidirectional cameras that allow the autonomous software to distinguish objects and obstacles 360 degrees around the vehicle. Advanced visual cameras also utilise infrared (thermal) sensors that are capable of detecting wavelengths beyond the visible spectrum.

Visual spectrum cameras in autonomous vehicles have a considerable margin of error in environmental conditions that impair visibility, such as fog, snow, and haze. Additionally, they cannot ‘see’ properly in glare, such as from low sun or oncoming headlights.

 

Self-Driving Chips

Autonomous driving chips are system-on-chip (SoC) solutions that provide the artificial intelligence capabilities needed to run neural networks. Their main functions are processing the large volumes of data obtained from the various sensors and enabling the system to make intelligent decisions like a human. Their processing capacities are measured in trillions of operations per second (TOPS).

Self-driving chips may contain some or all of the following components on a single chip:

  • A central processing unit (or CPU)

  • A graphics processing unit (or GPU)

  • An image signal processor

  • A stereo/optical flow accelerator

  • A programmable vision accelerator
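
To give a feel for what TOPS means in practice, here is a hypothetical back-of-envelope estimate of the sustained compute a single perception task might demand. All workload figures are assumptions for illustration, not measurements of any real system.

    def required_tops(ops_per_frame, frames_per_s, num_sensors):
        """Sustained tera-operations per second for a perception workload."""
        return ops_per_frame * frames_per_s * num_sensors / 1e12

    # Assumptions: a detection network costing ~50 GOPs per frame, run at
    # 30 frames per second on each of 8 cameras.
    print(f"{required_tops(50e9, 30, 8):.0f} TOPS")  # 12 TOPS for this task alone

Multiply that across tracking, localisation, and planning workloads, and the multi-hundred-TOPS figures quoted for higher levels of autonomy start to look plausible.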

 

Image: an intelligent vehicle’s front-side interior, with an infotainment screen reflecting the vehicle’s intelligent transport system functionalities.

 

The main challenge in building self-driving chips for Level 5 autonomy (i.e. fully self-driving vehicles) is the prodigious volume of data coming in from the sensors, which must be processed constantly and in real time. Moreover, every driving scenario is different and presents unique complexities.

At the time of writing, processing and computing capacities remain limited, although chipmakers such as NVIDIA (which popularised the term ‘GPU’) are stepping up to the challenge of building more powerful AV chips.

 

Fully Autonomous Driving: A Two-Pronged Challenge

It helps to think of the challenges facing self-driving vehicles as twofold: they concern both the inside and the outside of the cars.

Both facets equally impact the safety of drivers and pedestrians. Within self-driving cars, automakers grapple with gathering as much data as possible from the surroundings (using automotive LIDAR, radar, visual spectrum cameras, and more) to allow for safe navigation.

Pedestrian behaviour and the driving habits of other road users are another critical piece of the puzzle. For example, how would an autonomous vehicle react to a child suddenly running in front of it on a busy road, or to a brash driver swerving suddenly into its lane?

Needless to say, automakers will need to prepare autonomous vehicles to respond dynamically to human behaviour, which rarely fits a predictable model, as the current technology progresses towards Level 5 autonomy.
