Maker Pro

AImotive’s aiWare3P Neural Network Inference Engine Delivers Impressive Acceleration for Automotive AI

January 10, 2020 by Luke James

AImotive, a global supplier of scalable, modular automated driving technologies, has begun shipping the latest release of its aiWare3 neural network (NN) hardware inference IP to its largest customers.

Hungary-based AImotive, a leading developer of software- and hardware-based automated driving technologies, has started shipping its aiWare3P neural network (NN) hardware inference engine intellectual property. 

The aiWare3P IP core, announced in 2018, offers a hardware NN accelerator for high-resolution automotive vision applications. With the flexibility to be deployed within a system-on-chip (SoC) or as a standalone NN accelerator, the core is provided as fully synthesizable RTL, and its low-level microarchitecture is designed to consume fewer host CPU and shared memory resources than other available hardware NN accelerators. 


Significantly Improved Performance

The new IP core incorporates features that deliver significantly improved performance, lower power consumption, and reduced host CPU load. It also boasts a simpler physical layout for larger chip designs. 

Márton Fehér, senior vice president of hardware engineering at AImotive, said, "Our production-ready aiWare3P release brings together everything we know about accelerating neural networks for vision-based automotive AI inference applications… We now have one of the automotive industry's most efficient and compelling NN acceleration solutions for volume production L2/L2+/L3 AI."


AImotive company logo.


Zero Host CPU Intervention Required

New features of the aiWare3P hardware IP include support for a larger portfolio of pre-optimized embedded activation and pooling functions. This allows most neural networks to execute entirely within the aiWare3P core, with no host CPU intervention. Keeping intermediate results inside the core dramatically reduces external memory bandwidth requirements, making it all the more attractive for modern automotive applications. 
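To see why executing activation and pooling inside the accelerator cuts external memory bandwidth, consider a rough back-of-envelope model. This sketch is illustrative only and uses no AImotive figures; the feature-map dimensions and the assumption of 8-bit activations are hypothetical.

```python
# Illustrative model (hypothetical numbers, not AImotive's): estimate external
# memory traffic for one conv + activation layer, with the activation applied
# either by the host CPU or inside the accelerator.

def feature_map_bytes(width, height, channels, bytes_per_value=1):
    """Size of one feature map in bytes (assumes 8-bit activation values)."""
    return width * height * channels * bytes_per_value

def traffic_per_layer(fmap_bytes, on_accelerator):
    """External DRAM traffic for one layer.

    Host-side activation: the accelerator writes the conv output to DRAM,
    the host reads it, applies the activation, writes it back, and the
    accelerator reads it again -- four transfers of the feature map.
    On-accelerator activation: only the final result is written -- one.
    """
    return fmap_bytes * (1 if on_accelerator else 4)

# Example: a 1920x1080 feature map with 32 channels.
fmap = feature_map_bytes(1920, 1080, 32)
off = traffic_per_layer(fmap, on_accelerator=False)
on = traffic_per_layer(fmap, on_accelerator=True)
print(f"host activation: {off / 1e6:.0f} MB, on-accelerator: {on / 1e6:.0f} MB")
# host activation: 265 MB, on-accelerator: 66 MB
```

Multiplied across the dozens of layers in a typical vision network, avoiding these host round trips is where the bulk of the bandwidth saving comes from.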

Other key upgrades include higher efficiency for a wider range of neural network functions thanks to improved on-chip data reuse and movement, upgraded external memory bandwidth management, and advanced cross-coupling between C-LAM convolution engines and F-LAM function engines. 

aiWare3P is currently being deployed in a range of L2/L2+ production solutions, as well as being adopted for studies of more advanced sensor applications. Some of AImotive’s customers include ON Semiconductor and Nextchip. 

aiWare3P will be shipping to all customers from this month onward. 
