
Swiss Startup aiCTX AG Develops First 1M Neuron Chip for Optimal Vision Processing

April 30, 2019 by Sam Holland

Swiss brain-chip company aiCTX AG has announced the DynapCNN, billed as the world’s first event-driven, fully asynchronous neuromorphic AI processor.

This dynamic visual AI processor is designed for low-power, always-on, real-time dynamic vision applications, and it is expected to open up many options for dynamic vision processing by bringing event-based vision applications to power-constrained devices for the first time.

Because its visual processing is based on pixel-level, event-driven computing, this neuromorphic convolutional neural network (CNN) processor is claimed to outperform conventional frame-based static vision processing, offering a new approach to dynamic vision with ultra-low latency.
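To illustrate the general idea (a simplified Python sketch, not aiCTX’s implementation or API), the contrast between frame-based and event-based processing looks roughly like this: a frame-based pipeline touches every pixel of every frame, while an event-based pipeline only does work for the pixels that actually changed.

    import numpy as np

    def frame_based_step(frame, process_pixel):
        # Frame-based: every pixel of every frame is processed, even in a static scene.
        h, w = frame.shape
        return [process_pixel(y, x, frame[y, x]) for y in range(h) for x in range(w)]

    def event_based_step(prev_frame, frame, process_pixel, threshold=0.05):
        # Event-based (illustrative): only pixels whose brightness changed beyond a
        # threshold emit an "event", so work scales with scene activity, not resolution.
        diff = np.abs(frame.astype(float) - prev_frame.astype(float))
        events = np.argwhere(diff > threshold)
        return [process_pixel(y, x, frame[y, x]) for y, x in events]

    # A static scene produces zero events, so the event-based path does almost no work.
    prev = np.zeros((4, 4))
    cur = prev.copy()
    cur[1, 2] = 1.0  # a single pixel change, e.g. caused by a moving object
    print(len(event_based_step(prev, cur, lambda y, x, v: (y, x, v))))  # -> 1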

 

DynapCNN: Features

The DynapCNN is a highly configurable, scalable, and purely asynchronous 12 mm² chip manufactured in 22-nanometre (nm) technology. Besides supporting multiple CNN architectures and scaling to large spiking CNNs, the DynapCNN integrates more than one million spiking neurons and four million programmable parameters.

Its event-triggered operation, built on an efficient network structure and fully asynchronous circuitry, gives the chip ultra-low power consumption and ultra-low latency while remaining well suited to processing dynamic information.

 

DynapCNN: What Makes It More Efficient

According to aiCTX, the DynapCNN is the most power-efficient way of processing data from event-based and dynamic vision sensors—100 to 1,000 times more efficient than existing solutions, the company claims. In addition, the processor delivers 10x shorter latencies and can handle ultra-low-power artificial intelligence processing thanks to its event-driven design, asynchronous digital logic, and custom IPs.

The DynapCNN is therefore the first ASIC to combine the energy efficiency of event-driven neuromorphic design with machine learning in a single device. Its low power consumption means AI operations can run continuously, with data processed locally on the end device.

According to Sadique Sheik, a senior R&D engineer at aiCTX, performing computation locally saves both cost and energy because large amounts of sensory data no longer need to be sent to the cloud.

This also strengthens users’ data and privacy protection—a feature that traditional deep learning ASICs lack, according to Dr. Qiao Ning, aiCTX’s CEO.

 

Image courtesy of Flickr.

 

Thanks to its event-driven computing mechanism and asynchronous digital circuits, the DynapCNN does not depend on a high-speed clock; instead, it is triggered by variations in the visual scene, meaning that pixel changes caused by moving objects are processed in real time.

The chip’s continuous computation yields an ultra-low latency of less than 5 ms, roughly a 10x improvement over the conventional deep learning solutions available in the real-time vision processing market.

Unlike traditional image processing systems, which process video frame by frame even when nothing in front of the camera changes, the DynapCNN delivers always-on vision processing; because it is event-driven, its power consumption drops to almost zero for real-time visual processing when the target object does not change.

The chip’s power consumption is further reduced by applying sparse computation when processing the movement of target objects, according to Dr. Ning.
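As a minimal sketch of that idea (hypothetical Python assuming a single convolution channel, not the DynapCNN’s actual internals), a convolution output can be updated only at the locations touched by incoming pixel events, so a static scene costs essentially nothing.

    import numpy as np

    def sparse_conv_update(feature_map, weights, events):
        # Update a running convolution output only where events occurred (illustrative;
        # the kernel flip of a true convolution is omitted for brevity).
        # feature_map : 2D output of a single convolution channel
        # weights     : k x k kernel (k odd)
        # events      : list of (y, x, delta) pixel changes; empty when the scene is static
        k = weights.shape[0]
        r = k // 2
        h, w = feature_map.shape
        for y, x, delta in events:
            # Each event only touches the k x k neighbourhood around it, so the cost is
            # O(len(events) * k^2) rather than O(h * w * k^2) for a full frame.
            if r <= y < h - r and r <= x < w - r:  # skip borders in this toy example
                feature_map[y - r:y + r + 1, x - r:x + r + 1] += delta * weights
        return feature_map

    fmap = np.zeros((8, 8))
    kern = np.ones((3, 3))
    sparse_conv_update(fmap, kern, events=[])             # static scene: nothing to do
    sparse_conv_update(fmap, kern, events=[(4, 4, 1.0)])  # one event: a 3x3 patch updates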

In short, this neuromorphic CNN processor offers an unprecedented combination of ultra-low power consumption and low-latency performance, which are its two main selling points.

 

DynapCNN: Applications

The DynapCNN should find a wide range of applications, especially given the growing demand for low-power, real-time intelligent processing in mobile devices and the Internet of Things (IoT). Thanks to its high flexibility and reconfigurability, the chip can implement a range of AI models, and its event-triggered operation enables it to reach sub-mW power levels.

This new-generation processor integrates an interface circuit that can connect to most dynamic vision cameras for gesture recognition, high-speed moving-object tracking, face recognition, behaviour recognition, and classification in dynamic image processing.

For real-time visual processing, most of the application scenarios will be based on recognising moving targets, such as face recognition, tracking and positioning of moving objects, gesture recognition, and so on.

Furthermore, since the DynapCNN provides an ultra-low-latency dynamic vision solution that cuts recognition response times by more than 10x, the processor is well suited to high-speed scenes, such as those involving high-speed aircraft.

 

Image courtesy of aiCTX.

 

This chip brings together the advantages of traditional deep learning and neuromorphic computing, making it a strong candidate for ultra-low-power dynamic image processing and point cloud signal processing.

In point cloud signal processing, for example, signals such as LiDAR data can be processed in real time for behaviour recognition, object recognition, region division, and image segmentation. In real-time vision processing, almost all the applications are movement-driven tasks such as presence detection, gesture recognition, face detection, and so on.

Generally, DynapCNN will be helpful in a variety of AI edge computing applications that demand ultra-low-latency and ultra-low-power features, including IoT applications, security, wearable healthcare systems, and co-processors for mobile and embedded devices, among others.

 

DynapCNN: Availability

According to the Swiss startup, the DynapCNN’s development kits will be made available in the third quarter of 2019, which suggests the chip is likely to be sampled in 2020, making it the first commercial spiking neural network brain chip.

With the announcement of the DynapCNN, aiCTX’s ambition to move beyond the traditional von Neumann architecture is gaining traction. The chip promises to significantly reduce the latency of real-time visual processing, and its power efficiency is claimed to be 100 to 1,000 times higher than existing solutions.

This has two implications: the new release will accelerate artificial intelligence algorithms for visual signal processing, and it opens the prospect of long-lived battery-operated equipment.

If it delivers as promised, the chip should outperform traditional frame-based static vision processing and usher in a new era of dynamic vision. It will open new opportunities for the dynamic processing of artificial vision and shed new light on event-driven applications.

After all, AI processing does not have to be a power-hungry application.
