Maker Pro

The First Programmable Memristor Computer: Impacts on Machine Learning and AI Industries

September 04, 2019 by Lianne Frith

Until now, memristors have primarily been a solution looking for a problem. In a recent development, however, researchers from the University of Michigan claim to have built the first programmable memristor computer.

With the ability to run machine learning algorithms and process AI on small devices, the computer offers the potential to do more computing in memory. If these claims are true, it could have a significant impact on electrical engineering and the ability to develop AI applications. 

 

What Can the Programmable Memristor Computer Do?

Electrical engineers have been searching for a way to do more computing in memory rather than in a processor’s computing core. In-memory computing has the potential to accelerate AI and neuromorphic computing by cutting down power consumption. However, while the concept has been considered for a long time, demonstrations until now have been standalone: either chips developed for a particular AI problem or memristor arrays operated through external computers. 

The University of Michigan claims to have created a programmable memristor computer that can run three standard types of machine learning algorithm entirely on its own, offering a significant advance for the development of AI applications. 

The programmable memristor computer could allow for AI applications to be processed directly on small devices, such as smartphones and sensors. 

Without the need for commands to be sent to the cloud for processing, the response times could speed up considerably, not to mention the improvements in security and privacy that would result from on-chip processing.  

 

University of Michigan engineers

University of Michigan memristor computer researchers Wei Lu and Seung Hwan Lee. Image courtesy of the University of Michigan engineering department. 

 

According to the announcement by the University of Michigan, the new way of arranging the computer components on-chip could drastically change computing, with energy consumption cut by a factor of 100.

 

How Does the Programmable Memristor Computer Work?

The computer was designed by Professor Wei Lu and his team to integrate a memristor array with the other elements needed to program and run it. The components include an array of 5,832 memristors, a standard OpenRISC processor, communication channels and 648 digital-to-analogue and analogue-to-digital converters that act as interpreters between the processor and the array. 

Memristor arrays are particularly suitable for machine learning problems as they turn data into vectors and map them in matrices. Input vectors can then be compared with vectors that are stored in memory. When there is a match, the system knows that the input data has the required trait. 
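
To make the idea concrete, here is a minimal NumPy sketch of in-memory pattern matching, with a matrix of stored values standing in for the analogue crossbar (the patterns and numbers are illustrative, not taken from the Michigan chip):

```python
import numpy as np

# Each column of the "crossbar" stores a reference pattern as conductances.
# In hardware the multiply-accumulate happens in analogue, via Ohm's and
# Kirchhoff's laws; here NumPy emulates the same vector-matrix product.
stored_patterns = np.array([
    [1.0, 0.0, 1.0],   # pattern A
    [0.0, 1.0, 1.0],   # pattern B
]).T                   # shape: (features, patterns)

input_vector = np.array([1.0, 0.0, 1.0])

# A single vector-matrix multiply compares the input against every stored pattern
scores = input_vector @ stored_patterns
best_match = int(np.argmax(scores))
print(f"scores={scores}, best match = pattern {best_match}")
```

In the memristor array, that multiply happens in place, where the data is stored, which is what removes the memory-to-processor shuttling.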

The team integrated the memristor array directly on-chip and mapped machine learning algorithms onto it. They put the chip through three machine learning tests to demonstrate its programmability:

  • Perceptron: The computer had to recognise Greek letters, even when the images were noisy, and achieved 100% accuracy (a minimal software analogue is sketched after this list).

  • Sparse coding: The computer had to build an efficient network of artificial neurons, learning its task and removing neurons that weren’t needed. The computer was able to find the most efficient way to reconstruct images and identify patterns with 100% accuracy. 

  • Dual-layer neural network: The computer had to find patterns in complex data using unsupervised learning. From mammography test scores, the neural network identified important features and then distinguished between malignant and benign tumours with 94.6% accuracy. 
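
For the perceptron test referenced above, a toy software analogue might look like the following. The data is synthetic rather than the noisy Greek-letter images, and on the chip the equivalent multiply-accumulate steps run inside the memristor array rather than in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200, noise=0.3):
    # Synthetic 5x5 binary "images"; the label follows a simple linear rule
    X = rng.choice([0.0, 1.0], size=(n, 25))
    y = (X[:, :13].sum(axis=1) > X[:, 13:].sum(axis=1)).astype(float)
    X += rng.normal(0.0, noise, X.shape)   # add pixel noise
    return X, y

X, y = make_data()
w, b = np.zeros(X.shape[1]), 0.0

# Classic perceptron rule: nudge the weights whenever a sample is misclassified
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

accuracy = np.mean((X @ w + b > 0) == y)
print(f"training accuracy: {accuracy:.2%}")
```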

At its maximum frequency, the chip consumed around 300 milliwatts and performed 188 billion operations per second per watt. It has the potential for greater speeds, however. The chip was built using a 180-nanometer semiconductor manufacturing process. Moving the chip to something like 40-nanometer technology could drop power consumption to almost 40 milliwatts while further boosting performance.
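
Taken at face value, those figures imply a raw throughput in the region of 0.3 W × 188 billion operations per second per watt, or roughly 56 billion operations per second; this is a back-of-envelope estimate from the quoted numbers rather than a figure from the study.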

 

Diagram emulating a memristor function

A diagram emulating a memristor (an I-V curve tracer), demonstrating early work on memristor technology in 2011. Image courtesy of Wikimedia Commons.

 

What Made Development Possible?

The idea of using memristors in computing is by no means a new one. Work on non-volatile resistive switching, otherwise known as memristive behaviour, has been going on for decades, with studies of the phenomenon dating back to the early 1800s. Although there were further developments in the 1970s, they coincided with rapid advances in silicon technology and digital computing, and the nascent technology was put on the back burner.

In 2008, Hewlett Packard Labs first reported their monolithic nanoscale memristive device. It had an internal storage layer that could be dynamically reconfigured through electrical stimulation, creating a memory effect: the programmed state was not lost when the power was removed, giving a new type of functionality.

Further research in 2011 highlighted the potential of memristor-based memory design for low power consumption and good scalability. A study looked at using a dual-element memory structure to reduce energy consumption while retaining programming speed. The dual-element memory achieved up to an 80% reduction in energy consumption.

Lu and his colleagues have been suggesting the potential of electronics based on memristive systems for some time. They have long believed that they could help make advancements in on-chip memory and storage, in-memory computing, and biologically inspired computing. 

The missing piece of the puzzle was differentiating between small variations in the current passing through a memristor device. While neuromorphic computing aligns easily with the analogue nature of memristors, the devices were not precise enough for the numerical calculations of ordinary computers. 

According to their study on memristor-based partial differential equation solvers, Lu and his colleagues made the breakthrough by digitising the outputs and defining ranges as bit values. This allowed them to map large mathematical problems into smaller blocks within the array, driving both efficiency and flexibility. 
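
As a rough illustration of that digitising step, the sketch below applies simple uniform quantisation to noisy analogue outputs; the bit width, full-scale range and values are invented for the example and are not taken from the paper:

```python
import numpy as np

def digitise(analog_outputs, n_bits=3, full_scale=1.0):
    """Map noisy analogue column outputs onto discrete bit values."""
    levels = 2 ** n_bits
    step = full_scale / levels
    # Clip into range, then assign each output to its quantisation bin
    return np.clip(analog_outputs / step, 0, levels - 1).astype(int)

# Small analogue variations within a bin collapse onto the same digital value,
# so device-to-device imprecision no longer corrupts the numerical result.
outputs = np.array([0.26, 0.28, 0.61, 0.95])
print(digitise(outputs))   # -> [2 2 4 7]
```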

 

Potential Electronics Applications

The combination of memory and resistor can be programmed to store information as resistance levels. The result is that memory and processing can take place in the same device, removing the data transfer bottleneck that takes place in conventional computers.

The potential of the programmable memristor computer could enable many advances in electrical engineering:  

  • Machine learning models: As models get larger, on-chip memory becomes a problem, while off-chip data retrieval takes significant time and energy. Being able to do the computing in-memory, with variable resistance acting as information storage, could be a real game-changer. 

  • Device technologies: As devices become smaller, from smartphones to sensors, the challenge of meeting expectations grows. The power consumption needed for AI applications can be too great; however, on-chip computing could change all of this. The technology could also enable far more efficient supercomputers.

  • Response times: Delivering on the expectation of an instantaneous response requires serious computing power; without the need to transfer data off-chip, speed increases drastically.

The potential for programmers to run machine learning algorithms, dealing with vast amounts of data for identification and prediction, away from a computer’s central processor is huge. 

Currently, programmers prefer to use graphics processing units (GPUs), as they have thousands of small cores running calculations simultaneously. If memristors can do their own calculations, thousands of operations within a core could run at once, resulting in faster and more secure results. Where GPUs offer an increase in speed of 10-100 times over general-purpose processors, memristor AI processors could be a further 10-100 times quicker. 

As well as its machine learning applications, the computer would also suit any task requiring matrix-based operations. An example is solving partial differential equations, which can involve complicated forms and the multiple variables needed to model physical phenomena. With large matrices of data, the memory-processor communication process can create a real lag; the memristor array used within a programmable computer solves this problem. Lu’s experimental-scale computer used around 6,000 memristors, but a commercial design could contain millions, offering enormous potential.
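
As a sketch of how a large matrix problem might be broken into smaller blocks of the kind an individual array could hold, the following NumPy tiling of a matrix-vector product is one way to picture it (the block and matrix sizes here are arbitrary):

```python
import numpy as np

def blocked_matvec(A, x, block=64):
    """Compute A @ x tile by tile, as if each block x block tile
    lived on its own small memristor array."""
    n, m = A.shape
    y = np.zeros(n)
    for i in range(0, n, block):
        for j in range(0, m, block):
            # Each tile performs a local multiply-accumulate; partial sums
            # from tiles sharing the same rows are added together.
            y[i:i + block] += A[i:i + block, j:j + block] @ x[j:j + block]
    return y

A = np.random.rand(256, 256)
x = np.random.rand(256)
assert np.allclose(blocked_matvec(A, x), A @ x)
```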

 

Are Memristors the Future of Electronics Design?

Memristor technology could very well revolutionise computing in the coming years. With data fed through the array, mathematical processing occurs through natural resistances without needing to move vectors. The potential for an increase in the speed of machine learning is undeniable. 

While the research field is still developing, it could lead to engineering applications in supercomputing, image processing, computer vision, intelligent control and robotics.

With the prolific growth in Internet of Things (IoT) devices, there is set to be an unprecedented amount of data that will need processing. Programmable memristor computers could very well be the key to enabling technology that is highly configurable, scalable and energy-efficient. 

Lu is due to release the next version of the chip next year and plans to use multiple arrays to demonstrate how larger networks can be developed. The next chip will be faster and more efficient, and the race for AI applications is firmly underway. 
