Maker Pro

EU Project TEMPO Aims to Produce AI Applications with Emerging Memory Technologies

July 18, 2019 by Tyler Charboneau

We’ve witnessed the exponential growth of AI applications in recent years, much of it driven by small companies with large ambitions. However, the EU’s Project TEMPO constitutes a coordinated, European effort to strengthen AI’s future.

Today’s portable and smart devices are ideal development platforms. While we have continually reduced the footprint of hardware components, their capabilities have simultaneously improved. Memory technologies, in particular, are crucial to application development.

Machine learning and natural language processing are foundational technologies for AI applications. They’re also quite demanding. From a performance standpoint, emerging memory technologies must meet intensive application demands head-on. Project TEMPO hopes to make ambitious applications a reality through hardware innovation.


What Does The Partnership Entail?

Project TEMPO is a collaboration between 19 members in the industrial and research spaces. It transcends national borders and rallies both private and public-sector resources around a collective goal. It is funded by a grant from the ECSEL Joint Undertaking and runs until 2022. France, Germany, the Netherlands, and others are primary project partners.


Image courtesy of eeNews.


A Memory 'Power' Trio

TEMPO is harnessing the power of three memory technologies: imec’s MRAM, Fraunhofer’s FeRAM, and CEA-Leti’s RRAM. According to imec, these are ideal for implementing “both spiking neural network (SNN) and deep neural network (DNN) accelerators for 8 different use cases, ranging from consumer to automotive and medical applications”.

Each flavour brings a different set of possibilities to the table. Let’s examine each offering:



imec's contribution, STT-MRAM (spin-transfer torque MRAM), is designed as a last-level cache for 5-nanometre nodes. This is significant, as next-generation chip fabrication is focused on making these diminutive nodes commonplace. Additionally, testing has shown MRAM to be more power-efficient than static random-access memory (SRAM). Since AI applications will live in mobile devices, these power savings are advantageous.

STT-MRAM consists of three layers forming a magnetic tunnel junction: a thin dielectric layer sandwiched between a magnetic fixed layer and a magnetic free layer. This is a simpler alternative to traditional CMOS transistor designs, and it has a smaller footprint: one STT cell is 56.7 per cent smaller than an equivalent SRAM cell. STT-MRAM also delivers strong performance, making it well suited to the high-performance computing sphere.



Ferroelectric RAM (FeRAM) is similar to dynamic RAM (DRAM), though a ferroelectric layer replaces the dielectric layer. This layer's polarisation can be switched by an electric field, which is how data is stored. FeRAM is innately more compact than DRAM, and its tunability makes it flexible enough for a variety of applications.

FeRAM is power-efficient and adept at quick writes. Furthermore, FeRAM can store data for roughly a decade at 85 degrees Celsius, and retention stretches even further in cooler, low-power systems.



Resistive RAM (RRAM) comprises a conductive top layer, a silicon medium, and a non-metallic lower layer. Applying a voltage across a pair of electrodes drives ions into a switching layer, forming a conductive filament and changing the cell's resistance; that resistance change is what stores the data. RRAM is often compared to NAND flash memory, yet it is considerably denser, and RRAM silicon wafers are strikingly compact. RRAM also has a huge production advantage: most fabricators won't have to retool their manufacturing processes to produce RRAM at scale.


The Evolution of AI Applications

AI is focused on rapid processing and automation, i.e., tasks that require heavy computing power. Many mobile processors today can tackle neural-network operations, and some can handle millions, or even billions, of operations per second.

This power is impressive at the processing level, but memory technology remains the largest bottleneck in neural networking. Project TEMPO aims to clear this hurdle.
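Why is memory, not compute, the bottleneck? A back-of-the-envelope arithmetic-intensity check makes the point. The sketch below is purely illustrative: the peak-compute and bandwidth figures are hypothetical assumptions, not numbers from TEMPO or any specific chip.

```python
# Rough check of whether a fully connected layer is compute-bound or
# memory-bound on a hypothetical mobile accelerator.
# Both hardware figures below are illustrative assumptions.

PEAK_FLOPS = 2e12        # 2 TFLOP/s peak compute (assumed)
PEAK_BANDWIDTH = 30e9    # 30 GB/s memory bandwidth (assumed)

def layer_intensity(in_features: int, out_features: int,
                    bytes_per_weight: int = 1) -> float:
    """FLOPs performed per byte of weights fetched, for one inference."""
    flops = 2 * in_features * out_features           # one multiply-accumulate per weight
    bytes_moved = in_features * out_features * bytes_per_weight
    return flops / bytes_moved

# Ridge point: below this intensity, the layer cannot keep the compute units fed.
ridge = PEAK_FLOPS / PEAK_BANDWIDTH

intensity = layer_intensity(1024, 1024)
print(f"arithmetic intensity: {intensity:.1f} FLOP/byte")
print(f"ridge point:          {ridge:.1f} FLOP/byte")
print("memory-bound" if intensity < ridge else "compute-bound")
```

With these assumed figures, the layer performs only 2 FLOPs per byte fetched while the accelerator would need roughly 67 FLOPs per byte to stay busy, so inference speed is set by how fast weights move out of memory, which is exactly the lever TEMPO's memory technologies target.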

AI applications are growing in complexity and number—that much is undeniable. These applications are becoming more pervasive via smartphones and home assistants.

Consider Google Home and the Amazon Echo family: two popular platforms that use natural-language processing. These algorithms analyse speech in real time, translating it internally into commands.


From left to right: a microphone icon (voice recognition button) and Siri's active voice command screen. Image courtesy of Twilio.


Though this appears simple at the user experience level (as it should), the background processes are relatively complex. As companies expand such capabilities, algorithms will have to evolve in lockstep. This increased functionality comes with a computing cost. While our current technology offers a degree of future-proofing, that extra headroom will eventually run out.


The Cognitive Component

AI assistants hold large chunks of reference data in memory, built up from user routines. Deep learning and machine learning emulate human cognitive processes and accelerate them in hardware. Humans are creatures of habit, and we developed AI to recognise and learn from these patterns.


Digital brain concept. Image courtesy of Chartered Financial Analyst Institute.


Quick cognition is based on heuristics and associations, which spark recall. That may include associating environmental stimuli with past events, or—perhaps most importantly—drawing from past experiences to solve logic problems.

Think about it this way: we store ageing memories in the 'back of our minds', so to speak, while recent memories are easily accessible. How do we translate this human cognition to hardware? Computers hold data for long periods in storage memory.

Active application data, meanwhile, is held in RAM for short periods, where it is cached and readily accessible. Neural networking and machine learning rely heavily on this memory hierarchy.
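The hierarchy described above, slow bulk storage for old data and a fast cache for recent data, can be sketched as a tiny two-tier memory with least-recently-used eviction. This is a generic Python illustration of the concept, not code from TEMPO or any real memory controller.

```python
from collections import OrderedDict

class TwoTierMemory:
    """Toy model of the storage/RAM split: a small, fast LRU cache
    backed by a large, slow store. Purely illustrative."""

    def __init__(self, cache_size: int = 3):
        self.cache_size = cache_size
        self.cache = OrderedDict()   # "RAM": recent data, readily accessible
        self.storage = {}            # "storage": everything, slower to reach

    def write(self, key, value):
        self.storage[key] = value
        self._touch(key, value)

    def read(self, key):
        if key in self.cache:                 # fast path: cache hit
            self.cache.move_to_end(key)
            return self.cache[key], "cache"
        value = self.storage[key]             # slow path: fetch from storage
        self._touch(key, value)               # promote to the cache
        return value, "storage"

    def _touch(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)    # evict the least recently used

mem = TwoTierMemory(cache_size=2)
mem.write("a", 1); mem.write("b", 2); mem.write("c", 3)  # "a" is evicted
print(mem.read("c"))  # (3, 'cache')   — recent, served fast
print(mem.read("a"))  # (1, 'storage') — aged out, fetched slowly
```

The cache here plays the role of RAM in the article's analogy: whichever keys were touched most recently answer instantly, while everything else pays a trip to the slow tier.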

As you might expect, some types of RAM are better suited to certain operations than others. Accordingly, Project TEMPO is built around advancing RAM technology so that these AI operations can become even more efficient. MRAM, FeRAM, and RRAM are flexible technologies with room for improvement, and TEMPO's focus is on squeezing every last ounce of performance out of them.


The Logistics Question

Fabricators are key, as silicon production and experimentation require time and effort. TEMPO is largely research-focused, allowing manufacturers to evaluate their production practices as technology evolves. We are shifting to smaller fabrication processes, which will already require a degree of retooling.

Though AI is still maturing overall, engineers will have to eventually face these changes. It’s particularly advantageous to develop memory solutions that utilise existing production methods, aiding scalability.

For example, this is a strength of RRAM, a technology in which CEA-Leti has invested heavily. As the TEMPO partners refine their respective solutions over the next three years, it will be interesting to observe the manufacturing ramifications.
