But before moving on to the Neoverse N1 technology itself, let's start by looking at the limitations of datacentres, and why they have led Arm to develop the N1 and to partner with Docker in the first place.
Introducing The Neoverse Platform and The Arm-Docker Partnership
The Neoverse N1 Platform (the latest entry in Arm's Neoverse family, which was first introduced in October 2018) enables data processing on a much more scalable level than the traditional infrastructure offered by datacentres. The name "Neoverse N1" is a catch-all for a variety of heterogeneous computing elements (including the throughput-focused "Neoverse E1" CPU) that form Arm's latest approach to realising the "next generation [of] cloud-to-edge infrastructure".
The reason for such technology is clear when you consider that Arm itself predicts that 2035 will be the year of a trillion smart, connected devices.
It's that level of IoT growth that means it may no longer be enough to rely on general-purpose datacentres alone to manage an organisation's data flow: the necessary processing capabilities need to be localised, i.e. placed as close as possible to the device in question. This provides a more efficient means of keeping up with modern demands on data traffic (especially with 5G just around the corner), particularly by being both low-power and low-latency.
Fundamentally, these requirements for edge-friendly, and therefore IoT-friendly, data processing have driven both the development of the Neoverse N1 and the partnership between chip designer Arm and the market leader in containerisation, Docker.
The collaboration between the two companies aims, as Arm explains, to deliver a "frictionless cloud-native software development and delivery model for cloud, edge, and IoT".
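As a concrete, purely illustrative sketch of what that cloud-native model looks like in practice, a single container definition can target both x86 and Arm hosts; the base image and file names below are assumptions for the example, not taken from Arm's or Docker's materials:

```dockerfile
# Illustrative multi-architecture Dockerfile: the same definition can be
# built natively for x86 servers and for Arm (e.g. Neoverse-based) machines.
FROM alpine:3.19
COPY app.sh /usr/local/bin/app.sh
CMD ["sh", "/usr/local/bin/app.sh"]
```

With Docker's buildx plugin, `docker buildx build --platform linux/amd64,linux/arm64 -t example/app .` builds images for both architectures from this one file, which is the kind of friction-free cloud-to-edge delivery the partnership targets.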
Having discussed the "why" behind the Neoverse N1's development, let's now see how the N1 achieves such power and speed efficiency through its cloud-to-edge data processing capabilities.
Where the Neoverse Platform Comes in
As Arm puts it, its goal is to offer a more "heterogeneous and distributed infrastructure" through its cloud-to-edge approach. The elements that make this possible open the door for developers to have the right technologies in place, particularly for a future of ubiquitous connected devices.
This section looks at Neoverse's two cores that make this possible: the Neoverse N1, and the Neoverse E1.
The Neoverse N1 Core
The key to accomplishing Arm's goal is efficiency, achieved through two major qualities: low power and high throughput. While the E1 CPU core delivers the latter, the former is covered by the N1 CPU, which is designed with, to quote Arm, "server-class features and thread performance with cutting-edge low-power design techniques".
The N1 is co-optimised with the CoreLink CMN-600 (Arm's mesh interconnect designed for networking infrastructure, high-performance computing, and more), which facilitates an extreme level of scalability: from 8 to 16 cores for networking, storage, security, and edge compute nodes, and 128 or more cores in the context of hyperscale servers.
The platform achieves chip-to-chip connectivity via the Cache Coherent Interconnect for Accelerators (CCIX) standard. The CCIX Consortium maintains a set of specifications that align with Arm's stated goal of introducing the next generation of heterogeneous computing: CCIX enables faster interconnects and stronger cache coherency, improving communication not only with CPU memory but with accelerators, too.
Again, such scalability and networking improvements all reflect Arm's ambition of achieving low-power solutions, i.e. higher efficiency, which leads us to the other side of the same coin: the Neoverse E1's high-throughput offerings.
The Neoverse E1 Core
While the Neoverse N1 covers high-performance processing, the Neoverse E1 is the first mainstream Arm core to use the company's new simultaneous multithreading (SMT) microarchitecture design ("thread" here being short for "thread of execution", i.e. the processing technology that enables multiple computing tasks to work in parallel; see the diagram below).
A basic diagram representing multithreading in action: two processing threads are executed simultaneously and independently of one another. Image courtesy of Wikimedia Commons.
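As a software-level analogy to the diagram above, the minimal sketch below runs two threads of execution concurrently. Note this only illustrates the threading concept; it does not model the E1's hardware SMT, and the worker function and values are invented for the example:

```python
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    # Each thread performs its own independent task: summing 0..n-1.
    total = sum(range(n))
    with lock:  # Guard the shared list against concurrent appends.
        results.append((name, total))

# Two threads of execution, launched to run concurrently.
threads = [
    threading.Thread(target=worker, args=("thread-1", 100)),
    threading.Thread(target=worker, args=("thread-2", 1000)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # Wait for both threads to finish before reading results.

print(sorted(results))  # [('thread-1', 4950), ('thread-2', 499500)]
```

In hardware SMT, as in the E1, two such threads share one physical core's execution resources, so stalls in one thread can be filled by useful work from the other.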
According to Arm, improvements such as this have led to the E1 offering the following enhancements over its predecessor, the Cortex-A53:
- 2.1 times the compute performance.
- 2.7 times the throughput performance.
- 2.4 times the throughput efficiency.
This introduction of multithreading marks a turning point for Arm, which has traditionally avoided SMT in favour of its multi-core "big.LITTLE" processing solution. Now, however, as network and communications technology becomes ever more prevalent, SMT-based parallel task processing is more important than ever before.
This again goes back to scalability in terms of meeting modern demands: Arm says that such an architectural design is able to "support [today's] throughput demands for next-generation edge to core data transport".
Arm's infographic showing the efficiency increases, software compatibilities, and general scalability of the Neoverse E1 Platform. Image courtesy of Arm.
Arm's efforts (and, by extension, Docker's) to introduce a cloud-to-edge infrastructure are a sign of the changing times: it is no longer enough to rely on remote, general-purpose datacentres in view of the exponential growth in connected devices; and, suffice to say, the Neoverse Platform as a whole rises to these enormous data demands.
This is chiefly thanks to the efficiency breakthroughs the technology has brought to the table, which again relate to Neoverse's two cores: the N1 and the E1, milestones in power efficiency and throughput scalability respectively.
The result of such leaps in heterogeneous computing is that not only is Arm preparing for its predicted one trillion connected devices in 2035, but it has also paved the way for developers to integrate their own custom-built architectures rooted in the open aspects of Neoverse IP. To end with a quote from Drew Henry, SVP of Arm's Infrastructure Line of Business:
"This incredible scalability gives our partners the flexibility to build diverse compute solutions by adding accelerators or other features with their own on-chip custom silicon. All of this enables our partners to deliver solutions with a lower total cost of ownership for infrastructure customers."
For more information on Arm's connected technology developments, read our interview with its senior VP of IoT Cloud Services, Himagiri Mukkamala.