Software-defined networking, storage and compute capabilities

Resource Capabilities

  • COMPUTE
  • STORAGE
  • NETWORKING

COMPUTE

The compute layer of the KMAX product uses a scalability model based on the concept of a Compute Unit. A compute unit delivers the required compute in a form appropriate for the target applications, along with the local memory needed to support the applications running on it. A compute unit does not itself define any storage or networking resources; these are defined at the compute node level, where they are shared by one or more compute units.
 

Exynos7 Compute Unit
 

The first generation of KMAX compute unit is based on the Samsung Exynos range of components, the first ARMv8-A devices built on 14nm lithography. KALEAO leverages these devices to bring the extreme power efficiency of ARM big.LITTLE technology to the data centre for the first time.

Processor Cores
Each compute unit delivers 4x Cortex-A57 processor cores along with 4x Cortex-A53 processor cores in a big.LITTLE arrangement. The Cortex-A57 cores deliver the heavy computational lifting to ensure rapid application responses, while the Cortex-A53 cores ensure that background and low-intensity workloads do not interrupt or detract from the high-intensity activities, keeping performance high. Working as a full 8-core SMP platform, the total performance can significantly exceed that of the more expensive processor sockets used today across many web-scale applications.
The KALEAO roadmap includes compute units with different capabilities, including the addition of heterogeneous cores and reconfigurable-fabric-based accelerators.
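
As an illustration, the big.LITTLE core mix of a compute unit can be observed from Linux by reading the "CPU part" field in /proc/cpuinfo; the ARM part numbers 0xd07 and 0xd03 identify Cortex-A57 and Cortex-A53 cores respectively. The short sketch below is a minimal example, not part of the KMAX software stack, and simply counts the cores of each type.

  # Minimal sketch: count big.LITTLE cores on an ARMv8 Linux host.
  # ARM "CPU part" IDs: 0xd07 = Cortex-A57 (big), 0xd03 = Cortex-A53 (LITTLE).
  from collections import Counter

  PART_NAMES = {"0xd07": "Cortex-A57", "0xd03": "Cortex-A53"}

  def core_mix(path="/proc/cpuinfo"):
      counts = Counter()
      with open(path) as cpuinfo:
          for line in cpuinfo:
              if line.startswith("CPU part"):
                  part = line.split(":")[1].strip()
                  counts[PART_NAMES.get(part, part)] += 1
      return counts

  if __name__ == "__main__":
      for name, count in core_mix().items():
          print(f"{count}x {name}")
      # Expected on a KMAX Exynos7 compute unit: 4x Cortex-A57, 4x Cortex-A53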

Main Memory
Each set of processor cores is paired with 4GB of package-on-package (PoP) LPDDR4 memory providing 25GB/s of bandwidth, roughly 1GB per hardware thread. This matches the predominant application domain configuration of 4x application cores on a 4GB machine. Unlike most other environments, however, the processor density of KMAX limits the need to overcommit applications onto a single socket, so each application receives the full performance entitlement of the processor without sharing clock frequency or memory channel bandwidth.

NVCache Memory
The NVCache is a high-bandwidth, tightly coupled memory with uses ranging from an ephemeral store for rapid boot and virtual machine instantiation through to providing the transcendent cache used for memory overcommit by the virtual machine manager. The Exynos compute unit uses a UFS2-based device whose total capacity and endurance can be selected to match the capability required. Each compute unit can use a 64, 128 or 256GB device, enabling a range of applications. The device is further enhanced by an inline hardware-based compressor/decompressor that can provide an effective 3x increase in the capacity and/or endurance of the selected device.
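
As a rough worked example, assuming the ~3x compression gain quoted above applies uniformly to the workload, the effective NVCache capacity for each device option is:

  # Effective NVCache capacity per device option, assuming the ~3x gain
  # from the inline hardware compressor applies uniformly to the data.
  COMPRESSION_GAIN = 3
  for device_gb in (64, 128, 256):
      print(f"{device_gb}GB device -> ~{device_gb * COMPRESSION_GAIN}GB effective")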


Quad-Exynos7 Compute Node
 

The KMAX Exynos7 compute node is constructed to support a 4:1 ratio between compute, networking and storage. This ratio was selected to create a non-blocking balance between the per-compute-unit capabilities and the capabilities of the selected network and storage interface devices. The storage and networking resources are also directly connected, allowing full network bandwidth access directly to the full storage bandwidth. Each of the 4 compute units is mounted on the same PCB, with a common path to the embedded 2.5” NVMe SSD and the network interfaces, to form a compute node. The network and storage resources provide the hardware reconfiguration required to create the physicalized device interfaces exposed to each compute unit.




 

STORAGE

KALEAO storage system
 

The KMAX storage capabilities are based on the concepts of:
 
  • All-flash storage using data-centre-class commodity SSD drives
  • A distributed, share-anything, everything-close architecture

Each compute node supports a single factory-fitted 2.5”/7mm (data-centre) PCIe x4 NVMe device.
The device can be supplied with different capabilities to suit different read/write ratios, and therefore different endurance/cost trade-offs.

Endurance options include a DWPD (Drive Writes Per Day) rating of 0.8, 3.6 or 10 over a 5-year period.

Drives are available today in capacities of 500GB, 900GB, 1.9TB and 7.6TB, with higher capacities expected in the near future.
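
Under the standard interpretation of DWPD (the full drive capacity written once per day for the rated period), the total write volume a drive sustains can be estimated as DWPD x capacity x days. The sketch below applies this to the capacities and endurance ratings quoted above; it is an illustration, not a vendor endurance specification.

  # Estimated total write volume over the warranty period, using the
  # standard DWPD interpretation: DWPD x capacity x days.
  def total_writes_tb(dwpd, capacity_tb, years=5):
      return dwpd * capacity_tb * 365 * years

  # Example: a 1.9TB drive rated at 3.6 DWPD sustains roughly 12,500TB
  # (~12.5PB) of writes over its 5-year period.
  print(f"{total_writes_tb(3.6, 1.9):,.0f} TB")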

There are 4 compute nodes per blade, so each blade supports 4x SSDs. With different SSD sizes, KMAX can provide the example storage capacities shown below. A mix of drive capacities can also be used.
 
Each compute unit has an independent and self-sufficient path to storage that, with balanced bandwidth, provides no points of contention in the system, facilitating scalability and resiliency.

The converged nature of the nodes keeps compute and networking close to the storage for minimum latency and maximum bandwidth, without placing any load on the application processors.
 
STORAGE CAPABILITIES PER DRIVE

Capacity

500GB, 900GB, 1.9TB and 7.6TB

Endurance

0.8, 3.6 and 10 DWPD over a 5-year period

STORAGE CAPABILITIES PER SYSTEM

Blade      Chassis    Rack
2TB        24TB       336TB
4TB        48TB       672TB
14TB       96TB       1.34PB
30.7TB     370TB      5.15PB
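
The system-level figures scale from the per-drive capacities using 4 SSDs per blade (one per compute node) and the blade-to-chassis and chassis-to-rack ratios implied by the table (12 blades per chassis, 14 chassis per rack). A small worked sketch, using these assumed ratios:

  # Scale a single drive capacity up to blade, chassis and rack totals.
  # Ratios are taken from the table above: 4 drives/blade, 12 blades/chassis
  # (24TB / 2TB), 14 chassis/rack (336TB / 24TB).
  DRIVES_PER_BLADE = 4
  BLADES_PER_CHASSIS = 12
  CHASSIS_PER_RACK = 14

  def system_capacity_tb(drive_tb):
      blade = DRIVES_PER_BLADE * drive_tb
      chassis = BLADES_PER_CHASSIS * blade
      rack = CHASSIS_PER_RACK * chassis
      return blade, chassis, rack

  # 7.6TB drives: ~30.4TB per blade, ~365TB per chassis, ~5.1PB per rack,
  # in line with the last row of the table.
  print(system_capacity_tb(7.6))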
 

Storage Physicalization


Block storage devices are dynamically created either locally or across a blade, chassis, rack, or the whole system. One or more block devices can be assigned directly to a guest OS.
 
  • Content can be empty, cloned or shared
  • Each device can also define its replication and availability attributes.

Remote-node storage requests across any network path follow a “fast path” directly between the storage and network resources.
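
The sketch below models the attributes described above: the scope at which a block device is created, its initial content, and its replication and availability policy. The type and field names are hypothetical illustrations, not the product's management API.

  # Hypothetical model of a physicalized block device request; names are
  # illustrative only and do not reflect the actual KMAX management API.
  from dataclasses import dataclass
  from enum import Enum

  class Scope(Enum):
      LOCAL = "local"
      BLADE = "blade"
      CHASSIS = "chassis"
      RACK = "rack"
      SYSTEM = "system"

  class Content(Enum):
      EMPTY = "empty"
      CLONED = "cloned"
      SHARED = "shared"

  @dataclass
  class BlockDeviceSpec:
      name: str
      size_gb: int
      scope: Scope = Scope.LOCAL
      content: Content = Content.EMPTY
      replicas: int = 1               # replication attribute
      highly_available: bool = False  # availability attribute

  # Example: a 100GB device cloned from a template image and replicated
  # across the chassis before being assigned to a guest OS.
  spec = BlockDeviceSpec("web-root", 100, Scope.CHASSIS, Content.CLONED,
                         replicas=2, highly_available=True)
  print(spec)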




 

NETWORKING

KALEAO network system


The KMAX blade supports four compute nodes, each of which connects to an embedded 10/40Gb switch.

The switch interconnects the nodes and provides external connectivity via 2x QSFP 40/10 Gb/s ports located on the front panel of the blade.

Supporting the concept of independent production A and B networks, a guest machine can be assigned one or more network adaptors when it is created; these are dynamically created, pinned in the memory of the guest, and configured to map to the production networks. This offers scalable virtual networking free of the limitations and cost of traditional VLANs: any physicalized network interface can be assigned to an encapsulated virtual network or physically bound to either of the dual independent chassis-level production network ports. Interface devices can also be bound across both ports to provide either route resilience or increased aggregate bandwidth.
 

Network Physicalization
 

When a new application domain is created, one or more new physicalized network adaptors are dynamically created; each is assigned a unique MAC address along with its network and routing policies, and is pinned directly to the specific application domain using independent memory buffers.
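
As an illustration of the first step in this process, the sketch below generates the kind of locally administered, unicast MAC address a new physicalized adaptor could be assigned, and records its binding to the production networks. The names and structure are hypothetical; only the MAC address bit semantics are standard.

  # Illustrative only: generate a locally administered, unicast MAC address
  # for a new physicalized adaptor and record its production-network binding.
  # The PhysicalizedAdaptor structure is hypothetical, not the KMAX API.
  import random
  from dataclasses import dataclass

  def random_local_mac():
      octets = [random.randint(0, 255) for _ in range(6)]
      # Set the locally-administered bit (0x02) and clear the multicast bit (0x01).
      octets[0] = (octets[0] | 0x02) & 0xFE
      return ":".join(f"{o:02x}" for o in octets)

  @dataclass
  class PhysicalizedAdaptor:
      mac: str
      production_networks: tuple  # ("A",), ("B",), or ("A", "B") for resilience/aggregation

  nic = PhysicalizedAdaptor(random_local_mac(), ("A", "B"))
  print(nic)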

Network packets are switched internally between compute units and across compute nodes using the embedded switch, maximizing inter-machine communication efficiency and limiting costly external network traffic.