Server memory (DRAM) is one of the biggest cost components in the data center, yet it is common to find that over half of the provisioned memory is not actively used by applications and can be considered cold. Studies from major cloud providers and hyperscalers, including Microsoft Azure, Google Cloud, and Meta, have found that memory utilization for internal and external workloads regularly drops to 50% or below.* Much of this provisioned DRAM therefore sits idle, translating directly into wasted spend.

In this session, we will delve into how the industry has tried (unsuccessfully) to solve this challenge with approaches such as CXL and Optane, and the limitations that come with each. We will also explore a novel approach, AI-powered predictive memory, that enables system Flash to appear as DRAM-speed memory, bringing Flash into the memory tier and enabling unprecedented cost efficiency and scope for capacity expansion. We will examine how this approach can be applied in any computing environment, on-premises or in the cloud, with any processor, across virtualized, bare-metal, and containerized environments, with no changes to the OS or applications.

Sources: *https://dl.acm.org/doi/pdf/10.1145/3578338.3593553 & https://www.datacenterdynamics.com/en/news/only-13-of-provisioned-cpus-…

Learning Objectives

DRAM constitutes roughly half of the cost of an individual server, over $100B is spent on it annually, and prices continue to rise. The cost of DRAM is a major reason why modern computing is so expensive.
For such a costly resource, one would expect all provisioned memory to be truly needed and used effectively. In many business environments, however, this is not the case: studies from major cloud providers have shown that DRAM utilization regularly drops to 50% or below.
MEXT is tackling the memory utilization challenge. MEXT continually offloads underutilized memory pages to Flash and uses AI to predict which pages in Flash should be preloaded back into DRAM. This keeps application performance intact within a far smaller DRAM footprint, yielding lower computing costs.
The MEXT AI engine was inspired by modern neural-network-based AI techniques, but instead of using them to predict words or natural language patterns (as ChatGPT does), it predicts sequences of future memory page accesses.
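To make the offload/preload idea concrete, here is a deliberately simple Python sketch. It is not MEXT's implementation (which is proprietary and neural-network-based); all names are hypothetical, and a first-order Markov table stands in for the AI predictor. A small fast tier ("DRAM") is backed by a large slow tier ("Flash"), and the predicted next page is speculatively preloaded into the fast tier after each access.

```python
from collections import defaultdict, Counter

class ToyPredictiveTier:
    """Toy two-tier memory model: a small fast tier ("DRAM") backed by a
    large slow tier ("Flash"). A first-order Markov table learned from the
    access stream predicts the next page and preloads it into the fast tier.
    All names are hypothetical; a real predictive-memory engine would use
    far more sophisticated (e.g. neural) sequence models."""

    def __init__(self, dram_capacity):
        self.dram_capacity = dram_capacity
        self.dram = {}                            # insertion-ordered resident set
        self.transitions = defaultdict(Counter)   # page -> Counter of next pages
        self.prev_page = None
        self.hits = 0
        self.misses = 0

    def access(self, page):
        # Serve the access: a miss models a fetch from the Flash tier.
        if page in self.dram:
            self.hits += 1
        else:
            self.misses += 1
            self._admit(page)
        # Learn the observed transition, then speculatively preload the
        # most likely successor page from Flash into DRAM.
        if self.prev_page is not None:
            self.transitions[self.prev_page][page] += 1
        predicted = self._predict(page)
        if predicted is not None and predicted not in self.dram:
            self._admit(predicted)
        self.prev_page = page

    def _predict(self, page):
        counts = self.transitions[page]
        return counts.most_common(1)[0][0] if counts else None

    def _admit(self, page):
        if len(self.dram) >= self.dram_capacity:
            oldest = next(iter(self.dram))        # FIFO eviction (toy policy)
            del self.dram[oldest]
        self.dram[page] = True

# With a strictly repeating access pattern, the predictor becomes accurate
# after the first pass, so nearly all accesses hit in the tiny fast tier.
tier = ToyPredictiveTier(dram_capacity=2)
for _ in range(50):
    for page in ("A", "B", "C"):
        tier.access(page)
```

The point of the sketch is the shape of the mechanism, not the policy details: accuracy of the predictor is what lets a working set larger than DRAM behave as if it were fully resident, which is why the session's approach leans on learned sequence prediction rather than simple recency heuristics.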
