SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Phil Cayton is a Senior Staff Engineer at Intel Corporation with 25 years' experience developing and researching non-volatile local and remote storage and fabric technologies, particularly the NVMe, NVMe-oF, NVMe-MI, InfiniBand, and iWARP architectures; this work has resulted in 25+ patents.
New technologies and platforms have laid waste to the assumptions of fixed-size, monolithic memory. Multiple layers of CXL-attached memory and persistent memory now provide a wide variety of memory types and speeds to developers of future systems. We will compare these various tiers of memory, whether inside the box or remotely attached. Next, we will examine how users and consumers of multi-tiered memory can exploit their varying characteristics, such as latency and persistence. We'll discuss the idea of application-specific memory, as well as some of the libraries available for tiering, and touch on the different schemes for using the multiple layers of memory. Finally, we'll discuss ways to manage the hierarchies and to optimize access to them by modeling how applications can use them, so you can make the most of the hierarchy.
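As a rough illustration of tier-aware placement, the sketch below models a few hypothetical memory tiers and picks the fastest one that satisfies an allocation's latency and persistence requirements. The tier names, latency figures, and placement policy are invented for illustration; they are not drawn from the talk or from any real platform.

```python
from dataclasses import dataclass

# Hypothetical tier parameters for illustration only; real latencies
# and characteristics vary widely by platform and device.
@dataclass
class MemoryTier:
    name: str
    latency_ns: float   # approximate load latency
    persistent: bool    # contents survive power loss

TIERS = [
    MemoryTier("local-DRAM", 80, False),
    MemoryTier("CXL-attached-DRAM", 250, False),
    MemoryTier("persistent-memory", 350, True),
]

def place(needs_persistence: bool, latency_budget_ns: float) -> MemoryTier:
    """Pick the fastest tier that meets an allocation's requirements."""
    candidates = [t for t in TIERS
                  if t.latency_ns <= latency_budget_ns
                  and (t.persistent or not needs_persistence)]
    if not candidates:
        raise ValueError("no tier satisfies the request")
    return min(candidates, key=lambda t: t.latency_ns)

print(place(False, 300).name)   # fastest volatile tier within budget
print(place(True, 1000).name)   # only the persistent tier qualifies
```

A tiering library would make this decision dynamically, migrating pages between tiers as access patterns change; the static lookup above only shows the shape of the trade-off.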
Modern memory, storage, and content delivery systems are built out of myriad components that expose a dizzying set of parameters for optimizing cost or performance. By simulating these components as modular building blocks, we show how you can quickly try many different configurations. This modularity also allows plugging in alternative or proprietary components to measure their impact on overall system cost and performance. These simulations can be built in days or weeks, instead of the months to years needed to build and test live systems.
This session takes a real-world configuration and steps through the process of simulating it. We demonstrate how modular components are swapped and show the simulated performance of different variations. We share our real-world experience of how full-system experimentation can be done better, cheaper, and faster using simulation, including a recent success story in which simulation uncovered alternative configurations with double-digit gains.
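A minimal sketch of the building-block idea, assuming a simple pluggable cache interface. The two interchangeable eviction policies and the toy access trace below are stand-ins invented for illustration, not Magnition's actual simulator components: each policy is run against the same trace and compared by hit rate.

```python
from collections import OrderedDict

class LRUCache:
    """Pluggable component: evicts the least-recently-used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)   # refresh recency
            return True                   # hit
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict oldest by recency
        self.store[key] = True
        return False                      # miss

class FIFOCache:
    """Drop-in alternative: evicts in insertion order, ignoring recency."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
    def access(self, key):
        if key in self.store:
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict oldest by insertion
        self.store[key] = True
        return False

def hit_rate(cache, trace):
    """Run a trace through any component exposing access() and score it."""
    return sum(cache.access(k) for k in trace) / len(trace)

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 1]   # toy access trace
print(f"LRU : {hit_rate(LRUCache(3), trace):.2f}")
print(f"FIFO: {hit_rate(FIFOCache(3), trace):.2f}")
```

Because both components expose the same `access()` interface, swapping one for the other, or for a proprietary policy, changes one line of the experiment while the measurement harness stays fixed.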
You won't want to miss this session.
Building on "Optimizing Complex Hierarchical Memory Systems through Simulation" from SDC 2023, this talk details recent work to optimize the caching systems used in Content Delivery Networks, or CDNs. CDNs are built out of groups of servers at a variety of locations with various tiers and types of caches, and modern CDN caches present a huge array of possible configurations.
By simulating these components as modular building blocks, we show how you can quickly try many different configurations. This modularity also allows plugging in alternative or proprietary components to measure their impact on overall system cost and performance. These simulations can be built in days or weeks, instead of the months to years needed to build and test live systems.
This session walks through the process of simulating specific CDN configurations. We demonstrate how modular components are swapped and show the simulated performance of different variations. We also demonstrate how to use real-world CDN traces to build realistic scenarios and how the results of the analysis are graphically presented.
Building Content Delivery Networks, or CDNs, requires distributed, localized collections of compute, memory and storage. CDNs are built out of groups of servers at a variety of locations with various tiers and types of caches. Modern CDN caches present a huge array of variable configurations.
Magnition sponsored a university research project to study the configurations and algorithms used in CDN front-end caches to optimize response speed and cost. By testing these configurations and algorithms through simulation, over 100,000 variants were examined and compared using traffic traces provided by real CDN companies. These simulations tested what would have been years' worth of traffic in roughly a week.
This session discusses the process of simulating specific CDN configurations. We demonstrate how modular components and algorithms are sampled, share some of the results we found, and talk about some of the anomalies we stumbled across along the way. We also demonstrate how to use real-world CDN traces to build realistic scenarios and show graphical representations of the results.
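To see why a simulation-driven sweep is necessary, consider how quickly configuration variants multiply. The sketch below enumerates a small grid of hypothetical parameters (the dimension names and values are invented for illustration, not the study's actual grid); even this modest grid yields hundreds of variants, and realistic grids reach the 100,000-variant scale described above.

```python
from itertools import product

# Hypothetical configuration dimensions for a CDN front-end cache.
# A real study would sweep measured parameters and replay CDN traces.
cache_sizes_gb = [64, 128, 256, 512]
eviction_policies = ["LRU", "FIFO", "LFU", "S3-FIFO"]
admission_policies = ["admit-all", "probabilistic", "size-threshold"]
tier_counts = [1, 2, 3]

# Cartesian product of all dimensions = every configuration to simulate.
variants = list(product(cache_sizes_gb, eviction_policies,
                        admission_policies, tier_counts))
print(len(variants))   # 4 * 4 * 3 * 3 distinct configurations

# Each variant would then be scored by replaying a trace through it,
# e.g.: results = {v: simulate(v, trace) for v in variants}
```

Adding one more value to any dimension multiplies the total, which is why exhaustively testing variants on live systems is impractical and trace-driven simulation pays off.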
CXL offers unprecedented opportunities to design and build much larger application and compute arrangements than were available even a few years ago. The ability to connect memory subsystems and other compute resources through a switched network provides a dizzying array of possibilities for custom-tailoring an environment to a particular workload.
It also presents perplexing complexity: how do you design and configure efficient, right-sized systems? Magnition is collaborating with JackRabbit Labs to provide a CXL simulation environment that lets you build intricate CXL configurations using deterministic behavioral simulations of genuine CXL component designs. These models can be run against real-world workloads to make sound design and layout decisions, without needing all, or even any, of the physical components required to build a production system.
Join us as we describe how we emulate individual CXL devices and switches and build them into a large-scale simulation environment. With it, you can prove and stress your designs far more easily, cheaply, and flexibly than by building physical lab systems.