SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. CXL technology is designed to address the growing needs of high-performance computational workloads by supporting heterogeneous processing and memory systems for applications in Artificial Intelligence, Machine Learning, communication systems, and High-Performance Computing. In this presentation, attendees will learn about the next generation of CXL technology. The CXL 3.0 specification, releasing in Q3 2022, doubles the bandwidth and adds capabilities for better scalability and resource utilization. CXL 3.0 features include enhanced memory pooling and new memory usage models, multi-level switching with multiple hosts, fabric capabilities and enhanced fabric management, new symmetric coherency capabilities, and improved software capabilities.
The emergence of serial-attached protocols such as CXL and OpenCAPI makes it possible to connect DRAM-based devices in the same physical slot as a NAND flash SSD. This paper discusses which pieces of existing NVMe SSD standards, such as mechanical form factors, management interface protocols, power modes, and thermal profiles, can be reused for DRAM-based CXL memory devices. Reuse will not only save cost but also accelerate adoption, because the existing software ecosystem can be extended to manage any type of device, whether storage or memory. A few examples include:
- Expanding the power and thermal profiles in SFF-TA-1006 and SFF-TA-1023 for DRAM devices
- Remapping optional signals in the EDSFF SFF-TA-1009 specification to enable persistent-memory functions
- Remapping a few NVMe-MI commands for DRAM-based memory modules
With CXL, system software, system hardware, and application software developers will soon be presented with opportunities to disaggregate and pool memory into a memory-as-a-service for multiple computing hosts. In this session, Charles Fan will describe the architecture of such a system, covering both services that are transparent to applications and APIs that allow deeper integration. The components covered include (1) host-based transparent memory tiering; (2) fabric management that allows dynamic provisioning and sharing; and (3) advanced data services.
There is a demand for hierarchical memory as AI/ML and database applications scale up and scale out, requiring ever higher memory capacity and bandwidth. Software-Defined Memory (SDM) is an emerging hardware-software co-design paradigm that provides a software abstraction between applications and the underlying memory resources, with dynamic memory provisioning to achieve the desired SLA. With the emergence of newer memory technologies and faster interconnects, it is possible to optimize the memory resources deployed in cloud infrastructure while achieving the best possible TCO. SDM also provides a mechanism to pool disjoint memory domains into a unified memory namespace. This talk will cover the SDM architecture, the current industry use cases that drive the need for SDM, academic research, and leading applications (e.g., Memcached, databases) that can benefit from SDM design. It will show how applications (AI/ML, databases, caches, virtualized servers, etc.) can consume different tiers of memory (e.g., DDR, SCM, HBM) and the interconnect technologies (e.g., CXL) that are foundational to the SDM framework, providing load-store access for large-scale application deployments. Ongoing work in the software community that enables SDM in the kernel as well as in applications will be covered, and the SDM value proposition will be demonstrated with caching and tiering benchmarks showing how memory can be accessed transparently.
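As one concrete sketch of the kernel-side tiering support this kind of talk touches on, the commands below show how CXL-attached capacity can be exposed as a separate NUMA tier on Linux using the ndctl/daxctl tooling. The device name `dax0.0`, the node number, and the workload binary are illustrative assumptions; this presumes a CXL memory region has already been enumerated as a DAX device.

```shell
# Assumption: dax0.0 is a CXL memory region currently in devdax mode.
# Convert it to system-ram so it appears as a new (typically CPU-less) NUMA node:
daxctl reconfigure-device --mode=system-ram dax0.0

# Let the kernel demote colder pages from DRAM nodes into the slower tier
# (supported in recent kernels with memory tiering enabled):
echo 1 | sudo tee /sys/kernel/mm/numa/demotion_enabled

# Explicitly place a capacity-bound workload's allocations on the CXL node
# (node 1 here is an assumption; check `numactl -H` for the actual topology):
numactl --preferred=1 ./my_workload
```

With this in place, ordinary load-store accesses reach the CXL tier transparently; the application needs no changes, which is the "transparent tiering" half of the SDM value proposition.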
In order to develop open source CXL ecosystem software it has proven useful to emulate CXL features within the QEMU project. In this talk, I will introduce the current major CXL features that QEMU can emulate and walk you through how to set up a Linux + QEMU CXL environment that will enable testing and developing new CXL features. In addition, I will highlight some of the limitations of QEMU CXL emulation.
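As a sketch of what such a QEMU setup can look like, the invocation below creates a guest with one emulated CXL Type 3 persistent-memory device behind a CXL root port. Flag names follow recent QEMU releases with CXL emulation enabled; the disk image, backing-file paths, and sizes are illustrative assumptions to adjust for your environment.

```shell
# Host-file backends hold the Type 3 device's persistent memory and its
# label storage area (LSA); pxb-cxl is the CXL host bridge, cxl-rp the
# root port, and cxl-fmw defines the fixed memory window for HDM decode.
qemu-system-x86_64 -machine q35,cxl=on -m 4G -smp 4 \
  -drive file=guest.img,format=qcow2 \
  -object memory-backend-file,id=cxl-mem0,share=on,mem-path=/tmp/cxl-mem0.raw,size=256M \
  -object memory-backend-file,id=cxl-lsa0,share=on,mem-path=/tmp/cxl-lsa0.raw,size=256M \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=cxl-rp0,chassis=0,slot=2 \
  -device cxl-type3,bus=cxl-rp0,persistent-memdev=cxl-mem0,lsa=cxl-lsa0,id=cxl-pmem0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
```

Inside a guest running a CXL-enabled kernel, the emulated device can then be inspected and provisioned with the standard user-space tools (e.g., `cxl list` and `ndctl`), which is what makes this setup useful for ecosystem software development without real hardware.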
This session will give a brief overview of CXL and its evolution and then focus on possible use cases for storage.