SNIA SDXI Specification v1.0 and Beyond

SDXI v1.0 is a standard for a memory-to-memory data mover and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. Among other features, SDXI standardizes an interface and architecture that can be abstracted or virtualized, with a well-defined capability to quiesce, suspend, and resume the architectural state of a per-address-space data mover. This specification was developed by SNIA’s SDXI Technical Working Group, comprising 89 individuals representing 23 SNIA member companies.

CXL Memory Disaggregation and Tiering: Lessons Learned from Storage

The introduction of CXL has significantly advanced the enablement of memory disaggregation. Along with disaggregation has risen the need for reliable and effective ways to transparently tier data in real time between local direct-attached CPU memory and CXL pooled memory. While the hardware-level elements of CXL have advanced in definition, the OS-level support, drivers, and application APIs that will facilitate mass adoption are still very much under development and in the discovery phase.

CXL: Advancing Coherent Connectivity

Delivering high-performance interoperable computational infrastructures is vital to meeting the exponential growth of global data for applications in Artificial Intelligence, Machine Learning, Analytics, Cloudification of the Network and Edge, and High-Performance Computing. CXL™ (Compute Express Link™), an open interconnect standard, delivers coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices to deliver optimized performance in evolving usage models.

CXL and the Art of Hierarchical Memories: Their Management and Use

New technologies and platforms have laid waste to the assumptions of fixed-size, monolithic memory. Multiple layers of CXL-attached memory and persistent memory now provide a wide variety of types and speeds of memory available to developers of future systems. We will compare these various tiers of memories, whether inside the box or remotely attached. Next, we will examine how users and consumers of multi-tiered memory can make use of their varying characteristics, such as latency and persistence.

Fabric Attached Memory – Hardware and Software Architecture

HPC architectures increasingly handle workloads where the working data set cannot be easily partitioned or is too large to fit into node local memory. We have defined a system architecture and a software stack to enable large data sets to be held in fabric-attached memory (FAM) that is accessible to all compute nodes across a Slingshot-connected HPC cluster, thus providing a new approach to handling large data sets.

Compute Express Link™ (CXL™): Enabling an interoperable ecosystem for heterogeneous memory and computing solutions

CXL, an open industry-standard interconnect, enables coherency and memory semantics on top of PCI Express® (PCIe®) I/O semantics to optimize performance in evolving usage models. It addresses growing high-performance computational workloads by supporting heterogeneous processing and memory systems, with applications in Artificial Intelligence, Machine Learning, Analytics, Cloud Infrastructure, Cloudification of the Network and Edge, communication systems, and High-Performance Computing.

Riding the Long Tail of Optane’s Comet - Emerging Memories, CXL, UCIe, and More

It’s been a year since the announcement that Intel would “Wind Down” its Optane 3D XPoint memories. Has anything risen to take its place? Should it? This presentation reviews the alternatives to Optane that are now available or in development, and evaluates the likelihood that one or more of them could fill the void being left behind. We will also briefly review Optane’s legacy to see how it is likely to support persistent memories in more diverse applications, including cache memory chiplets.

Optimizing complex hierarchical memory systems through simulations

Modern memory, storage, and content delivery systems are built out of myriad components that provide a dizzying set of parameters used to optimize their cost or performance. By modeling the components as modular simulation building blocks, we show how you can quickly try many different sets of configurations. This modularity also allows plugging in alternative or proprietary components to measure their impact on overall system cost and performance. These simulations can be built in days or weeks, instead of the months to years needed to build and test live systems.
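To illustrate the building-block idea, here is a minimal sketch of a composable tier model. All names (`Tier`, `TieredSystem`) are hypothetical illustrations, not the authors' actual simulator; a real simulator would model queuing, bandwidth, and workload traces rather than fixed hit ratios.

```python
# Hypothetical sketch: compose memory tiers as swappable building blocks
# and estimate expected access latency for a given configuration.
from dataclasses import dataclass

@dataclass
class Tier:
    """One memory/cache building block with a fixed access latency."""
    name: str
    latency_ns: float
    hit_ratio: float  # fraction of arriving accesses satisfied at this tier

class TieredSystem:
    """Composes tiers in order; misses fall through to the next tier."""
    def __init__(self, tiers):
        self.tiers = tiers

    def average_latency_ns(self):
        # Expected latency: sum over tiers of P(reach tier) * P(hit) * latency,
        # where P(reach) is the product of miss ratios of all earlier tiers.
        total, p_reach = 0.0, 1.0
        for t in self.tiers:
            total += p_reach * t.hit_ratio * t.latency_ns
            p_reach *= (1.0 - t.hit_ratio)
        return total

# Swapping in an alternative component is a one-line change:
dram_first = TieredSystem([Tier("DRAM", 100, 0.8), Tier("CXL", 300, 1.0)])
print(round(dram_first.average_latency_ns(), 1))  # 0.8*100 + 0.2*300 = 140.0
```

Because each tier is an interchangeable object, sweeping configurations is just iterating over lists of `Tier` instances.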

Optimizing Content Delivery Networks through Simulations

Building on “Optimizing Complex Hierarchical Memory Systems through Simulations” from SDC 2023, this talk details recent work to optimize caching systems used in Content Delivery Networks, or CDNs. CDNs are built out of groups of servers at a variety of locations with various tiers and types of caches. Modern CDN caches expose a huge array of configurable parameters.

Emulating CXL with QEMU

In order to develop open-source CXL ecosystem software, it has proven useful to emulate CXL features within the QEMU project. In this talk, I will introduce the major CXL features that QEMU can currently emulate and walk you through how to set up a Linux + QEMU CXL environment that enables testing and developing new CXL features. In addition, I will highlight some of the limitations of QEMU CXL emulation.
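As a taste of what such a setup involves, a minimal launch along the lines of the QEMU CXL documentation might look like the following. This is a hedged sketch, not the speaker's exact configuration: device option names (`pxb-cxl`, `cxl-rp`, `cxl-type3`, `volatile-memdev`) follow upstream QEMU docs but vary across QEMU versions, and `$KERNEL`/`$INITRD` are placeholders for a CXL-enabled Linux build.

```shell
# Sketch: one CXL Type 3 volatile memory device behind a single CXL
# root port on an emulated host bridge (option names per QEMU docs;
# may differ by QEMU version).
qemu-system-x86_64 -M q35,cxl=on -m 4G -smp 4 -nographic \
  -kernel "$KERNEL" -initrd "$INITRD" -append "console=ttyS0" \
  -object memory-backend-ram,id=vmem0,size=256M \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=root_port0,chassis=0,slot=2 \
  -device cxl-type3,bus=root_port0,volatile-memdev=vmem0,id=cxl-vmem0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
```

Inside the guest, the device can then be exercised with the `cxl` command-line tool from the ndctl project (e.g., creating a region over the fixed memory window).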