SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Over the last two decades, the data center world has been moving to a “Software Defined Everything” paradigm. Until recently, this shift was driven mostly by hypervisors running on x86 servers.
In parallel, a new communication protocol for interfacing with SSDs has been specified from the ground up to fully exploit the parallelism and performance of all-flash storage: NVMe, and its fabric extension NVMe-oF. NVMe-oF promises the performance of direct-attached all-flash storage with the flexibility and TCO savings of shared storage. To fully unlock the benefits of NVMe-oF while preserving the software-defined paradigm, we believe a new kind of processor is needed: the Data Processing Unit, or DPU.
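As a concrete illustration of the fabric-attached model (not part of the talk itself), here is a minimal sketch of attaching a remote NVMe/TCP namespace from a Linux host using the standard nvme-cli commands; the address and subsystem NQN below are placeholders.

    # Minimal sketch: attaching a remote NVMe-oF (TCP) namespace via nvme-cli.
    # The address and NQN are placeholders for a real target's values.
    import subprocess

    TARGET_ADDR = "192.0.2.10"                         # placeholder target IP
    TARGET_NQN = "nqn.2025-09.io.example:subsystem1"   # placeholder subsystem NQN

    # Discover subsystems exported by the target over TCP
    # (4420 is the IANA-assigned NVMe/TCP port).
    subprocess.run(["nvme", "discover", "-t", "tcp",
                    "-a", TARGET_ADDR, "-s", "4420"], check=True)

    # Connect to one subsystem; its namespaces then appear as local
    # /dev/nvmeXnY block devices, indistinguishable to applications
    # from direct-attached flash.
    subprocess.run(["nvme", "connect", "-t", "tcp",
                    "-a", TARGET_ADDR, "-s", "4420",
                    "-n", TARGET_NQN], check=True)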
Today, information is being digitized on a massive scale: by servers in datacenters, by mobile devices, and by networks of sensors everywhere around us. Artificial intelligence techniques and ubiquitous processing power are making it possible to mine this massive ocean of data; however, integral to harnessing this data as knowledge is the ability to store it for long periods of time. Legacy storage solutions have scaled extensively over the years, but the areal density growth of magnetic media (HDD and tape), which enables today’s mainstream archival storage solutions, is slowing, and the size of libraries is becoming unwieldy. The industry needs a new storage medium that is denser, more durable, more sustainable, and more cost-effective in order to cope with the expected future growth of archival data. DNA, nature’s data storage medium, enters this picture at a time when synthesis and sequencing technologies for advanced medical and scientific applications are enabling the manipulation of synthetic DNA in ways previously unimagined. In this panel discussion, the founders of the DNA Storage Alliance will discuss their views of this emerging technology and the challenges it faces to become a commercially viable part of the storage ecosystem.
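To make the density idea concrete, here is a sketch of the simplest conceivable encoding, two bits per nucleotide. Production DNA codecs are far more elaborate (error-correcting codes, homopolymer avoidance, GC balancing), so this is purely illustrative.

    # Naive quaternary mapping: 2 bits per nucleotide, 4 nucleotides per byte.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

    strand = encode(b"SNIA")
    assert decode(strand) == b"SNIA"
    print(strand)  # "CCATCATGCAGCCAAC": four nucleotides per byte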
CPU performance improvements based on Dennard scaling and Moore's Law have already reached their limits, and domain-specific computing has emerged as an alternative to overcome the limitations of the traditional CPU-centric computing model. Domain-specific computing, seen in early graphics cards and network cards, has expanded into accelerators such as GPGPUs, TPUs, and FPGAs as machine learning and blockchain technologies have become more common. In addition, hyperscalers, for whom power efficiency is particularly important, use ASICs or FPGAs to offload and accelerate OS, security, and data processing tasks. Meanwhile, technologies such as cloud, machine learning, big data, and the edge are generating data explosively, and the recent emergence of high-performance devices such as over-hundred-gigabit networks, NVMe SSDs, and SCMs has increasingly made the CPU-centric model the bottleneck. Processing large amounts of data in a power-efficient manner requires re-examining the existing model of moving data from storage to the CPU, which consumes considerable power and is constrained by interconnect bandwidth. Eventually, we expect each device to extend its functions into the realm of computing as its needs dictate, participating in heterogeneous computing coordinated by the CPU. Samsung believes that near-data processing, or in-storage computing, is another important piece of the puzzle. In this keynote, we look back at how system architecture has changed to handle a variety of data and discuss the changes we expect in system architecture going forward. We will also talk about what Samsung can contribute to these changes, including the evolution of computational storage, form factors, features, roles, benefits, and components. Finally, we will look at the ecosystem elements computational storage needs in order to take root, and the areas in which various industry players need to work together.
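A toy sketch of the data-movement argument follows, with the device-side filter simulated in-process; no real computational-storage API is assumed, the point is simply how much less data crosses the interconnect when the predicate runs near the data.

    # Simulated comparison: CPU-centric scan vs. in-storage (pushdown) scan.
    BLOCK = 4096
    dataset = [bytes([i % 256]) * BLOCK for i in range(10_000)]  # ~41 MB of blocks

    def predicate(block: bytes) -> bool:
        return block[0] == 0  # keeps ~1/256 of the blocks

    # CPU-centric model: every block crosses the bus, then the host filters.
    host_moved = sum(len(b) for b in dataset)
    host_hits = [b for b in dataset if predicate(b)]

    # Near-data model: the device evaluates the predicate internally and
    # transfers only the matching blocks (simulated here in-process).
    device_hits = [b for b in dataset if predicate(b)]
    device_moved = sum(len(b) for b in device_hits)

    print(f"host-centric: {host_moved / 1e6:.1f} MB moved; "
          f"near-data: {device_moved / 1e6:.2f} MB moved")
    # host-centric: 41.0 MB moved; near-data: 0.16 MB moved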
Although quantum technology can be leveraged to do many amazing things, it cannot provide a general replacement for the storage capabilities we have today with HDDs and SSDs. However, there are a few areas where quantum can provide capabilities related to storage, and this presentation will cover them. The presentation will start with a quick overview of some basic concepts of quantum technology and the reasons why quantum computing may potentially provide significant performance improvements over classical computing for certain applications. It will discuss how quantum computing implements something similar to computational storage, then explain how quantum memories can be utilized in certain applications. It will wrap up by explaining how quantum computers work closely with classical computers to form hybrid classical/quantum processing systems, noting that traditional SSD and HDD storage devices will still be needed on the classical side to support these systems.
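For readers new to these concepts, here is a tiny state-vector sketch (using numpy, not taken from the talk) of the superposition idea behind the claimed speedups, and of why quantum states differ so sharply from data on block storage.

    # A qubit in superposition holds amplitudes for |0> and |1> simultaneously.
    import numpy as np

    ket0 = np.array([1.0, 0.0])                   # |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    psi = H @ ket0                # equal superposition (|0> + |1>) / sqrt(2)
    probs = np.abs(psi) ** 2      # Born rule: measurement probabilities
    print(probs)                  # [0.5 0.5]

    # Describing an n-qubit state classically takes 2**n amplitudes, which is
    # one reason quantum memories are handled so differently from the SSDs and
    # HDDs that still back the classical side of hybrid systems.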
As data processing engines rely more and more on log semantics, it is natural to extend Zoned Namespaces (ZNS) to provide a native log interface by introducing variable-size, byte-appendable, named zones. With the newly introduced ZNSNLOG interface, the storage device not only achieves lower write amplification and higher log write performance, but also offers a more flexible and robust naming service. Given the trend toward a compute-storage disaggregation paradigm and increasingly capable computational storage, the ZNSNLOG extension enables more opportunities for near-data processing.
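Since ZNSNLOG is an extension proposal, the sketch below only models the semantics described above, named, variable-size, byte-appendable zones, with invented class and method names; it is not a published API.

    # Hypothetical model of a log-native zone: addressed by name, appended
    # at byte granularity rather than in fixed blocks.
    class NamedLogZone:
        def __init__(self, name: str, capacity: int):
            self.name = name              # zones addressed by name, not LBA range
            self.capacity = capacity      # variable per-zone capacity
            self.write_pointer = 0
            self.data = bytearray()

        def append(self, record: bytes) -> int:
            # Byte-granular append: no read-modify-write and no block padding,
            # which is where the write-amplification savings come from.
            if self.write_pointer + len(record) > self.capacity:
                raise IOError("zone full")
            offset = self.write_pointer
            self.data += record
            self.write_pointer += len(record)
            return offset                 # durable offset returned to the engine

    wal = NamedLogZone("raft-log-shard-7", capacity=1 << 26)
    off = wal.append(b'{"term": 12, "op": "put", "key": "k1"}')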
Emerging and existing applications in cloud computing, 5G, IoT, automotive, and high-performance computing are causing an explosion of data. This data needs to be processed, moved, and stored in a secure, reliable, available, cost-effective, and power-efficient manner. Heterogeneous processing, tiered memory and storage architectures, accelerators, and infrastructure processing units are essential to meet the demands of this evolving compute, memory, and storage landscape. These requirements are driving significant innovations across compute, memory, storage, and interconnect technologies. Compute Express Link* (CXL) with its memory and coherency semantics on top of PCI Express* (PCIe) is paving the way for the convergence of memory and storage with near-memory compute capability. Pooling of resources with CXL will lead to rack-scale efficiency, with low-latency access mechanisms across multiple nodes in a rack supported by advanced atomics, acceleration, smart NICs, and persistent memory. In this talk we will explore how the evolution in load-store interconnects will profoundly change the memory, storage, and compute landscape going forward.
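As a rough illustration of the memory semantics versus block semantics distinction the talk centers on: once CXL-attached memory is mapped, access is a plain byte-granular load or store rather than an I/O round trip. The device path below is a placeholder, and on many systems CXL memory instead appears simply as an additional OS-managed NUMA node.

    # Memory-semantic access via a mapped device-DAX region (placeholder path).
    import mmap, os

    fd = os.open("/dev/dax0.0", os.O_RDWR)   # device-DAX view of the memory
    buf = mmap.mmap(fd, 1 << 21)             # map 2 MiB of it

    buf[4096:4104] = b"\xde\xad\xbe\xef\xca\xfe\x00\x01"  # a store: just a write
    value = bytes(buf[4096:4104])                         # a load: just a read

    # Contrast with block storage, where fetching the same 8 bytes costs a
    # full block round trip through the I/O stack:
    #   os.pread(block_fd, 4096, 0)  ->  software then picks 8 bytes out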