SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
SSDs that support Zoned Namespaces (ZNS) are increasingly popular for large-scale storage deployments due to their cost efficiency and performance improvements over conventional SSDs, including a 3-4x throughput increase, prolonged SSD lifetime, and the ability to use QLC media for I/O-heavy workloads. As the zoned storage hardware ecosystem has matured, its open-source software ecosystem has also grown, and together they now provide a solid foundation for large-scale cloud adoption. This talk describes ZNS SSDs, the work of SNIA's Zoned Storage TWG to standardize ZNS SSD device models, and the quickly evolving software ecosystem across file systems (f2fs, btrfs, SSDFS), database systems (RocksDB, TerarkDB, MySQL), and cloud orchestration platforms (OpenStack and Kubernetes with Mayastor, Longhorn, and SPDK's CSAL).
Flexible Data Placement (FDP) represents the latest development in mainstream data placement technology for NVMe. Although its use cases resemble those of other NVMe features, such as Streams and ZNS, the differences have significant implications for the implementation within host storage stacks and applications. As host stacks adopt various data placement technologies, the risk of bloated codebases and redundant implementations rises, increasing maintenance costs for large mainline projects. In this presentation, we discuss the efforts to integrate FDP support in Linux, ranging from QEMU emulation to different I/O paths (e.g., Linux Kernel I/O Passthru, SPDK), libraries (e.g., xNVMe), and tools (e.g., fio, nvme-cli). We highlight the design decisions made to enable early adoption without depending on major Linux kernel block layer changes. Additionally, we provide an overview of the FDP stack implementation in two example applications: CacheLib and RocksDB. We use these examples to look at WAF behaviour as well as evaluate the engineering effort of adoption.
In recent years, zoned storage has become pervasive across the storage software ecosystem, including file systems, cloud storage, and end-to-end application integrations. Zoned storage avoids the write amplification of conventional devices by enabling the host to collaborate with the storage device when submitting writes. The ecosystem seamlessly integrates support for shingled magnetic recording drives (SMR HDDs), SSDs with Zoned Namespaces (ZNS SSDs), and UFS-enabled mobile devices through a single storage abstraction. It furthermore enables the use of QLC media in write-heavy workloads and reduces cost by eliminating media over-provisioning.
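The host/device collaboration above rests on one rule: each zone only accepts writes at its write pointer. The following is a minimal sketch of that rule, not any particular driver's implementation; the real zone state machine in ZNS/ZBC also tracks open, closed, and full states.

```python
class Zone:
    """Toy model of a sequential-write-required zone (illustrative only)."""

    def __init__(self, start_lba, capacity_lbas):
        self.start = start_lba
        self.capacity = capacity_lbas
        self.write_pointer = start_lba  # next LBA that may be written

    def write(self, lba, num_lbas):
        # The host must write exactly at the write pointer. Rejecting
        # anything else is what lets the device avoid in-place updates
        # and the garbage collection that causes write amplification.
        if lba != self.write_pointer:
            raise ValueError("unaligned write: zone requires sequential writes")
        if self.write_pointer + num_lbas > self.start + self.capacity:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_lbas

    def reset(self):
        # A zone reset invalidates the zone's data and rewinds the pointer.
        self.write_pointer = self.start


zone = Zone(start_lba=0, capacity_lbas=1024)
zone.write(0, 8)        # ok: at the write pointer
zone.write(8, 8)        # ok: strictly sequential
try:
    zone.write(0, 8)    # rejected: not at the write pointer
except ValueError as e:
    print(e)
zone.reset()
zone.write(0, 8)        # ok again after the reset
```

The single abstraction mentioned above works because SMR HDDs, ZNS SSDs, and zoned UFS all expose this same write-pointer contract to the host.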
This presentation highlights the integration of zoned storage in the cloud-native storage stack, from libraries to file systems, cloud storage and databases. Attendees will gain insights into the innovative combination of zoned storage and log-structured writes and understand how zoned storage integration addresses the challenges of modern storage workloads. Furthermore, the tight integration with the Kubernetes cloud orchestration platform is presented and made comprehensible in a showcase.
This presentation provides an overview of the NVM Express® ratified technical proposal TP4146 Flexible Data Placement and shows how a host can manage its user data to capitalize on a lower Write Amplification Factor (WAF) in an SSD to extend the life of the device, improve performance, and lower latency. Mike Allison, the lead author of the technical proposal, will cover:
• The new terms associated with FDP (e.g., Placement Identifier, Reclaim Unit, Reclaim Unit Handle, Reclaim Group)
• Enabling the FDP capability
• Managing the Reclaim Unit Handles during namespace creation
• How I/O writes place user data into a Reclaim Unit
• The differences from other placement capabilities supported by NVM Express
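To make the terms above concrete, here is a hedged sketch of how they relate: a write's placement identifier names a Reclaim Group plus a Placement Handle, the namespace maps that handle to a Reclaim Unit Handle, and the handle points at the Reclaim Unit currently receiving data in that group. The counts and the 4 MiB reclaim unit size are illustrative assumptions, not values from TP4146.

```python
class ReclaimUnitHandle:
    """Tracks which Reclaim Unit is currently being filled (toy model)."""

    def __init__(self):
        self.current_reclaim_unit = 0   # index of the RU being filled
        self.bytes_written = 0          # fill level of that RU


RU_SIZE = 4 * 1024 * 1024               # assumed reclaim unit size

# Namespace view: per Reclaim Group, placement handle -> Reclaim Unit Handle.
reclaim_groups = {
    rg: {ph: ReclaimUnitHandle() for ph in range(2)}   # 2 handles per group
    for rg in range(2)                                  # 2 reclaim groups
}


def fdp_write(reclaim_group, placement_handle, nbytes):
    """Route a write to the Reclaim Unit selected by its placement identifier."""
    ruh = reclaim_groups[reclaim_group][placement_handle]
    ruh.bytes_written += nbytes
    if ruh.bytes_written >= RU_SIZE:     # RU full: the device points the
        ruh.current_reclaim_unit += 1    # handle at a fresh Reclaim Unit
        ruh.bytes_written -= RU_SIZE
    return ruh.current_reclaim_unit


# Writes carrying different placement identifiers land in different
# Reclaim Units, so data with similar lifetimes can be kept together.
ru_a = fdp_write(0, 0, 1024)
ru_b = fdp_write(0, 1, 1024)
```

The payoff is the lower WAF the abstract describes: if the host groups data by lifetime via placement identifiers, whole Reclaim Units tend to become invalid together and the device relocates less valid data.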
Flexible Data Placement (FDP) is a new NVM Express® (NVMe) feature that advertises the ability to achieve a Write Amplification Factor (WAF) of 1. This presentation will describe what a WAF of 1 means for an SSD, and several example workloads achieving a WAF of 1 will be discussed. Additionally, some Hosts have an increased ability to restrict either the deallocate behavior or the write behavior of their various workloads, so a review of different Host-side rule implementations will be included. Illustrated NAND activity will be described, enabling attendees to extrapolate to other workloads. From simple circular FIFOs to large region deallocations to probabilistic overwrites, this presentation will cover many FDP use cases with recommended write and deallocate guidelines for Hosts.
This presentation delves into the development and implementation of a multi-petabyte archive storage solution, focusing on the challenges and architectural choices made to accommodate Host Managed Shingled Magnetic Recording (HM-SMR) drives in a software-defined storage environment. We will discuss the sequential write constraints of SMR drives and compare various HM-SMR libraries from a software engineering standpoint.
Additionally, we will explore the utilization of ZoneFS as a file I/O interface to HM-SMR for Leil software-defined storage and the modifications made to our CI/CD framework following the integration of HM-SMR drives. The significance of data alignment in HM-SMR-driven storage development and its impact on the format of stored data will be addressed. Lastly, we will showcase a graphical UI tool designed to inspect the content of zones within HM-SMR drives during development.
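The alignment concern above can be sketched as follows: zones on an HM-SMR drive are written in multiples of the physical block size, so variable-length records are typically framed and padded to a block boundary before being appended at the write pointer. The 4096-byte block size and the length-prefix framing here are illustrative assumptions, not Leil's actual on-disk format.

```python
BLOCK_SIZE = 4096   # assumed physical block size


def pad_record(payload: bytes) -> bytes:
    """Frame a record as a 4-byte little-endian length header plus payload,
    zero-padded so the on-disk record ends on a physical-block boundary."""
    framed = len(payload).to_bytes(4, "little") + payload
    padding = (-len(framed)) % BLOCK_SIZE
    return framed + b"\x00" * padding


rec = pad_record(b"hello zoned world")
assert len(rec) % BLOCK_SIZE == 0    # safe to append at the write pointer
```

The length header lets a reader walk the zone record by record and skip the padding, which is one reason alignment decisions end up shaping the stored data format itself.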