NVM Express® (NVMe®) has become synonymous with high-performance storage, with widespread adoption in client, cloud, and enterprise applications. The NVMe 2.0 family of specifications, released in June 2021, allows for faster and simpler development of NVMe solutions to support increasingly diverse environments, now including Hard Disk Drives (HDDs). The extensibility of the specifications encourages the development of independent command sets like Zoned Namespaces (ZNS) and Key Value (KV) while enabling support for the various underlying transport protocols common to NVMe and NVMe over Fabrics (NVMe-oF™) technologies. This presentation provides an overview of the latest NVMe technologies, summarizes the NVMe standards roadmap, and describes new NVMe standardization initiatives.
The DatacenterSSD specification has been created by a group of hyperscale datacenter companies in collaboration with SSD suppliers and enterprise integrators. What is in this specification? How does it expand on the NVMe specification? How can devices demonstrate compliance? In this talk we’ll review important items from the DatacenterSSD specification to understand how it expands on the NVMe family of specifications for specific use cases in a datacenter environment. Since the DatacenterSSD specification goes beyond just an interface specification, we will also show what test setups can be used to demonstrate compliance.
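As a concrete illustration, one DatacenterSSD requirement that is straightforward to spot-check from the host is the extended SMART / Health Information log page that the specification defines at log identifier C0h. A minimal sketch using stock nvme-cli, assuming the device is at /dev/nvme0n1 and actually implements this log page:

    # Hypothetical compliance spot-check: dump the 512-byte extended
    # SMART log page (log ID 0xC0) defined by the DatacenterSSD spec.
    nvme get-log /dev/nvme0n1 --log-id=0xc0 --log-len=512

A full compliance run would script checks like this across every mandatory log page, identify field, and behavioral requirement in the specification, which is where the test setups discussed in the talk come in.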
Storage interfaces have evolved more in the past 3 years than in the previous 20. In Linux, we see this happening at two different layers: (i) the user-/kernel-space I/O interface, where io_uring is bringing a lightweight, scalable I/O path; and (ii) the host/device protocol interface, where key-value and zoned block devices are starting to emerge. Applications that want to leverage these new interfaces have to at least change their storage backends. This presents a challenge for early technology adopters, as the mature part of the Linux I/O stack (i.e., the block layer I/O path) might not implement all the needed functionality. While alternatives such as SPDK tend to pick up new features more rapidly, the in-kernel I/O path presents a limitation. In this talk, we will present how we are enabling an asynchronous I/O path for applications to use NVMe devices in passthru mode, and we will speak to the upstream efforts to make this path available in Linux. More specifically, we will (i) detail the changes in the mainline Linux kernel, and (ii) show how we are using xNVMe to enable this new I/O path transparently to applications. In the process, we will provide a performance evaluation to discuss the trade-offs between the different I/O paths in Linux, including block I/O io_uring, passthru io_uring, and SPDK.
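To make this concrete, the following is a minimal sketch of what the xNVMe route looks like from an application's point of view. It assumes xNVMe's public C API roughly as documented (xnvme_dev_open, xnvme_cmd_ctx_from_dev, xnvme_nvm_read, and the io_uring_cmd backend selector); exact option names and signatures vary across xNVMe releases, and the device path /dev/ng0n1 (the NVMe generic char device used for passthru) is an assumption. The async selector governs queued I/O; a synchronous read is shown to keep the sketch short.

    /* Sketch: read one block through the NVMe passthru path via xNVMe.
     * Assumes an xNVMe 0.x-era API; option and function names may differ. */
    #include <libxnvme.h>
    #include <libxnvme_nvm.h>

    int main(void)
    {
        struct xnvme_opts opts = xnvme_opts_default();
        opts.be = "linux";
        opts.async = "io_uring_cmd";            /* the new passthru I/O path */

        struct xnvme_dev *dev = xnvme_dev_open("/dev/ng0n1", &opts);
        if (!dev)
            return 1;

        uint32_t nsid = xnvme_dev_get_nsid(dev);
        void *buf = xnvme_buf_alloc(dev, 4096); /* backend-appropriate buffer */

        struct xnvme_cmd_ctx ctx = xnvme_cmd_ctx_from_dev(dev);
        int err = xnvme_nvm_read(&ctx, nsid, 0x0, 0, buf, NULL); /* LBA 0, 1 block */

        xnvme_buf_free(dev, buf);
        xnvme_dev_close(dev);
        return err ? 1 : 0;
    }

The point of the sketch is that the application codes against the same xNVMe API regardless of whether I/O is serviced by block-layer io_uring, io_uring passthru, or SPDK; the backend is selected at device-open time.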
The QEMU emulated NVMe device is used by developers and users alike to develop, test, and verify device drivers and tools. The emulated device is under rapid development: with QEMU 6.0, it was updated to support a number of additional core features, such as compliance with NVMe v1.4, universal Deallocated and Unwritten Logical Block Error support, and enhanced PMR and CMB support, as well as a number of experimental features such as Zoned Namespaces, multipath I/O, namespace sharing, and DIF/DIX end-to-end data protection. The addition of these features allows users to test various configurations and developers to test device driver code paths that would normally not be easily exercised on generally available hardware. In this talk we present the implementation of these features and how they may be used to improve tooling and device drivers.
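As an example of the kind of configuration this enables, the following invocation sketches how a zoned namespace can be attached to the emulated controller. The parameters (zoned=true, zoned.zone_size, and so on) are as documented for the QEMU 6.0 nvme-ns device, though the exact spellings should be checked against the QEMU version in use, and the backing image zns.raw must be created beforehand:

    # Sketch: QEMU 6.0-era guest with one emulated NVMe controller and a
    # zoned namespace; parameter names per the nvme-ns device documentation.
    qemu-system-x86_64 [...] \
        -device nvme,id=nvme0,serial=deadbeef \
        -drive file=zns.raw,if=none,id=nvmezns0,format=raw \
        -device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true,zoned.zone_size=64M,zoned.zone_capacity=62M,zoned.max_open=16,zoned.max_active=32

A guest booted this way sees a ZNS namespace with enforced zone size, capacity, and open/active limits, letting driver authors hit code paths that real hardware may not yet expose.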
The de-facto way of copying data in the I/O stack has been to pull it from one location and then push it to another. The farther the application requiring the copy is from the storage, the longer the trip takes. With copy-offload the trip gets shorter, as the storage device presents an interface to do the data movement internally. This enables the host to skip the pull-and-push method, freeing up the host CPU, RAM, and the fabric elements. A copy-offload interface has existed in SCSI storage for at least a decade in the form of XCOPY, but it faced insurmountable challenges in getting into the Linux I/O stack. As for NVMe storage, copy-offload recently made its way into the main specification with the new Simple Copy Command (SCC). This has stimulated renewed interest and effort toward copy-offload in the Linux community. This talk presents copy-offload work in Linux, with a focus on the NVMe Simple Copy Command. We outline the previous challenges and extensively cover the current upstream efforts toward enabling SCC; we believe these efforts can spur further copy-offload standardization. We also elaborate on the kernel/application interface and the use cases being built and discussed in the Linux I/O stack. The talk concludes with evaluation data comparing SCC with a regular read-write based copy.
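For a sense of what the command looks like on the wire, here is a minimal sketch that issues a Simple Copy (opcode 19h) through the long-standing NVMe passthru ioctl, before any dedicated kernel plumbing exists. The source-range layout follows descriptor format 0h from the specification as we read it; the device path, namespace ID, and LBA values are placeholders:

    /* Sketch: issue an NVMe Simple Copy (opcode 0x19) via the passthru ioctl.
     * Copies LBAs 0..7 to destination LBA 1024 on namespace 1. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    struct source_range_f0 {       /* Source Range Entry, descriptor format 0h */
        uint64_t rsvd0;
        uint64_t slba;             /* starting source LBA */
        uint16_t nlb;              /* number of logical blocks, 0-based */
        uint8_t  rsvd18[14];
    } __attribute__((packed));     /* must be exactly 32 bytes */

    int main(void)
    {
        int fd = open("/dev/nvme0n1", O_RDWR);
        if (fd < 0)
            return 1;

        struct source_range_f0 range = { .slba = 0, .nlb = 7 }; /* 8 blocks */
        struct nvme_passthru_cmd cmd = {
            .opcode   = 0x19,                       /* Copy */
            .nsid     = 1,
            .addr     = (uint64_t)(uintptr_t)&range,
            .data_len = sizeof(range),
            .cdw10    = 1024,                       /* SDLBA, lower 32 bits */
            .cdw11    = 0,                          /* SDLBA, upper 32 bits */
            .cdw12    = 0,                          /* NR=0 (one range), format 0h */
        };

        int err = ioctl(fd, NVME_IOCTL_IO_CMD, &cmd);
        printf("simple copy: %s (%d)\n", err ? "failed" : "ok", err);
        return err ? 1 : 0;
    }

The in-kernel work discussed in the talk is about exposing this capability through a proper block-layer interface, so that file systems and applications do not have to hand-roll passthru commands like this.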
Yes, it really does say Windows and SPDK in the same sentence! The Storage Performance Development Kit (SPDK) is a well-regarded set of tools and libraries for writing high-performance user-mode storage applications on Linux and FreeBSD. However, in a typical Data Centre, a significant percentage of the servers will be running Microsoft Windows, where the options for NVMe support are more limited. This talk looks at what is involved in making SPDK run natively on Windows. Starting with the creation of the Windows Platform Development Kit (WPDK) as a base, it covers the design options that were considered, the build environment, the trade-offs, and the potential benefits. The current state is explained, together with examples of the changes that were required in both the SPDK and the Data Plane Development Kit (DPDK). WPDK provides the POSIX functionality needed to run SPDK on Windows, implementing a set of headers and a lightweight library which map functionality as closely as possible onto existing Windows semantics. It is intended to be a production-quality layer that runs as native code, with no surprises, and that can be tested independently. Although still at an experimental stage, the project has successfully served NVMe over TCP and iSCSI targets that are directly attached to physical NVMe disks. As a newcomer to open source development, the speaker will also share a few personal reflections on the experience of gaining support from both the SPDK and DPDK communities to realise the vision.
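To give a flavour of what "mapping POSIX onto Windows semantics" means in practice, here is a hypothetical shim of the kind such a layer provides; it is illustrative only and not actual WPDK source. It maps a pthread mutex onto a Windows slim reader/writer lock, which happens to need no explicit teardown:

    /* Hypothetical POSIX-to-Windows shim, illustrative only (not WPDK code). */
    #include <windows.h>

    typedef SRWLOCK pthread_mutex_t;

    static inline int pthread_mutex_init(pthread_mutex_t *m, const void *attr)
    {
        (void)attr;                  /* attributes ignored in this sketch */
        InitializeSRWLock(m);
        return 0;
    }

    static inline int pthread_mutex_lock(pthread_mutex_t *m)
    {
        AcquireSRWLockExclusive(m);  /* mutex semantics: exclusive lock */
        return 0;
    }

    static inline int pthread_mutex_unlock(pthread_mutex_t *m)
    {
        ReleaseSRWLockExclusive(m);
        return 0;
    }

    static inline int pthread_mutex_destroy(pthread_mutex_t *m)
    {
        (void)m;                     /* SRW locks require no cleanup */
        return 0;
    }

Multiplied across threads, sockets, memory mapping, and asynchronous I/O, this mapping exercise is where most of the design decisions discussed in the talk arise.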
The NVM Express® (NVMe®) family of specifications, released in June 2021, allows for faster and simpler development of NVMe solutions to support the increasingly diverse NVMe device environment, now including Hard Disk Drives (HDDs). The extensibility of the specifications encourages the development of independent command sets like Zoned Namespaces (ZNS) and Key Value (KV) while enabling support for the various underlying transport protocols common to NVMe and NVMe over Fabrics (NVMe-oF™) technologies. The NVMe 2.0 library of specifications has been broken out into multiple documents, including the NVMe Base Specification, various Command Set specifications, various Transport specifications, and the NVMe Management Interface Specification. In this session, attendees will learn how the restructured NVMe 2.0 specifications enable the seamless deployment of flash-based solutions in many emerging market segments. This session will provide an overview and usages of several of the new features in the NVMe 2.0 specifications, including ZNS, KV, Rotational Media, and Endurance Group Management. Finally, the session will cover how these new features will benefit the cloud, enterprise, and client market segments.
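As a pointer to how one of these features surfaces on a host, zoned namespaces can already be inspected with stock nvme-cli via its zns plugin; a minimal sketch, assuming a ZNS namespace is present at /dev/nvme0n1:

    # Sketch: identify a zoned namespace and list its zones (nvme-cli zns plugin).
    nvme zns id-ns /dev/nvme0n1
    nvme zns report-zones /dev/nvme0n1

The first command reports the zoned-specific identify data (zone size, open/active limits); the second walks the zones and their write pointers, which is the state a ZNS-aware application manages.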