SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Abstract
Block storage access across Storage Area Networks (SANs) has an interesting protocol and transport history. The NVMe-oF transport family provides storage administrators with the most efficient and streamlined protocols so far, leading to more efficient data transfers and better SAN deployments. In this session we will explore some of the protocol history to set the context for a deep dive into NVMe/TCP, NVMe/RoCE, and NVMe/FC. We will then examine network configurations, network topology, QoS settings, and offload processing considerations. This knowledge is critical when deciding how to build, deploy, operate, and evaluate the performance of a SAN, as well as for understanding end-to-end hardware and software implementation tradeoffs.

Agenda
- SAN Transport History and Overview
  - Protocol History
  - Protocol Comparisons
- NVMe/FC Deep Dive
- NVMe/RoCE Deep Dive
- NVMe/TCP Deep Dive
- Networking Configurations and Topologies for NVMe-oF
  - QoS, Flow Control, and Congestion
  - L2 Local vs. L3 Routed vs. Overlay
- Offload Processing Considerations and Comparisons
- Cross-comparison of SAN transports
- First-principles behavior for NVMe-oF transports (see the sketch after this list)
- Practical networking considerations for deploying NVMe-oF
- Implications of NVMe-oF transports for data flow and packet processing
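As a first-principles illustration of the NVMe/TCP transport behavior this session dives into, the sketch below builds the Initialize Connection Request (ICReq) PDU that an NVMe/TCP host sends immediately after the TCP handshake. The field offsets follow our reading of the NVMe/TCP transport specification; the target address is a placeholder and error handling is minimal, so treat this as a teaching sketch rather than a reference implementation.

```c
/* icreq_sketch.c - the first PDU an NVMe/TCP host sends after the TCP
 * handshake. Field layout follows our reading of the NVMe/TCP transport
 * spec; the target address is a placeholder. Assumes a little-endian
 * host, since multi-byte PDU fields are little-endian on the wire. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    uint8_t icreq[128] = {0};

    /* Common PDU header (bytes 0-7). */
    icreq[0] = 0x00;                    /* PDU-type: ICReq              */
    icreq[1] = 0x00;                    /* FLAGS: none                  */
    icreq[2] = 128;                     /* HLEN: header length          */
    icreq[3] = 0;                       /* PDO: no data offset          */
    uint32_t plen = 128;                /* PLEN: total PDU length       */
    memcpy(&icreq[4], &plen, sizeof(plen));

    /* ICReq-specific fields (bytes 8-15; the rest is reserved). */
    uint16_t pfv = 0;                   /* PDU format version 0         */
    memcpy(&icreq[8], &pfv, sizeof(pfv));
    icreq[10] = 0;                      /* HPDA: host PDU data align    */
    icreq[11] = 0x00;                   /* DGST: digests disabled       */
    uint32_t maxr2t = 0;                /* MAXR2T: 0's-based, one R2T   */
    memcpy(&icreq[12], &maxr2t, sizeof(maxr2t));

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in tgt = { .sin_family = AF_INET,
                               .sin_port = htons(4420) }; /* IANA NVMe/TCP */
    inet_pton(AF_INET, "192.0.2.10", &tgt.sin_addr);      /* placeholder  */

    if (connect(fd, (struct sockaddr *)&tgt, sizeof(tgt)) == 0 &&
        write(fd, icreq, sizeof(icreq)) == (ssize_t)sizeof(icreq)) {
        /* The target answers with an ICResp (PDU-type 0x01); the host
         * then sends a fabrics Connect command in a capsule PDU to
         * create the admin queue. */
    }
    close(fd);
    return 0;
}
```

After the ICReq/ICResp exchange, everything else (Connect, I/O commands, data) flows as capsule and data PDUs on the same TCP connection, which is why one NVMe queue pair maps naturally onto one TCP connection.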
The Infrastructure Processing Unit (IPU) is an evolution of the SmartNIC, focused on infrastructure processing such as networking offload and storage offload. The IPU is a critical ingredient in disaggregated computer architectures and becomes a control point in the DC-oF (Data Center of the Future). In this talk, we share an NVMe over TCP initiator implementation as an example of IPU-based storage offload, focusing on SPDK (https://spdk.io) support for an IPU-based NVMe over TCP initiator solution.
First, we will introduce our high-performance 200GbE IPU and present the software components that make up the SPDK-based NVMe over TCP software stack. Second, we will share the performance optimizations made in SPDK over the last year that leverage the Intel® Ethernet 800 Series with Application Device Queues (ADQ) to improve SPDK-based NVMe/TCP initiator performance. Third, we will share performance results demonstrating that ADQ significantly improves performance (higher IOPS and reduced long-tail latency) for the SPDK-based NVMe over TCP initiator.
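To make the initiator side concrete, here is a minimal sketch of connecting to an NVMe/TCP subsystem with SPDK's NVMe driver. The address, port, and NQN in the transport string are placeholders, and production code would tune controller and qpair options and drive I/O through a poll loop; this shows only the synchronous connect path.

```c
/* spdk_tcp_connect.c - minimal SPDK NVMe/TCP initiator connect sketch.
 * Address, port, and NQN are placeholders. Build against SPDK. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    env_opts.name = "nvme_tcp_sketch";
    if (spdk_env_init(&env_opts) < 0)
        return 1;

    /* Describe the NVMe/TCP target; every value here is illustrative. */
    struct spdk_nvme_transport_id trid = {0};
    spdk_nvme_transport_id_parse(&trid,
        "trtype:TCP adrfam:IPv4 traddr:192.0.2.10 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:example");

    /* Synchronous connect: fabric connect + controller initialization. */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL)
        return 1;

    /* For NVMe/TCP, an I/O queue pair maps onto one TCP connection. */
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    if (qpair != NULL)
        spdk_nvme_ctrlr_free_io_qpair(qpair);

    spdk_nvme_detach(ctrlr);
    return 0;
}
```

Because SPDK polls its queue pairs from user space instead of taking interrupts, steering each connection's traffic to the polling core (which is what ADQ does in hardware) removes cross-core hops from the hot path.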
Real-time analytics and data-intensive applications have driven the adoption of high-performance, low-latency, highly parallel NVMe solutions, and we are now on the cusp of wide adoption of NVMe over Fabrics (NVMe-oF) storage in both on-prem and cloud data centers. Since the introduction of NVMe-oF, a great deal of investment and effort has gone into improving overall storage performance and features, resulting in substantial gains in IOPS and CPU utilization and lowering TCO for the customer. NVMe over Fabrics lends itself well to the demands of modern applications on Kubernetes platforms, and operating systems are now able to scale up and saturate the available storage bandwidth. We will go over various architectures and their benefits and pitfalls: how modern applications are driving requirements for low-latency IO, how the latest 64G Fibre Channel provides 10+ million IOPS at around 10 microseconds of latency, and what this means to the end customer. We will go over the performance benefits from the recent changes with respect to NVMe over Fabrics, and we will also focus on how commonly used applications running in VMs and containers are driving an increase in IO density and how to get optimum performance.
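One way to sanity-check the throughput and latency figures quoted above is Little's Law, which ties sustained IOPS to the number of I/Os kept in flight; the worked numbers below simply restate the abstract's 10M IOPS at ~10 µs claim in those terms.

```latex
% Little's Law applied to storage: throughput = concurrency / latency.
\[
  \text{IOPS} = \frac{N_{\text{outstanding}}}{T_{\text{latency}}}
  \quad\Longrightarrow\quad
  N_{\text{outstanding}} = 10^{7}\ \text{IOPS} \times 10\,\mu\text{s} = 100
\]
% Sustaining 10M IOPS at 10 us requires only ~100 I/Os in flight,
% spread across queue pairs -- well within NVMe's deep queueing model.
```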
Storage Area Networks (SANs) are usually used to access the most critical data of an organization; therefore, ensuring their security is of paramount importance. This presentation will introduce the general SAN security threats and the methods (authentication and secure channels) used to mitigate them. It will also present the authentication protocol and secure channel specifications that have been defined for NVMe-oF over IP fabrics, with special attention to the NVMe/TCP case.
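To ground the authentication discussion, the sketch below shows the general shape of a CHAP-style challenge-response exchange built on an HMAC, which is the mechanism underlying the DH-HMAC-CHAP protocol defined for NVMe-oF in-band authentication. It deliberately omits the real message framing, the Diffie-Hellman augmentation, and the exact input concatenation from the specification; only the challenge/response idea is shown.

```c
/* chap_sketch.c - shape of a CHAP-style challenge/response with
 * HMAC-SHA256. This is NOT the DH-HMAC-CHAP wire format; the key,
 * challenge handling, and input layout are simplified for
 * illustration. Link with -lcrypto. */
#include <stdio.h>
#include <string.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>
#include <openssl/evp.h>

int main(void)
{
    /* Shared secret provisioned on host and subsystem out of band. */
    const unsigned char secret[] = "example-preshared-key";

    /* Controller side: generate a random, never-reused challenge. */
    unsigned char challenge[32];
    RAND_bytes(challenge, sizeof(challenge));

    /* Host side: prove knowledge of the secret without sending it. */
    unsigned char response[EVP_MAX_MD_SIZE];
    unsigned int resp_len = 0;
    HMAC(EVP_sha256(), secret, sizeof(secret) - 1,
         challenge, sizeof(challenge), response, &resp_len);

    /* Controller side: recompute and compare (constant-time in
     * real code). */
    unsigned char expected[EVP_MAX_MD_SIZE];
    unsigned int exp_len = 0;
    HMAC(EVP_sha256(), secret, sizeof(secret) - 1,
         challenge, sizeof(challenge), expected, &exp_len);

    printf("authenticated: %s\n",
           (resp_len == exp_len &&
            memcmp(response, expected, resp_len) == 0) ? "yes" : "no");
    return 0;
}
```

Authentication alone does not protect the data in flight; that is the role of the secure channel (TLS for NVMe/TCP), which the presentation covers separately.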
NVMe/TCP has the potential to provide significant benefits in application environments ranging from the edge to the data center. However, to fully unlock that potential, we first need to overcome NVMe over Fabrics' discovery problem. This discovery problem, specific to IP-based fabrics, can result in the Host admin needing to configure each Host to access the appropriate NVM subsystems. In addition, any time an NVM subsystem is added or removed, the Host admin needs to update the impacted hosts. This process of explicitly updating the Host any time a change is made does not scale beyond a few Host and NVM subsystem interfaces. Due to its decentralized nature, it also adds complexity when trying to use NVMe-oF in environments that require high degrees of automation. For these and other reasons, Dell Technologies, along with several other companies, has been collaborating on innovations that enable an NVMe-oF IP-based fabric to be centrally managed. These innovations, tracked under NVM Express Technical Proposals TP-8009 and TP-8010, enable administrators to set a policy that defines the relationships between Hosts and the NVM subsystems they need to access. These policies are then used by a Centralized Discovery Controller to allow each Host to automatically discover and connect to only the appropriate NVM subsystems and nothing else.
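For context on what discovery actually returns: a discovery controller (centralized or not) answers a Get Log Page command for the Discovery log (log page 70h) with one entry per subsystem port the host may access. The struct below sketches that entry layout as we read it from the NVMe over Fabrics specification; field sizes are worth double-checking against the current revision before relying on them.

```c
/* discovery_log.c - sketch of a Discovery Log Page entry (log page
 * 70h) that a host parses to find NVM subsystem ports; layout per our
 * reading of the NVMe over Fabrics spec (1024 bytes per entry). */
#include <stdint.h>

#pragma pack(push, 1)
struct discovery_log_entry {
    uint8_t  trtype;      /* transport: 1=RDMA, 2=FC, 3=TCP           */
    uint8_t  adrfam;      /* address family: 1=IPv4, 2=IPv6, ...      */
    uint8_t  subtype;     /* 1=discovery subsystem, 2=NVM subsystem   */
    uint8_t  treq;        /* transport requirements (e.g. secure ch.) */
    uint16_t portid;      /* port identifier                          */
    uint16_t cntlid;      /* controller ID (0xFFFF = dynamic)         */
    uint16_t asqsz;       /* admin max submission queue size          */
    uint8_t  reserved[22];
    char     trsvcid[32]; /* transport service id, e.g. "4420"        */
    uint8_t  reserved2[192];
    char     subnqn[256]; /* subsystem NQN the host connects to       */
    char     traddr[256]; /* transport address, e.g. an IP address    */
    uint8_t  tsas[256];   /* transport-specific address subtype       */
};
#pragma pack(pop)

/* A host walks the entries that follow the log header and issues a
 * fabrics connect for each entry describing an NVM subsystem. A
 * Centralized Discovery Controller simply filters which entries each
 * host is shown, per the administrator's policy. */
_Static_assert(sizeof(struct discovery_log_entry) == 1024,
               "entry is 1024 bytes in the spec");
```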
This session provides an overview of designing and implementing NVMe/TCP across shared storage for the enterprise. It will cover enabling this technology in Dell PowerStore, and will be co-presented with VMware to discuss NVMe/TCP initiator support. An analysis of the benefits of NVMe/TCP will be covered, along with the ecosystem support needed to ensure enterprise readiness, simplicity, and scale.