SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
San Tomas + Lawrence
Tue Sep 16 | 4:30pm
The state of Fibre Channel is exciting as we embark on the development of 256GFC (aka Gen 9 Fibre Channel)! With 128GFC products coming out at the end of this year, the Fibre Channel community is feverishly working on the next generation of speeds. Now is the time to step back a bit and view the Fibre Channel roadmap. In this session, we discuss the technical challenges associated with the creation of 256GFC products as well as explore the features of Gen 8 Fibre Channel. Come explore the excitement around FC-SP-3 "Autonomous In-flight encryption" (AIE), which provides a plug-and-play solution for end-to-end security! Take a look at the expansion of Fabric Notifications to produce automated responses to disruptions in the network. Let's see where Fibre Channel is taking you.
Provide an update on the 256GFC developments in Fibre Channel. Explain the implications of these changes on link-up times at high data rates. Explore the ease of security deployments based on FC-SP-3 Autonomous In-flight encryption. Review the capabilities of Fabric Notifications and examine the newest addition, Port Notifications. Provide a briefing on the creation of the FC-RDMA specification.
A new set of NVM Express® (NVMe®) and Open Compute Project® (OCP) features is revolutionizing the virtualization landscape, enabling a new SSD-supported virtualization ecosystem. This presentation will explore these innovative features and their potential applications in host systems. We will describe an example virtual machine (VM) setup and discuss how the features can be utilized together to create a robust, secure, and performant virtualized environment. Specifically, we will cover the use of SR-IOV to expose individual functions with child controllers, Exported NVM Subsystems for building virtualized subsystems, and OCP’s security extensions to Caliptra for maintaining security.
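For a concrete sense of how SR-IOV child controllers are provisioned, the sketch below shows how a hypervisor-side script might assign flexible queue and interrupt resources to a secondary controller using nvme-cli's Virtualization Management command before handing the virtual function to a VM. The device path, controller ID, and resource counts are illustrative assumptions, not values from this talk.

```python
# Hypothetical sketch: provisioning an NVMe SR-IOV secondary (child) controller.
# Assumes a device that reports primary/secondary controller capabilities and
# nvme-cli's Virtualization Management command; all values below are examples.
import subprocess

DEV = "/dev/nvme0"        # primary controller (physical function) - assumed path
SECONDARY_CNTLID = 2      # child controller ID, as reported by `nvme list-secondary`
NUM_VQS, NUM_VIS = 4, 4   # flexible queue/interrupt resources to assign (assumed)

def virt_mgmt(cntlid: int, resource: int, action: int, count: int = 0) -> None:
    """Issue an NVMe Virtualization Management command via nvme-cli.
    resource: 0 = VQ (queue) resources, 1 = VI (interrupt) resources."""
    subprocess.run(
        ["nvme", "virt-mgmt", DEV,
         f"--cntlid={cntlid}", f"--rt={resource}",
         f"--act={action}", f"--nr={count}"],
        check=True,
    )

# Take the secondary controller offline, assign flexible resources, bring it online.
virt_mgmt(SECONDARY_CNTLID, resource=0, action=7)                  # offline
virt_mgmt(SECONDARY_CNTLID, resource=0, action=8, count=NUM_VQS)   # assign VQs
virt_mgmt(SECONDARY_CNTLID, resource=1, action=8, count=NUM_VIS)   # assign VIs
virt_mgmt(SECONDARY_CNTLID, resource=0, action=9)                  # online
```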
We will also delve into advanced topics, including Quality of Service (QoS) parameter setting across varied VMs with differing Service Level Agreement (SLA) processes, and live migration of VMs from one SSD to another. The presentation will highlight how Tracking Allocation Status and Granularity facilitates live migration. Finally, we will touch on extended SSD-supported virtualization examples, including single-port vs dual-port variations, AI enablement through direct GPU access, storage use cases, application container use cases, and extensible integration to Flexible Data Placement. These examples may utilize subsets of the features in novel configurations or explore entirely new applications.
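As a rough illustration of why allocation-status tracking matters for live migration, here is a minimal Python sketch of the familiar iterative copy-and-converge loop: bulk-copy allocated units while the VM keeps running, re-copy whatever gets dirtied in flight, and pause only for the small remainder. The Device and Tracker classes, the 1 MiB granularity, and the convergence threshold are toy stand-ins, not the NVMe feature's actual interface.

```python
# Conceptual sketch of live migration driven by allocation/dirty tracking.
UNIT = 1 << 20  # tracking granularity in bytes (illustrative assumption)

class Device:
    """Toy block device: a flat byte buffer addressed in UNIT-sized chunks."""
    def __init__(self, units: int):
        self.buf = bytearray(units * UNIT)
    def read(self, unit: int) -> bytes:
        return bytes(self.buf[unit * UNIT:(unit + 1) * UNIT])
    def write(self, unit: int, data: bytes) -> None:
        self.buf[unit * UNIT:(unit + 1) * UNIT] = data

class Tracker:
    """Toy stand-in for the allocation/dirty state a controller might expose."""
    def __init__(self, allocated: set[int]):
        self.allocated, self.dirty = set(allocated), set()
    def mark_dirty(self, unit: int) -> None:
        self.dirty.add(unit)
    def drain_dirty(self) -> set[int]:
        d, self.dirty = self.dirty, set()
        return d

def migrate(src: Device, dst: Device, tracker: Tracker, pause_vm) -> None:
    for unit in tracker.allocated:      # pass 1: bulk copy, VM still running
        dst.write(unit, src.read(unit))
    dirty = tracker.drain_dirty()
    while len(dirty) > 8:               # converge until the remainder is small
        for unit in dirty:
            dst.write(unit, src.read(unit))
        dirty = tracker.drain_dirty()
    pause_vm()                          # brief stop-copy phase, then switch over
    for unit in dirty:
        dst.write(unit, src.read(unit))
```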
This talk reflects on 18 years of SMR evolution, covering physical layouts, filesystems, garbage collection algorithms, device drivers, and simulators. The talk will also discuss how SMR disks have been integrated with data storage solutions like RAID and deduplication, including real-world use cases of SMR disks by hyperscalers.
We will also discuss how SMR and HAMR technology interact in the context of AI workloads to provide intriguing new possibilities for HDDs.
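Since garbage collection is one of the covered topics, a minimal sketch may help set the stage: because SMR zones must be written sequentially, updates go out of place, and a collector later reclaims mostly-invalid zones by relocating their remaining live blocks. The structures and sizes below are illustrative, not drawn from any particular implementation.

```python
# Conceptual sketch of greedy garbage collection for host-managed SMR zones.
from dataclasses import dataclass, field

ZONE_BLOCKS = 65536  # blocks per zone (illustrative)

@dataclass
class Zone:
    zone_id: int
    valid: set[int] = field(default_factory=set)  # block offsets still live

    @property
    def garbage_ratio(self) -> float:
        return 1.0 - len(self.valid) / ZONE_BLOCKS

def pick_victim(zones: list[Zone]) -> Zone:
    # Greedy policy: reclaim the zone with the least live data left to copy.
    return max(zones, key=lambda z: z.garbage_ratio)

def collect(victim: Zone, open_zone: Zone) -> None:
    # Relocate live blocks sequentially into the open zone, then reset the victim.
    for block in sorted(victim.valid):
        open_zone.valid.add(block)  # stand-in for a sequential zone-append write
    victim.valid.clear()            # stand-in for a zone reset
```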
Since the Enterprise and Datacenter Standard Form Factor (EDSFF) was created, the biggest complaint about EDSFF has been that there are too many form factors. While this was by design, EDSFF still has more form factors than were previously supported in enterprise and datacenter applications. So introducing E2 as a new EDSFF form factor in the market is obviously going to get a lot of scrutiny. The goal of this presentation is to discuss the motivation behind creating the EDSFF E2 form factor, why existing form factors could not meet this need, why the EDSFF E2 form factor ended up with its specific dimensions, and what future applications look like.
HPC and AI workloads require processing massive datasets and executing complex computations at exascale speeds to deliver time-critical insights. In distributed environments where storage systems coordinate and share results, communication overhead can become a critical bottleneck. This challenge underscores the need for storage solutions that deliver scalable, parallel access with microsecond latencies from compute clusters. Caching can help reduce communication costs when implemented on either servers or clients. Servers, in this context, refer to the data servers that provide file system and object store functionalities, while clients denote the storage clients running on compute nodes in HPC/AI clusters that access and retrieve data from these servers.
However, server-side caching is limited by the fixed memory and network bandwidth of individual servers. Traditional client-side caching, on the other hand, is typically node-local, which limits data reuse across the cluster and often results in redundant caching efforts, leading to inefficiencies and duplicated data. Furthermore, without a shared global view, synchronizing caches consistently across nodes becomes challenging, further diminishing their overall effectiveness. Globally distributed client-side caching over high-speed interconnects is attractive because it leverages the higher aggregate resources, such as DRAM, local SSDs, network bandwidth, and RDMA capabilities, available across the client nodes, scaling independently of the number of server nodes. However, fully realizing these benefits demands an efficient caching framework underpinned by carefully tuned policies to manage these valuable resources. In this presentation, we detail the design and implementation of an efficient, distributed client-side caching framework that addresses these challenges.
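One policy a framework like this might build on is consistent hashing over the client nodes, so that every block has exactly one cache owner: this removes redundant per-node copies and gives all clients the same global view of where cached data lives. The sketch below is a hypothetical illustration; node names and parameters are made up.

```python
# Illustrative consistent-hashing ring for locating a block's cache owner.
import bisect
import hashlib

class CacheRing:
    def __init__(self, nodes: list[str], vnodes: int = 64):
        # Place each node at many virtual points on the ring to balance load.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

    def owner(self, block_key: str) -> str:
        # The first node clockwise from the block's hash owns its cached copy.
        idx = bisect.bisect(self._keys, self._hash(block_key)) % len(self._ring)
        return self._ring[idx][1]

ring = CacheRing(["client-00", "client-01", "client-02", "client-03"])
print(ring.owner("/scratch/dataset/shard-0042"))  # e.g., fetch via RDMA from owner
```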
The rise of Generative and Agentic AI has driven a fundamental shift in storage: from simply storing data to functioning as comprehensive knowledge management systems. The traditional model of storing data and system metadata and providing analytical capabilities on top of it is now inadequate. Agentic AI workflows require access to semantically enriched representations of data, including embeddings and derived metadata (e.g., classification, categorization). As data is ingested, storage systems must support real-time or near-real-time generation and association of such metadata. The industry's initial response involved deploying separate document and embedding stores alongside conventional storage. However, the disaggregation of data and its semantic representations across multiple systems introduces its own constraints.
To address these constraints, computation, storage, and access of enriched data must be co-located with the primary data. This has driven the evolution of storage into unified knowledge platforms that natively compute, persist, and index vectors and derived metadata with the underlying data and system metadata. This rearchitecting affects not just data, but also how storage systems are administered. Protocols like the Model Context Protocol have been introduced to facilitate interaction with and administration of systems and data. Solutions like HPE Alletra Storage MP X10000 exemplify this evolution, offering integrated capabilities to support AI-native workloads. As these platforms mature, there is a need to standardize access to knowledge and semantic representations for seamless integration with applications. This talk explores the evolution of storage platforms and the emerging capabilities required to support this new paradigm.
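To make the ingest-time enrichment concrete, here is a hypothetical Python sketch of a write path that computes an embedding and derived metadata as an object lands, then indexes them next to the data. The embed() and classify() helpers and the store/index shapes are placeholders, not the API of any product mentioned above.

```python
# Hypothetical ingest path on a unified knowledge platform: data, vector, and
# derived metadata are computed and persisted together at write time.
import hashlib
from dataclasses import dataclass

@dataclass
class EnrichedObject:
    key: str
    data: bytes
    embedding: list[float]   # semantic vector for similarity search
    labels: dict[str, str]   # derived metadata, e.g., classification

def embed(data: bytes) -> list[float]:
    # Stand-in for an embedding model co-located with the data path.
    return [b / 255.0 for b in hashlib.sha256(data).digest()[:8]]

def classify(data: bytes) -> dict[str, str]:
    # Stand-in for ingest-time classification/categorization.
    return {"size_class": "large" if len(data) > (1 << 20) else "small"}

def ingest(key: str, data: bytes, store: dict, vector_index: list) -> None:
    obj = EnrichedObject(key, data, embed(data), classify(data))
    store[key] = obj                           # persist data + metadata together
    vector_index.append((obj.embedding, key))  # index the vector for retrieval
```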