SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
As the rapid expansion of AI and analytics continues, storage system architecture and total cost of ownership (TCO) are undergoing significant transformation. Emerging technologies such as heat-assisted magnetic recording (HAMR) in rotating storage and high-capacity, data-center-grade quad-level cell (QLC) flash promise to redefine the landscape for both hyperscale and OEM data storage solutions. But what will that evolution look like?
As large deployments become more common, our industry needs a standardized, multi-vendor solution for booting computer systems from OS images stored on NVMe® devices across a network. The newly published NVM Express® Boot Specification, together with its ecosystem partnership with UEFI and ACPI, enables this by leveraging the NVMe over Fabrics (NVMe-oF™) standard. This talk dives into the details of the new specification and the design of an open-source prototype for booting over the NVMe/TCP transport using a UEFI implementation.
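As a concrete illustration of the ACPI side of that partnership, the minimal Python sketch below inspects the NVMe Boot Firmware Table (NBFT) that boot firmware publishes for the operating system. The /sys/firmware/acpi/tables/NBFT path and the printed fields are assumptions about a typical Linux host; only the standard 36-byte ACPI table header is decoded here, not the NBFT-specific structures.

#!/usr/bin/env python3
# Sketch: check whether firmware published an NBFT and print its ACPI header.
# Assumes a Linux host that exposes ACPI tables under /sys/firmware/acpi/tables/.

import struct
from pathlib import Path

NBFT_PATH = Path("/sys/firmware/acpi/tables/NBFT")  # assumed Linux location

def read_acpi_header(raw: bytes) -> dict:
    # Standard ACPI System Description Table header: 36 bytes.
    (sig, length, revision, _checksum, oem_id,
     oem_table_id, _oem_rev, _creator_id, _creator_rev) = struct.unpack_from(
        "<4sIBB6s8sI4sI", raw, 0)
    return {
        "signature": sig.decode(),
        "length": length,
        "revision": revision,
        "oem_id": oem_id.decode(errors="replace").strip(),
        "oem_table_id": oem_table_id.decode(errors="replace").strip(),
    }

if __name__ == "__main__":
    if NBFT_PATH.exists():
        hdr = read_acpi_header(NBFT_PATH.read_bytes())
        print(f"Found {hdr['signature']} table, {hdr['length']} bytes, "
              f"revision {hdr['revision']}, OEM {hdr['oem_id']}")
    else:
        print("No NBFT found; firmware did not record an NVMe-oF boot configuration.")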
The SNIA TC AI Taskforce is working on a paper covering AI workloads and their storage requirements. This presentation introduces the key AI workloads and describes how they use data transports and storage systems. It is intended as a foundational-level presentation that gives participants a basic working knowledge of the subject.
The AI pipeline is a complex set of phases and operations, each with different requirements for the underlying storage, and those requirements drive storage technology choices. This complexity can be overwhelming and hard to navigate. This presentation provides a 101-style view of the typical demands each AI phase places on storage, which shape your storage choices and your success in deploying your AI strategy. Storage is critical for efficient AI, yet it is currently, to some extent, overlooked by the market.