SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Since EDSFF was created, the biggest complaint about EDSFF has been that there are too many form factors. While this was by design, EDSFF still has more form factors than what was previously supported in Enterprise and Datacenter applications, so introducing E2 as a new EDSFF form factor is obviously going to get a lot of scrutiny. The goal of this presentation is to discuss the motivation behind creating the EDSFF E2 form factor, why existing form factors could not meet this need, and why the EDSFF E2 form factor ended up with its specific dimensions.
This is an update on the activities in the OCP Storage Project.
Learn about the new SNIA Emerald V1.0 Device Power Measurement Test Specification and the tools and methods used to measure an enterprise data storage device. It provides a useful new metric for system supply chain and hypervisor vendors to evaluate devices under enterprise data center workloads, and in the near future some regulatory bodies may cross-reference it in their regional energy conservation programs. Also learn about the changes in the SNIA Emerald V5.0 System Power Measurement Test Specification and the tools and methods used to measure an enterprise data storage system. That specification is cross-referenced by the USA EPA Energy Star program and the EU Lot 9 regulation, enabling vendors to test, report, and submit enterprise data center workload metrics against procurement requirements set by regulatory bodies in their regional energy conservation programs.
Enterprises are rushing to adopt AI inference solutions with RAG to solve business problems, but enthusiasm for the technology's potential is outpacing infrastructure readiness. It quickly becomes prohibitively expensive or even impossible to use more complex models and bigger RAG data sets due to the cost of memory. Using open-source software components and high-performance NVMe SSDs, we explore two different but related approaches for solving these challenges and unlocking new levels of scale: offloading model weights to storage using DeepSpeed, and offloading RAG data to storage using DiskANN. By combining these, we can (a) run more complex models on GPUs than was previously possible and (b) achieve greater cost efficiency when using large amounts of RAG data. We'll talk through the approach, share benchmarking results, and show a demo of how the solution works in an example use case.
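For readers unfamiliar with NVMe weight offload, the sketch below shows one common way it is configured with DeepSpeed ZeRO stage 3 parameter offload. It is not the presenters' code; the toy model, the nvme_path, and the I/O settings are placeholder assumptions.

```python
# A minimal sketch, not the presenters' code: offloading model weights to NVMe with
# DeepSpeed ZeRO stage 3 parameter offload. The model, nvme_path, and I/O settings
# below are placeholder assumptions; a real deployment tunes them to the SSDs in use
# and typically launches the script with the `deepspeed` launcher.
import torch
import deepspeed

model = torch.nn.Sequential(              # stand-in for a much larger transformer
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
)

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,                        # partition parameters (ZeRO-3)
        "offload_param": {
            "device": "nvme",              # spill weights to NVMe instead of DRAM/HBM
            "nvme_path": "/local_nvme",    # placeholder mount point for the SSD
            "pin_memory": True,
        },
    },
    "aio": {                               # async I/O settings for the NVMe path
        "block_size": 1048576,
        "queue_depth": 8,
    },
}

# With offload_param set to "nvme", parameters are paged in from the SSD on demand
# during forward passes, so models larger than GPU (or even host) memory can run.
engine, _, _, _ = deepspeed.initialize(model=model, config=ds_config)
```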
The performance of network file protocols is a critical factor in the efficiency of the AI and Machine Learning pipeline. This presentation provides a detailed comparative analysis of the two leading protocols, Server Message Block (SMB) and Network File System (NFS), specifically for demanding AI workloads.
We evaluate the advanced capabilities of both protocols, comparing SMB3 with SMB Direct and Multichannel against NFS with RDMA and multistream TCP configurations. The industry-standard MLPerf Storage benchmark is used to simulate realistic AI data access patterns, providing a robust foundation for our comparison. The core of this research focuses on quantifying the performance differences and identifying the operational and configuration overhead associated with each technology.
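To illustrate the configuration overhead being compared, the sketch below lists typical Linux client mount options for each setup. The server name, export and share paths, channel count, and credentials file are placeholder assumptions, and exact option availability depends on the kernel version and the file server's protocol support; this is not the benchmark harness used in the study.

```python
# A sketch (not the authors' test harness) of the client-side mount options typically
# compared in such a study. "fileserver", the export/share paths, channel counts, and
# credentials file are placeholder assumptions.
import subprocess

MOUNT_CONFIGS = {
    # NFSv4.1 over RDMA (NFSoRDMA listens on port 20049 by default)
    "nfs_rdma": ["mount", "-t", "nfs",
                 "-o", "vers=4.1,proto=rdma,port=20049",
                 "fileserver:/export/aidata", "/mnt/nfs_rdma"],
    # NFSv4.1 over TCP with up to 16 parallel connections (multistream TCP)
    "nfs_nconnect": ["mount", "-t", "nfs",
                     "-o", "vers=4.1,proto=tcp,nconnect=16",
                     "fileserver:/export/aidata", "/mnt/nfs_tcp"],
    # SMB 3.1.1 with SMB Direct (RDMA) and Multichannel enabled on the cifs client
    "smb3_direct": ["mount", "-t", "cifs", "//fileserver/aidata", "/mnt/smb3",
                    "-o", "vers=3.1.1,rdma,multichannel,max_channels=4,"
                          "credentials=/etc/smb-credentials"],
}

def mount(name: str) -> None:
    """Run one of the mount commands above; requires root and a reachable server."""
    subprocess.run(MOUNT_CONFIGS[name], check=True)

if __name__ == "__main__":
    # Print the commands rather than mounting, so the sketch is safe to run anywhere.
    for name, cmd in MOUNT_CONFIGS.items():
        print(f"{name}: {' '.join(cmd)}")
```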
As the demand for cloud services continues to grow, so does the environmental impact of datacenters. Accurately measuring and managing carbon emissions is essential to advancing sustainability goals, but today's approaches to carbon assessment vary widely across the industry. This panel brings together sustainability experts from Google, Meta, and Microsoft to discuss how the cloud industry can align on Product Category Rules (PCRs) and Lifecycle Assessment (LCA) standards to drive consistency, transparency, and real impact.
PCRs serve as foundational guidelines for conducting LCAs, the methodology used to quantify the carbon footprint of devices and services. With a standardized PCR framework, cloud providers can produce comparable, credible, and actionable carbon assessments, supporting better decision-making across procurement, design, and operations.
Our panelists will explore the current state of carbon accounting practices in datacenters, highlight challenges in today’s fragmented landscape, and share insights into collaborative efforts underway to build unified sustainability frameworks. Attendees will gain a clearer understanding of how the industry can move from individual initiatives to collective impact, accelerating progress toward net-zero ambitions through measurable, standardized carbon assessment.