SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
- Understanding the Impact of AI on Data Storage: Learn how the growth of AI-model training datasets and generative AI outputs are driving the demand for increased data-storage capacity in datacenters.
- Exploring Evolutionary Storage Technologies: Gain insights into the advancements in storage technologies, such as higher areal density, optimized data-access interfaces, and energy-efficient infrastructure, that are addressing the challenges of mass-capacity growth.
- Future Trends in AI Datacenter Storage: Discover the future directions and innovations in mass data storage that will support the evolving needs of AI-driven datacenters.
Generative AI models, such as Stable Diffusion, have revolutionized the field of AI by enabling the generation of images from textual prompts. These models impose significant computational and storage demands in HPC environments, and the I/O workload generated during image generation is a critical factor affecting overall performance and scalability. This paper presents a detailed analysis of the I/O workload generated by Stable Diffusion when accessing storage devices, specifically NVMe-oF drives. The study explores how various I/O characteristics, including read and write operations, latency, throughput, bandwidth, LBA mappings, and the Write Amplification Factor (WAF), influence the performance of generative AI workloads, and shows how the I/O pattern affects the WAF of the SSD when multiple user requests are served. Using containerized Stable Diffusion deployed in an FDP (Flexible Data Placement) enabled environment as a case study, we investigate how different storage configurations affect the efficiency of image generation and reduce the WAF for individual and concurrent user requests. We have also developed a tool that provides insights into I/O activity on storage devices: it provides a graphical view of the logical block address (LBA) mapping of I/O hits, block sizes, and a granular view of data-access patterns. This enables in-depth I/O analysis, helps identify performance bottlenecks, uncovers latency patterns, and supports optimization across the hardware and software stack.
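As a rough illustration of the kind of analysis this abstract describes, the sketch below bins a toy LBA trace into per-region hit counts and a request-size histogram, and computes WAF as physical NAND bytes written divided by host bytes written. The trace format, 4 KiB block size, and NAND counter value are assumptions for illustration, not the authors' actual tool or data.

```python
# Minimal sketch of LBA-trace analysis in the spirit of the tool described
# above. The "op lba nblocks" record format and the 4 KiB logical block size
# are assumptions, not the authors' actual trace format.
from collections import Counter

LBA_SIZE = 4096          # assumed logical block size in bytes
BUCKET = 1 << 16         # group LBAs into 64 Ki-block regions for a coarse heatmap

def analyze(trace_lines):
    hits = Counter()     # I/O hits per LBA region
    sizes = Counter()    # distribution of request sizes in bytes
    host_bytes = 0
    for line in trace_lines:
        op, lba, nblocks = line.split()
        lba, nblocks = int(lba), int(nblocks)
        hits[lba // BUCKET] += 1
        sizes[nblocks * LBA_SIZE] += 1
        if op == "W":
            host_bytes += nblocks * LBA_SIZE
    return hits, sizes, host_bytes

def waf(nand_bytes_written, host_bytes_written):
    # Write Amplification Factor: physical NAND writes / host writes.
    return nand_bytes_written / host_bytes_written

trace = ["W 0 8", "W 8 8", "R 1048576 32", "W 16 8"]
hits, sizes, host_bytes = analyze(trace)
print("hits per LBA region:", dict(hits))
print("request-size histogram:", dict(sizes))
# NAND byte counters would come from device telemetry (e.g. vendor SMART
# logs); the 1.6x figure below is a made-up example value.
print("WAF:", waf(1.6 * host_bytes, host_bytes))
```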
Current SSD devices are mostly built around a 4 KiB transaction unit, or an even larger one for higher drive capacities. But what if your workload demands high IOPS at smaller granularities? We will take a deep dive into our GNN testing using NVIDIA BaM and the modifications we made to test transactions smaller than 4 KiB. We will also discuss why this workload is a good example of the need for Storage Next.
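To make the granularity mismatch concrete, here is a hedged back-of-the-envelope calculation: if an application needs 512 B values but the drive moves 4 KiB per transaction, only one eighth of each transfer is useful. The 512 B fetch size and the IOPS figure are illustrative assumptions, not numbers from the talk.

```python
# Back-of-the-envelope sketch of the efficiency gap for sub-4 KiB accesses.
# The 512 B size is a hypothetical GNN feature/embedding fetch.
TRANSACTION = 4096        # bytes moved per SSD transaction
access = 512              # bytes the application actually needs

efficiency = access / TRANSACTION        # useful fraction of each transfer
amplification = TRANSACTION / access     # bytes moved per useful byte

drive_iops_4k = 1_500_000                # assumed 4 KiB random-read IOPS
useful_bw = drive_iops_4k * access       # bytes/s of useful data delivered

print(f"transfer efficiency: {efficiency:.1%}")       # 12.5%
print(f"read amplification:  {amplification:.0f}x")   # 8x
print(f"useful bandwidth:    {useful_bw / 1e9:.2f} GB/s")
```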
The rate of change in the structure and capabilities of applications has never been as high as in the last year. There's a huge shift from stockpiling data cheaply to leveraging data to create insight with GenAI and to capitalize on business leads with predictive AI. Excitement and opinions about where storage matters run rampant. Thankfully, we can "follow the data" to pinpoint whether storage performance is critical in the compute node or just in the back end, discern the relative importance of bandwidth and latency, and determine whether the volume and granularity of accesses are suitable for a GPU and what the range of access granularities is. Walking through recent developments in AI apps and their implications will lead to insights that are likely to surprise the audience. There are new opportunities to create innovative solutions to these challenges. The architectures of NAND and its controllers may adjust to curtail ballooning power with more efficient data transfers and error checking. IOPS optimizations that will be broadly mandatory in the future may be pulled in to benefit some applications now. New hardware/software codesigns may lead to protocol changes, and to trade-offs in which computing agents and data structures are best suited to accomplish new goals. Novel software interfaces and infrastructure enable movement, access, and management of data tailored to the specific needs of each application. Come join in a fun, refreshing, provocative, and interactive session on storage implications for this new generation of AI applications!
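One way to "follow the data" is a first-order queueing estimate: Little's Law says delivered throughput is roughly concurrency divided by latency, capped by the link. The minimal sketch below, with assumed queue depth, latency, and link ceiling (none taken from the session), classifies an access profile as latency/IOPS-bound or bandwidth-bound.

```python
# A minimal "follow the data" sketch: given an access profile, estimate
# whether latency or bandwidth dominates. All numbers are illustrative
# assumptions, not measurements from the session.
def delivered_bandwidth(io_size, queue_depth, latency_s, link_bw):
    """Little's Law: throughput = concurrency / latency, capped by the link."""
    iops = queue_depth / latency_s
    return min(iops * io_size, link_bw)

LINK_BW = 8e9                     # assumed drive/link ceiling in bytes/s
for io_size in (512, 4096, 131072):
    bw = delivered_bandwidth(io_size, queue_depth=64,
                             latency_s=80e-6, link_bw=LINK_BW)
    bound = "bandwidth-bound" if bw >= LINK_BW else "latency/IOPS-bound"
    print(f"{io_size:>7} B: {bw / 1e9:6.2f} GB/s -> {bound}")
```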
How do we assess the performance of the AI network and storage infrastructure that is critical to the successful deployment of today's complex AI training and inferencing engines? And is it possible to do this without the capex of provisioning racks of expensive GPUs? This presentation discusses methodologies and considerations for performing such assessments. We look at different topologies, host- and network-side considerations, and metrics. The performance aspects of NICs/SmartNICs, storage offload processing, switches, and interconnects are examined. Benchmarking of AI collective communications over RoCE transport is considered, along with the overall impact on training convergence time and network utilization. The operational aspects of commercial networks include proxies, encapsulations, connection scale, and encryption; we discuss their impact on AI training and inferencing.
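For intuition about such GPU-free fabric sizing, a common first-order model of ring all-reduce estimates completion time from per-step latency and link bandwidth. The sketch below uses assumed alpha/beta-style parameters that are illustrative only and not drawn from the presentation.

```python
# Hedged first-order model of ring all-reduce time over a RoCE fabric,
# useful when sizing the network before GPUs are available. The message
# size, rank count, link bandwidth, and per-step latency are assumptions.
def ring_allreduce_time(msg_bytes, n_ranks, link_bw, step_latency):
    steps = 2 * (n_ranks - 1)            # reduce-scatter + all-gather phases
    chunk = msg_bytes / n_ranks          # bytes each rank sends per step
    return steps * (step_latency + chunk / link_bw)

t = ring_allreduce_time(msg_bytes=1 << 30,     # 1 GiB gradient bucket
                        n_ranks=8,
                        link_bw=50e9,          # 400 Gb/s link ~= 50 GB/s
                        step_latency=5e-6)     # assumed per-step latency
print(f"estimated all-reduce time: {t * 1e3:.2f} ms")
```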
The extreme growth of modern AI-model training datasets, as well as the explosion of Gen-AI data output, are both fueling unprecedented levels of data-storage capacity growth in datacenters. Such rapid growth in mass capacity is demanding evolutionary steps in foundational storage technologies to enable higher areal density, optimized data-access interface methodologies, and highly efficient power/cooling infrastructure. We will explore these evolutionary technologies and take a sneak peek at the future of mass data storage in AI datacenters.
Bug detection and triaging in complex storage systems pose unique challenges that distinguish them from general-purpose or SaaS-based software. Unlike conventional code, which largely operates in a straightforward user space, storage solutions must seamlessly integrate with the operating system kernel, device drivers, and underlying hardware devices. This tight coupling introduces additional complexity in logging, concurrency, and operational flow. For instance, storage systems often span hundreds of threads and processes, each writing into shared log files without conventional transactional guarantees. Such intricate interactions make it difficult for existing AI-based bug-tracking solutions, which are typically trained on general codebases, to deliver effective results. To address these limitations, we propose a novel approach that supplements the system code with knowledge extracted from high-level integration test cases. These tests, often written in human-readable scripting languages such as Python, capture end-to-end system behavior more effectively than narrowly focused unit tests. By converting the insights from integration tests into a structured knowledge graph, our methodology provides an AI bug-triaging agent with rich contextual understanding of system interactions, inter-process communications, and hardware events. This deeper, scenario-driven perspective empowers the agent to pinpoint and diagnose storage-system failures that would otherwise be hidden in the labyrinth of kernel-mode calls, user-mode processes, and low-level device drivers. Our early findings suggest that this targeted fusion of code analysis and integration-test-based knowledge significantly enhances both the speed and accuracy of bug identification in storage software, an advancement poised to transform how complex system bugs are tracked and resolved.
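As a minimal sketch of the knowledge-graph idea (assuming networkx, a toy (actor, action, target) step schema, and hypothetical component names not taken from the paper), one can load interactions mined from integration tests into a graph and hand a triage agent the neighborhood of a failing component:

```python
# Minimal sketch: turn integration-test steps into a knowledge graph that
# gives a triage agent scenario-level context. Step schema and component
# names below are hypothetical illustrations, not the authors' system.
import networkx as nx

# Each step: (actor, action, target) extracted from a Python integration test.
steps = [
    ("test_failover", "mounts", "nvme0n1"),
    ("test_failover", "kills", "target_daemon"),
    ("target_daemon", "logs_to", "shared.log"),
    ("nvme0n1", "driven_by", "nvme_tcp_driver"),
    ("nvme_tcp_driver", "calls", "kernel_block_layer"),
]

g = nx.DiGraph()
for actor, action, target in steps:
    g.add_edge(actor, target, relation=action)

def triage_context(component, hops=2):
    """Collect interactions within `hops` of a failing component, giving the
    agent context beyond the raw log line where the failure surfaced."""
    nearby = nx.single_source_shortest_path_length(
        g.to_undirected(), component, cutoff=hops)
    return [(u, g[u][v]["relation"], v)
            for u, v in g.edges() if u in nearby and v in nearby]

for fact in triage_context("nvme_tcp_driver"):
    print(fact)
```

The graph query replaces a flat log search with a structural one: the agent reasons over which processes, drivers, and devices can plausibly interact in the failing scenario.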