SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
This presentation walks through how data flows through GPUs during Large Language Model (LLM) training. It introduces LLM neural networks and how they map onto GPU arrays, examines the challenges of current GPU topologies, and looks at how these topologies will evolve with the introduction of UEC and UALink.
Understand the basics of neural networks
Understand how a neural network maps to a group of GPUs
Understand how data flows into and between GPUs
Understand the roles UEC and UALink play in large GPU arrays
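To make the network-to-GPU mapping concrete, here is a minimal NumPy sketch of the simplest form of that idea, tensor parallelism, in which one layer's weight matrix is column-sharded across a GPU group. The layer dimensions and GPU count are illustrative assumptions, not details from the presentation.

    # A minimal sketch (NumPy, hypothetical sizes) of column-sharding one
    # transformer layer's weight matrix across a group of GPUs.
    import numpy as np

    n_gpus = 4
    d_model, d_ff = 1024, 4096          # hypothetical layer dimensions

    # Full weight matrix of a feed-forward projection (d_model x d_ff).
    W = np.random.randn(d_model, d_ff).astype(np.float32)

    # Each GPU holds one column shard: (d_model x d_ff / n_gpus).
    shards = np.split(W, n_gpus, axis=1)

    x = np.random.randn(8, d_model).astype(np.float32)  # batch of activations

    # Every GPU computes its partial output from the same input activations...
    partials = [x @ w for w in shards]

    # ...and a gather across the GPU group reassembles the full output. This
    # gather is the inter-GPU data flow the topology has to carry every layer.
    y = np.concatenate(partials, axis=1)
    assert y.shape == (8, d_ff)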
Bug detection and triaging in complex storage systems pose unique challenges that distinguish them from general-purpose or SaaS-based software. Unlike conventional code, which largely operates in straightforward user space, storage solutions must seamlessly integrate with the operating system kernel, device drivers, and underlying hardware devices. This tight coupling introduces additional complexity in logging, concurrency, and operational flow. For instance, storage systems often span hundreds of threads and processes, each writing into shared log files without conventional transactional guarantees. Such intricate interactions make it difficult for existing AI-based bug-tracking solutions, which are typically trained on general codebases, to deliver effective results.
To address these limitations, we propose a novel approach that supplements the system code with knowledge extracted from high-level integration test cases. These tests, often written in human-readable scripting languages such as Python, capture end-to-end system behavior more effectively than narrowly focused unit tests. By converting the insights from integration tests into a structured knowledge graph, our methodology provides an AI bug-triaging agent with rich contextual understanding of system interactions, inter-process communications, and hardware events. This deeper, scenario-driven perspective empowers the agent to pinpoint and diagnose storage system failures that would otherwise be hidden in the labyrinth of kernel-mode calls, user-mode processes, and low-level device drivers. Our early findings suggest that this targeted fusion of code analysis and integration-test-based knowledge significantly enhances both the speed and accuracy of bug identification in storage software, an advancement poised to transform how complex system bugs are tracked and resolved.
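As an illustration of the structure such a knowledge graph could take, here is a minimal sketch using networkx. The component names, actions, and (source, action, target) triple format are hypothetical, not the paper's actual extraction pipeline.

    # A minimal sketch, assuming a hypothetical triple format extracted
    # from a Python integration test exercising an end-to-end write path.
    import networkx as nx

    # Illustrative (source, action, target) triples from test steps.
    test_steps = [
        ("client",       "submits_io",     "block_layer"),
        ("block_layer",  "dispatches",     "nvme_driver"),
        ("nvme_driver",  "issues_command", "ssd_firmware"),
        ("ssd_firmware", "raises",         "media_error"),
    ]

    kg = nx.DiGraph()
    for src, action, dst in test_steps:
        kg.add_edge(src, dst, action=action)

    # At triage time, an agent can walk the graph from a failure symptom
    # back toward candidate root-cause components.
    path = nx.shortest_path(kg, "client", "media_error")
    print(" -> ".join(path))  # client -> block_layer -> nvme_driver -> ...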
The extreme growth in modern AI-model training datasets and the explosion of Gen-AI data output are both fueling unprecedented levels of data-storage capacity growth in datacenters. Such rapid growth in mass capacity demands evolutionary steps in foundational storage technologies to enable higher areal density, optimized data-access interface methodologies, and highly efficient power/cooling infrastructure. We will explore these evolutionary technologies and take a sneak peek at the future of mass data storage in AI datacenters.
The rapid advancement of AI is significantly increasing demands on compute, memory, and storage infrastructure. As NVMe storage evolves to meet these needs, it is experiencing a bifurcation in requirements. On one end, workloads such as model training, checkpointing, and key-value (KV) cache tiering are driving the need for line-rate-saturating SSDs with near-GPU and HPC attachment. On the other end, the rise of multi-stage inference, synthetic data generation, and post-training optimization is fueling demand for dense, high-capacity disaggregated storage solutions, effectively displacing traditional rotating media in the nearline tier of the datacenter. This paper explores the architectural considerations across both ends of this spectrum, including Gen6 performance, indirection unit (IU) selection, power monitoring for energy efficiency, liquid-cooled thermal design, and strategies for enabling high capacity through form factor and packaging choices. We demonstrate how thoughtful design decisions can unlock the full potential of storage systems in addressing the evolving challenges of AI workloads.
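To make the IU trade-off concrete, here is a back-of-the-envelope Python sketch of how a flat logical-to-physical mapping table scales with IU size. The 4-byte entry size and 128 TB capacity are illustrative assumptions, not figures from the paper.

    # A sketch of why IU selection matters for high-capacity SSDs: the FTL
    # mapping table shrinks linearly as the IU grows. Sizes are assumptions.
    def mapping_table_bytes(capacity_bytes: int, iu_bytes: int,
                            entry_bytes: int = 4) -> int:
        """DRAM needed for a flat logical-to-physical map."""
        return (capacity_bytes // iu_bytes) * entry_bytes

    TB = 10**12
    for iu in (4096, 16384, 65536):
        gib = mapping_table_bytes(128 * TB, iu) / 2**30
        print(f"128 TB drive, {iu // 1024} KiB IU -> ~{gib:.1f} GiB map")
    # 4 KiB IU -> ~116 GiB of map; 64 KiB IU -> ~7.3 GiB, at the cost of
    # read-modify-write amplification for writes smaller than the IU.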
How do we assess the performance of the AI network and storage infrastructure that is critical to the successful deployment of today's complex AI training and inferencing engines? And is it possible to do this without provisioning racks of expensive GPUs? This presentation discusses methodologies and considerations for performing such assessments. We look at different topologies, host- and network-side considerations, and metrics. The performance aspects of NICs/SmartNICs, storage offload processing, switches, and interconnects are examined. Benchmarking of AI collective communications over RoCE transport is considered, along with the overall impact on training convergence time and network utilization. Operational aspects of commercial networks include proxies, encapsulations, connection scale, and encryption; we discuss their impact on AI training and inferencing.
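As one example of assessing collectives analytically rather than on provisioned GPUs, here is a minimal sketch of the standard ring all-reduce cost model. The link rate, rank count, and latency figure are illustrative assumptions, not values from the presentation.

    # Analytical estimate of one ring all-reduce over RoCE-class links.
    def ring_allreduce_seconds(msg_bytes: float, n_ranks: int,
                               link_gbps: float, rtt_us: float = 10.0) -> float:
        """2*(N-1)/N of the data crosses each link, plus 2*(N-1) latency
        terms for the reduce-scatter and all-gather phases."""
        bw_bytes = link_gbps * 1e9 / 8
        transfer = 2 * (n_ranks - 1) / n_ranks * msg_bytes / bw_bytes
        latency = 2 * (n_ranks - 1) * rtt_us * 1e-6
        return transfer + latency

    # A 1 GiB gradient bucket over 8 ranks on 400 Gb/s links:
    t = ring_allreduce_seconds(2**30, 8, 400.0)
    print(f"~{t * 1e3:.1f} ms per all-reduce")   # ~37.7 ms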
The rate of change in the structure and capabilities of applications has never been as high as in the last year. There has been a huge shift from stockpiling data cheaply to leveraging data to create insight with GenAI and to capitalize on business leads with predictive AI. Excitement and opinions about where storage matters run rampant. Thankfully, we can "follow the data" to pinpoint whether storage performance is critical in the compute node or only in the back end, discern the relative importance of bandwidth and latency, determine whether the volume and granularity of accesses are suitable for a GPU, and establish the range of access granularities. Walking through recent developments in AI apps and their implications will lead to insights that are likely to surprise the audience.
There are new opportunities to create innovative solutions to these challenges. The architectures of NAND and their controllers may adjust to curtail ballooning power with more efficient data transfers and error checking. IOPS optimizations that will be broadly mandatory in the future may be pulled in to benefit some applications now. New hardware/software codesigns may lead to protocol changes and to new trade-offs over which computing agents and data structures are best suited to accomplish new goals. Novel software interfaces and infrastructure enable movement, access, and management of data tailored to the specific needs of each application. Come join in a fun, refreshing, provocative, and interactive session on storage implications for this new generation of AI applications!
Current SSD devices are mostly built with a 4 KiB transaction unit, or even larger for bigger drive capacities. But what if your workload has high IOPS demands at smaller granularities? We will take a deep dive into our GNN testing using NVIDIA BaM and the modifications we made to test transactions smaller than 4 KiB. We will also discuss how this workload is a good example of the need for Storage Next.
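To illustrate why sub-4 KiB transactions matter for this kind of workload, here is a minimal Python sketch of the read-amplification arithmetic for random feature gathering. The feature size and batch fan-out are illustrative assumptions, not measurements from the BaM testing.

    # Read amplification when small random reads are rounded up to 4 KiB.
    feature_bytes = 512            # one node's feature vector (assumed)
    reads_per_batch = 2 ** 20      # random feature fetches per training step

    useful = feature_bytes * reads_per_batch
    moved_4k = 4096 * reads_per_batch   # one 4 KiB read per feature
    moved_512 = 512 * reads_per_batch   # ideal 512 B reads

    print(f"4 KiB IOs: {moved_4k / useful:.0f}x read amplification")   # 8x
    print(f"512 B IOs: {moved_512 / useful:.0f}x read amplification")  # 1x
    # At fixed device bandwidth, an 8x reduction in bytes moved translates
    # directly into 8x more feature fetches per second.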