SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Stevens Creek
Wed Sep 17 | 10:35am
GPU-based computing for AI/DL and other high-performance workflows demands performance that legacy file and object storage systems cannot deliver. Typically, such use cases have relied on parallel file systems such as Lustre, which require networking and skill sets not typically available in standard enterprise data centers.
Standards-based parallel file systems such as pNFS v4.2 provide the high performance needed for such workloads, and do so with commodity hardware and standard Ethernet infrastructure. They also provide the multi-protocol file and object access not typically supported by HPC parallel file systems. pNFS v4.2 architectures used in this way are often called Hyperscale NAS, since they merge very high-throughput parallel file system performance with the standard capabilities of enterprise NAS solutions. It is this architecture that is deployed at Meta to feed 24,000 GPUs in its AI Research SuperCluster at 12.5 TB per second, on commodity hardware and standard Ethernet, powering its Llama 2 and 3 large language models (LLMs).
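As a concrete illustration (ours, not taken from the session itself): because the pNFS client ships in the stock Linux NFS client, no special software is needed on the compute nodes; parallel data layouts are negotiated automatically when the mount uses NFS v4.1 or later against a layout-capable server. The sketch below uses a hypothetical server name and export path; the mount options shown are standard Linux NFS client options.

```python
import subprocess

# Hypothetical server and export. pNFS requires nothing beyond the stock
# Linux NFS client -- layouts are negotiated automatically for vers >= 4.1.
SERVER = "hnas.example.com"
EXPORT = "/data"
MOUNTPOINT = "/mnt/data"

# vers=4.2 enables pNFS layouts plus v4.2 features such as server-side copy;
# nconnect opens multiple TCP connections per mount for extra parallelism.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.2,nconnect=8",
     f"{SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)

# Sanity check: the negotiated NFS version shows up in the mount options.
with open("/proc/self/mountinfo") as f:
    for line in f:
        if MOUNTPOINT in line:
            print(line.strip())
```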
But AI/DL data sets are often distributed across multiple incompatible storage types in one or more locations, including S3 storage at edge locations. Traditionally, pulling S3 data from the edge into such workflows has required deploying file gateways or other protocol-bridging methods.
This session will look at an architecture that enables data on S3 storage to be integrated automatically and seamlessly into a multi-platform, multi-protocol, multi-site Hyperscale NAS environment. Drawing on real-world implementations, the session will highlight how this standards-based approach lets organizations use conventional enterprise infrastructure, with data in place on existing storage of any type, to feed GPU-based AI and other high-performance workflows.
Learn how S3 data silos can be seamlessly integrated into the high-performance, multi-protocol parallel file system workflows needed to power GPU computing for AI/DL and other demanding use cases.
Understand how distributed data sources can be consolidated for high-performance use cases with data in place, without copying data into a proprietary and often siloed new repository.
Learn best practices for combining commodity hardware, standard networking, and existing multi-vendor, multi-protocol, and often distributed storage resources into an integrated, vendor-neutral environment capable of powering high-performance use cases.
Mounting S3-compatible storage via S3FS seems like an easy way to enable POSIX-like access in Kubernetes. But in real AI/ML workloads—e.g., training with PyTorch or TensorFlow—we hit major issues: crashes from incomplete writes, vanished checkpoints, inconsistent metadata, and unpredictable I/O latency.
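To make one failure mode concrete (an illustration of ours, not code from the session): a common checkpointing pattern writes to a temporary file and atomically renames it into place. On a POSIX file system the rename step is atomic; on an S3FS mount, rename is typically implemented as a server-side copy plus delete, so a crash mid-rename can leave a missing or partial checkpoint.

```python
import os

def save_checkpoint(data: bytes, path: str) -> None:
    """Write-then-rename checkpointing, safe on POSIX file systems."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # on S3FS, the upload may only complete at close()
    # Atomic on POSIX; on S3FS this becomes a non-atomic copy + delete,
    # so a crash here can lose the checkpoint entirely.
    os.replace(tmp, path)

save_checkpoint(b"model weights ...", "/mnt/s3/checkpoints/step_1000.pt")
```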
In this session, we’ll share how we overcame these challenges by designing a scalable, POSIX-compliant distributed file system that still leverages the cost-effectiveness of object storage. Instead of abandoning object storage, we rebuilt the access layer for better consistency, performance, and observability in large-scale environments.
Attendees will gain insight into architectural trade-offs, POSIX compliance in user space, Kubernetes integration via CSI and Operators, and observability benchmarks collected from real production AI training clusters.
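As a generic illustration of POSIX compliance in user space (ours, not the presenters' implementation): a FUSE daemon intercepts POSIX calls and can back them with any storage layer. The sketch below, assuming the third-party fusepy package and a hypothetical mountpoint, serves a single read-only in-memory file; a real system would route reads and writes to object storage behind its own consistency layer.

```python
import errno
import stat
from fuse import FUSE, FuseOSError, Operations  # third-party: fusepy

CONTENT = b"hello from user space\n"

class OneFileFS(Operations):
    """Read-only user-space FS exposing a single file, /hello."""

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello":
            return {"st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1, "st_size": len(CONTENT)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        return CONTENT[offset:offset + size]

if __name__ == "__main__":
    FUSE(OneFileFS(), "/mnt/demo", foreground=True)
```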
Ideal for platform engineers, MLOps practitioners, and Kubernetes architects seeking reliable, scalable storage for data-heavy workloads.
This is an intermediate session; attendees should be comfortable with object storage, file storage, and the basic concepts of the Kubernetes CSI driver.
Rubrik is a cybersecurity company protecting mission-critical data for thousands of customers across the globe, including banks, hospitals, and government agencies. SDFS is the filesystem that powers the data path and makes this possible. In this talk, we will discuss the challenges of building a masterless distributed filesystem with support for data resilience, strong data integrity, and high performance, one that can run across a wide spectrum of hardware configurations including cloud platforms. We will present the high-level architecture of our FUSE-based filesystem, how we leverage erasure coding to maintain data resilience, and the checksum schemes we use to maintain strong data integrity while preserving high performance. We will also cover the challenges of continuously monitoring and maintaining the health of the filesystem in terms of data resilience, data integrity, and load balance, and how we expand and shrink filesystem resources online. We will discuss the need for, and the challenge of, providing priority natively in the filesystem to support a variety of workloads and background operations with varying SLA requirements. Finally, we will touch on the benefits and challenges of supporting encryption, compression, and deduplication natively in the filesystem.
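As a generic illustration of one of these techniques (ours, not Rubrik's code): block-level checksums catch silent corruption on the read path, at which point an erasure-coded system can rebuild the bad block from surviving fragments rather than returning bad data. A minimal sketch using CRC32:

```python
import zlib

def write_block(data: bytes) -> tuple[bytes, int]:
    """Store a block together with its CRC32 checksum."""
    return data, zlib.crc32(data)

def read_block(data: bytes, expected_crc: int) -> bytes:
    """Verify integrity on every read; a real system would trigger a
    repair from erasure-coded fragments instead of raising."""
    if zlib.crc32(data) != expected_crc:
        raise IOError("checksum mismatch: block is corrupt, repair needed")
    return data

block, crc = write_block(b"some filesystem extent ...")
assert read_block(block, crc) == block
```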
GoogleFS introduced the architectural separation of metadata and data, but its reliance on a single active master imposed fundamental limitations on scalability, redundancy, and availability. This talk presents a modern metadata architecture, exemplified by SaunaFS, that eliminates the single-leader model by distributing metadata across multiple concurrent, multi-threaded servers. Metadata is stored in a sharded, ACID-compliant transactional database (e.g., FoundationDB), enabling horizontal scalability, fault tolerance through redundant metadata replicas, reduced memory footprint, and consistent performance under load. The result is a distributed file system architecture capable of exabyte-scale operation in a single namespace while preserving POSIX semantics and supporting workloads with billions of small files.
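To illustrate the idea (a minimal sketch of ours, not SaunaFS source, assuming FoundationDB's Python bindings and a hypothetical key layout mapping paths to directory-entry records): because the backing store is transactional, a metadata operation such as rename commits atomically even with many metadata servers operating concurrently.

```python
import fdb

fdb.api_version(710)
db = fdb.open()  # connects via the default cluster file

@fdb.transactional
def rename(tr, old_path: bytes, new_path: bytes):
    """Atomically move a directory entry; FoundationDB serializes this
    transaction against all concurrent metadata operations."""
    entry = tr[old_path]
    if not entry.present():
        raise FileNotFoundError(old_path)
    tr[new_path] = entry
    del tr[old_path]

rename(db, b"/dirent/projects/a.txt", b"/dirent/archive/a.txt")
```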
Enterprise IT infrastructures face soaring AI and analytics demands, driving the need for storage that leverages existing networks, cuts power-hungry server counts, and frees CAPEX for AI. Yet current solutions create isolated silos: proprietary, server-based systems that waste power, lack cloud connectivity, and force large teams to manage multiple silo technologies, locking data behind vendor walls and hampering AI goals. Modeled on the Open Compute Project, the Open Flash Platform (OFP) liberates high-capacity flash through an open architecture built on standard pNFS, which is included in every Linux distribution. Each OFP unit contains a DPU-based Linux instance and a network port, so it connects directly as a peer; no additional servers are required. By removing surplus hardware and proprietary software, OFP lets enterprises use dense flash efficiently, halving TCO and increasing storage density 10×. Early configurations deliver up to 48 PB in 2U and scale to 1 EB per rack, yielding a 10× reduction in rack space, power, and OPEX, and a 33% longer service life. This session explains the vision and engineering that make OFP possible, showing how an open, standards-based architecture can simplify, scale, and free enterprise data.
The performance of network file protocols is a critical factor in the efficiency of the AI and Machine Learning pipeline. This presentation provides a detailed comparative analysis of the two leading protocols, Server Message Block (SMB) and Network File System (NFS), specifically for demanding AI workloads. We evaluate the advanced capabilities of both protocols, comparing SMB3 with SMB Direct and Multichannel against NFS with RDMA and multistream TCP configurations. The industry-standard MLPerf Storage benchmark is used to simulate realistic AI data access patterns, providing a robust foundation for our comparison. The core of this research focuses on quantifying the performance differences and identifying the operational and configuration overhead associated with each technology.
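For context (our illustration, not the benchmark setup from the talk): on the NFS side, "multistream TCP" and RDMA correspond to standard Linux mount options, so the two configurations under test differ only in transport flags. A sketch of the two option sets, with a hypothetical server name:

```python
# Hypothetical NFS mount configurations for the two transports compared.
NFS_MULTISTREAM_TCP = (
    "mount -t nfs -o vers=4.2,nconnect=16 "  # up to 16 TCP connections
    "nfs.example.com:/datasets /mnt/datasets"
)
NFS_RDMA = (
    "mount -t nfs -o vers=4.2,proto=rdma,port=20049 "  # NFS/RDMA port
    "nfs.example.com:/datasets /mnt/datasets"
)
print(NFS_MULTISTREAM_TCP)
print(NFS_RDMA)
```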
The Samba file server is evolving beyond traditional TCP-based transport. This talk introduces the latest advancements in Samba's networking stack, including full support for SMB over QUIC, offering secure, firewall-friendly file sharing using modern internet protocols. We’ll also explore the ongoing development of SMB over SMB-Direct (RDMA), aimed at delivering low-latency, high-throughput file access for data center and high-performance environments. Join us for a deep dive into these transport innovations, their architecture, current status, and what's next for Samba’s high-performance networking roadmap.