SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Stevens Creek
Wed Sep 17 | 11:35am
For over nine years, Microsoft Azure has provided fully managed file shares in the cloud.
Azure Files provides SMB3, NFSv4.1, and REST-based access to file shares.
This talk will present the evolution of the Azure Files architecture to serve applications with higher performance and scale needs. Azure Files is built on the Azure Storage architecture under the hood, not on a conventional file system, let alone NTFS. We will focus on the features that provide availability and reliability guarantees despite the constant din of hardware underneath suffering failures and needing replacement. By leveraging the Continuous Availability features of SMB3, users can access always-available, highly reliable file shares. We will also take a deep dive into our approach to achieving security at cloud scale.
This talk will highlight recently added features and the engineering challenges of making significant changes and additions to data schemas, and to the code that manipulates them, while still allowing access to those many petabytes of data and without breaking the semantics that applications depend on. Additionally, this talk will discuss the significance of receiving an acknowledgement for a write request from the Azure file server, how we rebuild state to continue serving data when a client reconnects, and customer-facing challenges and how we overcame them.
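To make the acknowledgement and reconnect discussion concrete, here is a minimal sketch with entirely hypothetical names (`HandleState`, `DurableStore`, `FileServer`) and no relation to Azure's actual implementation: a file server durably records handle state before acknowledging operations, so a replacement node can rebuild open handles when a client reconnects.

```python
import json
import os
import tempfile
import uuid
from dataclasses import dataclass, asdict

# Hypothetical sketch: persist what is normally volatile handle state so a
# replacement server node can rebuild it after failover. Not Azure's code.

@dataclass
class HandleState:
    handle_id: str
    path: str
    lease_key: str      # identifies the client's lease across reconnects
    offset: int         # last acknowledged write offset

class DurableStore:
    """Stand-in for a replicated, durable table (one JSON file per handle)."""
    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def commit(self, state: HandleState) -> None:
        # Write-then-rename so a record is never observed half-written.
        path = os.path.join(self.root, state.handle_id + ".json")
        fd, tmp = tempfile.mkstemp(dir=self.root)
        with os.fdopen(fd, "w") as f:
            json.dump(asdict(state), f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)

    def load_all(self) -> dict[str, HandleState]:
        states = {}
        for name in os.listdir(self.root):
            if not name.endswith(".json"):
                continue  # skip leftover temp files from a crash
            with open(os.path.join(self.root, name)) as f:
                s = HandleState(**json.load(f))
                states[s.handle_id] = s
        return states

class FileServer:
    def __init__(self, store: DurableStore):
        self.store = store
        # On start (or failover), rebuild handle state before serving clients.
        self.handles = store.load_all()

    def open(self, path: str, lease_key: str) -> str:
        state = HandleState(uuid.uuid4().hex, path, lease_key, 0)
        self.store.commit(state)   # durable before the open succeeds
        self.handles[state.handle_id] = state
        return state.handle_id

    def write(self, handle_id: str, data: bytes) -> None:
        state = self.handles[handle_id]
        state.offset += len(data)  # the data itself is persisted elsewhere
        self.store.commit(state)   # only after this commit is the write acked
```

A real service would replicate this state across nodes and fence stale owners; the essential point is only the ordering: the acknowledgement follows the durable commit, which is what allows an SMB3 client with a persistent handle to reconnect and resume.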
Learn about the evolution of fully managed file shares built on top of something other than a conventional file system, while still serving live customer traffic.
Gain an understanding of the challenges in durably committing what is typically seen as volatile handle state, ensuring it remains both highly available and performant.
Learn how Azure Files is iterating to improve scale, performance, and security.
See, through the eyes of the customer, what the expectations are for the public cloud of the 21st century.
Mounting S3-compatible storage via S3FS seems like an easy way to enable POSIX-like access in Kubernetes. But in real AI/ML workloads—e.g., training with PyTorch or TensorFlow—we hit major issues: crashes from incomplete writes, vanished checkpoints, inconsistent metadata, and unpredictable I/O latency.
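As a concrete (hypothetical) illustration of the failure mode, the standard atomic-save pattern below assumes POSIX `rename()` and `fsync()` semantics; an S3FS-style mount only approximates both, typically implementing rename as a server-side copy plus delete, so a crash mid-save can leave a checkpoint missing or truncated:

```python
import os
import torch  # assumes PyTorch; any serializer exhibits the same issue

def save_checkpoint(model, path: str) -> None:
    """The usual POSIX-safe pattern: write a temp file, fsync, atomic rename.

    On a local or truly POSIX file system, readers see either the old or the
    new checkpoint, never a partial one. On an S3FS-style mount, the rename
    is typically a copy+delete and the flush a multipart upload, so a crash
    in between can leave the checkpoint missing or incomplete.
    """
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        torch.save(model.state_dict(), f)
        f.flush()
        os.fsync(f.fileno())   # may only reach a local cache under S3FS
    os.replace(tmp, path)      # atomic on POSIX; copy+delete under S3FS
```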
In this session, we’ll share how we overcame these challenges by designing a scalable, POSIX-compliant distributed file system that still leverages the cost-effectiveness of object storage. Instead of abandoning object storage, we rebuilt the access layer for better consistency, performance, and observability in large-scale environments.
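One way to picture such a rebuilt access layer is the sketch below, written under our own simplifying assumptions rather than taken from the presenters' design: file data is striped into immutable, content-addressed objects, and a file becomes visible only when a small manifest is committed last, so readers never observe a partially written file. The `ObjectStore` here is a stub for any S3-compatible store.

```python
import hashlib
import json

CHUNK = 8 * 1024 * 1024  # 8 MiB stripes (an arbitrary choice)

class ObjectStore:
    """Stub for any S3-compatible store: PUTs of whole objects are atomic."""
    def __init__(self):
        self._objects: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

def write_file(store: ObjectStore, name: str, data: bytes) -> None:
    # 1. Upload immutable, content-addressed chunks.
    keys = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = "chunk/" + hashlib.sha256(chunk).hexdigest()
        store.put(key, chunk)
        keys.append(key)
    # 2. Commit the manifest last: one atomic PUT makes the whole file
    #    visible. Readers that miss the manifest see the old version.
    manifest = {"size": len(data), "chunks": keys}
    store.put("manifest/" + name, json.dumps(manifest).encode())

def read_file(store: ObjectStore, name: str) -> bytes:
    manifest = json.loads(store.get("manifest/" + name))
    return b"".join(store.get(k) for k in manifest["chunks"])
```

Because every chunk PUT is idempotent and the manifest PUT is the single commit point, a crashed writer leaves at worst some orphaned chunks to garbage-collect, never a half-visible file.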
Attendees will gain insight into architectural trade-offs, POSIX compliance in user space, Kubernetes integration via CSI and Operators, and observability benchmarks collected from real production AI training clusters.
Ideal for platform engineers, MLOps practitioners, and Kubernetes architects seeking reliable, scalable storage for data-heavy workloads.
This is an intermediate session; attendees should be comfortable with object storage, file storage, and the basic concepts of the Kubernetes CSI driver.
Rubrik is a cybersecurity company protecting mission-critical data for thousands of customers across the globe, including banks, hospitals, and government agencies. SDFS is the filesystem that powers the data path and makes this possible.

In this talk, we will discuss the challenges of building a masterless distributed filesystem with support for data resilience, strong data integrity, and high performance, one that can run across a wide spectrum of hardware configurations, including cloud platforms. We will discuss the high-level architecture of our FUSE-based filesystem, how we leverage erasure coding to maintain data resilience, and the checksum schemes we use to maintain strong data integrity at high performance. We will also cover the challenges of continuously monitoring and maintaining the health of the filesystem in terms of data resilience, data integrity, and load balance, and how we expand and shrink the filesystem's resources online. We will then discuss the need for, and the challenge of, providing priority natively in the filesystem to support a variety of workloads and background operations with varying SLA requirements. Finally, we will touch on the benefits and challenges of supporting encryption, compression, and deduplication natively in the filesystem.
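To ground the resilience and integrity themes, here is a deliberately simplified sketch, not Rubrik's SDFS code: it uses a single XOR parity block (RAID-5-style coding, where production systems typically use Reed-Solomon codes that tolerate multiple failures) and CRC-32 standing in for a stronger checksum.

```python
import zlib

def encode_stripe(blocks: list[bytes]) -> tuple[bytes, list[int]]:
    """Return (parity_block, per-block checksums) for equal-sized blocks.

    XOR parity tolerates exactly one lost block per stripe; real systems
    use Reed-Solomon codes to survive several, but the bookkeeping is the
    same: redundancy for resilience, checksums for integrity.
    """
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    checksums = [zlib.crc32(b) for b in blocks]
    return bytes(parity), checksums

def read_block(blocks: list[bytes | None], parity: bytes,
               checksums: list[int], idx: int) -> bytes:
    """Serve block idx, verifying integrity and reconstructing if missing."""
    block = blocks[idx]
    if block is not None and zlib.crc32(block) == checksums[idx]:
        return block  # fast path: present and checksum-verified
    # Degraded path: rebuild the block by XOR-ing parity with the survivors.
    rebuilt = bytearray(parity)
    for j, other in enumerate(blocks):
        if j != idx and other is not None:
            for i, b in enumerate(other):
                rebuilt[i] ^= b
    if zlib.crc32(bytes(rebuilt)) != checksums[idx]:
        raise IOError("unrecoverable: more failures than parity can cover")
    return bytes(rebuilt)
```

The degraded-read path is where resilience and integrity meet: the stored checksum both detects a corrupt surviving block and verifies that reconstruction produced the right data.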
GoogleFS introduced the architectural separation of metadata and data, but its reliance on a single active master imposed fundamental limitations on scalability, redundancy, and availability. This talk presents a modern metadata architecture, exemplified by SaunaFS, that eliminates the single-leader model by distributing metadata across multiple concurrent, multi-threaded servers. Metadata is stored in a sharded, ACID-compliant transactional database (e.g., FoundationDB), enabling horizontal scalability, fault tolerance through redundant metadata replicas, reduced memory footprint, and consistent performance under load. The result is a distributed file system architecture capable of exabyte-scale operation in a single namespace while preserving POSIX semantics and supporting workloads with billions of small files.
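A minimal sketch of the idea, assuming FoundationDB's public Python bindings; the key layout below is invented for illustration and is not SaunaFS's actual schema. Directory entries become keys in the transactional store, so a rename is one ACID transaction even when the affected keys live on different shards, and any number of stateless metadata servers can execute it concurrently.

```python
import fdb

fdb.api_version(710)
db = fdb.open()  # assumes a running FoundationDB cluster

def dentry_key(parent_inode: int, name: str) -> bytes:
    # Invented layout: ("dentry", parent, name) -> child inode number.
    return fdb.tuple.pack(("dentry", parent_inode, name))

@fdb.transactional
def rename(tr, src_parent: int, src_name: str,
           dst_parent: int, dst_name: str) -> None:
    """Atomically move a directory entry between parents.

    FoundationDB provides serializable ACID transactions across shards,
    so no single metadata leader is needed: the database itself resolves
    conflicts between concurrent metadata servers.
    """
    src = dentry_key(src_parent, src_name)
    inode = tr[src]
    if not inode.present():
        raise FileNotFoundError(src_name)
    del tr[src]
    tr[dentry_key(dst_parent, dst_name)] = bytes(inode)

# e.g. rename(db, 1, "checkpoint.tmp", 1, "checkpoint")
```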
Enterprise IT infrastructures face soaring AI and analytics demands, driving the need for storage that leverages existing networks, cuts power-hungry server counts, and frees CAPEX for AI. Yet current solutions create isolated silos: proprietary, server-based systems that waste power, lack cloud connectivity, and force large teams to manage multiple siloed technologies, locking data behind vendor walls and hampering AI goals. Modeled on the Open Compute Project, the Open Flash Platform (OFP) liberates high-capacity flash through an open architecture built on standard pNFS, which is included in every Linux distribution. Each OFP unit contains a DPU-based Linux instance and a network port, so it connects directly as a peer; no additional servers are required. By removing surplus hardware and proprietary software, OFP lets enterprises use dense flash efficiently, halving TCO and increasing storage density 10×. Early configurations deliver up to 48 PB in 2U and scale to 1 EB per rack, yielding a 10× reduction in rack space, power, and OPEX, and a 33% longer service life. This session explains the vision and engineering that make OFP possible, showing how an open, standards-based architecture can simplify, scale, and free enterprise data.
The performance of network file protocols is a critical factor in the efficiency of the AI and Machine Learning pipeline. This presentation provides a detailed comparative analysis of the two leading protocols, Server Message Block (SMB) and Network File System (NFS), specifically for demanding AI workloads. We evaluate the advanced capabilities of both protocols, comparing SMB3 with SMB Direct and Multichannel against NFS with RDMA and multistream TCP configurations. The industry-standard MLPerf Storage benchmark is used to simulate realistic AI data access patterns, providing a robust foundation for our comparison. The core of this research focuses on quantifying the performance differences and identifying the operational and configuration overhead associated with each technology.
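The talk's numbers come from MLPerf Storage, which replays realistic training access patterns (it is built on the DLIO benchmark). As a far simpler stand-in for readers who want to sanity-check a mount before a full benchmark run, the hedged sketch below measures raw sequential read throughput from a mounted SMB or NFS path; it is not MLPerf and models nothing about training I/O.

```python
import os
import sys
import time

def read_throughput(mount_path: str, block_size: int = 1 << 20) -> float:
    """Sequentially read every regular file under mount_path; return MiB/s.

    A crude sanity check for a mounted share, not a substitute for MLPerf
    Storage, which drives realistic AI training access patterns.
    """
    total = 0
    start = time.monotonic()
    for dirpath, _, filenames in os.walk(mount_path):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                while chunk := f.read(block_size):
                    total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / (1 << 20)) / elapsed

if __name__ == "__main__":
    # e.g. python read_bench.py /mnt/share  (the mount point is yours)
    print(f"{read_throughput(sys.argv[1]):.1f} MiB/s")
```

Drop the client-side page cache (or read a dataset larger than RAM) between runs; otherwise the figure measures memory bandwidth rather than the protocol and network.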
The Samba file server is evolving beyond traditional TCP-based transport. This talk introduces the latest advancements in Samba's networking stack, including full support for SMB over QUIC, offering secure, firewall-friendly file sharing using modern internet protocols. We’ll also explore the ongoing development of SMB over SMB-Direct (RDMA), aimed at delivering low-latency, high-throughput file access for data center and high-performance environments. Join us for a deep dive into these transport innovations, their architecture, current status, and what's next for Samba’s high-performance networking roadmap.