SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
- Understand the limitations of single-leader metadata architectures in distributed file systems, including bottlenecks in scalability, availability, and fault tolerance.
- Learn how to design a distributed metadata service using concurrent, multi-threaded servers to eliminate the need for a central coordinator.
- Explore the role of sharded, ACID-compliant transactional databases (e.g., FoundationDB) in enabling scalable, consistent, and highly available metadata storage.
- Discover architectural patterns that support exabyte-scale namespaces with billions of files while preserving POSIX semantics.
- Understand the challenges and architectural transitions in moving from a single metadata server (MDS) leader to a fully distributed metadata service.
Rubrik is a cybersecurity company protecting mission-critical data for thousands of customers across the globe, including banks, hospitals, and government agencies. SDFS is the filesystem that powers the data path and makes this possible. In this talk, we will discuss the challenges of building a masterless distributed filesystem with support for data resilience, strong data integrity, and high performance, one that can run across a wide spectrum of hardware configurations, including cloud platforms. We will discuss the high-level architecture of our FUSE-based filesystem, how we leverage erasure coding to maintain data resilience, and how we use checksum schemes to maintain strong data integrity without sacrificing performance. We will also cover the challenges of continuously monitoring and maintaining the health of the filesystem in terms of data resilience, data integrity, and load balance, and we will go over how we expand and shrink filesystem resources online. We will also discuss the need for, and the challenge of, providing priority natively in our filesystem to support a variety of workloads and background operations with varying SLA requirements. Finally, we will touch on the benefits and challenges of supporting encryption, compression, and deduplication natively in the filesystem.
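The pairing of erasure coding and per-chunk checksums described above can be illustrated with a minimal sketch. This toy uses single-parity XOR in place of the production-grade erasure codes (e.g., Reed-Solomon) a real filesystem would use, and CRC32 as a stand-in checksum; the function names are hypothetical, not SDFS APIs.

```python
import zlib

def encode_parity(data_chunks):
    """XOR parity across equal-length data chunks (single-parity sketch).

    Production systems typically use Reed-Solomon codes, which tolerate
    multiple simultaneous chunk losses; XOR parity tolerates exactly one."""
    parity = bytes(len(data_chunks[0]))
    for chunk in data_chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return parity

def checksum(chunk):
    """Per-chunk checksum recorded at write time, verified on every read."""
    return zlib.crc32(chunk)

# Write path: persist data chunks, the parity chunk, and per-chunk checksums.
chunks = [b"aaaa", b"bbbb", b"cccc"]
parity = encode_parity(chunks)
sums = [checksum(c) for c in chunks]

# Read path after losing one chunk: rebuild it by XOR-ing the survivors
# with the parity, then confirm integrity against the stored checksum.
lost = 1
survivors = [c for i, c in enumerate(chunks) if i != lost]
rebuilt = encode_parity(survivors + [parity])
assert rebuilt == chunks[lost]
assert checksum(rebuilt) == sums[lost]
```

The checksum check on the read path is what distinguishes "the bytes came back" from "the bytes came back correct": reconstruction from parity and verification against the write-time checksum are independent defenses.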
GoogleFS introduced the architectural separation of metadata and data, but its reliance on a single active master imposed fundamental limitations on scalability, redundancy, and availability. This talk presents a modern metadata architecture, exemplified by SaunaFS, that eliminates the single-leader model by distributing metadata across multiple concurrent, multi-threaded servers. Metadata is stored in a sharded, ACID-compliant transactional database (e.g., FoundationDB), enabling horizontal scalability, fault tolerance through redundant metadata replicas, reduced memory footprint, and consistent performance under load. The result is a distributed file system architecture capable of exabyte-scale operation in a single namespace while preserving POSIX semantics and supporting workloads with billions of small files.
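Why a sharded, ACID-compliant store matters for POSIX semantics can be seen with a toy model. The sketch below is an in-memory, single-process stand-in (a global lock models the transactional guarantee; FoundationDB actually uses distributed optimistic concurrency), and all class and function names are hypothetical illustrations, not the SaunaFS or FoundationDB API. The point it shows: a rename whose source and destination keys live on different shards still commits atomically, so readers never observe a half-moved entry.

```python
import threading

class ShardedMetadataStore:
    """Toy sharded key-value store with transactional updates.

    A single global lock stands in for the ACID transactions a real
    sharded database provides across shards."""
    def __init__(self, num_shards=4):
        self.shards = [dict() for _ in range(num_shards)]
        self._lock = threading.Lock()

    def _shard(self, key):
        return self.shards[hash(key) % len(self.shards)]

    def transact(self, fn):
        # Everything fn does commits atomically, or not at all.
        with self._lock:
            return fn(self)

    def get(self, key):
        return self._shard(key).get(key)

    def put(self, key, value):
        self._shard(key)[key] = value

    def delete(self, key):
        self._shard(key).pop(key, None)

def rename(store, src, dst):
    """POSIX rename as one transaction across (possibly) two shards."""
    def txn(s):
        inode = s.get(src)
        if inode is None:
            raise FileNotFoundError(src)
        s.put(dst, inode)
        s.delete(src)
    store.transact(txn)

store = ShardedMetadataStore()
store.put("/a/file", {"inode": 42})
rename(store, "/a/file", "/b/file")
assert store.get("/b/file") == {"inode": 42}
assert store.get("/a/file") is None
```

Because every metadata server issues such transactions against the same sharded store, no single leader has to serialize the namespace, and metadata capacity grows by adding shards rather than by growing one master's memory.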
Enterprise IT infrastructures face soaring AI and analytics demands, driving the need for storage that leverages existing networks, cuts power-hungry server counts, and frees CAPEX for AI. Yet current solutions create isolated silos: proprietary, server-based systems that waste power, lack cloud connectivity, and force large teams to manage multiple silo technologies, locking data behind vendor walls and hampering AI goals. Modeled on the Open Compute Project, the Open Flash Platform (OFP) liberates high-capacity flash through an open architecture built on standard pNFS, which is included in every Linux distribution. Each OFP unit contains a DPU-based Linux instance and a network port, so it connects directly as a peer, with no additional servers. By removing surplus hardware and proprietary software, OFP lets enterprises use dense flash efficiently, halving TCO and increasing storage density 10x. Early configurations deliver up to 48 PB in 2U and scale to 1 EB per rack, yielding a 10x reduction in rack space, power, and OPEX and a 33% longer service life. This session explains the vision and engineering that make OFP possible, showing how an open, standards-based architecture can simplify, scale, and free enterprise data.
The performance of network file protocols is a critical factor in the efficiency of the AI and Machine Learning pipeline. This presentation provides a detailed comparative analysis of the two leading protocols, Server Message Block (SMB) and Network File System (NFS), specifically for demanding AI workloads. We evaluate the advanced capabilities of both protocols, comparing SMB3 with SMB Direct and Multichannel against NFS with RDMA and multistream TCP configurations. The industry-standard MLPerf Storage benchmark is used to simulate realistic AI data access patterns, providing a robust foundation for our comparison. The core of this research focuses on quantifying the performance differences and identifying the operational and configuration overhead associated with each technology.
The Samba file server is evolving beyond traditional TCP-based transport. This talk introduces the latest advancements in Samba's networking stack, including full support for SMB over QUIC, offering secure, firewall-friendly file sharing using modern internet protocols. We'll also explore the ongoing development of SMB over RDMA (SMB Direct), aimed at delivering low-latency, high-throughput file access for data center and high-performance environments. Join us for a deep dive into these transport innovations, their architecture, current status, and what's next for Samba's high-performance networking roadmap.
Samba is evolving to meet the demands of modern enterprise IT. The latest advancements bring critical SMB3 capabilities that boost scalability, reliability, and cloud readiness. With features like SMB over QUIC, Transparent Failover, and SMB3 Directory Leases now arriving, Samba is positioning itself as a robust solution for secure, high-performance file services across data centers and hybrid cloud environments. Learn how these enhancements can future-proof your infrastructure, without vendor lock-in.