SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Lafayette | Wed Sep 17 | 10:35am
Linux users often ask: Why is my application slow? What caused it to crash? Was it a client-side issue, and if so, where? At SambaXP 2025, the Azure Files team introduced a new set of eBPF-based tools that help answer these questions by improving observability and debugging of Linux SMB client issues. We also shared a conceptual overview of the Always-On Diagnostics (AOD) project, a daemon that continuously monitors for anomalies and automatically captures relevant logs when issues occur.
Since then, we’ve continued developing AOD and expanded the eBPF tooling to support the Linux NFS client. This talk will demonstrate how these tools capture valuable diagnostic data in real-world anomalous scenarios. After a technical introduction to the standalone eBPF scripts, we’ll walk through the design of the AOD daemon, explain how the eBPF tools integrate into it, and present the overall architecture. The session will conclude with a live demonstration of the diagnostics workflow in action, showing how AOD detects anomalies, collects logs, and helps pinpoint and contextualize client-side issues as they happen. Prior exposure to eBPF concepts is helpful but not required: the session begins with foundational explanations before diving deeper, making it accessible to all attendees.
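To make the detect-then-capture pattern concrete, here is a hypothetical skeleton of such a daemon loop in Python. Everything in it (the threshold, the stub metric source, the output paths) is an illustrative assumption rather than the actual AOD implementation; it simply shows the shape of the workflow: poll a health signal, and when it crosses a threshold, snapshot standard Linux diagnostic sources while the anomaly is still live.

```python
#!/usr/bin/env python3
# Hypothetical skeleton of an always-on diagnostics loop -- NOT the actual
# AOD daemon, just the detect-then-capture pattern it is built around.
import shutil
import subprocess
import time

THRESHOLD = 5.0      # errors/sec considered anomalous (illustrative value)
POLL_INTERVAL = 10   # seconds between health samples

def smb_error_rate() -> float:
    """Stub metric source; the real daemon feeds this from eBPF probes."""
    return 0.0

while True:
    if smb_error_rate() > THRESHOLD:
        stamp = time.strftime("%Y%m%d-%H%M%S")
        # Snapshot context while the anomaly is live: the kernel log, plus
        # the SMB client's stats file (present when the cifs module is
        # loaded and the kernel is built with CONFIG_CIFS_STATS).
        with open(f"/var/tmp/aod-{stamp}.dmesg", "wb") as out:
            out.write(subprocess.run(["dmesg", "-T"],
                                     capture_output=True).stdout)
        shutil.copy("/proc/fs/cifs/Stats", f"/var/tmp/aod-{stamp}.stats")
    time.sleep(POLL_INTERVAL)
```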
Learn how to use the AOD daemon to capture hard-to-reproduce anomalies on Linux SMB and NFS clients, improving visibility into client-side behavior. Explore the eBPF scripts that power AOD, and understand how to extend or adapt them for custom observability needs. Understand the architecture and technical details behind these tools. Provide feedback and insights to help shape their future direction for the broader open-source community and to ensure they meet the real-world needs of those working with Linux SMB and NFS workloads.
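As a taste of what such scripts look like, the following is a minimal BCC (eBPF) sketch in Python that histograms the latency of SMB client round trips. It is a stand-in illustration, not one of the tools presented in the talk, and it assumes a kernel whose SMB client (fs/smb/client) exposes cifs_send_recv as a probe-able symbol; it requires root and the bcc package.

```python
#!/usr/bin/env python3
# Minimal eBPF latency probe for the Linux SMB client, written with BCC.
# Illustrative only: assumes cifs_send_recv is a kprobe-able symbol.
import time
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(start, u64, u64);   // thread id -> entry timestamp (ns)
BPF_HISTOGRAM(lat_us);       // log2 histogram of round-trip latencies

int on_entry(struct pt_regs *ctx) {
    u64 tid = bpf_get_current_pid_tgid();
    u64 ts = bpf_ktime_get_ns();
    start.update(&tid, &ts);
    return 0;
}

int on_return(struct pt_regs *ctx) {
    u64 tid = bpf_get_current_pid_tgid();
    u64 *tsp = start.lookup(&tid);
    if (tsp == 0)
        return 0;            // missed the entry event
    lat_us.increment(bpf_log2l((bpf_ktime_get_ns() - *tsp) / 1000));
    start.delete(&tid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="cifs_send_recv", fn_name="on_entry")
b.attach_kretprobe(event="cifs_send_recv", fn_name="on_return")

print("Tracing SMB client round trips... hit Ctrl-C to print the histogram")
try:
    time.sleep(2**31)
except KeyboardInterrupt:
    pass
b["lat_us"].print_log2_hist("usecs")
```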
Rubrik is a cybersecurity company protecting mission-critical data for thousands of customers across the globe, including banks, hospitals, and government agencies. SDFS is the filesystem that powers the data path and makes this possible. In this talk, we will discuss the challenges of building a masterless distributed filesystem that provides data resilience, strong data integrity, and high performance, and that runs across a wide spectrum of hardware configurations, including cloud platforms. We will discuss the high-level architecture of our FUSE-based filesystem, how we leverage erasure coding to maintain data resilience, and how our checksum schemes maintain strong data integrity without sacrificing performance. We will also cover the challenges of continuously monitoring and maintaining the health of the filesystem in terms of data resilience, data integrity, and load balance. Further, we will go over how we expand and shrink the filesystem's resources online. We will also discuss the need for, and the challenge of, providing priority natively in our filesystem to support a variety of workloads and background operations with varying SLA requirements. Finally, we will touch on the benefits and challenges of supporting encryption, compression, and deduplication natively in the filesystem.
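To illustrate the kind of integrity checking such a filesystem performs, here is a toy Python sketch of per-block checksumming. The block size and record framing are invented for the example (the abstract does not describe SDFS's actual on-disk format or checksum scheme); it simply shows the idea: every block carries a checksum written at write time and verified at read time, so silent corruption is caught before data is returned.

```python
#!/usr/bin/env python3
# Toy sketch of per-block checksumming for end-to-end data integrity.
# Hypothetical format, not SDFS's: each record is an 8-byte header
# (payload length + CRC32) followed by the payload block.
import os
import struct
import zlib

BLOCK_SIZE = 64 * 1024         # payload bytes per block (assumption)
HEADER = struct.Struct("<II")  # (length, CRC32), little-endian

def write_blocks(path: str, data: bytes) -> None:
    """Store data as checksummed blocks so corruption is detectable."""
    with open(path, "wb") as f:
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            f.write(HEADER.pack(len(block), zlib.crc32(block)))
            f.write(block)

def read_blocks(path: str) -> bytes:
    """Verify every block's checksum; raise instead of returning bad data."""
    out = []
    with open(path, "rb") as f:
        while hdr := f.read(HEADER.size):
            length, stored = HEADER.unpack(hdr)
            block = f.read(length)
            if zlib.crc32(block) != stored:
                raise IOError(f"checksum mismatch at block {len(out)}")
            out.append(block)
    return b"".join(out)

if __name__ == "__main__":
    payload = os.urandom(200_000)
    write_blocks("/tmp/sdfs_demo.bin", payload)
    assert read_blocks("/tmp/sdfs_demo.bin") == payload
    print("all blocks verified")
```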
GoogleFS introduced the architectural separation of metadata and data, but its reliance on a single active master imposed fundamental limitations on scalability, redundancy, and availability. This talk presents a modern metadata architecture, exemplified by SaunaFS, that eliminates the single-leader model by distributing metadata across multiple concurrent, multi-threaded servers. Metadata is stored in a sharded, ACID-compliant transactional database (e.g., FoundationDB), enabling horizontal scalability, fault tolerance through redundant metadata replicas, reduced memory footprint, and consistent performance under load. The result is a distributed file system architecture capable of exabyte-scale operation in a single namespace while preserving POSIX semantics and supporting workloads with billions of small files.
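A hypothetical sketch of what this looks like in practice, using the FoundationDB Python bindings: directory entries and inode attributes become ordered key-value pairs manipulated inside ACID transactions, so a create is atomic and a directory listing is a single range scan. The key layout and attribute encoding below are invented for illustration and are not SaunaFS's actual schema.

```python
#!/usr/bin/env python3
# Hypothetical filesystem-metadata layout on a transactional KV store,
# in the spirit of the architecture above -- NOT SaunaFS's actual schema.
# Requires the FoundationDB client (pip install foundationdb).
import json
import fdb

fdb.api_version(710)
db = fdb.open()   # assumes a reachable cluster via the default fdb.cluster

def dentry_key(parent_ino: int, name: str) -> bytes:
    # Entries sort by (parent inode, name), so one directory occupies a
    # contiguous key range -- readdir becomes a single range scan.
    return fdb.tuple.pack(("dentry", parent_ino, name))

@fdb.transactional
def create_file(tr, parent_ino: int, name: str, ino: int, attrs: dict):
    """Insert a dentry and its inode attributes in one ACID transaction."""
    key = dentry_key(parent_ino, name)
    if tr[key].present():
        raise FileExistsError(name)
    tr[key] = fdb.tuple.pack((ino,))
    tr[fdb.tuple.pack(("inode", ino))] = json.dumps(attrs).encode()

@fdb.transactional
def readdir(tr, parent_ino: int):
    """List a directory with one range read over its key prefix."""
    prefix = fdb.tuple.pack(("dentry", parent_ino))
    return [(fdb.tuple.unpack(kv.key)[2], fdb.tuple.unpack(kv.value)[0])
            for kv in tr.get_range_startswith(prefix)]

create_file(db, 1, "hello.txt", 42, {"mode": 0o644, "size": 0})
print(readdir(db, 1))
```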
Enterprise IT infrastructures face soaring AI and analytics demands, driving the need for storage that leverages existing networks, cuts power-hungry server counts, and frees CAPEX for AI. Yet current solutions create isolated silos: proprietary, server-based systems that waste power, lack cloud connectivity, and force large teams to manage multiple silo technologies, locking data behind vendor walls and hampering AI goals. Modeled on the Open Compute Project, the Open Flash Platform (OFP) liberates high-capacity flash through an open architecture built on standard pNFS, which is included in every Linux distribution. Each OFP unit contains a DPU-based Linux instance and a network port, so it connects directly as a peer; no additional servers are required. By removing surplus hardware and proprietary software, OFP lets enterprises use dense flash efficiently, halving TCO and increasing storage density 10×. Early configurations deliver up to 48 PB in 2U and scale to 1 EB per rack, yielding a 10× reduction in rack space, power, and OPEX and a 33% longer service life. This session explains the vision and engineering that make OFP possible, showing how an open, standards-based architecture can simplify, scale, and free enterprise data.
The performance of network file protocols is a critical factor in the efficiency of the AI and Machine Learning pipeline. This presentation provides a detailed comparative analysis of the two leading protocols, Server Message Block (SMB) and Network File System (NFS), specifically for demanding AI workloads. We evaluate the advanced capabilities of both protocols, comparing SMB3 with SMB Direct and Multichannel against NFS with RDMA and multistream TCP configurations. The industry-standard MLPerf Storage benchmark is used to simulate realistic AI data access patterns, providing a robust foundation for our comparison. The core of this research focuses on quantifying the performance differences and identifying the operational and configuration overhead associated with each technology.
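As a rough illustration of what "realistic AI data access patterns" means at the filesystem level, the sketch below issues large random reads across a directory of training files on any mounted SMB or NFS share and reports throughput. It is a toy stand-in for discussion only; the results in this talk come from the MLPerf Storage benchmark, not from this script, and the mount point and sizes are assumptions.

```python
#!/usr/bin/env python3
# Toy read-throughput probe for a mounted SMB or NFS share. Mimics one
# aspect of AI training I/O: large random-offset reads across many files,
# as happens when a dataset is shuffled every epoch.
import os
import random
import time

DATA_DIR = "/mnt/share/dataset"   # hypothetical mount point
SAMPLE_BYTES = 4 * 1024 * 1024    # read size standing in for one sample
DURATION_S = 30

files = [p for p in (os.path.join(DATA_DIR, f) for f in os.listdir(DATA_DIR))
         if os.path.isfile(p)]
assert files, "no files found under DATA_DIR"

total = 0
deadline = time.monotonic() + DURATION_S
while time.monotonic() < deadline:
    path = random.choice(files)
    size = os.path.getsize(path)
    if size == 0:
        continue
    offset = random.randrange(max(size - SAMPLE_BYTES, 0) + 1)
    with open(path, "rb", buffering=0) as f:   # unbuffered, hit the protocol
        f.seek(offset)
        total += len(f.read(SAMPLE_BYTES))

print(f"read {total / 1e9:.2f} GB in {DURATION_S}s "
      f"({total / DURATION_S / 1e6:.0f} MB/s)")
```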
The Samba file server is evolving beyond traditional TCP-based transport. This talk introduces the latest advancements in Samba's networking stack, including full support for SMB over QUIC, offering secure, firewall-friendly file sharing using modern internet protocols. We’ll also explore the ongoing development of SMB Direct (SMB over RDMA), aimed at delivering low-latency, high-throughput file access for data center and high-performance environments. Join us for a deep dive into these transport innovations, their architecture, current status, and what's next for Samba’s high-performance networking roadmap.
Samba is evolving to meet the demands of modern enterprise IT. The latest advancements bring critical SMB3 capabilities that boost scalability, reliability, and cloud readiness. With features like SMB over QUIC, Transparent Failover, and SMB3 Directory Leases now arriving, Samba is positioning itself as a robust solution for secure, high-performance file services across data centers and hybrid cloud environments. Learn how these enhancements can future-proof your infrastructure without vendor lock-in.