Famfs: Get Ready for Big Pools of Disaggregated Shared Memory

CXL enables disaggregated memory, both for composable capacity-on-demand and for shared memory; that is, memory that is not private to one server. Adding system-ram capacity should largely “just work,” but shared memory is more complex. The Fabric-Attached Memory File System (famfs) exposes disaggregated shared memory as memory-mappable files that map directly onto the shared memory. Famfs is open-source software that is on track to be merged into the upstream Linux kernel in the coming months.
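
To make the consumption model concrete, here is a minimal sketch using plain POSIX calls, assuming a famfs instance is already mounted at /mnt/famfs and that shared.dat was created there beforehand (both names are illustrative):

```c
/* Minimal sketch of consuming a famfs file as directly mapped shared memory.
 * Assumes a famfs instance mounted at /mnt/famfs and a pre-created file;
 * both paths are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/famfs/shared.dat", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1 << 20; /* map 1 MiB of the file, for illustration */

    /* The mapping resolves directly to the fabric-attached memory, so the
     * bytes seen here are the bytes every other host sees. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    p[0] = 42; /* visible to any other server mapping the same file */

    munmap(p, len);
    close(fd);
    return 0;
}
```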

Eliminating NTLM in Storage: Modernizing SMB Authentication on Windows

NTLM (NT LAN Manager) remains prevalent in storage environments, including SMB, where it’s often used to authenticate access to shared folders, NAS devices, and legacy systems that do not support Kerberos. However, NTLM carries significant security risks, such as susceptibility to relay, pass-the-hash, and brute-force attacks. Windows is now undergoing a transformation to eliminate NTLM, focusing on modernizing on-prem authentication by strengthening Kerberos and introducing new capabilities that close gaps where NTLM is traditionally required.  
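
As one illustration of avoiding silent downgrade (a sketch of ours, not Microsoft's migration tooling), a Windows client can pin authentication to Kerberos by naming the SSPI package explicitly rather than letting the Negotiate package fall back to NTLM:

```c
/* Illustrative sketch: request the Kerberos SSPI package directly, so no
 * NTLM fallback is possible for contexts built from this credential handle. */
#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

int main(void)
{
    CredHandle cred;
    TimeStamp expiry;

    /* Naming "Kerberos" (instead of "Negotiate") rules out NTLM entirely. */
    SECURITY_STATUS st = AcquireCredentialsHandleA(
        NULL, "Kerberos", SECPKG_CRED_OUTBOUND,
        NULL, NULL, NULL, NULL, &cred, &expiry);

    if (st != SEC_E_OK)
        return 1; /* no Kerberos ticket/KDC: fail loudly, don't downgrade */

    /* ... InitializeSecurityContextA() exchange with the server here ... */

    FreeCredentialsHandle(&cred);
    return 0;
}
```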

Always-On Diagnostics: eBPF-Powered Insights for Linux SMB and NFS Clients

Linux users often ask: Why is my application slow? What caused it to crash? Was it a client-side issue—and if so, where? At SambaXP 2025, the Azure Files team introduced a new set of eBPF-based tools to help answer these questions by improving observability and debugging for Linux SMB client issues. We also shared a conceptual overview of the Always-On Diagnostics (AOD) project – a daemon that continuously monitors anomalies and automatically captures relevant logs when issues occur.    
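
To give a flavor of the approach, here is a minimal libbpf-style sketch in the same spirit (not the AOD tools themselves), assuming the kernel's cifs_reconnect symbol is kprobe-able on the running kernel:

```c
/* SPDX-License-Identifier: GPL-2.0 */
/* Sketch: count SMB client reconnect attempts per process by attaching a
 * kprobe to cifs_reconnect in the Linux SMB (cifs) client. */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);   /* PID */
    __type(value, __u64); /* reconnect count */
} reconnects SEC(".maps");

SEC("kprobe/cifs_reconnect")
int count_reconnects(struct pt_regs *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 one = 1, *cnt;

    cnt = bpf_map_lookup_elem(&reconnects, &pid);
    if (cnt)
        __sync_fetch_and_add(cnt, 1);
    else
        bpf_map_update_elem(&reconnects, &pid, &one, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```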

448G/lane? PCIe™ 7.0? E2 form factor? Suffer the egos of the opinionated experts working to meet the bandwidth, interconnect, and capacity needs of these AI-capable interconnects and form factors.

The explosion in bandwidth and capacity demand driven by AI is forcing interconnects and storage to adapt and evolve rapidly. Pushing bandwidth higher challenges the limits of copper interconnects. The SNIA SFF Technical Working Group is proactively addressing these bandwidth needs through new development work on 448G channel modeling and PCIe 7.0 projects, as well as studying mechanical aspects of system design. In addition, there is a focus on meeting the capacity and performance needs of storage for AI through a new EDSFF form factor.

Why does NVMe Need to Evolve for Efficient Storage Access from GPUs?

With its introduction in 2011, the NVMe storage protocol has allowed CPUs to handle more data with less latency. This, in turn, has significantly improved the CPU's ability to manage parallel tasks with multiple queues while improving CPU utilization rates. More recently, the growing relevance of GPUs in AI training and inference has led to innovations that enable NVMe storage access directly from GPUs. In this presentation we discuss some of the challenges of doing this efficiently.
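
One existing illustration of such direct access is NVIDIA's GPUDirect Storage (cuFile) API; the sketch below assumes a GDS-capable system, and the file path is illustrative:

```c
/* Sketch: read file data straight into GPU memory with GPUDirect Storage,
 * avoiding a bounce buffer in host DRAM. Error handling elided for brevity. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void)
{
    cuFileDriverOpen(); /* initialize the GDS driver */

    int fd = open("/data/blob.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *gpu_buf;
    cudaMalloc(&gpu_buf, 1 << 20);          /* 1 MiB destination on the GPU */
    cuFileBufRegister(gpu_buf, 1 << 20, 0); /* pin the buffer for DMA */

    /* DMA 1 MiB from file offset 0 directly into GPU memory. */
    cuFileRead(fh, gpu_buf, 1 << 20, 0, 0);

    cuFileBufDeregister(gpu_buf);
    cuFileHandleDeregister(fh);
    cudaFree(gpu_buf);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```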

CXL Ecosystem Innovation Leveraging QEMU-based Emulation

As with any emerging technology, developing open-source software for CXL is challenging, and emulation of features is a key path forward. In this talk, we will introduce the major CXL features that QEMU can currently emulate and walk you through how to set up a Linux + QEMU CXL environment for testing and developing new CXL features. As with any platform, some limitations exist; we will discuss them along with the latest developments in support for dynamic capacity devices (DCD) and switches.
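
A minimal invocation, adapted from QEMU's CXL documentation, looks like the following; paths are illustrative and option names vary across QEMU versions:

```shell
# A q35 guest with one CXL host bridge, a root port, and a file-backed
# Type-3 memory device, plus a 4G fixed memory window.
qemu-system-x86_64 -M q35,cxl=on -m 4G,maxmem=8G,slots=8 -smp 4 \
  -object memory-backend-file,id=cxl-mem1,share=on,mem-path=/tmp/cxltest.raw,size=256M \
  -object memory-backend-file,id=cxl-lsa1,share=on,mem-path=/tmp/lsa.raw,size=256M \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
  -device cxl-type3,bus=root_port13,memdev=cxl-mem1,lsa=cxl-lsa1,id=cxl-pmem0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
```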

Disaggregated KV Storage: A New Tier for Efficient Scalable LLM Inference

As generative AI models continue to grow in size and complexity, the infrastructure costs of inference—particularly GPU memory and power consumption—have become a limiting factor. This session presents a disaggregated key-value (KV) storage architecture designed to offload KV-cache tensors efficiently, reducing GPU compute pressure while maintaining low-latency, high-throughput inference. We introduce the first end-to-end system based on shared storage for KV-cache offloading.
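
A hypothetical interface sketch of such a tier follows; the names are ours for illustration, not the presented system's API:

```c
/* Hypothetical sketch: a disaggregated KV-cache tier keyed by
 * (sequence, layer, chunk), letting an inference server evict KV tensors
 * from GPU memory and refetch them from shared storage on a later
 * decode step. */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

struct kv_key {
    uint64_t seq_id;   /* conversation / request identifier */
    uint32_t layer;    /* transformer layer index */
    uint32_t chunk;    /* token-range chunk within the sequence */
};

/* Offload one KV tensor chunk from GPU memory to the shared tier;
 * a real implementation would land this on disaggregated storage. */
int kv_tier_put(const struct kv_key *key, const void *tensor, size_t len);

/* Fetch a chunk back (ideally straight into GPU memory) before the next
 * decode step that needs it; returns bytes read, or <0 if absent. */
ssize_t kv_tier_get(const struct kv_key *key, void *tensor, size_t len);
```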

Storage Multi-Queue on Windows - A New Stack for High Performance Storage Hardware

In this presentation, we will introduce Storage Multi-Queue on Windows. Storage Multi-Queue is a new storage driver architecture on Windows that is optimized for high performance storage hardware. Whereas the previous storage architecture was based around SCSI concepts, the new architecture uses NVMe concepts and data structures. This allows for a more efficient mapping to modern hardware and eliminates many of the bottlenecks associated with the existing SCSI-based stack.
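
A conceptual sketch of that queue-pair model follows (ours, not the actual Windows driver interface):

```c
/* Sketch of the NVMe-style model the new stack adopts: one submission/
 * completion queue pair per CPU, so I/O never funnels through a single
 * SCSI-era queue lock. */
#include <stdint.h>

struct sq_entry { uint8_t opcode; uint16_t cid; uint64_t lba; uint32_t nlb; };
struct cq_entry { uint16_t cid; uint16_t status; };

struct queue_pair {
    struct sq_entry *sq;       /* submission ring */
    struct cq_entry *cq;       /* completion ring */
    uint16_t sq_tail, cq_head;
    uint16_t depth;
};

/* One queue pair per CPU, mirroring how NVMe hardware exposes multiple
 * independent queues; submissions on CPU n touch only qp[n]. */
extern struct queue_pair qp[];

static inline void submit(struct queue_pair *q, struct sq_entry e)
{
    q->sq[q->sq_tail] = e;
    q->sq_tail = (uint16_t)((q->sq_tail + 1) % q->depth);
    /* a real driver rings the device doorbell register here */
}
```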
