SPDM PQC & Authorization

DMTF’s Security Protocol and Data Model (SPDM) is a widely used standard that enables secure communication and device authentication for platform-level security. This session will give an update on major developments by the SPDM Working Group and where the group is going over the next year. In the past year, DMTF released SPDM version 1.4, the first version to support CNSA 2.0 algorithms for post-quantum cryptography.

Host Addressing of NVMe Subsystem Local Memory

While a host has long been able to address NVMe device memory through the Controller Memory Buffer (CMB) and Persistent Memory Region (PMR), that memory has never been addressable by NVMe commands. NVMe introduced the Subsystem Local Memory IO Command Set (SLM), which allows NVMe device memory to be addressed by NVMe commands; however, that memory cannot be addressed by the host using host memory addresses. A new technical proposal being developed by NVM Express would allow SLM to be assigned to a host memory address range.

Famfs: Get Ready for Big Pools of Disaggregated Shared Memory

CXL enables disaggregated memory, both for composable capacity-on-demand and for shared memory, that is, memory that is not private to one server. Adding system-ram capacity should largely “just work”, but shared memory is more complex. The Fabric-Attached Memory File System (famfs) enables disaggregated shared memory to be used as memory-mappable files that map directly to the shared memory. Famfs is open-source software that is on track to be merged into the upstream Linux kernel in the coming months.
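For context on how applications consume famfs-backed memory, here is a minimal sketch of a consumer that memory-maps a famfs file, assuming a hypothetical mount point /mnt/famfs and a pre-provisioned file named shared_dataset (both illustrative, not from the abstract).

```python
import mmap
import os

# Illustrative only: /mnt/famfs is a hypothetical famfs mount point and
# "shared_dataset" a hypothetical, already-provisioned famfs file.
path = "/mnt/famfs/shared_dataset"

fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size

# A MAP_SHARED mapping of the file; with famfs the mapped pages resolve
# directly to the fabric-attached shared memory rather than to private
# copies in the local page cache.
with mmap.mmap(fd, size, flags=mmap.MAP_SHARED, prot=mmap.PROT_READ) as m:
    header = bytes(m[:16])  # read the first 16 bytes of the shared data
    print(f"mapped {size} bytes, header = {header.hex()}")

os.close(fd)
```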

Eliminating NTLM in Storage: Modernizing SMB Authentication on Windows

NTLM (NT LAN Manager) remains prevalent in storage environments, including SMB, where it’s often used to authenticate access to shared folders, NAS devices, and legacy systems that do not support Kerberos. However, NTLM carries significant security risks, such as susceptibility to relay, pass-the-hash, and brute-force attacks. Windows is now undergoing a transformation to eliminate NTLM, focusing on modernizing on-prem authentication by strengthening Kerberos and introducing new capabilities that close gaps where NTLM is traditionally required.  

Always-On Diagnostics: eBPF-Powered Insights for Linux SMB and NFS Clients

Linux users often ask: Why is my application slow? What caused it to crash? Was it a client-side issue—and if so, where? At SambaXP 2025, the Azure Files team introduced a new set of eBPF-based tools to help answer these questions by improving observability and debugging for Linux SMB client issues. We also shared a conceptual overview of the Always-On Diagnostics (AOD) project – a daemon that continuously monitors anomalies and automatically captures relevant logs when issues occur.    
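To illustrate the style of instrumentation involved (a minimal sketch, not the AOD tooling itself), the fragment below uses BCC to attach a kprobe to a cifs client symbol and count calls per process; the probed symbol name is an assumption and may differ across kernel versions.

```python
from bcc import BPF
import time

# Minimal eBPF/BCC sketch of SMB client observability (illustrative only).
prog = r"""
BPF_HASH(calls, u32, u64);

int trace_smb_read(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *count;
    count = calls.lookup_or_try_init(&pid, &zero);
    if (count) {
        (*count)++;
    }
    return 0;
}
"""

b = BPF(text=prog)
# cifs_readv_receive is one candidate probe point; substitute any symbol
# exported by the cifs/smb client module on your kernel.
b.attach_kprobe(event="cifs_readv_receive", fn_name="trace_smb_read")

print("Counting SMB client read completions per PID for 10 seconds...")
time.sleep(10)
for pid, count in sorted(b["calls"].items(), key=lambda kv: -kv[1].value):
    print(f"pid {pid.value}: {count.value} calls")
```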

448G/lane? PCIe™ 7.0? E2 form factor? Suffer the egos of the opinionated experts working to meet the bandwidth, interconnect, and capacity needs of these AI-capable interconnects and form factors.

The explosion in bandwidth and capacity demand from AI is forcing interconnects and storage to rapidly adapt and evolve. Pushing the bandwidth challenges the limits of the copper interconnect. The SNIA SFF Technical Working Group is proactively addressing the bandwidth needs through new development work on 448G channel modeling and PCIe 7.0 projects, as well as studying mechanical aspects of the system design. In addition, there is a focus on how to meet the capacity and performance needs of storage for AI through a new EDSFF form factor.

Why does NVMe Need to Evolve for Efficient Storage Access from GPUs?

Since its introduction in 2011, the NVMe storage protocol has allowed CPUs to handle more data with less latency. This in turn has significantly improved the CPU's ability to manage parallel tasks with multiple queues while improving CPU utilization rates. More recently, the growing relevance of GPUs in AI training and inference has led to innovations that enable NVMe storage access directly from GPUs. In this presentation we discuss some of the challenges of doing this efficiently.

CXL Ecosystem Innovation Leveraging QEMU-based Emulation

As with any emerging technology, developing open-source software for CXL is challenging, and emulation of features is a key path forward. In this talk, we will introduce the major CXL features that QEMU can currently emulate and walk you through how to set up a Linux + QEMU CXL environment for testing and developing new CXL features. As with any platform, some limitations exist; we will discuss these along with the latest developments in support for dynamic capacity devices (DCD) and switches.
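For readers who want to experiment ahead of the talk, a minimal sketch of launching such an environment from Python is shown below; the guest image, backing-file paths, sizes, and device IDs are assumptions, and the CXL device options follow QEMU's CXL documentation but may vary across QEMU versions.

```python
import subprocess

# Illustrative QEMU launch with one emulated CXL type-3 persistent-memory
# device behind a CXL root port. Paths, sizes, and IDs are placeholders.
cmd = [
    "qemu-system-x86_64",
    "-machine", "q35,cxl=on",
    "-m", "4G", "-smp", "4",
    "-drive", "file=rootfs.qcow2,format=qcow2",  # hypothetical guest image
    "-object", "memory-backend-file,id=cxl-mem0,share=on,"
               "mem-path=/tmp/cxl-mem0.raw,size=256M",
    "-object", "memory-backend-file,id=cxl-lsa0,share=on,"
               "mem-path=/tmp/cxl-lsa0.raw,size=256M",
    "-device", "pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1",
    "-device", "cxl-rp,port=0,bus=cxl.1,id=rp0,chassis=0,slot=2",
    "-device", "cxl-type3,bus=rp0,persistent-memdev=cxl-mem0,"
               "lsa=cxl-lsa0,id=cxl-pmem0",
    "-machine", "cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G",
]
subprocess.run(cmd, check=True)
```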

Disaggregated KV Storage: A New Tier for Efficient Scalable LLM Inference

As generative AI models continue to grow in size and complexity, the infrastructure costs of inference—particularly GPU memory and power consumption—have become a limiting factor. This session presents a disaggregated key-value (KV) storage architecture designed to offload KV-cache tensors efficiently, reducing GPU compute pressure while maintaining low-latency, high-throughput inference. We introduce the first end-to-end system based on shared storage for KV-cache offloading.
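To make the idea concrete, here is a toy sketch of prefix-keyed KV-cache offload to a shared-storage mount; the cache path, key derivation, and file layout are assumptions for illustration and are not the system presented in this session.

```python
import hashlib
import os
import torch

# Toy illustration of KV-cache offload to shared storage (not the presented
# system). Entries are keyed by the token prefix; values are the per-layer
# key/value tensors produced during prefill.
CACHE_DIR = "/mnt/shared_kv_cache"  # hypothetical shared-storage mount


def prefix_key(token_ids: list[int]) -> str:
    """Derive a deterministic cache key from a prompt's token-ID prefix."""
    return hashlib.sha256(repr(token_ids).encode()).hexdigest()


def offload_kv(token_ids: list[int],
               kv_layers: list[tuple[torch.Tensor, torch.Tensor]]) -> None:
    """Move KV tensors off the GPU and persist them under the prefix key."""
    path = os.path.join(CACHE_DIR, prefix_key(token_ids) + ".pt")
    torch.save([(k.cpu(), v.cpu()) for k, v in kv_layers], path)


def fetch_kv(token_ids: list[int], device: str = "cuda"):
    """Reload a previously offloaded KV cache, or return None on a miss."""
    path = os.path.join(CACHE_DIR, prefix_key(token_ids) + ".pt")
    if not os.path.exists(path):
        return None
    return [(k.to(device), v.to(device)) for k, v in torch.load(path)]
```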