Modern AI/ML and HPC workloads increasingly rely on GPU-direct storage access (SCADA) to achieve the parallelism needed to saturate high-speed NVMe connections. However, this direct access path bypasses traditional OS-level security controls, creating a critical access control gap. Current NVMe persistent reservations operate only at the namespace level, allowing any registered client to write to any block on the device—a significant security vulnerability in multi-tenant and GPU computing environments.
This session presents NVMe LBA Access Control, a proposal that introduces fine-grained access control over Logical Block Address (LBA) ranges within an NVMe namespace. It enables multiple hosts to safely and concurrently access distinct LBA ranges in a single namespace, with hardware-enforced write protection at the block level. Ranges scale from megabytes to terabytes and support dynamic reallocation through microsecond-latency acquire/release operations, which is essential for GPU job preemption and evolving workload demands.
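To make the access model concrete, here is a minimal host-side sketch of the acquire/release semantics described above. All names (`LbaRangeManager`, `acquire`, `release`, `may_write`) are illustrative assumptions for this abstract, not the actual commands or interfaces defined by the proposal; in the real design the enforcement happens in the NVMe device, not in host software.

```python
# Hypothetical illustration of per-range write access control within one
# NVMe namespace. Class and method names are invented for this sketch and
# are not part of the NVMe LBA Access Control proposal itself.

class LbaRangeManager:
    """Tracks which host holds write access to each LBA range."""

    def __init__(self):
        self._ranges = []  # list of (start_lba, end_lba, host_id)

    def acquire(self, host_id, start_lba, length):
        """Claim [start_lba, start_lba + length); fail on conflict."""
        end_lba = start_lba + length - 1
        # Reject any overlap with a range held by a different host.
        for s, e, h in self._ranges:
            if start_lba <= e and s <= end_lba and h != host_id:
                return False
        self._ranges.append((start_lba, end_lba, host_id))
        return True

    def release(self, host_id, start_lba):
        """Give up a previously acquired range (e.g. on job preemption)."""
        self._ranges = [r for r in self._ranges
                        if not (r[0] == start_lba and r[2] == host_id)]

    def may_write(self, host_id, lba):
        """A write is allowed only inside a range this host holds."""
        return any(s <= lba <= e and h == host_id
                   for s, e, h in self._ranges)
```

A dynamic-reallocation flow then looks like: host A acquires blocks 0–1023, host B acquires 1024–2047, B's attempt to claim blocks inside A's range is rejected, and once A releases its range B can immediately acquire it, mirroring the preemption scenario in the session.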
We'll demonstrate real-world use cases, including:
• Dynamic AI training job allocation with zero I/O interruption during range shifts
• Database partitioning across primary writers, read replicas, and archive nodes
• pNFS NVMe layouts with LBA-range-aware access control
Attendees will understand how LBA Access Control bridges the security gap created by SCADA, enabling safe multi-host parallelism for next-generation GPU-accelerated storage architectures.