
Amazon S3 is the de facto standard for object storage—simple, scalable, and accessible via HTTP. However, traditional S3 access via TCP/IP is CPU-intensive and not designed for the low-latency, high-throughput needs of modern GPU workloads. S3 RDMA aims to bridge that gap. S3 RDMA implements S3 object PUT/GET data transfers over RDMA, essentially bypassing the HTTP stack entirely. Data flows directly between storage and user memory using zero-copy mechanisms, aligned with how GPUs prefer to consume data in AI/ML workloads.
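To make the zero-copy idea concrete, the sketch below shows the kind of client-side buffer registration a zero-copy S3 GET relies on, using libibverbs. This is an illustrative assumption, not Dell's ObjectScale implementation or a standardized S3 RDMA API: the client pins a destination buffer and registers it with the RDMA NIC; in a real protocol, the resulting address and rkey would accompany the GET request so the object store can place the payload directly into user (or GPU) memory without traversing the HTTP/TCP stack.

    /*
     * Hypothetical sketch (assumed names and flow, not the actual S3 RDMA
     * protocol): register a destination buffer with the RDMA NIC so the
     * object store could RDMA-WRITE a GET payload straight into it.
     * Queue pair setup and the S3 request exchange are omitted.
     */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Destination buffer for the object payload; in a GPU workflow this
         * could instead be device memory exposed for GPUDirect-style placement. */
        size_t len = 4 * 1024 * 1024;
        void *buf = malloc(len);

        /* Pin and register the buffer so the NIC can place data into it
         * with no intermediate socket or kernel copies (zero copy). */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "ibv_reg_mr failed\n");
            return 1;
        }

        /* The address/rkey pair is what a GET request would hand to the
         * object store so it can write the object directly here. */
        printf("registered %zu bytes at %p, rkey=0x%x\n", len, buf, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }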

Dell's experience building an S3 RDMA solution for Dell ObjectScale uncovered many valuable benefits compared to traditional S3 access and other RDMA-based storage technologies. These benefits include improvements in client-side data path efficiency, optimized GPU utilization, and high-speed data transfer with fewer control messages. With an eye toward interoperability and standardization, the talk will reflect on that experience for attendees interested in emerging cloud storage and AI/ML data access.

Learning Objectives

What is S3 RDMA?
What problems does S3 RDMA solve for Object Storage?
How does S3 RDMA benefit the unique aspects of AI/ML workloads?
