SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA

Craig Carlson
Fibre Channel: What's Old Is New Again, 128GFC and Beyond

Fibre Channel extends its renowned compatibility and reliability with a new speed, 128GFC. This talk will discuss the newly completed 128GFC specification, as well as uses of Fibre Channel in storage disaggregation and machine learning. The speakers, Rupin Mohan and Craig Carlson, have decades of experience in the architecture and standards definition of Fibre Channel and storage systems.

Accelerating GPU Server Access to Network-Attached Disaggregated Storage using Data Processing Unit (DPU)

The recent AI explosion is reshaping storage architectures in data centers, where GPU servers increasingly need to access vast amounts of data on network-attached disaggregated storage servers for greater scalability and cost-effectiveness. However, conventional CPU-centric servers encounter critical performance and scalability challenges. First, the software mechanisms required to access remote storage over the network consume considerable CPU resources.

Storage for AI 101 - A Primer on AI Workloads and Their Storage Requirements

The SNIA Technical Council (TC) AI Taskforce is working on a paper covering AI workloads and their storage requirements. This presentation will introduce the key AI workloads and describe how they use data transports and storage systems. It is intended as a foundational-level presentation that gives participants a basic working knowledge of the subject.

Designing and Optimizing Complex CXL Configurations

CXL offers unprecedented opportunities to design and build much larger application and compute arrangements than were available even a few years ago. The ability to connect memory subsystems and other compute resources through a switched network provides a dizzying array of possibilities for custom-tailoring an environment to a particular workload.

Birds of a Feather: Storage on a UEC network?

Ultra Ethernet, the new transport developed by the Ultra Ethernet Consortium (UEC), supports today's AI and HPC workloads. For AI, it is well understood that the expensive accelerators (GPUs) must be constantly supplied with data. In many cases, data continues to reside on traditional storage networks, which are not designed for the performance and scalability required by today's HPC and AI workloads. These networks are expected to migrate to UEC for improved bandwidth, tail latency, and scale.
