Making Real File Systems Faster with Applied Computational Storage

The exploration of computation near flash storage has been prompted by the advent of network-attached flash-based storage enclosures operating at tens of gigabytes/sec, server memory bandwidths struggling to keep up with network and aggregate I/O bandwidths, and the ever-growing need for massive data storage, management, manipulation, and analysis. Workloads ranging from distributed analytics and indexing functions to data management tasks such as compression, erasure coding, and deduplication are all potentially more performant, efficient, and economical when performed near storage devices.

PCIe® 6.0 Specification and Beyond: Enabling Storage and Machine Learning Applications

For the past three decades, PCI-SIG® has delivered a succession of industry-leading PCI Express® (PCIe®) specifications that remain ahead of the increasing demand for a high-bandwidth, low-latency interconnect for compute-intensive systems in diverse market segments, including data centers, Artificial Intelligence and Machine Learning (AI/ML), high-performance computing (HPC) and storage applications. In early 2022, PCI-SIG released the PCIe 6.0 specification to members, doubling the data rate of the PCIe 5.0 specification to 64 GT/s (up to 256 GB/s for a x16 configuration).
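The headline numbers can be reproduced with simple arithmetic. A minimal sketch, assuming the per-lane rate of 64 GT/s (one bit per transfer per lane) and noting that the "up to 256 GB/s" figure counts both directions of a x16 link, before FLIT/FEC overhead:

```python
# Back-of-the-envelope PCIe 6.0 x16 bandwidth (raw rate, ignoring
# FLIT-mode and FEC overhead).
GT_PER_S = 64          # PCIe 6.0 per-lane transfer rate, GT/s
LANES = 16             # x16 configuration

raw_gbps = GT_PER_S * LANES            # raw Gb/s per direction
gb_per_s_one_way = raw_gbps / 8        # bytes per second, one direction
gb_per_s_bidir = gb_per_s_one_way * 2  # both directions: the spec's headline

print(gb_per_s_one_way, gb_per_s_bidir)
```

The doubling over PCIe 5.0 comes from PAM4 signaling, which carries two bits per symbol at the same 32 GBaud symbol rate.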

Introducing CXL 3.0: Expanded Capabilities for Increased Scale and Optimized Resource Utilization

Compute Express Link™ (CXL™) is an open industry-standard interconnect offering coherency and memory semantics using high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. CXL technology is designed to address the growing needs of high-performance computational workloads by supporting heterogeneous processing and memory systems for applications in Artificial Intelligence, Machine Learning, communication systems, and High-Performance Computing.

Debugging of Flash Issues Observed in Hyperscale Environment at Scale

A deep dive into the methodology and tooling we use at Meta to improve the debuggability of failures in our datacenters, especially failures on components like SSDs, where privacy requirements may prohibit us from sending the components back for failure analysis (FA) or adding custom instrumentation in the datacenter. In particular, we will discuss how the tracewatch tool, coupled with the Latency Monitoring log page, helps us trigger trace collection on failures using BPF-based triggers.
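tracewatch itself is Meta-internal tooling, but the trigger idea can be sketched generically: watch a stream of per-IO latency samples and fire a trace-collection callback when an outlier crosses a threshold. The class and threshold below are hypothetical illustrations, not the real tool (which hooks BPF probes and the NVMe Latency Monitoring log page):

```python
# Hypothetical sketch of a latency-triggered trace collector.
from collections import deque

class LatencyTrigger:
    def __init__(self, threshold_us, window=8, on_trigger=None):
        self.threshold_us = threshold_us
        self.window = deque(maxlen=window)   # keep recent samples for context
        self.on_trigger = on_trigger or (lambda samples: None)
        self.fired = 0

    def observe(self, latency_us):
        self.window.append(latency_us)
        if latency_us > self.threshold_us:   # outlier: kick off trace capture
            self.fired += 1
            self.on_trigger(list(self.window))

captured = []
trig = LatencyTrigger(threshold_us=5000, on_trigger=captured.append)
for sample in [120, 95, 110, 9000, 130]:     # one 9 ms outlier
    trig.observe(sample)
print(trig.fired)
```

Capturing the window of samples leading up to the outlier, rather than only the outlier itself, is what makes post-hoc debugging possible without shipping the drive back.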

SMB3 Landscape and Directions

SMB3 has seen significant adoption as the storage protocol of choice for running private cloud deployments. In this iteration of the talk, we’ll update the audience on SMB protocol changes as well as improvements to the Windows implementation of the SMB server and client. Added to the SMB protocol is a new server-to-client notification mechanism, which enables a variety of novel use cases. We’ll present the details of the protocol messaging (new message types, etc.) as well as one scenario that leverages this new mechanism: server-triggered graceful session closure.
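The closure scenario can be sketched as a simple client-side dispatch: when the server pushes a session-closed notification, the client winds the session down proactively instead of waiting for the connection to drop. The constant and class below are illustrative, not the MS-SMB2 wire format:

```python
# Illustrative sketch of handling a server-to-client notification
# (not the actual MS-SMB2 message layout).
NOTIFY_SESSION_CLOSED = 0x0000   # notification type (illustrative value)

class SmbClientSession:
    def __init__(self, session_id):
        self.session_id = session_id
        self.open = True

    def handle_notification(self, notify_type):
        # Server asked us to wind down: close gracefully rather than
        # discovering a dead session on the next request.
        if notify_type == NOTIFY_SESSION_CLOSED:
            self.close_gracefully()

    def close_gracefully(self):
        # Flush cached state, send LOGOFF, etc. (elided in this sketch).
        self.open = False

sess = SmbClientSession(session_id=42)
sess.handle_notification(NOTIFY_SESSION_CLOSED)
print(sess.open)
```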

The DNA Data Storage Rosetta Stone Initiative

DNA data storage will dramatically affect the way organizations think about data retention, data protection, and archival by providing capacity density and longevity several orders of magnitude beyond anything available today, while reducing requirements for power, cooling, and fixity checks. One of the challenges of any long-term archival storage is being able to recover the data after decades or longer. To do this, the reader must be able to bootstrap the archive, akin to how an OS is loaded after the master boot record is read.

Redfish Ecosystem for Storage

DMTF’s Redfish® is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). This presentation will provide an overview of DMTF’s Redfish standard. It will also provide an overview of HPE’s implementation of Redfish, focusing on their storage implementation and needs. HPE will provide insights into the benefits and challenges of the Redfish Storage model, including areas where functionality added to SNIA™ Swordfish is of interest for future releases.
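Redfish models everything as HTTP-accessible JSON resources linked by `@odata.id` references, so a client discovers storage by walking the resource tree rather than hard-coding paths. A minimal sketch, using a sample collection payload in place of a live HTTPS fetch (a real client would GET `/redfish/v1/Systems/<id>/Storage` from the management controller):

```python
# Walking a Redfish storage collection (sample payload stands in for a
# live GET against the service; the JSON shape follows the Redfish model).
import json

storage_collection = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems/1/Storage",
  "Members": [
    {"@odata.id": "/redfish/v1/Systems/1/Storage/1"}
  ]
}
""")

# Each member is itself a resource; clients follow the @odata.id links
# to fetch controllers, drives, and volumes beneath it.
member_links = [m["@odata.id"] for m in storage_collection["Members"]]
print(member_links)
```

This link-following pattern is what lets one client implementation manage storage from different vendors, provided each exposes the standard schema.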

Key per IO - Fine Grain Encryption for Storage

The Key Per IO (KPIO) project is a joint initiative between NVM Express® and the Trusted Computing Group (TCG) Storage Work Group to define a new KPIO Security Subsystem Class (SSC) under the TCG Opal SSC for the NVMe® class of storage devices. Self-Encrypting Drives (SEDs) perform continuous encryption on user-accessible data based on contiguous LBA ranges per namespace. This is done at interface speeds using a small number of keys generated and held in persistent media by the storage device. KPIO will allow a large number of encryption keys to be managed and securely downloaded into the NVM subsystem.
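The core shift is that the encryption key is selected per I/O command, by an index into keys previously downloaded to the drive, rather than fixed per LBA range. A toy sketch of that selection flow, where a hash-based XOR keystream stands in for the drive's real AES-XTS engine (nothing here is secure or drawn from the actual specification):

```python
# Toy illustration of per-I/O key selection. The XOR "cipher" is a
# placeholder for the drive's hardware AES-XTS engine; do NOT reuse.
import hashlib

class KpioDrive:
    def __init__(self):
        self.keys = {}                        # key index -> key material

    def load_key(self, index, key):
        # Stands in for the secure key-download path KPIO defines.
        self.keys[index] = key

    def _keystream(self, index, lba, length):
        # Derive a per-LBA keystream from the selected key (toy construction).
        seed = self.keys[index] + lba.to_bytes(8, "little")
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(seed + counter.to_bytes(4, "little")).digest()
            counter += 1
        return out[:length]

    def write(self, key_index, lba, data):
        # Each command names its key; two tenants' data on the same
        # namespace can use different keys.
        ks = self._keystream(key_index, lba, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))   # ciphertext on media

    def read(self, key_index, lba, ciphertext):
        ks = self._keystream(key_index, lba, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, ks))

drive = KpioDrive()
drive.load_key(7, b"tenant-A-key")
ct = drive.write(key_index=7, lba=0x1000, data=b"secret block")
print(drive.read(7, 0x1000, ct))
```

Because keys are referenced by index at command time, erasing one tenant's key cryptographically erases only that tenant's data.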

Accelerating FaaS/container Image Construction via IPU

NOTE: this paper was developed by Ziye Yang, a Staff Software Engineer at Intel, and is being presented by his colleague Yadong Li, a Principal Engineer in the Ethernet Products Group at Intel. In many use cases, FaaS applications are run or deployed in container/virtual machine environments for isolation purposes.
