The Authoritative Core Model: Design Patterns for Hybrid Multicloud and Edge-Core-Cloud Architectural Strategies

This talk describes a set of data-location and data-motion patterns that optimize storage and data placement for hybrid multicloud and edge-to-core-to-cloud outcomes. What capabilities should you expect from your storage platforms to execute a strategy that mitigates or completely eliminates cloud egress costs, yet also maximizes performance for workloads wherever they need to reside? And how do you build sovereignty and governance into your foundational architecture?
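As a back-of-the-envelope illustration of the egress-cost trade-off the abstract alludes to, the sketch below compares repeatedly reading a working set across the cloud boundary against replicating it once and serving reads locally. The per-GB prices are hypothetical placeholders, not quotes from any provider:

```python
# Toy model: cost of serving every read via cloud egress vs. one bulk
# copy out plus local storage. All prices are hypothetical placeholders.

def monthly_egress_cost(working_set_gb, reads_per_month, egress_per_gb):
    """Cost of serving every read across the cloud boundary."""
    return working_set_gb * reads_per_month * egress_per_gb

def replicate_once_cost(working_set_gb, egress_per_gb, local_storage_per_gb):
    """Cost of one bulk copy out plus a month of local storage."""
    return working_set_gb * (egress_per_gb + local_storage_per_gb)

if __name__ == "__main__":
    ws, reads = 1000, 20            # 1 TB working set, read 20x per month
    egress, local = 0.09, 0.02      # $/GB, hypothetical
    print(round(monthly_egress_cost(ws, reads, egress), 2))   # → 1800.0
    print(round(replicate_once_cost(ws, egress, local), 2))   # → 110.0
```

Even with generous assumptions, read-heavy workloads favor moving the data once to where the workload lives, which is the placement pattern the talk argues for.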

SPDK-Based IPU/DPU Storage Solutions

The Storage Performance Development Kit (SPDK) provides a set of tools and libraries for writing high-performance, scalable, user-mode storage applications. It is an ideal open-source software framework for building IPU/DPU-based storage solutions, including initiators, targets, and local SSD virtualization. This presentation covers recent SPDK enhancements for IPU/DPU support, how SPDK is used to enable IPU/DPU-based storage solutions, and SPDK's potential to help drive toward standards-based IPU/DPU storage solutions.

Flexible Data Placement Open Source Ecosystem

Flexible Data Placement (FDP) represents the latest development in mainstream data placement technology for NVMe. Although its use cases resemble those of other NVMe features, such as Streams and ZNS, the differences have significant implications for the implementation within host storage stacks and applications. As host stacks adopt various data placement technologies, the risk of bloated codebases and redundant implementations rises, increasing maintenance costs for large mainline projects.
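One way to read the abstract's concern: without a common placement abstraction, each technology (Streams, ZNS, FDP) grows its own code path through the host stack. A minimal sketch of a shared interface is shown below; all class and method names are hypothetical illustrations, not taken from any real host stack:

```python
# Hypothetical sketch: one placement-hint interface shared by multiple
# NVMe data-placement backends, so application code is written once.
from abc import ABC, abstractmethod

class PlacementBackend(ABC):
    """Common interface; each backend interprets the hint id differently."""
    @abstractmethod
    def tag_write(self, lba: int, hint: int) -> str: ...

class StreamsBackend(PlacementBackend):
    def tag_write(self, lba, hint):
        return f"write lba={lba} stream-id={hint}"      # Streams directive

class FDPBackend(PlacementBackend):
    def tag_write(self, lba, hint):
        return f"write lba={lba} placement-id={hint}"   # FDP placement handle

def flush_segment(backend: PlacementBackend, lbas, hint):
    # Application code stays identical regardless of the backend in use.
    return [backend.tag_write(lba, hint) for lba in lbas]
```

Consolidating behind one interface like this is the kind of ecosystem work the abstract suggests is needed to keep mainline projects maintainable.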

SoC Construction Using Universal Chiplet Interconnect Express (UCIe): A Game Changer

High-performance workloads demand on-package integration of heterogeneous processing units, on-package memory, and communication infrastructure to meet the demands of the emerging compute landscape. Applications such as artificial intelligence, machine learning, data analytics, 5G, automotive, and high-performance computing are driving these demands to meet the needs of cloud computing, intelligent edge, enterprise, and client computing infrastructure.

Multi-Queue Linux Block Device Drivers in Rust

Rust for Linux has introduced Rust as a second programming language in the Linux kernel. The Rust for Linux project is making good progress toward building a general framework for writing Linux kernel device drivers in safe Rust. The Rust NVMe driver is an effort to implement a PCI NVMe driver in safe Rust for the Linux kernel. The purpose of the driver is to provide a vehicle for developing safe Rust abstractions for the Linux kernel, and to prove the feasibility of Rust as an implementation language for high-performance device drivers.

Maximizing EDSFF E3 SSD Design

The EDSFF E3 form factors bring more options and the commonality that enterprises and hyperscalers desire compared to traditional U.2, but they also bring new challenges. Even as power envelopes increase, it is difficult to fit more components on board to maximize capabilities. And with a lack of E3 enclosures on the market, there are many uncertainties, such as enclosure geometries, airflow profiles, and total system power-delivery capability.

Data Management In The Hybrid Multi-cloud Era

According to an IDC forecast, global data will almost double in the next three years, and 90% of the data generated is unstructured data, which comes in many forms such as documents, videos, images, audio, IoT data, etc. The difficulty of managing these different forms of unstructured data, exacerbated by the myriad interfaces and technologies in hybrid multi-cloud environments, leads to data sprawl, one of the major pain points for any organization.
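For context, "almost double in three years" implies a compound annual growth rate of roughly 26%, as a quick calculation shows:

```python
# Implied compound annual growth rate if global data doubles in 3 years.
growth_factor = 2.0
years = 3
cagr = growth_factor ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 26.0%
```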

SNIA SDXI Specification v1.0 and Beyond

SDXI v1.0 is a standard for a memory-to-memory data mover and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. Among other features, SDXI standardizes an interface and architecture that can be abstracted or virtualized, with a well-defined capability to quiesce, suspend, and resume the architectural state of a per-address-space data mover. The specification was developed by SNIA's SDXI Technical Working Group, comprising 89 individuals representing 23 SNIA member companies.
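To give a feel for the quiesce/suspend/resume capability the abstract mentions, here is a toy state machine for a per-address-space data-mover context. It is purely conceptual; the state names, transitions, and snapshot format are simplified illustrations and do not reflect the actual SDXI register or descriptor layout:

```python
# Conceptual toy: a data-mover context whose architectural state can be
# quiesced, suspended, and later resumed (e.g., around live migration).
# This is NOT the SDXI specification's state model, just an illustration.

class MoverContext:
    def __init__(self):
        self.state = "running"
        self.pending = []           # queued copy descriptors

    def submit(self, src, dst, length):
        assert self.state == "running", "cannot submit while not running"
        self.pending.append((src, dst, length))

    def quiesce(self):
        # Stop accepting new work; let in-flight descriptors drain.
        self.state = "quiesced"

    def suspend(self):
        # Capture architectural state so it can be saved or migrated.
        assert self.state == "quiesced", "must quiesce before suspending"
        self.state = "suspended"
        return {"pending": list(self.pending)}

    def resume(self, saved):
        # Restore state, possibly in a different VM or address space.
        self.pending = list(saved["pending"])
        self.state = "running"
```

The point of standardizing such a capability is that a hypervisor can capture and restore mover state without knowing vendor-specific details, which is what makes the interface virtualizable.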

SFF TA TWG Changes Coming to a Server Near You

The SFF TA TWG within SNIA publishes a steady stream of specifications every year. But why should you care? There is a very good chance your server has at least one SFF TA connector, cable, transceiver, or device defined in it, and likely many more. As server technology changes, these components need to change too, whether for performance, size, thermals, or added functional capabilities. In this talk, the co-chairs of the SFF TA TWG will go through the detailed changes that have been made to the specifications and what you need to care about.

Breaking Boundaries: Expanding Ceph's Capabilities with NVMe-oF

Join us for a technical deep dive into open-source storage technology and its practical application with Ceph and its new SPDK-based NVMe-oF target. This session aims to explore an exciting advancement that brings together Ceph, an open-source software-defined storage system, and industry-standard protocols, focusing on NVMe-over-Fabrics (NVMe-oF). NVMe-oF is a widely adopted, high-performance storage access protocol that provides users seamless access to Ceph clusters, opening up new possibilities for efficient data storage and retrieval.
