AI & Blockchain - The Emerging Duets

In this session I will start by identifying a key problem in Artificial Intelligence today: in most industries, data lives at the edge in isolated pockets, and privacy constraints prevent sharing it to train an AI model. Transferring that data from edge storage to a centralized cloud for training raises many problems, among them performance, cost, security, and regulatory compliance. I will then walk through the features of federated learning combined with Blockchain and show how the combination addresses these problems.
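
To make the federated learning half of that pairing concrete, here is a minimal sketch of federated averaging (my illustration, not material from the session; all names are hypothetical): each edge site trains locally, only model weights and sample counts leave the site, and a coordinator merges them.

    #include <stddef.h>

    /* Hypothetical per-site update: locally trained weights plus the number
     * of samples that produced them. Raw training data never leaves the site. */
    struct client_update {
        const double *weights;   /* local model parameters */
        size_t        n_samples; /* size of the local training set */
    };

    /* Federated averaging: combine client models into the global model,
     * weighting each client by its share of the total training samples. */
    void fedavg(double *global, size_t n_params,
                const struct client_update *clients, size_t n_clients)
    {
        size_t total = 0;
        for (size_t c = 0; c < n_clients; c++)
            total += clients[c].n_samples;

        for (size_t p = 0; p < n_params; p++) {
            double acc = 0.0;
            for (size_t c = 0; c < n_clients; c++)
                acc += clients[c].weights[p] *
                       ((double)clients[c].n_samples / (double)total);
            global[p] = acc;
        }
    }

A blockchain ledger would sit alongside this loop, recording which updates were contributed and by whom, so that aggregation can be audited without exposing the underlying data.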

Accelerating operations on persistent memory device via hardware based memory offloading technique

With more and more fast devices, especially persistent memory (PMEM), being deployed in the data center, there is great pressure on the CPU to drive those devices (e.g., Intel Optane DC persistent memory) for persistency under heavy workloads. Unlike HDDs and SSDs, persistent memory provides no DMA capability of its own, so the CPU must participate in every data operation on the persistent memory. In this talk, we would like to mitigate this CPU pressure via hardware-based memory offloading devices such as Intel's IOAT and DSA.
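
For context on where that CPU time goes (a sketch under my own assumptions, not code from the talk), the CPU-driven path to persistence is essentially a copy followed by explicit cache-line write-backs and a store fence, all executed on an application core:

    #include <immintrin.h>  /* _mm_clwb, _mm_sfence; build with -mclwb */
    #include <stdint.h>
    #include <string.h>

    #define CACHELINE 64

    /* CPU-driven persist: copy into the PMEM-mapped buffer, then flush each
     * cache line and fence so the data is durable. Every byte moves through
     * the CPU, which is the pressure a DMA engine (IOAT/DSA) can relieve. */
    static void pmem_copy_persist_cpu(void *pmem_dst, const void *src, size_t len)
    {
        memcpy(pmem_dst, src, len);

        uintptr_t start = (uintptr_t)pmem_dst & ~(uintptr_t)(CACHELINE - 1);
        uintptr_t end   = (uintptr_t)pmem_dst + len;
        for (uintptr_t p = start; p < end; p += CACHELINE)
            _mm_clwb((void *)p);

        _mm_sfence();  /* order the write-backs before subsequent stores */
    }

PMDK's libpmem offers pmem_memcpy_persist() for this job (typically via non-temporal stores); either way an application core moves every byte, which is exactly the work a DMA-capable engine can take over.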

What’s faster than a Cheetah and more flexible than a Cirque du Soleil performer? The new SPDK Accelerator Framework!

Many years ago SPDK introduced the “Copy Framework”, a lightweight framework that supported one hardware accelerator, Intel® IOAT, behind a generic API, so that acceleration would be used if available and, if not, the operations would be carried out in software. That has since evolved into what we now call the Accelerator Framework, and we’ve gone from 2 operations and one accelerator to 8 operations and 7 accelerators (including the brand new Intel® Data Streaming Accelerator), with the ability to map operations to specific offload engines either automatically or manually.
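
To make the “generic API with software fallback” idea concrete, here is a minimal sketch of the pattern (illustrative only, not SPDK’s actual accel API): each operation is dispatched through a table of modules, and if no hardware engine claims the opcode, a software implementation runs instead.

    #include <string.h>

    /* Operation codes the framework can dispatch (subset, for illustration). */
    enum accel_opcode { ACCEL_OPC_COPY, ACCEL_OPC_FILL, ACCEL_OPC_CRC32C };

    /* A pluggable engine: reports which opcodes it supports and executes them. */
    struct accel_module {
        const char *name;
        int (*supports)(enum accel_opcode op);
        int (*submit)(enum accel_opcode op, void *dst, const void *src, size_t len);
    };

    /* Software fallback used when no hardware module claims the opcode. */
    static int sw_submit(enum accel_opcode op, void *dst, const void *src, size_t len)
    {
        if (op == ACCEL_OPC_COPY) {
            memcpy(dst, src, len);
            return 0;
        }
        return -1; /* other opcodes omitted in this sketch */
    }

    /* Dispatch: prefer a hardware engine assigned to the opcode, else software. */
    static int accel_submit(struct accel_module **mods, int n_mods,
                            enum accel_opcode op, void *dst, const void *src, size_t len)
    {
        for (int i = 0; i < n_mods; i++)
            if (mods[i]->supports(op))
                return mods[i]->submit(op, dst, src, len);
        return sw_submit(op, dst, src, len);
    }

The real framework does this asynchronously and per I/O channel; the automatic-or-manual mapping mentioned above is essentially the policy for binding an opcode to a module.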

Network Stacks for Storage Developers: A survey of all of the tricks to make your network stack fly

Writing a storage network server based on TCP sockets seems straightforward: you call send() and recv() on the socket and parse the data. But the reality is not so simple. For example, most people know that sockets should be grouped together to make polling the set more efficient via epoll, kqueue, or io_uring. But did you know that it matters which sockets are grouped together, and that there are at least three different ways to group them?
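
As a minimal illustration of what a group is (my sketch, not the talk’s code): a poll group is an epoll instance owned by one thread, and each accepted socket is registered with exactly one group. Which group a given socket should land in is the interesting part.

    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    /* One poll group: an epoll instance owned by a single thread. */
    struct poll_group {
        int epfd;
    };

    int poll_group_create(struct poll_group *g)
    {
        g->epfd = epoll_create1(0);
        return g->epfd < 0 ? -1 : 0;
    }

    /* Assign an accepted, non-blocking socket to this group. */
    int poll_group_add(struct poll_group *g, int sock)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = sock };
        return epoll_ctl(g->epfd, EPOLL_CTL_ADD, sock, &ev);
    }

    /* Poll once: returns the number of ready sockets in this group. */
    int poll_group_poll(struct poll_group *g, struct epoll_event *events)
    {
        return epoll_wait(g->epfd, events, MAX_EVENTS, 0 /* non-blocking poll */);
    }

The three-plus grouping strategies the abstract alludes to differ in how that placement decision is made.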

SPDK and Infrastructure Offload

Infrastructure offload based around NVMe-oF can deliver the performance of direct-attached storage with the benefits and composability of shared storage. The Storage Performance Development Kit (SPDK) is a set of drivers and libraries uniquely positioned to demonstrate how projects like the Infrastructure Programmer Development Kit (IPDK) can provide a vendor-agnostic, high-performance, and scalable framework. This session will discuss how the SPDK NVMe-oF target has evolved to enable infrastructure offload.

Protecting NVMe/TCP PDU Data at 400 Gbps

The Storage Performance Development Kit (SPDK) provides a high-performance NVMe/TCP target that scales very well. As the SPDK NVMe/TCP target gains broader adoption, providing strong error detection by using the data digest to protect NVMe/TCP Protocol Data Units (PDUs) becomes very important. Can the SPDK community implement the data digest efficiently at high network throughputs?
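
For background (my sketch, not from the talk): the NVMe/TCP data digest is a CRC32C over the PDU data, and even with the SSE4.2 CRC32 instruction it touches every byte of every PDU, which is what makes it expensive at 400 Gbps.

    #include <nmmintrin.h>  /* SSE4.2 CRC32 intrinsics; build with -msse4.2 */
    #include <stdint.h>
    #include <string.h>

    /* CRC32C (Castagnoli) over a buffer using the CRC32 instruction.
     * The NVMe/TCP data digest is a CRC32C of the PDU data. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len >= 8) {
            uint64_t v;
            memcpy(&v, buf, 8);                 /* unaligned-safe load */
            crc = (uint32_t)_mm_crc32_u64(crc, v);
            buf += 8;
            len -= 8;
        }
        while (len--)
            crc = _mm_crc32_u8(crc, *buf++);
        return ~crc;
    }

Whether the answer is better software (vectorized CRC, fusing the digest with the data copy) or hardware offload is the question the session raises.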

File System Acceleration using Computational Storage for Efficient Data Storage

We examine the benefits of using computational storage devices such as the Xilinx SmartSSD to offload compression, achieving an ideal compression scheme in which higher compression ratios are reached with lower CPU utilization. Offloading the compute-intensive task of compression frees up the CPU for real customer applications. The scheme proposed in this paper comprises Xilinx Storage Services (XSS) with Xilinx Runtime (XRT) software and an HLS-based GZIP compression kernel that runs on the FPGA.
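
To put the offloaded CPU cost in concrete terms (my baseline sketch, not part of the paper), host-side GZIP compression with zlib looks like the following and consumes host cycles in proportion to the data compressed; the proposal moves this work to an HLS kernel running on the SmartSSD’s FPGA.

    #include <zlib.h>    /* link with -lz */
    #include <stddef.h>

    /* CPU-side GZIP compression with zlib: the compute-intensive baseline that
     * the paper proposes to offload to an FPGA kernel on the SmartSSD.
     * windowBits 15 + 16 asks zlib to emit a GZIP (not raw/zlib) stream. */
    static int gzip_compress_cpu(const unsigned char *in, size_t in_len,
                                 unsigned char *out, size_t *out_len)
    {
        z_stream s = {0};
        if (deflateInit2(&s, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
            return -1;

        s.next_in   = (unsigned char *)in;
        s.avail_in  = (uInt)in_len;
        s.next_out  = out;
        s.avail_out = (uInt)*out_len;

        int rc = deflate(&s, Z_FINISH);     /* one-shot compress */
        *out_len = s.total_out;
        deflateEnd(&s);
        return rc == Z_STREAM_END ? 0 : -1;
    }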

Improving flash storage on Android phones

"There has been tremendous growth in the use of smartphones. Today, there are more than 130 million Android users in the world. Android smartphones leverage flash storage. Flash is great for supporting fast random I/O workloads but due to wear leveling, there is an impact on the life of the underlying flash device. There are strategies that can be used to improve the lifespan of flash storage on smartphones.

Is storage orchestration a headache? Try infrastructure programming

Connecting remote storage to a compute node requires adding and configuring software stacks for “storage over network” on the compute node, and that storage software consumes a significant number of cores there. This complexity and workload can be moved to Infrastructure Processing Unit (IPU) devices. IPU cores enable flexible software that can be further accelerated using hardware offloads for storage workloads. The session will discuss target-agnostic frameworks and storage use cases that may be easier to deliver with devices like the IPU.
