
On Demand Webinars

Persistent Memory Trends
10:00 am PT / 1:00 pm ET

Where do companies see the industry going with regard to persistent memory? With the improvement of SSD and DRAM I/O over CXL, the overlap of CXL and NVMe, high-density persistent memory, and memory-semantic SSDs, there is a lot to talk about! Our moderator and panel of experts from Intel, Marvell, Microchip, and SMART Modular will widen the lens on persistent memory, take a system-level approach, and see how the persistent memory landscape is being redefined.

Download PDF

An Introduction to the OPI (Open Programmable Infrastructure) Project
10:00 am PT / 1:00 pm ET

A new class of cloud and data center infrastructure is emerging into the marketplace. This new infrastructure element, often referred to as a Data Processing Unit (DPU), Infrastructure Processing Unit (IPU), or xPU as a general term, takes the form of a server-hosted PCIe add-in card or on-board chip(s) containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device.

The OPI project has been created to foster the emergence of an open and creative software ecosystem for DPU/IPU-based cloud infrastructure. At this live webcast, experts actively leading this initiative will provide an introduction to the OPI project, discuss OPI workstream definitions and status, and explain how you can get involved.

Download PDF

Read Q&A Blog

Journey to the Center of Massive Data: Digital Twins
10:00 am PT / 1:00 pm ET

Have you ever wondered how intelligent Industry 4.0 factories or smart cities of the future will process massive amounts of sensor and machine data? What you may not expect is that a digital twin will most likely play a role. A digital twin is a virtual representation of an object, system, or process that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning, and reasoning to help decision-making. Digital twins can be used to help answer what-if AI-analytics questions, yield insights on business objectives, and make recommendations on how to control or improve outcomes.

This webinar will introduce digital twin usage in edge IoT applications, highlighting what is available today, what to expect in the next couple of years, and what the future holds. We will provide examples of how digital twin methods help capture virtual representations of real-world entities and processes, discussing:

  • What is driving the edge IoT need
  • Data analytics problems solved by digital twins
  • How digital twins are being used today, tomorrow and beyond
  • Use cases: adaptive agile factories, massive data generation across industries, and system-of-systems processes
  • Why this is a technology and trend that is here to stay

Download PDF

Read Q&A Blog

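The digital-twin concept described above (a virtual model kept in sync with real-time data and queried with what-if scenarios) can be sketched in a few lines of Python. This is purely illustrative: the PumpTwin class, its fields, and the linear temperature model are invented for this example and come from no product or standard.

```python
# Illustrative sketch only: class names, fields, and the physics model
# are invented for this example, not taken from any product or standard.

class PumpTwin:
    """A toy digital twin of a pump, updated from (simulated) sensor data."""

    def __init__(self, max_temp_c=80.0):
        self.max_temp_c = max_temp_c   # operating limit (assumed value)
        self.temp_c = None             # last observed temperature
        self.rpm = None                # last observed speed

    def ingest(self, reading):
        """Update twin state from a real-time sensor reading."""
        self.temp_c = reading["temp_c"]
        self.rpm = reading["rpm"]

    def what_if(self, rpm_delta):
        """Crude what-if model: assume temperature scales with speed."""
        projected_rpm = self.rpm + rpm_delta
        projected_temp = self.temp_c * (projected_rpm / self.rpm)
        return {
            "rpm": projected_rpm,
            "temp_c": projected_temp,
            "within_limits": projected_temp <= self.max_temp_c,
        }

twin = PumpTwin()
twin.ingest({"temp_c": 60.0, "rpm": 3000})
result = twin.what_if(rpm_delta=1500)   # would a 50% speed-up overheat it?
print(result["within_limits"])          # 60 * 1.5 = 90 > 80, so False
```

A production twin would replace the one-line model with simulation or machine learning, as the description above notes, but the ingest/query loop is the same shape.
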
What’s in a Name? Memory Semantics and Data Movement with CXL™ and SDXI
10:00 am PT / 1:00 pm ET

Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization.

SNIA's Smart Data Accelerator Interface (SDXI) Technical Work Group (TWG) is at the forefront of standardizing this. The SDXI TWG is designing an industry-open standard for a memory-to-memory data movement and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. A candidate for the v1.0 SNIA SDXI standard is now in review.

Adjacently, Compute Express Link™ (CXL™) is an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion, and Accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.

In this webcast, we will:

  • Introduce SDXI and CXL
  • Discuss data movement needs in a CXL ecosystem
  • Cover SDXI advantages in a CXL interconnect

Download PDF

Read Q&A Blog

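The shift described above, from software memory copies to accelerator-assisted data movement, can be illustrated with a toy model in Python: instead of performing the copy inline (the memcpy approach), software enqueues copy descriptors that a data-mover engine later consumes. The Descriptor and SoftwareEngine names are invented here, and the real SDXI descriptor layout and ring semantics are defined by the SNIA specification, not this sketch.

```python
# Conceptual sketch of descriptor-based data movement, the model SDXI
# standardizes. All names here are invented for illustration; the actual
# SDXI descriptor format is defined by the SNIA SDXI specification.

from dataclasses import dataclass

@dataclass
class Descriptor:
    src: bytearray     # source buffer
    dst: bytearray     # destination buffer
    length: int        # number of bytes to move

class SoftwareEngine:
    """Stand-in for a data-mover engine that drains a submission ring.

    Real hardware would process descriptors asynchronously; here the
    copies simply run in software when process() is called."""

    def __init__(self):
        self.ring = []          # submission ring (simplified to a list)

    def submit(self, desc):
        self.ring.append(desc)  # producer: software enqueues work

    def process(self):
        while self.ring:        # consumer: engine drains the ring
            d = self.ring.pop(0)
            d.dst[:d.length] = d.src[:d.length]

engine = SoftwareEngine()
src = bytearray(b"persistent-memory payload")
dst = bytearray(len(src))
engine.submit(Descriptor(src, dst, len(src)))
engine.process()
print(dst == src)   # True: same result as a memcpy, but via a queue
```

The point of the queue is that the CPU can submit work and move on, which is what makes an interconnect-independent, forward-compatible descriptor interface worth standardizing.
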
EDSFF Taking Center Stage in the Data Center
9:00 am PT / 12:00 noon EDT

At the 2022 Open Compute Global Summit, OEMs, cloud service providers, hyperscale data centers, and SSD vendors showcased products and their vision for how the family of EDSFF form factors solves real data challenges. In this webcast, SNIA SSD SIG co-chairs Cameron Brett of KIOXIA and Jonmichael Hands of Chia Network explain how having a flexible and scalable family of form factors allows for optimization for different use cases, different media types on SSDs, scalable performance, and improved data center TCO. They'll highlight the latest SNIA specifications that support these form factors, provide an overview of platforms that are EDSFF-enabled, and discuss the future for new product and application introductions.

Download PDF

Kubernetes Trials & Tribulations: Cloud, Data Center, Edge
11:00 am PT / 2:00 pm ET

Kubernetes platforms offer a unique cloud-like experience — all the flexibility, elasticity, and ease of use — on premises, in a private or public cloud, even at the edge. The ease and flexibility of turning services on when you want them and off when you don’t is an enticing prospect for developers as well as application deployment teams, but it has not been without its challenges.

Our Kubernetes panel of experts will debate the challenges and how to address them, discussing:

  • How are all these trends coming together?
  • Is cloud repatriation really a thing?
  • How are traditional hardware vendors reinventing themselves to compete?
  • Where does the data live?
  • How is the data accessed?
  • What workloads are emerging?

Download PDF

Read Q&A Blog

You’ve Been Framed! xPU, GPU & Computational Storage Programming Frameworks
10:00 am PT / 1:00 pm ET

With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.), and computational storage devices for host offload and accelerated processing, a wild west of frameworks is emerging, consolidating, and vying to be the preferred programming software stack that best integrates the application layer with these underlying processing units.

This webcast will provide an overview of programming frameworks that support (1) GPUs (CUDA, SYCL, OpenCL, oneAPI), (2) xPUs (DASH, DOCA, OPI, IPDK), and (3) computational storage (SNIA computational storage API, NVMe TP4091, and FPGA programming shells).

We will discuss strengths, challenges, and market adoption across these programming frameworks as we untangle the alphabet soup of new frameworks, which include:

  • AI/ML: OpenCL, CUDA, SYCL, oneAPI
  • xPU: DOCA, OPI, DASH, IPDK
  • Core data path frameworks: SPDK, DPDK
  • Computational storage: SNIA Standard 0.8 (in public review), TP4091

Download PDF

Read Q&A Blog

15 Minutes in the Cloud: Kubernetes is Evolving, Are You?
10:00 am PT / 1:00 pm ET

Widespread adoption of Kubernetes over the last several years has been remarkable, and Kubernetes is now recognized as the most popular orchestration tool for containerized workloads. As applications and workflows in Kubernetes continue to evolve, so must the platform and storage.

So, where are we today, and where are we going? Find out in this “15 Minutes in the Cloud” session, where we’ll discuss:

  • Persistence – From ephemeral to persistent: what has putting persistence in the mix done to applications?
  • Business continuity – What’s needed for business continuity, backup & recovery, and DR?
  • Deployment – Kubernetes delivered as a service, in the cloud, on-premises, in the data center, and at the edge. How is that different in each case?
  • Performance/Scalability – How do you scale and still ensure performance?
  • Trends – What are the business drivers and what does the future hold?

Download PDF

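In Kubernetes, the move from ephemeral to persistent storage discussed above typically takes concrete form as a PersistentVolumeClaim: the application asks for storage abstractly, and the platform binds it to a volume. A minimal manifest is sketched below; the claim name, requested size, and storage class are placeholders that vary by cluster.

```yaml
# Minimal PersistentVolumeClaim sketch. The name, size, and
# storageClassName are placeholder assumptions, not recommendations.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # single-node read/write volume
  resources:
    requests:
      storage: 10Gi          # requested capacity
  storageClassName: standard # cluster-specific; often omitted for the default class
```

A pod then mounts the claim by name, which is what decouples application deployment (cloud, on-premises, edge) from the underlying storage implementation.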
xPU Deployment and Solutions Deep Dive
10:00 am PT / 1:00 pm ET

Our 1st and 2nd webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this 3rd webcast, we will dive deeper into next steps for xPU deployment and solutions, discussing:

When to deploy

  • Pros and cons of dedicated accelerator chips versus running everything on the CPU
  • xPU use cases across hybrid, multi-cloud and edge environments
  • Cost and power considerations

Where to deploy

  • Deployment operating models: Edge, Core Data Center, CoLo, Public Cloud
  • System location: in the server, with the storage, on the network, or in all those locations?

How to deploy

  • Mapping workloads to hyperconverged and disaggregated infrastructure
  • Integrating xPUs into workload flows
  • Applying offload and acceleration elements within an optimized solution

Download PDF

Kubernetes is Everywhere – What About Cloud Native Storage?
10:00 am PT / 1:00 pm ET

Organizations are adopting containers at an increasingly rapid rate. In fact, there are few organizations that haven’t implemented containers in their environment today.

Storage implications for Kubernetes will be the topic of this live webcast, where storage experts from SNIA and Kubernetes experts from the Cloud Native Computing Foundation (CNCF) will discuss:

  • Key storage attributes of cloud native storage for Kubernetes
  • How do we use cloud native storage in Kubernetes environments?
  • Workloads and real-world use cases

This webcast will help you better understand and address storage and persistent data challenges in a Kubernetes environment.

Download PDF

Read Q&A Blog