With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.) and computational storage devices for host offload and accelerated processing, a panoramic wild west of frameworks is emerging, consolidating, and vying to become the preferred programming software stack that best integrates the application layer with these underlying processing units.
This webcast will provide an overview of programming frameworks that support (1) GPUs (CUDA, SYCL, OpenCL, oneAPI), (2) xPUs (DASH, DOCA, OPI, IPDK), and (3) computational storage (the SNIA Computational Storage API, NVMe TP4091, and FPGA programming shells).
We will discuss the strengths, challenges, and market adoption of these programming frameworks as we untangle the alphabet soup of new frameworks, including:
- AI/ML: OpenCL, CUDA, SYCL, oneAPI
- xPU: DOCA, OPI, DASH, IPDK
- Core data path frameworks: SPDK, DPDK
- Computational Storage: SNIA Standard 0.8 (in public review), TP4091
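Different as these frameworks are, most share a common host/device offload pattern: the host copies buffers toward the device, enqueues a kernel asynchronously, and later synchronizes to collect results. The sketch below illustrates that pattern in plain Python; the names (`offload`, `vector_add`) and the thread-pool "device" are illustrative stand-ins, not any framework's real API.

```python
from concurrent.futures import ThreadPoolExecutor

def vector_add(a, b):
    """The 'kernel': elementwise add, run off the main thread."""
    return [x + y for x, y in zip(a, b)]

def offload(kernel, *buffers):
    """Hand buffers to the 'device' queue and launch asynchronously."""
    device = ThreadPoolExecutor(max_workers=1)  # stand-in for a device queue
    future = device.submit(kernel, *buffers)    # asynchronous kernel launch
    device.shutdown(wait=False)                 # host keeps running
    return future                               # .result() is the sync point

handle = offload(vector_add, [1, 2, 3], [10, 20, 30])
print(handle.result())  # host blocks here, like a device-synchronize call
# prints [11, 22, 33]
```

In CUDA, SYCL, or OpenCL the same three steps appear as explicit buffer transfers, a kernel enqueue, and a queue or stream synchronization.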

Widespread adoption of Kubernetes over the last several years has been remarkable, and Kubernetes is now recognized as the most popular orchestration tool for containerized workloads. As applications and workflows in Kubernetes continue to evolve, so must the platform and storage.
So, where are we today, and where are we going? Find out in this “15 Minutes in the Cloud” session, where we’ll discuss:
- Persistence - From ephemeral to persistent - what has putting persistence in the mix done to applications?
- Business Continuity - What’s needed for business continuity, backup & recovery and DR?
- Deployment - Kubernetes delivered as a service, in the cloud, on-premises, data center and edge. How is that different in each case?
- Performance/Scalability – How do you scale and still ensure performance?
- Trends – What are the business drivers and what does the future hold?

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this third webcast, we will dive deeper into the next steps for xPU deployment and solutions, discussing:
When to deploy
- Pros and cons of dedicated accelerator chips versus running everything on the CPU
- xPU use cases across hybrid, multi-cloud and edge environments
- Cost and power considerations
Where to deploy
- Deployment operating models: Edge, Core Data Center, CoLo, Public Cloud
- System location: In the server, with the storage, on the network, or in all those locations?
How to deploy
- Mapping workloads to hyperconverged and disaggregated infrastructure
- Integrating xPUs into workload flows
- Applying offload and acceleration elements within an optimized solution

Organizations are adopting containers at an increasingly rapid rate. In fact, there are few organizations that haven’t implemented containers in their environment today.
Storage implications for Kubernetes will be the topic of this live webcast where storage experts from SNIA and Kubernetes experts from the Cloud Native Computing Foundation (CNCF) will discuss:
- Key storage attributes of cloud native storage for Kubernetes
- How cloud native storage is used in Kubernetes environments
- Workloads and real-world use cases
This webcast will help you better understand and address storage and persistent data challenges in a Kubernetes environment.
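As a concrete reference point for that discussion: in Kubernetes, persistent data is requested through a PersistentVolumeClaim, whose fields capture the key storage attributes (access mode, capacity, storage class). The sketch below shows the shape of such a claim as a Python dict; the names `app-data` and `fast-ssd` are invented examples, not real defaults.

```python
# Minimal sketch of what a PersistentVolumeClaim manifest carries.
# "app-data" and "fast-ssd" are hypothetical example names.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # single-node read/write
        "storageClassName": "fast-ssd",    # selects a provisioner
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
```

The storage class is where cloud native storage providers plug in: the same claim can be satisfied by different backends depending on which provisioner the class names.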

Edge is the new frontier of compute and data in today’s world, driven by the explosive growth of mobile devices, work from home, digital video, smart cities, and connected cars. An increasing percentage of data is generated and processed at the edge of the network. With this trend comes the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center. This webcast will cover:
- The increasing need to do more at the edge across compute, storage and networking
- The rise of intelligent edge locations
- Different solutions that provide faster processing or data movement at the edge
- How computational storage can speed up data processing and transmission at the edge
- Security considerations for edge processing
We look forward to having you join us to cover all this and more. We promise to keep you on the edge of your virtual seat!

Which is a more secure way of removing data from a hard drive: putting it through a shredder, or performing an instant secure erase? The answer might surprise you! Companies go to great lengths to secure their data and prevent confidential information from being made available to others. When a company is done using its ICT equipment, including the storage devices, it is important to render the data inaccessible. Sanitization is a process or method to render access to target data on storage media infeasible for a given level of effort. SSDs and HDDs have various security features that make this sanitization quick, secure, and verifiable.
In this webcast, we will go over the different types of sanitization defined in the new IEEE P2883 Specification for Sanitization of Storage and cover easy ways to perform “Clear,” “Purge,” and “Destruct” in mainstream storage interfaces like SATA, SAS, and NVMe. We will discuss recommendations for verifying sanitization to ensure that devices meet stringent requirements, and explain how the purge technique for media sanitization can be quick, secure, reliable, and verifiable - and, most importantly, keeps the device in one piece.
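The verification step can be pictured as sampling the media after a purge and confirming that none of the original data survives. The sketch below simulates that with a `bytearray` standing in for the device; on real hardware the purge itself would be issued through the storage interface (for example, an NVMe Sanitize command), and `purge` here is only a hypothetical stand-in.

```python
import secrets

MEDIA_SIZE = 1 << 20                                 # 1 MiB stand-in "device"
media = bytearray(secrets.token_bytes(MEDIA_SIZE))   # user data present

def purge(device):
    """Hypothetical stand-in for an overwrite/crypto-erase purge."""
    device[:] = bytes(len(device))                   # media reads back as zeros

def verify_sanitized(device, samples=1024, block=512):
    """Read back randomly sampled blocks; all must be zeroed."""
    for _ in range(samples):
        off = secrets.randbelow(len(device) - block)
        if any(device[off:off + block]):
            return False
    return True

purge(media)
print(verify_sanitized(media))  # prints True
```

Real verification guidance additionally covers which regions to sample (user-addressable and, where possible, reserved areas) and how much sampling a given assurance level requires.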

In our first webcast, “SmartNICs and xPUs: Why is the Use of Accelerators Accelerating,” we discussed the trend to deploy dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC, DPU, IPU, APU, and NAPU.
This second webcast in the series takes a deeper dive into the accelerator offload functions of the xPU. We’ll discuss what problems xPUs are designed to solve, where in the system they live, and the functions they implement, focusing on:
- Network Offloads
- Security Offloads
- Compute Offloads
- Storage Offloads

As applications continue to increase in complexity and users demand more from their workloads, there is a renewed trend to deploy dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by many names, such as SmartNIC, DPU, IPU, APU, and NAPU. How are they different from GPUs, TPUs, and CPUs? xPUs accelerate and offload functions including math, networking, storage, cryptography, security, and management. This webcast will cover key topics about, and clarify questions surrounding, xPUs.

The complex and changeable structure of edge computing, together with its network connections, massive real-time data, challenging operating environments, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. This panel of experts will explore these challenges and wade into the debate as to whether existing security practices and standards are adequate for this emerging area of computing. Join us for a discussion that will:
- Explain the key security issues associated with edge computing
- Identify potentially relevant standards and industry guidance (e.g., IoT security)
- Raise awareness of new security initiatives focused on edge computing

This presentation will define the data protection landscape, particularly in the context of modern cloud-native containerized applications. It will describe the various constructs and capabilities of these applications and articulate the data protection challenges they pose. We will also cover backup and recovery solution considerations for scale, performance, optimization, and protection from ransomware attacks.
