
On Demand Webinars

8:00 am PT / 11:00 am ET

In this webinar, we’ll introduce the PCI Express (PCIe) interface, an essential component of AI systems thanks to its high bandwidth and low latency over short distances. We’ll cover the basics, then discuss the technology’s evolution, requirements, and benefits. We’ll examine PCIe Gen6 features including bandwidth, AER (Advanced Error Reporting), DPC (Downstream Port Containment), and its various modes and signaling techniques. We will also cover PCIe switching and its applications for AI. Join us for an insightful discussion on:

  • What PCIe is and why it matters
  • Discovery/Enumeration and Hot-plugging
  • NTB (Non-Transparent Bridging) applications and uses
  • Shared credit/receive buffers
  • Selective NAK vs. Standard NAK for Flit replay
  • PCIe Gen6 Enhancements
  • PCIe Switch and applications for AI
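To put the Gen6 bandwidth discussion in concrete terms, here is a small sketch that computes raw per-link PCIe bandwidth across generations. The transfer rates and encodings come from the published PCIe specifications; the figures are raw line bandwidth, before flit and packet protocol overhead.

```python
# Illustrative sketch: raw one-direction PCIe bandwidth by generation.
# Figures are line bandwidth before flit/packet protocol overhead.

# generation -> (transfer rate in GT/s, encoding efficiency)
PCIE_GENS = {
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),     # 8b/10b encoding
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
    6: (64.0, 1.0),       # PAM4 + flit mode: no line-encoding overhead
}

def link_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/s for a generation/lane-width combo."""
    rate_gts, efficiency = PCIE_GENS[gen]
    return rate_gts * efficiency / 8 * lanes  # 8 bits per byte

print(f"Gen5 x16: {link_bandwidth_gbps(5):.1f} GB/s")  # ~63.0 GB/s
print(f"Gen6 x16: {link_bandwidth_gbps(6):.1f} GB/s")  # 128.0 GB/s
```

The doubling from Gen5 to Gen6 comes from PAM4 signaling at the same symbol rate, which is part of why Gen6 matters so much for AI interconnects.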

Download PDF

9:00 am PT / 12:00 pm ET

The rapid growth of AI is driving unprecedented demand for high-bandwidth, high-capacity memory solutions. Compute Express Link (CXL) is emerging as a key enabler in 2025, unlocking new levels of memory scalability and efficiency. Join experts as they discuss how CXL-based solutions are addressing memory bottlenecks and capacity constraints in modern data centers.

Attendees will:
  • Learn about real-world applications and customer platforms using CXL
  • Gain insights into benchmarks for CXL performance and latency
  • Understand how OEM customers are leveraging CXL solutions
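As a feel for the performance-and-latency trade-off the panel will benchmark, here is a toy model of average load latency when a fraction of accesses land in CXL-attached memory. The latency figures are illustrative assumptions for the sketch, not measurements: roughly 100 ns for local DRAM and 200 ns for CXL.mem-attached memory.

```python
# Toy model of memory tiering with CXL-attached capacity.
# Latency numbers are illustrative assumptions, not benchmark results.

LOCAL_NS = 100.0  # assumed local DRAM load-to-use latency
CXL_NS = 200.0    # assumed CXL.mem-attached memory latency

def avg_latency_ns(fraction_on_cxl: float) -> float:
    """Weighted average load latency when a fraction of accesses hit CXL."""
    return (1 - fraction_on_cxl) * LOCAL_NS + fraction_on_cxl * CXL_NS

# With good page placement, putting a cold 20% of accesses on CXL memory
# raises average latency only modestly while greatly expanding capacity:
print(avg_latency_ns(0.20))  # 120.0
```

This is the basic argument for CXL memory expansion: capacity grows well beyond DIMM slot limits while hot data stays local.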

Download PDF


10:00 am PT / 1:00 pm ET

During SNIA's Cloud Object Storage (COS) multi-vendor compatibility plugfests, it became apparent to the community that a lack of commonly accepted open-source COS test tools is one of the more obvious pain points for the industry. The consensus from the plugfest participants is that we need to collaborate on test tools because often tools are proprietary and biased, and given the number of tools, it is nearly impossible to test them all. Participants found that working together, using other people's test code and results, was highly beneficial, providing broader coverage vs. testing solely with their own tools.

With this ongoing COS Plugfest effort, SNIA member companies aim to create and publish COS interoperability test software suites that help automate protocol compatibility testing for popular industry cloud object storage APIs, wire protocols, and services. If you’re curious about cross-vendor S3 compatibility, join us to learn about the work within our community. 
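To illustrate the kind of check such an interoperability suite might automate, here is a hypothetical sketch that verifies an S3-style response carries the headers clients commonly depend on. The header names are from the S3 wire protocol; the helper itself is invented for illustration and is not part of any SNIA test suite.

```python
# Hypothetical sketch of a cross-vendor S3 compatibility check:
# flag required response headers that a vendor's implementation omits.
# This helper is illustrative, not part of any published SNIA suite.

REQUIRED_HEADERS = {"etag", "x-amz-request-id"}

def missing_s3_headers(response_headers: dict) -> set:
    """Return required S3 response headers absent from a vendor's response."""
    present = {name.lower() for name in response_headers}  # headers are case-insensitive
    return REQUIRED_HEADERS - present

# A vendor response missing the request-id header:
print(missing_s3_headers({"ETag": '"abc123"', "Content-Length": "0"}))
# -> {'x-amz-request-id'}
```

Running checks like this against many vendors' endpoints is exactly the sort of shared, unbiased coverage the plugfest participants called for.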

Download PDF


10:00 am PT / 1:00 pm ET

Agentic AI is an emergent and disruptive paradigm beyond generative AI that carries great promise and great risk. With generative AI, a large language model (LLM) generates content as the result of a prompt. With agentic AI, the language model now generates actions to execute on behalf of a user using tools at its disposal. It has agency to act on your behalf and even run autonomously.  But how can this be done safely and effectively for enterprises?

This webinar will introduce Agentic AI and cover:

  • What is agentic AI?
  • What makes it different from generative AI?
  • Adoption challenges and what to consider when evaluating solutions for enterprise use
  • Sample use case
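The difference from generative AI described above can be sketched as a minimal agent loop: the model proposes an action, the runtime executes a matching tool, and the result is fed back until the model decides it is done. The "model" here is a stub and the tool names are invented for illustration; a real system would call an LLM and enforce safety policies around each action.

```python
# Minimal toy of an agentic loop: the model emits actions, not just content.
# fake_model stands in for an LLM; the tool registry is invented for illustration.

def fake_model(task: str, observations: list) -> dict:
    """Stand-in for an LLM: returns the next action as structured output."""
    if not observations:
        return {"tool": "search", "arg": task}
    return {"tool": "finish", "arg": observations[-1]}

TOOLS = {"search": lambda q: f"top result for {q!r}"}

def run_agent(task: str) -> str:
    observations = []
    while True:
        action = fake_model(task, observations)
        if action["tool"] == "finish":                  # model decides it is done
            return action["arg"]
        result = TOOLS[action["tool"]](action["arg"])   # execute on the user's behalf
        observations.append(result)

print(run_agent("enterprise AI policy"))
```

The enterprise risk discussed in the webinar lives in that execute step: the loop acts autonomously, so each tool call needs guardrails.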

Join us to understand both the risks and benefits of Agentic AI for enterprise use cases.

Download PDF

Agentic AI Q&A Blog


10:00 am PT / 1:00 pm ET

Explore cutting-edge innovations in solid-state drive (SSD) power efficiency and liquid cooling, designed to mitigate AI workload bottlenecks and reduce Total Cost of Ownership (TCO) in data centers. By focusing on optimizing SSD performance per watt, we can significantly enhance data center sustainability, operational efficiency, and economic viability.

Key topics include:

  • NVMe® Power Tuning Techniques: Maximizing performance per watt through advanced power management strategies.
  • Liquid-Cooled SSD Technologies: Pioneering advancements in liquid cooling that enable significant reductions in power consumption and facilitate the transition to fan-less server designs.
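The performance-per-watt framing above reduces to simple arithmetic; here is a back-of-the-envelope sketch. The IOPS and wattage figures are illustrative assumptions, not measurements of any product.

```python
# Back-of-the-envelope performance-per-watt comparison for SSD power tuning.
# All numbers below are illustrative assumptions, not product measurements.

def iops_per_watt(iops: float, watts: float) -> float:
    """The efficiency metric the talk focuses on: useful work per watt."""
    return iops / watts

baseline = iops_per_watt(1_000_000, 20.0)  # assumed SSD: 1M IOPS at 20 W
tuned = iops_per_watt(950_000, 14.0)       # assumed power-tuned state

print(f"baseline: {baseline:,.0f} IOPS/W")  # 50,000 IOPS/W
print(f"tuned:    {tuned:,.0f} IOPS/W")
```

Under these assumed numbers, giving up 5% of peak IOPS buys roughly a third more work per watt — the kind of trade-off NVMe power states make tunable.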

By examining the latest trends in immersion and liquid cooling technologies, this presentation will demonstrate their critical role in driving data center performance, sustainability, and cost-effectiveness forward. Join us to discover how these advancements can help overcome challenges in high-density AI environments.

Download PDF

Read the Q&A Blog

10:00 am PT / 1:00 pm ET

SNIA's Cloud Object Storage community organized and hosted the industry's first open multi-vendor compatibility testing event (Plugfest) at SDC’24 in Santa Clara, CA. Most participants focused on their S3 server and client implementations. Community-driven testing revealed significant areas for collaboration, including ambiguities among protocol options, access control mechanisms, missing or incorrect response headers, unsupported API calls, and unexpected behavior (as well as opportunities to share best practices and fixes). The success of this event led to community demand for more SNIA Cloud Object Storage Plugfests.

In this webinar, SNIA Cloud Object Storage Plugfest participants will discuss real-world issues hampering interoperability among various S3 implementations, from both server and client perspectives.

Download PDF

10:00 am PT / 1:00 pm ET

Join Jim Handy of Objective Analysis and a panel of industry experts as they delve into Compute Express Link® (CXL), a groundbreaking technology that expands server memory beyond traditional limits and boosts bandwidth. CXL enables seamless memory sharing, reduces costs by optimizing resource utilization, and supports different memory types. Discover how CXL is transforming computing systems with its economic and performance benefits, and explore its future impact across various market segments.


Download PDF

10:00 am PT / 1:00 pm ET

In today's world of AI/ML, high-performance computing, and data centers, RDMA (Remote Direct Memory Access) is essential due to its ability to provide high-speed, low-latency data transfer. This technology enhances scalability, reduces CPU overhead, and ensures efficient handling of massive data volumes, making it crucial for modern computing environments. But how does it actually work?

In this session, we'll define RDMA, explore its historical development, and review its significant benefits. We’ll also take an in-depth look at RDMA over Converged Ethernet (RoCE) and InfiniBand (IB), covering the fundamental objects, the Verbs API, and the wire protocol.

Key Takeaways

  • Insights into the historical development and evolution of RDMA technology
  • Comprehensive understanding of RDMA and its role in modern computing
  • Detailed exploration of RDMA implementations
  • Explanation of fundamental RDMA components, such as the Verbs API and wire protocols (e.g., IB and RoCE)
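The key idea behind RDMA's low CPU overhead — one-sided operations against a registered memory region, gated by a remote key — can be illustrated with a pure-Python toy. The real interface is the libibverbs C API (memory registration, posted work requests, completion queues); this sketch only mimics the semantics and is not RDMA code.

```python
# Pure-Python toy mimicking one-sided RDMA WRITE semantics: a peer registers
# a memory region and advertises an rkey; a remote writer can then place data
# without the target's CPU touching the data path. The real interface is the
# libibverbs C API; this is only a conceptual sketch.

import secrets

class MemoryRegion:
    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.rkey = secrets.randbits(32)  # remote key advertised to peers

def rdma_write(mr: MemoryRegion, rkey: int, offset: int, data: bytes):
    """Remote write: succeeds only with the correct rkey, as in real RDMA."""
    if rkey != mr.rkey:
        raise PermissionError("invalid rkey")
    mr.buf[offset:offset + len(data)] = data  # data lands directly in the region

mr = MemoryRegion(64)
rdma_write(mr, mr.rkey, 0, b"hello")  # peer writes directly into the region
print(bytes(mr.buf[:5]))  # b'hello'
```

The rkey check is what makes one-sided access safe: memory is only reachable remotely after explicit registration, a concept the Verbs API portion of the session covers in depth.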


Download PDF

Read Q&A Blog


10:00 am PT / 1:00 pm ET

As artificial intelligence continues to transform industries, the demands on storage systems are evolving at an unprecedented pace. Join STA for a dynamic roundtable discussion featuring industry experts, including Patrick Kennedy of ServeTheHome, J Metz, Chair of SNIA, and STA board member Jeremiah Tussey. Together, they’ll explore the trends and technologies shaping the future of storage in an AI-driven world and how organizations can prepare for these changes.

What You’ll Learn:

  • The impact of AI on storage architectures, including advances in SAS and other technologies.
  • How storage scalability and performance requirements are adapting to AI workloads.
  • Emerging innovations in storage management and data processing for AI-driven applications.
  • Real-world insights into optimizing storage solutions for machine learning and deep learning.
  • Predictions for the future of storage in AI, including opportunities and challenges.

Don't miss this opportunity to gain expert insights and join the conversation about the next wave of storage innovation.

Download PDF of Slides

10:00 am PT / 1:00 pm ET

Artificial intelligence and machine learning (AI/ML) are a hot topic in every business at the moment, and there is a growing dialog about what constitutes an open model: is it the weights? Is it the data?

Those are important questions, but equally important is ensuring that the tooling and frameworks used to train, validate, fine-tune, and perform inference are open source. Storage systems are a crucial component of these workflows; how can open-source solutions address the need for high capacity and high performance?

Data is key to any AI/ML workflow: without storage there would be no inputs for model training, no way to re-evaluate and refine models, and nowhere to securely store models once training is complete, especially if they have taken weeks to produce!

Open-source solutions like Ceph can provide almost limitless scaling, in both performance and capacity. In this webinar, learn how Ceph can be used as the backing store for AI/ML workloads.

We’ll cover:

  • The demands of AI on storage systems
  • How open source Ceph storage fits into the picture
  • How to approach Ceph cluster scaling to meet AI’s needs
  • How to get started with Ceph
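Approaching cluster scaling starts with capacity math, which can be sketched in a few lines: usable capacity under replication vs. erasure coding (k data chunks plus m coding chunks). The formulas follow standard Ceph data-protection arithmetic; the raw capacity figure is an assumption for illustration.

```python
# Capacity-planning sketch for sizing a Ceph cluster: usable capacity under
# 3x replication vs. erasure coding (k data + m coding chunks).
# The raw capacity figure below is an assumption for illustration.

def usable_replicated(raw_tb: float, replicas: int = 3) -> float:
    """Each object is stored `replicas` times, so usable = raw / replicas."""
    return raw_tb / replicas

def usable_erasure_coded(raw_tb: float, k: int = 4, m: int = 2) -> float:
    """k data chunks out of every k+m stored chunks hold user data."""
    return raw_tb * k / (k + m)

raw = 1000.0  # assumed 1 PB of raw disk across the cluster
print(f"3x replication: {usable_replicated(raw):.0f} TB usable")
print(f"EC 4+2:         {usable_erasure_coded(raw, 4, 2):.0f} TB usable")
```

The capacity advantage of erasure coding is one of the levers that lets Ceph scale economically to the dataset sizes AI training demands, at some cost in small-write performance.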

Download PDF

Read Q&A Blog

Ceph Storage in a World of AI/ML Workloads