Agentic AI is an emergent and disruptive paradigm beyond generative AI that carries great promise and great risk. With generative AI, a large language model (LLM) generates content in response to a prompt. With agentic AI, the model generates actions to execute on a user's behalf, using tools at its disposal. It has the agency to act for you and even to run autonomously. But how can this be done safely and effectively for enterprises?
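To make that loop concrete, here is a minimal, illustrative sketch of the agentic pattern described above: the model proposes an action (a tool call), a runtime executes it, and the result is fed back until the model returns a final answer. The `call_llm` stub and `get_weather` tool are hypothetical placeholders, not any particular vendor's API.

```python
def get_weather(city: str) -> str:
    """Toy tool the agent can invoke."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list) -> dict:
    """Hypothetical stand-in for a real model API. Here it requests the
    weather tool once, then answers; a real model chooses dynamically."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Santa Clara"}}
    return {"tool": None, "content": messages[-1]["content"]}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)        # model decides: answer or act
        if reply["tool"] is None:
            return reply["content"]       # final answer, loop ends
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the action
        messages.append({"role": "tool", "content": result})
    return "Step limit reached"           # guardrail against runaway autonomy

print(run_agent("What's the weather in Santa Clara?"))
```

The `max_steps` cap is one of the simple guardrails enterprises typically add so that an autonomous loop cannot run unbounded.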
This webinar will introduce agentic AI and cover:
- What is agentic AI?
- What makes it different from generative AI?
- Adoption challenges and what to consider when evaluating solutions for enterprise use
- Sample use case
Join us to understand both the risks and benefits of agentic AI for enterprise use cases.

Explore cutting-edge innovations in solid-state drive (SSD) power efficiency and liquid cooling, designed to mitigate AI workload bottlenecks and reduce Total Cost of Ownership (TCO) in data centers. By focusing on optimizing SSD performance per watt, we can significantly enhance data center sustainability, operational efficiency, and economic viability.
Key topics include:
- NVMe® Power Tuning Techniques: Maximizing performance per watt through advanced power management strategies.
- Liquid-Cooled SSD Technologies: Pioneering advancements in liquid cooling that enable significant reductions in power consumption and facilitate the transition to fan-less server designs.
By examining the latest trends in immersion and liquid cooling technologies, this presentation will demonstrate their critical role in driving data center performance, sustainability, and cost-effectiveness forward. Join us to discover how these advancements can help overcome challenges in high-density AI environments.
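As a back-of-the-envelope illustration of the performance-per-watt metric this presentation centers on, the sketch below compares hypothetical NVMe power states by IOPS delivered per watt drawn. All numbers are invented for illustration, not vendor measurements.

```python
from dataclasses import dataclass

@dataclass
class PowerState:
    name: str
    iops: float    # measured random-read IOPS in this state
    watts: float   # average power draw in this state

# Illustrative numbers only; real values come from drive telemetry.
states = [
    PowerState("PS0 (max performance)", 1_000_000, 12.0),
    PowerState("PS1 (tuned)", 800_000, 7.5),
    PowerState("PS2 (low power)", 400_000, 4.0),
]

for s in states:
    print(f"{s.name}: {s.iops / s.watts:,.0f} IOPS/W")
```

In this made-up example the tuned state delivers more IOPS per watt than full power, which is exactly the TCO argument behind NVMe power tuning.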
SNIA's Cloud Object Storage community organized and hosted the industry's first open multi-vendor compatibility testing event (Plugfest) at SDC’24 in Santa Clara, CA. Most of the participants focused on their S3 server and client implementations. Community-driven testing revealed significant areas for collaboration, including ambiguities among protocol options, access control mechanisms, missing or incorrect response headers, unsupported API calls, and unexpected behavior (as well as sharing best practices and fixes). The success of this event led to community demand for more SNIA Cloud Object Storage Plugfests.
In this webinar, SNIA Cloud Object Storage Plugfest participants will discuss real-world issues hampering interoperability among various S3 implementations, from both server and client perspectives.
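To illustrate the kind of probe Plugfest participants run, here is a minimal sketch using boto3 against an S3-compatible endpoint, checking whether expected response headers come back from a HEAD request. The endpoint URL, credentials, and bucket name are placeholders.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible server under test.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="plugfest-test")
s3.put_object(Bucket="plugfest-test", Key="probe.txt", Body=b"hello")
resp = s3.head_object(Bucket="plugfest-test", Key="probe.txt")

# Raw HTTP headers as returned by the server; implementations differ here.
headers = resp["ResponseMetadata"]["HTTPHeaders"]
for expected in ("etag", "last-modified", "content-length"):
    print(expected, "present" if expected in headers else "MISSING")
```

A missing or malformed `ETag` or `Last-Modified` header is precisely the class of incompatibility the Plugfest surfaced.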

Join Jim Handy of Objective Analysis and a panel of industry experts as they delve into Compute Express Link® (CXL), a groundbreaking technology that expands server memory beyond traditional limits and boosts bandwidth. CXL enables seamless memory sharing, reduces costs by optimizing resource utilization, and supports different memory types. Discover how CXL is transforming computing systems with its economic and performance benefits, and explore its future impact across various market segments.
In today's world of AI/ML, high-performance computing, and data centers, RDMA (Remote Direct Memory Access) is essential due to its ability to provide high-speed, low-latency data transfer. This technology enhances scalability, reduces CPU overhead, and ensures efficient handling of massive data volumes, making it crucial for modern computing environments. But how does it actually work?
In this session, we'll define RDMA, explore its historical development, and examine its significant benefits. In addition, we’ll provide an in-depth look at RDMA over Converged Ethernet (RoCE) and InfiniBand (IB), covering the fundamental objects, the Verbs API, and the wire protocol.
Key Takeaways
- Insights into the historical development and evolution of RDMA technology
- Comprehensive understanding of RDMA and its role in modern computing
- Detailed exploration of RDMA implementations
- Explanation of fundamental RDMA components, such as the Verbs API and wire protocol (e.g., IB and RoCE)
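As a taste of those fundamental RDMA objects, the conceptual sketch below (plain Python, not a real verbs binding) models how they relate: a device context owns protection domains, a protection domain owns memory regions and queue pairs, and completions land on completion queues. Field names are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRegion:          # a pinned buffer registered with the NIC
    addr: int                # buffer start address
    length: int              # bytes accessible via RDMA
    lkey: int                # local key validating local access
    rkey: int                # remote key shared with peers for RDMA READ/WRITE

@dataclass
class CompletionQueue:       # where finished work requests are reported
    completions: list = field(default_factory=list)

@dataclass
class QueuePair:             # the send/receive endpoint of an RDMA connection
    send_cq: CompletionQueue
    recv_cq: CompletionQueue
    state: str = "INIT"      # transitions INIT -> RTR -> RTS before traffic

@dataclass
class ProtectionDomain:      # isolation boundary grouping MRs and QPs
    mrs: list = field(default_factory=list)
    qps: list = field(default_factory=list)
```

The real Verbs API (e.g., libibverbs) exposes these same objects; applications post work requests to a queue pair and poll the completion queue instead of making per-message system calls, which is where RDMA's low CPU overhead comes from.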
As artificial intelligence continues to transform industries, the demands on storage systems are evolving at an unprecedented pace. Join STA for a dynamic roundtable discussion featuring industry experts, including Patrick Kennedy of ServeTheHome, J Metz, Chair of SNIA, and STA board member Jeremiah Tussey. Together, they’ll explore the trends and technologies shaping the future of storage in an AI-driven world and how organizations can prepare for these changes.
What You’ll Learn:
- The impact of AI on storage architectures, including advances in SAS and other technologies.
- How storage scalability and performance requirements are adapting to AI workloads.
- Emerging innovations in storage management and data processing for AI-driven applications.
- Real-world insights into optimizing storage solutions for machine learning and deep learning.
- Predictions for the future of storage in AI, including opportunities and challenges.
Don't miss this opportunity to gain expert insights and join the conversation about the next wave of storage innovation.
Artificial intelligence and machine learning (AI/ML) are a hot topic in every business at the moment, and there is a growing dialog about what constitutes an open model. Is it the weights? Is it the data?
Those are important questions, but equally important is ensuring that the tooling and frameworks used to train, validate, fine-tune, and perform inference are open source. Storage systems are a crucial component of these workflows, so how can open-source solutions address the need for high capacity and high performance?
Data is key to any and all AI/ML workflows; without it there would be nothing to use as input for model training, nothing with which to re-evaluate and refine models, and no way to securely store models once training is complete, especially if they have taken weeks to produce!
Open source solutions like Ceph can provide almost limitless scaling capabilities, both for performance and capacity. In this webinar, learn how Ceph can be used as the backing store for AI/ML workloads.
We’ll cover:
- The demands of AI on storage systems
- How open source Ceph storage fits into the picture
- How to approach Ceph cluster scaling to meet AI’s needs
- How to get started with Ceph
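For a feel of how applications talk to Ceph directly, here is a minimal sketch using the Python librados binding (the `rados` module that ships with Ceph). It assumes a reachable cluster, a readable /etc/ceph/ceph.conf, and an existing pool; the pool and object names are placeholders.

```python
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("ml-data")   # placeholder pool for training artifacts
    try:
        # Store a (tiny, fake) model checkpoint as an object, then read it back.
        ioctx.write_full("checkpoint-epoch-001", b"...model bytes...")
        data = ioctx.read("checkpoint-epoch-001")
        print(f"read back {len(data)} bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Real AI/ML pipelines more often reach Ceph through its S3-compatible RADOS Gateway or CephFS, but the same cluster-scaling story applies either way.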

New memories like MRAM, ReRAM, PCM, and FRAM are vying to replace embedded flash and, eventually, even embedded SRAM. Are there other memory technologies threatened with similar fates? What will the memory market look like in another 20 years?
Catch up on the latest in new memory technologies in this fast-paced, entertaining panel, as we explain what these new memory technologies are, the applications that have already adopted them in the marketplace, their impact on computer architectures and AI, the outlook for important near-term changes, and how economics dictate success or failure.

The SNIA Cloud Storage Technologies Initiative (CSTI) conducted a poll early in 2024 during the live webinar “Navigating Complexities of Object Storage Compatibility,” in which 72% of organizations reported having encountered incompatibility issues between various object storage implementations. These results prompted a call to action for SNIA to create an open expert community dedicated to resolving these issues and building best practices for the industry.
Since then, SNIA CSTI has partnered with the SNIA Cloud Storage Technical Working Group (TWG) and successfully organized, hosted, and completed the first SNIA Cloud Object Storage Plugfest (multi-vendor interoperability testing), co-located with the SNIA Developer Conference (SDC) in September 2024 in Santa Clara, CA. Participating Plugfest companies included engineers from Dell, Google, Hammerspace, IBM, Microsoft, NetApp, VAST Data, and Versity Software. Three days of Plugfest testing discovered and resolved issues and included a Birds of a Feather (BoF) session to gain consensus on next steps for the industry. Plugfest contributors are now planning two 2025 Plugfest events: Denver in April and Santa Clara in September.
This webinar will share insights into industry best practices, explain the benefits your implementation may gain from improved compatibility, and welcome your cloud object storage client and server teams to join us in building momentum. Join us for a discussion on:
- Implications for client applications
- Complexity and variety of APIs
- Access control mechanisms
- Performance and scalability requirements
- Real-world incompatibilities found in various object storage implementations
- Missing or incorrect response headers
- Unsupported API calls and unexpected behavior
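As a concrete example of the last two items, the hedged sketch below probes an S3-compatible endpoint for an optional API (object tagging) and records how the server responds; servers that omit the call typically return a NotImplemented error. Endpoint, credentials, and names are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials for the implementation under test.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

try:
    s3.put_object_tagging(
        Bucket="plugfest-test",
        Key="probe.txt",
        Tagging={"TagSet": [{"Key": "team", "Value": "plugfest"}]},
    )
    print("object tagging: supported")
except ClientError as err:
    # An unsupported call commonly surfaces as NotImplemented (HTTP 501).
    print("object tagging:", err.response["Error"].get("Code", "unknown"))
```

Running a battery of such probes against many implementations is, in miniature, what a Plugfest does.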

- Understand the different types of network topologies typically used with AI workloads.
- Identify the nature of traffic for various AI workloads and their impact on networks.
- Learn about the challenges Ethernet faces with AI workloads and the solutions being implemented.
- Explore a specific use case to see how Ethernet addresses bandwidth and congestion issues.
