Join Jim Handy of Objective Analysis and a panel of industry experts as they delve into Compute Express Link® (CXL), a groundbreaking technology that expands server memory beyond traditional limits and boosts bandwidth. CXL enables seamless memory sharing, reduces costs by optimizing resource utilization, and supports different memory types. Discover how CXL is transforming computing systems with its economic and performance benefits, and explore its future impact across various market segments.
In today's world of AI/ML, high-performance computing, and data centers, RDMA (Remote Direct Memory Access) is essential due to its ability to provide high-speed, low-latency data transfer. This technology enhances scalability, reduces CPU overhead, and ensures efficient handling of massive data volumes, making it crucial for modern computing environments. But how does it actually work?
In this session, we'll define RDMA, trace its historical development, and examine its significant benefits. In addition, we'll provide an in-depth look at RDMA over Converged Ethernet (RoCE) and InfiniBand (IB), covering the fundamental objects, the Verbs API, and the wire protocol.
Key Takeaways
- Insights into the historical development and evolution of RDMA technology
- Comprehensive understanding of RDMA and its role in modern computing
- Detailed exploration of RDMA implementations
- Explanation of fundamental RDMA components, such as the Verbs API and wire protocol (e.g., IB and RoCE)
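To give a feel for the objects the Verbs API exposes, the toy Python model below mimics how a work request posted to a queue pair places data in a remote memory region and completes into a completion queue. All names here are hypothetical stand-ins; real applications use the C libibverbs library, and no actual network transfer happens in this sketch.

```python
from dataclasses import dataclass, field
from collections import deque


@dataclass
class CompletionQueue:
    """Models an RDMA completion queue (CQ): finished work lands here."""
    completions: deque = field(default_factory=deque)

    def poll(self):
        # Analogous to polling the CQ: returns a completion if pending.
        return self.completions.popleft() if self.completions else None


@dataclass
class MemoryRegion:
    """Models a registered memory region (MR) with access rights."""
    buffer: bytearray
    remote_write: bool = False


class QueuePair:
    """Models a queue pair (QP): work requests are posted here."""
    def __init__(self, cq: CompletionQueue):
        self.cq = cq

    def post_rdma_write(self, local: bytes, remote: MemoryRegion, offset: int = 0):
        # An RDMA write bypasses the remote CPU: data is placed directly
        # into the peer's registered memory region, if access allows it.
        if not remote.remote_write:
            raise PermissionError("remote MR not registered for RDMA write")
        remote.buffer[offset:offset + len(local)] = local
        self.cq.completions.append("RDMA_WRITE_COMPLETE")


# Usage: register remote memory, post a write, poll for the completion.
cq = CompletionQueue()
qp = QueuePair(cq)
mr = MemoryRegion(bytearray(16), remote_write=True)
qp.post_rdma_write(b"hello", mr)
print(bytes(mr.buffer[:5]), cq.poll())
```

The key idea the sketch captures is the asynchronous split: posting work and harvesting completions are separate operations, which is what lets RDMA avoid per-transfer CPU involvement.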
As artificial intelligence continues to transform industries, the demands on storage systems are evolving at an unprecedented pace. Join STA for a dynamic roundtable discussion featuring industry experts, including Patrick Kennedy of ServeTheHome, J Metz, Chair of SNIA, and STA board member Jeremiah Tussey. Together, they’ll explore the trends and technologies shaping the future of storage in an AI-driven world and how organizations can prepare for these changes.
What You’ll Learn:
- The impact of AI on storage architectures, including advances in SAS and other technologies.
- How storage scalability and performance requirements are adapting to AI workloads.
- Emerging innovations in storage management and data processing for AI-driven applications.
- Real-world insights into optimizing storage solutions for machine learning and deep learning.
- Predictions for the future of storage in AI, including opportunities and challenges.
Don't miss this opportunity to gain expert insights and join the conversation about the next wave of storage innovation.
Artificial intelligence and machine learning (AI/ML) are a hot topic in every business at the moment, and there is a growing dialog about what constitutes an Open Model: is it the weights? Is it the data?
Those are important questions, but equally important is ensuring that the tooling and frameworks to train, validate, fine-tune, and perform inference are open source. Storage systems are a crucial component of these workflows; how can open-source solutions address the need for high capacity and high performance?
Data is key to any and all AI/ML workflows: without it there would be nothing to use as input for model training, nothing for re-evaluating and refining models, and no way to securely store models once training is complete, especially if they have taken weeks to produce!
Open-source solutions like Ceph can provide almost limitless scaling capabilities, for both performance and capacity. In this webinar, learn how Ceph can be used as the backing store for AI/ML workloads.
We’ll cover:
- The demands of AI on storage systems
- How open source Ceph storage fits into the picture
- How to approach Ceph cluster scaling to meet AI’s needs
- How to get started with Ceph
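As a small taste of the cluster-scaling topic, the sketch below implements a widely cited community rule of thumb for sizing a Ceph pool's placement groups (roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two). This is a heuristic, not official guidance, and recent Ceph releases can manage this automatically via the PG autoscaler.

```python
def suggested_pg_count(num_osds: int, replica_count: int = 3,
                       target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb placement group count for a new Ceph pool.

    Heuristic: ~100 PGs per OSD divided by the pool's replica count,
    rounded up to a power of two (the conventional shape for PG counts).
    Modern Ceph can size pools automatically with the pg_autoscaler.
    """
    raw = num_osds * target_pgs_per_osd / replica_count
    pg = 1
    while pg < raw:
        pg *= 2  # round up to the next power of two
    return pg


# Usage: a 10-OSD cluster with 3x replication.
print(suggested_pg_count(10))  # → 512
```

The point of the heuristic is balance: too few PGs concentrates data and load on a handful of OSDs, while too many inflates per-OSD resource usage, and both hurt the scaling that AI workloads depend on.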

New Memories like MRAM, ReRAM, PCM, or FRAM are vying to replace embedded flash and, eventually, even embedded SRAM. Are there other memory technologies threatened with similar fates? What will the memory market look like in another 20 years?
Catch up on the latest in new memory technologies in this fast-paced, entertaining panel, as we explain what these new memory technologies are, the applications that have already adopted them in the marketplace, their impact on computer architectures and AI, the outlook for important near-term changes, and how economics dictate success or failure.

The SNIA Cloud Storage Technologies Initiative (CSTI) conducted a poll early in 2024 during the live webinar "Navigating Complexities of Object Storage Compatibility," which found that 72% of organizations had encountered incompatibility issues between various object storage implementations. These results prompted a call to action for SNIA to create an open expert community dedicated to resolving these issues and building best practices for the industry.
Since then, SNIA CSTI partnered with the SNIA Cloud Storage Technical Working Group (TWG) and successfully organized, hosted, and completed the first SNIA Cloud Object Storage Plugfest (multi-vendor interoperability testing), co-located at SNIA Developer Conference (SDC), September 2024, in Santa Clara, CA. Participating Plugfest companies included engineers from Dell, Google, Hammerspace, IBM, Microsoft, NetApp, VAST Data, and Versity Software. Three days of Plugfest testing discovered and resolved issues and included a Birds of a Feather (BoF) session to gain consensus on next steps for the industry. Plugfest contributors are now planning two 2025 Plugfest events in Denver in April and Santa Clara in September.
This webinar will share insights into industry best practices, explain the benefits your implementation may gain from improved compatibility, and invite your client- and server-side cloud object storage teams to join us in building momentum. Join us for a discussion on:
- Implications on client applications
- Complexity and variety of APIs
- Access control mechanisms
- Performance and scalability requirements
- Real-world incompatibilities found in various object storage implementations
- Missing or incorrect response headers
- Unsupported API calls and unexpected behavior
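The response-header incompatibilities mentioned above lend themselves to automated checking. The sketch below is a purely illustrative, hypothetical example of the kind of comparison an interoperability harness might perform: diffing the headers an object storage implementation actually returned against the set a client expects. It is not a real Plugfest tool, and the header names are just examples.

```python
def find_header_gaps(expected: dict, actual: dict) -> dict:
    """Report missing or mismatched response headers.

    HTTP header names are case-insensitive, so comparison is done on
    lowercased names. An expected value of None means "must be present,
    any value" (e.g., an ETag whose content we can't predict).
    Illustrative only; not a real interoperability test suite.
    """
    actual_ci = {k.lower(): v for k, v in actual.items()}
    gaps = {}
    for name, want in expected.items():
        got = actual_ci.get(name.lower())
        if got is None:
            gaps[name] = "missing"
        elif want is not None and got != want:
            gaps[name] = f"expected {want!r}, got {got!r}"
    return gaps


# Example: a response that omits ETag and mislabels the content type.
expected = {"ETag": None, "Content-Type": "application/xml"}
actual = {"content-type": "text/plain"}
print(find_header_gaps(expected, actual))
```

Even a simple diff like this surfaces the two problem classes the Plugfest reported: headers that are absent entirely, and headers whose values diverge from what client applications assume.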

- Understand the different types of Network Topologies typically used with AI workloads.
- Identify the nature of traffic for various AI workloads and their impact on networks.
- Learn about the challenges Ethernet faces with AI workloads and the solutions being implemented.
- Explore a specific use case to see how Ethernet addresses bandwidth and congestion issues.

This presentation examines the critical role of storage solutions in optimizing AI workloads, with a primary focus on storage-intensive AI training workloads. We will highlight how AI models interact with storage systems during training, focusing on data loading and checkpointing mechanisms. We will explore how AI frameworks like PyTorch utilize different storage connectors to access various storage solutions. Finally, the presentation will delve into the use of file-based storage and object storage in the context of AI training.
Attendees will:
- Gain a clear understanding of the critical role of storage in AI model training workloads
- Understand how AI models interact with storage systems during training, focusing on data loading and checkpointing mechanisms
- Learn how AI frameworks like PyTorch use different storage connectors to access various storage solutions.
- Explore how file-based storage and object storage are used in AI training
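The checkpointing mechanism mentioned above can be sketched in plain Python. Real training code would typically call something like `torch.save` on a model's state dict; here we use `pickle` plus an atomic rename as a stand-in (the file name and state contents are hypothetical), since the pattern of "write to a temp file, then rename" is what protects the last good checkpoint if the job crashes mid-write.

```python
import os
import pickle
import tempfile


def save_checkpoint(state: dict, path: str) -> None:
    """Write a training checkpoint atomically.

    Serialize to a temp file in the same directory, then rename over
    the target, so a crash mid-write never corrupts the previous
    checkpoint. Stand-in for torch.save in real training code.
    """
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX filesystems


def load_checkpoint(path: str) -> dict:
    """Read a checkpoint back, e.g. to resume an interrupted run."""
    with open(path, "rb") as f:
        return pickle.load(f)


# Usage: persist the step count and some stand-in weights, then resume.
save_checkpoint({"step": 1000, "weights": [0.1, 0.2]}, "ckpt.pkl")
resumed = load_checkpoint("ckpt.pkl")
print(resumed["step"])
```

Checkpoint writes like this are bursty and sequential, which is one reason training workloads stress storage differently during checkpointing than during data loading.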

Unlocking a Sustainable Future for Data Storage
In a world of surging data demands, how can we reduce the environmental toll of storage solutions? Discover the power of the circular economy to reshape the storage industry for a greener tomorrow with this webinar, featuring Jonmichael Hands (Co-Chair SNIA SSD SIG and Board Member, Circular Drive Initiative) and Shruti Sethi (Sr. PM at Microsoft and Leadership Team, Open Compute Project-Sustainability).
Key Highlights:
- Circular Drive Initiative: Rethink the lifecycle of storage devices—from design to end-of-life—to unlock significant environmental benefits.
- Media Sanitization Best Practices: Securely erase data to enable reuse, extend device life, and cut down on e-waste. Explore techniques like:
  - Cryptographic erase
  - Block erase
  - Overwrite methods
- Compliance & Transparency: Learn how standards like IEEE 2883-2022 and ISO/IEC 27040:2024 guide secure data disposal, with organizations like SERI R2 and ADISA leading the charge in setting industry benchmarks.
- Carbon Accounting in Storage: Understand how tracking and reducing carbon emissions in storage aligns with global sustainability goals.
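Of the sanitization techniques listed above, the overwrite method is the simplest to illustrate. The sketch below overwrites a file's contents with random bytes before deleting it. This is a toy illustration only: standards-grade sanitization such as IEEE 2883 operates on the whole device, not a file, because filesystems and SSD flash translation layers may retain stale copies of data, which is why device-level cryptographic or block erase is preferred in practice.

```python
import os
import secrets


def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Toy overwrite sanitization of a single file.

    Replaces the file's contents with random bytes, flushes to stable
    storage, then unlinks it. NOT sufficient for SSDs, where the flash
    translation layer can keep old blocks; device-level commands
    (cryptographic erase, block erase) are the preferred route there.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)


# Usage: create a file with "secret" data, then sanitize it.
with open("demo.bin", "wb") as f:
    f.write(b"top secret payload")
overwrite_and_delete("demo.bin")
print(os.path.exists("demo.bin"))  # → False
```

The gap between this toy and real sanitization is exactly what the standards address: verifiable, device-wide erasure that enables safe reuse instead of shredding.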
This session is your roadmap to driving real change by adopting circular economy principles, embracing advanced sanitization methods, and leveraging carbon accounting to reduce the industry’s environmental footprint.

The key to optimal SAN performance is managing the performance impacts of congestion. The Fibre Channel industry introduced Fabric Notifications in 2021 as a key resiliency mechanism for storage networks, combating congestion, link-integrity problems, and delivery errors. These functions have been implemented across the ecosystem and have enhanced the overall user experience when deploying Fibre Channel SANs. This webinar explores the evolution of Fabric Notifications and the solutions now available for this exciting technology. In this webinar, you will learn the following:
- The state of Fabric Notifications as defined by the Fibre Channel standards.
- The mechanisms and techniques for implementing Fabric Notifications.
- The currently available solutions deploying Fabric Notifications.
