Getting started with AI in the enterprise is less about training a large language model from scratch and more about selecting a model and fine-tuning it to do what you actually need. The massive compute runs that cost hundreds of millions of dollars and consume months of GPU time? That work has already been done by OpenAI, Meta, Google, Anthropic, and Mistral. For enterprise teams, the work — and the opportunity — begins after model selection.
That’s where post-training comes in — and it’s the natural area for enterprise practitioners to develop expertise. Post-training is the collection of techniques used to take a general-purpose model and make it follow instructions, align with your organization’s preferences, reason through domain-specific problems, and integrate with your tools and workflows. This is where your infrastructure decisions directly affect model quality, where storage and compute choices create real differentiation, and where your team has genuine agency.
This webinar provides a practitioner-oriented overview of the full model development pipeline, with a clear focus on the stages that matter to enterprise teams. We’ll briefly establish the complete taxonomy — training, mid-training, and post-training — so you have the right mental model, then spend the majority of the session on the techniques you’ll actually use or influence: supervised fine-tuning, reinforcement learning from human feedback, parameter-efficient adaptation, and the state-of-the-art reasoning techniques (GRPO, RLVR) behind models like DeepSeek R1 and OpenAI’s o-series.
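To ground one of those techniques, here is a minimal sketch of parameter-efficient adaptation with LoRA, using the Hugging Face transformers and peft libraries; the checkpoint name and hyperparameters are illustrative assumptions, not recommendations from the session.

```python
# A minimal sketch of parameter-efficient adaptation (LoRA) with the
# Hugging Face transformers and peft libraries. The checkpoint and the
# hyperparameters below are illustrative choices, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "HuggingFaceTB/SmolLM2-360M"   # any open causal-LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all the weights,
# which is what makes post-training tractable on enterprise hardware.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, Llama-style naming
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()        # typically well under 1% of total weights
```

The design point is the last line: because only the adapters train, the storage and compute footprint of a post-training run is a small fraction of the base model's.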
See Post-Training in Action — Live in Your Browser
What actually happens when you fine-tune a language model? In this live demo, we take a 360-million-parameter model that knows nothing about storage and walk it through the full post-training pipeline — supervised fine-tuning, preference optimization, and reinforcement learning — until it can classify storage I/O workloads on sight.
Every training curve, every weight update, every generation is real. And at the end, you'll run the models yourself — right in your browser, no GPU required — and see the difference that post-training makes with your own eyes. Built on HuggingFace SmolLM2, trained on storage I/O patterns, and running entirely client-side.
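For a flavor of the preference-optimization stage of such a pipeline, here is a hedged sketch using the trl library's DPO trainer; the preference pairs are invented stand-ins for the demo's storage I/O data, and argument names vary across trl versions (older releases use tokenizer= instead of processing_class=).

```python
# A hedged sketch of a preference-optimization (DPO) step with trl.
# Dataset rows are invented examples, not the demo's actual training data.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "HuggingFaceTB/SmolLM2-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token   # ensure a pad token exists
model = AutoModelForCausalLM.from_pretrained(base)

# DPO expects prompt/chosen/rejected triples.
pairs = Dataset.from_dict({
    "prompt":   ["Pattern: 4K random reads at queue depth 32. Workload?"],
    "chosen":   ["OLTP database: small random reads at high queue depth."],
    "rejected": ["Sequential backup stream."],
})

trainer = DPOTrainer(
    model=model,                            # a frozen reference copy is made internally
    args=DPOConfig(output_dir="dpo-demo", max_steps=10),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```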
As Agentic AI moves beyond foundational theory into real-world deployment, the focus shifts to the architectural and operational enablers that make scalable intelligence possible. This session explores the emergence of Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication as critical components for orchestrating distributed, autonomous systems—laying the groundwork for the Agentic Mesh, where agents collaborate, adapt, and self-optimize across domains.
We’ll examine how Agentic AI operates in, on, with, and for storage systems—transforming them from passive data repositories into active participants in decision-making and workflow execution. From client-side perspectives, we’ll discuss how agentic interactions reshape expectations around latency, autonomy, and data locality, and what this means for future-ready infrastructure.
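To make the MCP piece concrete, here is a minimal sketch of a storage-side tool server built with the MCP Python SDK; the tier_utilization tool and its stub data are hypothetical illustrations, not part of the session.

```python
# A minimal sketch of exposing a storage-side capability to agents over the
# Model Context Protocol, using the Python SDK's FastMCP helper.
# The tier_utilization tool and its stub values are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-agent")

@mcp.tool()
def tier_utilization(tier: str) -> dict:
    """Report capacity and latency for a storage tier (stub data)."""
    # A real deployment would query the array's management API here.
    return {"tier": tier, "used_pct": 71.4, "p99_latency_ms": 1.8}

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an agent host can discover the tool
```

A server like this is what turns a storage system from a passive repository into a participant an agent can interrogate and act on.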
Read the Q&A blog: https://www.snia.org/blog/2026/how-agentic-ai-transforms-role-storage-qa
As AI workloads continue to scale, the physical infrastructure supporting them must evolve to deliver the performance, reliability, and efficiency required by modern data-intensive applications. This webinar will focus on the interconnect foundations of AI infrastructure, including storage, with an emphasis on the form factor, connector, cable, and transceiver standards that enable scalable and interoperable systems.
We’ll explore how SNIA’s SFF Technical Work Group is contributing to the development of physical layer standards that support high-performance interconnects and storage devices used in AI environments. Topics will include the importance of transceiver innovation and the need for these technologies to keep evolving; the role that form factors like EDSFF play; and how innovative cabling and connector designs support the adoption of high-speed signaling technologies like PCIe 8.0 and beyond.
Read the webinar Q&A blog: "Q&A: 400G and PCIe 8.0 in Next-Gen Interconnects" https://www.snia.org/blog/2026/qa-400g-and-pcie-80-next-gen-interconnects
Changes and adaptations in storage security are a fact of life, driven by ever-shifting threat and regulatory landscapes. A black swan event, a new standard, or a new regulation often serves as the catalyst for organizations to review their controls and practices. The recent publication of NIST Special Publication 800-88 Rev. 2, Guidelines for Media Sanitization, is drawing attention to storage security, but it is not the only development worth noting. SNIA, the Open Compute Project (OCP), the Trusted Computing Group (TCG), and IEEE have developed, or are developing, standards and specifications that could be important going forward. One major theme underlying several of these activities is storage sanitization (i.e., the controlled eradication of data).
This session brings together a unique panel of experts who have served as editors of some of the most important storage sanitization standards and who bring broad knowledge of storage security. As a panel, they will provide insight into these new developments, as well as observations on what organizations are experiencing.
Smarter data management is the key to unlocking the full potential of next-generation data infrastructure for AI. This webinar explores how the SNIA Cloud Data Management Interface (CDMI™) standard enables government labs, HPC centers, and organizations leveraging the power of AI/ML to reimagine and innovate their infrastructure to meet next-generation data management demands. CDMI is a mature ISO standard (ISO/IEC 17826:2022) with over 15 years of development and implementation, and is purpose-built for cloud data management. CDMI's extensive capabilities enhance key aspects of AI operations by transforming data management challenges into streamlined, intelligent workflows.
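For a sense of what that looks like on the wire, here is a hedged sketch of reading a CDMI container over plain HTTP, per ISO/IEC 17826; the endpoint URL is a placeholder, and the exact version string and metadata fields depend on what the server implements.

```python
# A hedged sketch of a CDMI container read over HTTP (ISO/IEC 17826).
# The endpoint is a placeholder; real deployments define their own paths.
import requests

resp = requests.get(
    "https://cdmi.example.com/datasets/",   # hypothetical container URI
    headers={
        "X-CDMI-Specification-Version": "1.1.1",
        "Accept": "application/cdmi-container",
    },
)
container = resp.json()
print(container.get("children", []))   # child objects discovered in the container
print(container.get("metadata", {}))   # user and system metadata travel with the data
```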
CDMI 3.0 (in development now) will further extend the standard with:
• Automated resource discovery that improves AI workload efficiency
• Portable data movement that preserves metadata fidelity during migrations, including graph relationships and knowledge graphs
• MCP-based data management that enhances AI accuracy while improving access speed
Read the SNIA CDMI 3.0 white paper
Download PDF
Magnetic RAM, Resistive RAM, Ferroelectric RAM, and other new memory technologies have been pushed to the forefront by issues with Moore’s Law scaling, but the ancillary benefits they bring to computing will prove significantly more important over the longer term. In this session, analysts Jim Handy and Tom Coughlin will explain how these new nonvolatile technologies will help accelerate and reduce the cost of inference through compute-in-memory engines, while standard computing will also see lower costs and higher performance by bringing inexpensive persistence closer to, and even within, the processor chip itself. The session will also address ways that new frameworks like CXL will help provide persistence-friendly structures at all levels of the computing hierarchy.
Join us for an insightful webinar and demonstration as we explore the foundational process of training machine learning models, with a special focus on computer vision applications. This session will guide participants through the essential stages of model development—from data preparation to algorithm selection—highlighting how these steps influence performance and accuracy.
Through practical demonstrations and conceptual discussions, attendees will gain a deeper understanding of how vision models are built and validated, and how these techniques can be extended to broader predictive analytics. We’ll also examine common challenges and share strategies to overcome them, emphasizing the value of simplicity, modular design, and thoughtful evaluation.
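As a concrete companion to those stages, here is a minimal sketch of the prepare/select/train cycle in PyTorch, using the MNIST digits as a stand-in vision dataset; the tiny CNN and hyperparameters are illustrative choices in the spirit of the session's emphasis on simplicity.

```python
# A minimal sketch of the train cycle for a vision model, using PyTorch
# and MNIST as a stand-in dataset. Architecture and settings are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Data preparation: normalize pixel values so training is stable.
tfm = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
train = datasets.MNIST("data", train=True, download=True, transform=tfm)
loader = DataLoader(train, batch_size=64, shuffle=True)

# Algorithm selection: a deliberately small CNN, reflecting the value
# of simplicity and modular design.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # one pass is enough to see learning begin
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```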
This webinar will explain the process of model inferencing and the options for deploying models. We will discuss how trained models are used to make predictions and the considerations for deploying them. Attendees will come away understanding the challenges and strategies for optimizing model performance (a minimal inferencing sketch follows the topic list), covering:
• What is model inferencing
• Model inferencing process
• How inferencing deployment differs between Gen AI and visual inspection
- Inferencing deployment software options
- Inferencing hardware considerations (e.g., on the edge, on-prem, etc.)
• Best practices and lessons learned for deploying AI models to production
• Strategies for maintaining model performance post-deployment
• Real-world examples of successful model deployments
• The importance of monitoring and maintaining deployed models
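Here is the minimal inferencing sketch referenced above: load saved weights, switch to eval mode, and run a batched prediction. The file name is illustrative, and the small CNN mirrors the training sketch earlier on this page.

```python
# A minimal sketch of the inferencing step for a trained vision model:
# load saved weights, switch to eval mode, run a batched prediction.
# The checkpoint file name and architecture are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
)
model.load_state_dict(torch.load("vision_model.pt", map_location="cpu"))
model.eval()                           # disable dropout/batch-norm updates

with torch.no_grad():                  # no gradients needed at inference time
    batch = torch.randn(8, 1, 28, 28)  # stand-in for preprocessed input images
    probs = model(batch).softmax(dim=1)
    print(probs.argmax(dim=1))         # predicted class per image
```

Whether this runs on the edge or on-prem, the same eval-mode, no-gradient pattern applies; what changes are the packaging and the hardware around it.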
In today’s data-driven world, ensuring the resilience of your storage systems is critical for long-term success. This vendor-neutral webinar, hosted by the STA, will bring together industry experts to explore proven methodologies for testing the robustness of Serial Attached SCSI (SAS) and other data storage infrastructures. Craig Foster (Teledyne LeCroy) and Paul Coddington (Amphenol) will discuss practical approaches and the latest standards that enable secure, reliable, and high-performance storage systems. Rick Kutcipal (Broadcom) will moderate the Q&A and offer additional perspectives. Join us to gain insights into ensuring your storage solutions meet the demands of modern workloads while preventing costly data loss and downtime.
Key Takeaways:
- Understand the key factors impacting the robustness of Serial Attached SCSI (SAS) and other data storage systems.
- Learn about the latest testing methodologies and standards in SAS and storage infrastructures.
- Discover how vendor-neutral best practices can enhance the security and reliability of SAS-based storage.
Download PDF of Slides
This webinar will provide a foundational understanding of Artificial Intelligence (AI) and Machine Learning (ML). We will explore the basic concepts, history, and applications of AI/ML, setting the stage for more advanced topics in subsequent webinars. Attendees will gain insights into how AI/ML technologies are transforming industries and everyday life and have the opportunity to see one of the earliest AI experiments: training a neural network to recognize written letters (a minimal re-creation follows the concept list).
Key Concepts:
• Definition and History of AI and ML
• Differences Between AI, ML, and Deep Learning
• Real-World Applications of AI/ML
• Basic Terminology and Concepts
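And here is the minimal re-creation referenced above: a small neural network learning to recognize handwritten characters, sketched with scikit-learn's bundled digit images standing in for letters; the layer size and data split are arbitrary illustrative choices.

```python
# A small neural network learning to recognize handwritten digits,
# echoing the early letter-recognition experiments; uses scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One small hidden layer is enough for this toy task.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```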