With cloud data privacy regulations evolving worldwide and the accelerated adoption of AI technologies such as ChatGPT and other Large Language Models (LLMs), companies must ensure their data and AI models remain compliant. Confidential AI is a new collaborative platform for data and AI teams to work with sensitive data sets and run AI models in a confidential environment. It includes infrastructure, software, and workflow orchestration to create a secure, on-demand work environment that meets an organization’s privacy requirements and complies with regulatory mandates.

Since its ratification in late 2018, NVMe/TCP has gained a lot of attention due to its great performance characteristics and relatively low cost. Since then, the NVMe/TCP protocol has been enhanced to add features such as Discovery Automation, Authentication and Secure Channels that make it more suitable for use in enterprise environments. More recently, as customers evaluate their options and consider adopting NVMe/TCP for use in their environment, many find they need a bit more information before deciding how to move forward and are asking questions such as:
- How does NVMe/TCP stack up against my existing block storage protocol of choice in terms of performance?
- Should I use a dedicated storage network when deploying NVMe/TCP or is a converged network ok?
- How can I automate interaction with my IP-based SAN? (a rough automation sketch follows below)
Join us for an open discussion regarding these questions and more.
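
As a rough illustration of the automation question above, the sketch below wraps the standard nvme-cli discovery and connect commands in Python. The target address, port, and subsystem NQN are placeholders, and this is only one possible approach, not a recommended deployment pattern.

```python
# Minimal sketch: automating NVMe/TCP discovery and connection by wrapping nvme-cli.
# The target address, port, and NQN are placeholders for illustration only.
import subprocess

TARGET_IP = "192.168.10.50"   # hypothetical discovery controller address
TARGET_PORT = "4420"          # conventional NVMe/TCP port

def discover_subsystems(ip: str, port: str) -> str:
    """Run NVMe/TCP discovery against a discovery controller and return the raw output."""
    result = subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", ip, "-s", port],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def connect_subsystem(ip: str, port: str, nqn: str) -> None:
    """Connect this host to a discovered NVMe subsystem over TCP."""
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", ip, "-s", port, "-n", nqn],
        check=True,
    )

if __name__ == "__main__":
    print(discover_subsystems(TARGET_IP, TARGET_PORT))
    # connect_subsystem(TARGET_IP, TARGET_PORT, "nqn.2014-08.org.example:subsystem1")
```

The discovery automation enhancements mentioned above aim to reduce the need for even this kind of scripting.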

Data Fabric is an architecture, set of services and platform that standardizes and integrates data across the enterprise regardless of data location (On-Prem, Cloud, Multi Cloud, Hybrid Cloud), enabling self-service data access to support various applications, analytics, and use cases. The data fabric leaves data where it lives and applies intelligent automation to govern, secure and bring AI to your data.
This session will discuss:
- Different types of data sources for data fabric integration
- Unification of structured and unstructured data sources into the data fabric
- How to simplify data access by virtually connecting data endpoints across a hybrid cloud landscape (a minimal sketch follows this list)
- Providing automatic enrichment leveraging AI to contextualize data with semantics and knowledge
- Applying global, automated data governance and data privacy policy enforcement for increased data protection and quality
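
As a loose, hypothetical illustration of the virtual data access idea above, the sketch below routes logical data set names to whichever back end holds the data. The connector classes and the catalog API are invented for illustration and do not represent any particular data fabric product.

```python
# Hypothetical sketch of data virtualization: one access interface over multiple back ends.
# Connector classes and method names are illustrative only, not a real data fabric API.
from abc import ABC, abstractmethod

class Connector(ABC):
    @abstractmethod
    def read(self, object_name: str) -> bytes:
        """Fetch raw bytes for a named data set from this back end."""

class LocalFileConnector(Connector):
    """Reads data sets from an on-prem file system path."""
    def __init__(self, base_dir: str):
        self.base_dir = base_dir

    def read(self, object_name: str) -> bytes:
        with open(f"{self.base_dir}/{object_name}", "rb") as f:
            return f.read()

class ObjectStoreConnector(Connector):
    """Stand-in for a cloud object store client (the client method is assumed)."""
    def __init__(self, client, bucket: str):
        self.client, self.bucket = client, bucket

    def read(self, object_name: str) -> bytes:
        return self.client.get_object(self.bucket, object_name)

class DataFabricCatalog:
    """Routes logical data set names to the connector that owns them."""
    def __init__(self):
        self._routes = {}

    def register(self, dataset: str, connector: Connector) -> None:
        self._routes[dataset] = connector

    def read(self, dataset: str) -> bytes:
        return self._routes[dataset].read(dataset)
```

Consumers then ask the catalog for a data set by name without knowing whether the bytes live on-prem or in a cloud bucket.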

This webinar featuring panelists from Intel, Samsung, and VMware will discuss Persistent Memory, Compute Express Link™ (CXL™), and Memory Tiering, and how the ecosystem is working together to provide memory tiering solutions using CXL for customer use cases.

Explore the exciting and rapidly evolving world of decentralized storage and how it is bridging the gap between Web 2.0 and Web 3.0.
In this webinar, we will provide an overview of the importance of enterprise decentralized storage and why it is more relevant now than ever before. We will delve into the benefits of and demand for decentralized storage and discuss the evolution from on-premises to cloud to decentralized storage (cloud 2.0). We will explore decentralized storage use cases, including its role in data privacy and security and the potential for decentralized applications (dApps) and blockchain technology.

Composable disaggregated infrastructures provide a promising way to address the provisioning and computational efficiency limitations, as well as the hardware and operating costs, of integrated, siloed systems. But how do we solve these problems in an open, standards-based way?
DMTF, SNIA, the OFA, and the CXL Consortium are working together to provide elements of the overall solution, with Redfish and Swordfish manageability providing the standards-based interface. OFA is developing an Open Fabric Management Framework (OFMF) designed for configuring fabric interconnects and managing composable disaggregated resources in dynamic HPC infrastructures using client-friendly abstractions.
In this webinar, attendees will learn how use cases for scaling the management of storage, fabrics, and beyond are addressed by these architectures, and how the same approach applies to additional technologies.
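
To make the Redfish/Swordfish manageability interface concrete, here is a minimal sketch of a client enumerating the Systems collection from a Redfish service root. The management address and credentials are placeholders; a production client would use session authentication and verified TLS.

```python
# Minimal sketch: listing members of a Redfish Systems collection over REST.
# The management endpoint and credentials below are placeholders.
import requests

BASE_URL = "https://bmc.example.com"   # hypothetical Redfish service endpoint
AUTH = ("admin", "password")           # placeholder credentials

def list_systems() -> list:
    """Return the @odata.id path of each member of /redfish/v1/Systems."""
    resp = requests.get(f"{BASE_URL}/redfish/v1/Systems", auth=AUTH, verify=False)
    resp.raise_for_status()
    return [member["@odata.id"] for member in resp.json().get("Members", [])]

if __name__ == "__main__":
    for system_path in list_systems():
        print(system_path)
```

Swordfish extends this same model with storage-specific resources, and the OFMF work described above builds on the same style of client-friendly REST abstraction.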

Data centers continue to expand their environmental footprints, currently consuming 2% of the developed world’s electricity. Experts predict this number could rise to 13% by 2030. This webinar will cover energy efficiency in data centers and ways to rein in costs and improve sustainability. This includes delivering more power efficiency per unit of capacity, revolutionizing cooling to reduce heat, increasing system processing to enhance performance, and consolidating infrastructure to reduce the physical and carbon footprint. Hear our panel of experts discuss:
- Defining Sustainability
- Why Does Sustainability Matter for IT?
- Sustainability for Storage & Networking
- The Importance of Measurement and KPIs
- Sustainability vs. Efficiency
- Best practices - Now vs. Future
- Bringing your IT, Facilities and Sustainability Organizations Together
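
One widely used measurement in this space is Power Usage Effectiveness (PUE): the ratio of total facility energy to the energy delivered to IT equipment. The sketch below shows the arithmetic with made-up example numbers.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt reaches IT gear; the sample values are illustrative.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Compute PUE from energy consumed over the same measurement period."""
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    # Hypothetical month: 1,500 MWh total facility energy, 1,000 MWh consumed by IT equipment.
    print(f"PUE = {pue(1_500_000, 1_000_000):.2f}")  # -> PUE = 1.50
```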

The SODA Foundation, in partnership with Linux Foundation Research, has recently published its Data and Storage Trends 2022 Report, “Data and Storage Strategies in the Era of the Data-Driven Enterprise.” The findings from this global study provide a comprehensive look at the intersection of cloud computing, data and storage management, the configuration of environments that end-user organizations are gravitating to, and priorities of selected capabilities over the next several years.
SNIA is pleased to host SODA members who led this important research for a live discussion and in-depth look at key trends driving data and storage.

Cybercriminals have always been about data – stealing data, compromising data, holding data hostage. Businesses continue to respond with malware detection on laptops and networks to protect data and prevent breaches, so why should storage be left out? Storage houses what the bad actors are targeting - your data. Is there anything we can do from within the storage layer to further enhance defense in depth?
Enter "Cyberstorage", a term coined by Gartner, which is defined as doing threat detection and response in storage software or hardware. A parallel, related trend in the security industry is eXtended Detection and Response (XDR) which shifts some of the threat detection from centralized security monitoring tools (SIEMs) down into each domain (e.g., endpoint, network) for faster detection and automated response. Factor in the growing impact of ransomware and all these forces are driving the need to find creative, new ways to detect malware, including from inside the storage domain.
In this session we'll discuss:
- Cyberstorage and XDR – what are these emerging trends?
- Threat detection and response methods through a storage lens
- Possible approaches for detection when used in conjunction with security tooling
- Why silos between security and storage need to be addressed for successful threat detection
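
As a rough, hypothetical illustration of detection from inside the storage layer, the sketch below combines two common ransomware signals: a spike in overwrite rate and high entropy of the data being written. The thresholds and metric inputs are invented for illustration and are not a production detection method.

```python
# Hypothetical storage-layer heuristic: flag bursts of high-entropy overwrites,
# a pattern often associated with ransomware encrypting files in place.
# Thresholds and inputs are illustrative assumptions only.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data tends toward 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(overwrites_per_min: int, write_sample: bytes,
                     rate_threshold: int = 500, entropy_threshold: float = 7.5) -> bool:
    """Combine an overwrite-rate spike with high entropy of the written data."""
    return overwrites_per_min > rate_threshold and shannon_entropy(write_sample) > entropy_threshold
```

In a cyberstorage design, a signal like this would be forwarded to the SIEM or XDR tooling described above rather than acted on in isolation.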

Industry analysts predict Deep Learning (DL) will account for the majority of cloud workloads. Additionally, training of deep learning models will represent the majority of server applications in the next few years. Among DL workloads, foundation models, a new class of AI models trained on broad data (typically via self-supervision) using billions of parameters, are expected to consume the majority of the infrastructure.
This webcast will discuss how Deep Learning models are gaining prominence in various industries, and provide examples of the benefits of AI adoption. We’ll enumerate considerations for selecting Deep Learning infrastructure in on-premises and cloud data centers. Our presentation will include an assessment of various solution approaches and identify challenges faced by enterprises in their adoption of AI and Deep Learning technologies. We’ll answer questions like:
- What benefits are enterprises enjoying from innovations in AI, Machine Learning, and Deep Learning?
- How should cost, performance, and flexibility be traded off when designing Deep Learning infrastructure?
- How are cloud-native software stacks such as Kubernetes leveraged by organizations to reduce the complexity of rapidly evolving AI frameworks such as TensorFlow and PyTorch?
- What are the challenges in operationalizing Deep Learning infrastructure?
- How can Deep Learning solutions scale?
- Besides cost, time-to-train, data storage capacity, and data bandwidth, what else should be considered when selecting a Deep Learning infrastructure? (a back-of-the-envelope sketch follows this list)
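
As a back-of-the-envelope companion to the capacity and bandwidth question above, the sketch below estimates the aggregate read bandwidth a training cluster needs from storage to keep accelerators busy. The GPU count, per-GPU sample rate, and sample size are hypothetical placeholders, not benchmark results.

```python
# Back-of-the-envelope estimate of storage read bandwidth needed to keep GPUs fed.
# All input values are hypothetical placeholders, not measured benchmarks.

def required_read_bandwidth_gbps(num_gpus: int,
                                 samples_per_sec_per_gpu: float,
                                 avg_sample_size_mb: float) -> float:
    """Aggregate read bandwidth (GB/s), assuming every sample is fetched from storage."""
    bytes_per_sec = num_gpus * samples_per_sec_per_gpu * avg_sample_size_mb * 1e6
    return bytes_per_sec / 1e9

if __name__ == "__main__":
    # Example: 64 GPUs, 800 samples/s each, ~0.15 MB per preprocessed sample.
    print(f"{required_read_bandwidth_gbps(64, 800, 0.15):.1f} GB/s")  # ~7.7 GB/s
```

Caching, sharding, and compression can substantially reduce the load on shared storage, so an estimate like this is only a starting point for infrastructure selection.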
