Data centers continue to expand their environmental footprint, currently consuming 2% of the developed world’s electricity, a figure experts predict could rise to 13% by 2030. This webinar will cover energy efficiency in data centers and ways to rein in costs and improve sustainability: delivering greater power efficiency per unit of capacity, revolutionizing cooling to reduce heat, increasing system processing to enhance performance, and consolidating infrastructure to shrink the physical and carbon footprint. Hear our panel of experts discuss:
- Defining Sustainability
- Why Does Sustainability Matter for IT?
- Sustainability for Storage & Networking
- The Importance of Measurement and KPIs
- Sustainability vs. Efficiency
- Best Practices: Now vs. Future
- Bringing your IT, Facilities and Sustainability Organizations Together
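As a concrete illustration of the measurement and KPI theme above, here is a minimal sketch of Power Usage Effectiveness (PUE), one of the most widely used data center efficiency metrics. PUE itself is a standard industry ratio, but the facility readings below are hypothetical placeholders; the webinar abstract does not commit to any particular KPI.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (every watt reaches IT gear); cooling and
    power distribution overhead push real facilities above that.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings, for illustration only.
print(f"PUE = {pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000):.2f}")  # 1.50
```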

The SODA Foundation, in partnership with Linux Foundation Research, has recently published its Data and Storage Trends 2022 Report, “Data and Storage Strategies in the Era of the Data-Driven Enterprise.” The findings from this global study provide a comprehensive look at the intersection of cloud computing, data and storage management, the configuration of environments that end-user organizations are gravitating to, and priorities of selected capabilities over the next several years.
SNIA is pleased to host SODA members who led this important research for a live discussion and in-depth look at key trends driving data and storage.

Cybercriminals have always been about data – stealing data, compromising data, holding data hostage. Businesses continue to respond with malware detection on laptops and networks to protect data and prevent breaches, so why should storage be left out? Storage houses exactly what the bad actors are targeting: your data. Is there anything we can do from within the storage layer to further enhance defense in depth?
Enter "Cyberstorage," a term coined by Gartner and defined as building threat detection and response into storage software or hardware. A parallel, related trend in the security industry is eXtended Detection and Response (XDR), which shifts some threat detection from centralized security monitoring tools (SIEMs) down into each domain (e.g., endpoint, network) for faster detection and automated response. Factor in the growing impact of ransomware, and all of these forces are driving the need for creative new ways to detect malware, including from inside the storage domain.
In this session we'll discuss:
- Cyberstorage and XDR – what are these emerging trends?
- Threat detection and response methods through a storage lens
- Possible approaches for detection when used in conjunction with security tooling
- Why silos between security and storage need to be addressed for successful threat detection
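To make the "threat detection through a storage lens" idea concrete, below is a minimal sketch of one signal a storage layer could compute: the Shannon entropy of incoming writes, which spikes when ransomware bulk-encrypts data. This is an illustrative heuristic, not a method prescribed by the webcast; the threshold and function names are invented, and a real cyberstorage product would correlate many signals before raising an alert.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits of entropy per byte for a data block (maximum is 8.0)."""
    if not block:
        return 0.0
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in Counter(block).values())

def suspicious_write(block: bytes, threshold: float = 7.5) -> bool:
    """Flag writes whose payload is near-random, a common sign of bulk encryption.

    Illustrative only: production tooling would also watch I/O rate spikes,
    file-extension churn, snapshot diffs, and other signals before alerting.
    """
    return shannon_entropy(block) > threshold

print(suspicious_write(b"quarterly report draft " * 100))  # False: ordinary text
print(suspicious_write(os.urandom(4096)))                  # True: looks encrypted
```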

Industry analysts predict that Deep Learning (DL) will account for the majority of cloud workloads and that training of deep learning models will represent the majority of server applications in the next few years. Among DL workloads, foundation models (a new class of AI models trained on broad data, typically via self-supervision, and using billions of parameters) are expected to consume the majority of the infrastructure.
This webcast will discuss how Deep Learning models are gaining prominence in various industries and provide examples of the benefits of AI adoption. We’ll enumerate considerations for selecting Deep Learning infrastructure in on-premises and cloud data centers. Our presentation will include an assessment of various solution approaches and identify challenges enterprises face in adopting AI and Deep Learning technologies. We’ll answer questions like:
- What benefits are enterprises enjoying from innovations in AI, Machine Learning, and Deep Learning?
- How should cost, performance, and flexibility be traded off when designing Deep Learning infrastructure?
- How are organizations leveraging cloud-native platforms such as Kubernetes to manage the complexity of rapidly evolving AI software stacks built on TensorFlow, PyTorch, and similar frameworks?
- What are the challenges in operationalizing Deep Learning infrastructure?
- How can Deep Learning solutions scale?
- Besides cost, time-to-train, data storage capacity and data bandwidth, what else should be considered when selecting a Deep Learning infrastructure?
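As a back-of-envelope companion to the cost, time-to-train, and bandwidth considerations listed above, here is a minimal sketch of the kind of sizing arithmetic that goes into selecting Deep Learning infrastructure. The functions, the scaling-efficiency assumption, and every number used are hypothetical placeholders, not figures from the webcast.

```python
def time_to_train_hours(total_tokens: float, tokens_per_sec_per_gpu: float,
                        num_gpus: int, scaling_efficiency: float = 0.9) -> float:
    """Estimate wall-clock training time from aggregate accelerator throughput."""
    effective_tokens_per_sec = tokens_per_sec_per_gpu * num_gpus * scaling_efficiency
    return total_tokens / effective_tokens_per_sec / 3600

def storage_keeps_gpus_fed(samples_per_sec: float, bytes_per_sample: int,
                           storage_read_gbit_per_sec: float) -> bool:
    """Check that storage read bandwidth exceeds what the training job consumes."""
    needed_gbit_per_sec = samples_per_sec * bytes_per_sample * 8 / 1e9
    return storage_read_gbit_per_sec >= needed_gbit_per_sec

# Hypothetical cluster: 64 GPUs at 3,000 tokens/s each, 300B training tokens.
print(f"Estimated time to train: {time_to_train_hours(3e11, 3_000, 64):.0f} hours")
# Hypothetical data pipeline: 20k samples/s at 150 KB each vs. 100 Gbit/s of storage reads.
print("Storage keeps up:", storage_keeps_gpus_fed(20_000, 150_000, 100))
```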

Where do companies see the industry going with regard to persistent memory? With improvements in SSD and DRAM I/O over CXL, the overlap of CXL and NVMe, high-density persistent memory, and memory-semantic SSDs, there is a lot to talk about! Our moderator and panel of experts from Intel, Marvell, Microchip, and SMART Modular will widen the lens on persistent memory, take a system-level approach, and examine how the persistent memory landscape is being redefined.

A new class of cloud and data center infrastructure is emerging into the marketplace. This new infrastructure element, often referred to as a Data Processing Unit (DPU), Infrastructure Processing Unit (IPU), or xPU as a general term, takes the form of a server-hosted PCIe add-in card or on-board chip(s) containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device.
The OPI project has been created to foster the emergence of an open and creative software ecosystem for DPU/IPU based cloud infrastructure. At this live webcast, experts actively leading this initiative will provide an introduction to the OPI project, discuss OPI workstream definitions and status, and explain how you can get involved.

Have you ever wondered how intelligent Industry 4.0 factories or smart cities of the future will process massive amounts of sensor and machine data? What you may not expect is a digital twin will most likely play a role. A digital twin is a virtual representation of an object, system or process that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making. Digital twins can be used to help answer what-if AI-analytics questions, yield insights on business objectives and make recommendations on how to control or improve outcomes.
This webinar will introduce digital twin usage in edge IoT applications, highlighting what is available today, what to expect in the next couple of years, and what the future holds. We will provide examples of how digital twin methods capture virtual representations of real-world entities and processes, discussing:
- What is driving the edge IoT need
- Data analytics problems solved by digital twins
- How digital twins are being used today, tomorrow and beyond
- Use cases: adaptive agile factories, massive data generation across industries, and system-of-systems processes
- Why this is a technology and trend that is here to stay
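To ground the digital twin definition above, here is a minimal sketch of the pattern: a virtual object that mirrors real-time telemetry from a physical asset and answers a simple what-if question. The pump, its fields, and the linear thermal model are all invented for illustration; real digital twins rely on far richer simulation and machine learning models.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Minimal digital twin of a hypothetical factory pump."""
    bearing_temp_c: float = 40.0
    load_pct: float = 50.0
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        """Update the twin's state from a real-time sensor reading."""
        self.bearing_temp_c = telemetry["bearing_temp_c"]
        self.load_pct = telemetry["load_pct"]
        self.history.append(telemetry)

    def what_if_load(self, new_load_pct: float) -> float:
        """Crude linear model: each extra 10% load adds ~3 °C (illustrative only)."""
        return self.bearing_temp_c + (new_load_pct - self.load_pct) * 0.3

twin = PumpTwin()
twin.ingest({"bearing_temp_c": 62.0, "load_pct": 70.0})
print(f"Projected bearing temp at 90% load: {twin.what_if_load(90.0):.1f} °C")
```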

Software-based memory copies have long been the gold standard for applications that move data from memory to memory or perform system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization.
SNIA's Smart Data Accelerator Interface (SDXI) Technical Work Group (TWG) is at the forefront of standardizing this. The SDXI TWG is designing an industry-open standard for a memory-to-memory data movement and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. A candidate for the v1.0 SNIA SDXI standard is now in review.
Relatedly, Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. CXL is designed to be an industry-open, standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.
In this webcast, we will:
- Introduce SDXI and CXL
- Discuss data movement needs in a CXL ecosystem
- Cover SDXI advantages in a CXL interconnect
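To illustrate the kind of accelerator-assisted data movement SDXI standardizes, here is a toy sketch of the general producer/consumer descriptor pattern used by memory-to-memory data movers: software posts copy descriptors, and an engine drains them. The class names and descriptor fields are invented for illustration and do not follow the actual SDXI descriptor format or programming model.

```python
from dataclasses import dataclass

@dataclass
class CopyDescriptor:
    # Invented fields for illustration; not the SDXI descriptor layout.
    src: bytearray
    src_off: int
    dst: bytearray
    dst_off: int
    length: int
    completed: bool = False

class ToyDataMover:
    """Software stand-in for an accelerator that consumes a descriptor ring."""
    def __init__(self) -> None:
        self.ring: list[CopyDescriptor] = []

    def submit(self, desc: CopyDescriptor) -> None:
        self.ring.append(desc)  # producer posts work

    def process(self) -> None:
        for d in self.ring:     # the "engine" drains the ring
            d.dst[d.dst_off:d.dst_off + d.length] = d.src[d.src_off:d.src_off + d.length]
            d.completed = True
        self.ring.clear()

src = bytearray(b"hello, sdxi-style offload")
dst = bytearray(32)
mover = ToyDataMover()
mover.submit(CopyDescriptor(src, 0, dst, 0, len(src)))
mover.process()
print(bytes(dst[:len(src)]))
```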

Kubernetes platforms offer a unique cloud-like experience — all the flexibility, elasticity, and ease of use — on premises, in a private or public cloud, even at the edge. The ease and flexibility of turning on services when you want them, turning them off when you don’t, is an enticing prospect for developers as well as application deployment teams, but it has not been without its challenges.
Our panel of Kubernetes experts will debate the trends and challenges behind this shift and how to address them, discussing:
- So how are all these trends coming together?
- Is cloud repatriation really a thing?
- How are traditional hardware vendors reinventing themselves to compete?
- Where does the data live?
- How is the data accessed?
- What workloads are emerging?
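As a small illustration of the "turn services on when you want them, off when you don't" elasticity described above, the sketch below scales a Deployment up and back down using the official Kubernetes Python client. The deployment name and namespace are hypothetical, a working kubeconfig is assumed, and this is an illustrative snippet rather than part of the panel material.

```python
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a Deployment (scale up or down on demand)."""
    config.load_kube_config()  # assumes a local kubeconfig with cluster access
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Hypothetical workload: burst to 10 replicas for a batch job, then idle at 0.
scale_deployment("analytics-worker", "demo", replicas=10)
scale_deployment("analytics-worker", "demo", replicas=0)
```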

At the 2022 Open Compute Global Summit, OEMs, cloud service providers, hyperscale data center operators, and SSD vendors showcased products and their vision for how the family of EDSFF form factors solves real data challenges. In this webcast, SNIA SSD SIG co-chairs Cameron Brett of KIOXIA and Jonmichael Hands of Chia Network explain how a flexible and scalable family of form factors allows optimization for different use cases, different media types on SSDs, scalable performance, and improved data center TCO. They'll highlight the latest SNIA specifications that support these form factors, provide an overview of platforms that are EDSFF-enabled, and discuss the future for new product and application introductions.
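To show the kind of density arithmetic behind the TCO discussion above, here is a tiny sketch comparing raw capacity per rack unit for two hypothetical server layouts. The drive counts, capacities, and chassis sizes are placeholders chosen purely for illustration; they are not figures from the webcast or from the EDSFF specifications.

```python
def tb_per_rack_unit(drives_per_chassis: int, tb_per_drive: float,
                     chassis_rack_units: int) -> float:
    """Raw storage density: terabytes per rack unit for a given chassis layout."""
    return drives_per_chassis * tb_per_drive / chassis_rack_units

# Hypothetical layouts, for illustration only.
legacy_2u = tb_per_rack_unit(drives_per_chassis=24, tb_per_drive=15.36, chassis_rack_units=2)
edsff_1u = tb_per_rack_unit(drives_per_chassis=32, tb_per_drive=15.36, chassis_rack_units=1)
print(f"Hypothetical 2U U.2 server:  {legacy_2u:.0f} TB per rack unit")
print(f"Hypothetical 1U E1.S server: {edsff_1u:.0f} TB per rack unit")
```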
