Below are the abstracts for the lineup of topics and speakers on our 2024 Compute, Memory, and Storage Summit agenda.
Memories are Driving Big Architectural Changes – Hold Onto Your Hats!
Streamlining Scientific Workflows: Computational Storage Strategies for HPC
Increasing AI and HPC Application Performance with CXL Fabrics
Ethernet Evolved: Powering AI’s Future with the Ultra Ethernet Consortium
Creating a Sustainable Semiconductor Industry for the AI Era
Breakthrough in Cyber Security Detection Using Computational Storage
Bringing Unique Customer Value with CXL Accelerator-Based Memory Solutions
Smart Data Accelerator Interface: Use Cases, Futures, and Proof Points
Overcome Real World Challenges between Data and AI
David McIntyre, SNIA CMS Summit Planning Team Co-Chair, SNIA CMSI Marketing Co-Chair; Director, Product Planning and Business Enablement, Samsung Corporation
Join David McIntyre as he kicks off our virtual Summit with an overview of the Summit content and presenters, with a focus on the "hot" topics for 2024!
Jim Handy, General Director, Objective Analysis
Tom Coughlin, President, Coughlin Associates
Memories, long the slowest-changing of any digital technology, are rapidly springing into new form factors, interfaces, and even core technologies, like MRAM, ReRAM, and FRAM. This accelerated shift, combined with the computing industry’s sudden adoption of AI, is driving an era of extreme architectural change in systems. CXL is competing with DDR, HBM is boosting cache sizes, and emerging memories are bringing persistence closer to, and even within, the processing chip, while chiplets promise to accelerate that transition. Meanwhile, new approaches like Processing in Memory (PiM) and endpoint AI are being adopted to increase the amount of information that can be captured and analyzed while reducing the amount of data communicated over the network. In this presentation, based on a recently released report, IEEE President Tom Coughlin and noted memory analyst Jim Handy detail the changes that loom before the computing community and show a likely course for the adoption of new technologies throughout the industry over the next decade and beyond.
John Cardente, Member of Technical Staff, Dell Technologies
While GPUs often steal the limelight, it’s essential to recognize the significant role that storage plays in Artificial Intelligence (AI) infrastructure solutions. Throughout the entire AI lifecycle, from data preparation to pre-training, fine-tuning, checkpointing, and inference, storage systems are critical. They not only keep GPUs busy but also safeguard valuable data. In this presentation, we delve into each of these stages, using concrete examples to highlight common patterns and identify key requirements, particularly related to performance. Additionally, we explore the importance of Deep Learning framework data loader libraries and discuss trade-offs between file-based and object-based storage. By attending this talk, participants will gain a better understanding of AI storage workloads and be better equipped to assess their own infrastructure needs.
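To make the data-loader point concrete, here is a minimal sketch of a file-based training input pipeline. It assumes PyTorch purely for illustration (the abstract does not name a specific framework), and the dataset path is hypothetical; the storage-facing knobs are the per-sample read in __getitem__ and the DataLoader's worker and prefetch settings, which determine how much concurrent I/O the storage system must sustain to keep GPUs busy.

```python
# Minimal sketch of a file-based training data pipeline (assumes PyTorch; the
# dataset path below is hypothetical). The storage-facing knobs are the
# per-sample read in __getitem__ and the DataLoader's worker/prefetch settings.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset


class FileSampleDataset(Dataset):
    """Reads one serialized training sample per file from a directory tree."""

    def __init__(self, root: str):
        self.paths = sorted(Path(root).glob("**/*.pt"))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int):
        # Each call issues a small, mostly random read against the storage backend.
        return torch.load(self.paths[idx])


if __name__ == "__main__":
    loader = DataLoader(
        FileSampleDataset("/data/train"),  # hypothetical path to per-sample .pt files
        batch_size=32,
        num_workers=8,        # parallel reader processes -> concurrent I/O streams
        prefetch_factor=4,    # batches staged ahead of the GPU, per worker
        pin_memory=True,      # page-locked buffers for faster host-to-GPU copies
    )
    for batch in loader:
        pass  # a training step would consume the batch here
```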
Dominic Manno, Research Scientist, Los Alamos National Laboratory
Kurtis Bowman, Marketing Working Group Co-Chair, CXL Consortium, Moderator
Sandeep Dattaprasad, Sr. Product Manager, Astera Labs, Panelist
Steve Scargall, Sr. Product Manager and Software Architect, MemVerge, Panelist
Jonmichael Hands, Co-Chair, SNIA SSD Special Interest Group
Brian Rea, Marketing Group Co-Chair, UCIe Consortium, and Richelle Ahlvers, Vice Chair and Executive Committee, SNIA
J Michel Metz, Ph.D., Chair, Ultra Ethernet Consortium
JB Baker, VP of Product and Marketing, ScaleFlux
Ahmed Medhioub, Product Line Manager, Astera Labs
Processor performance is rapidly outpacing memory bandwidth, creating a bottleneck or “wall” between compute and memory. This “Memory Wall” limits overall server compute performance, especially in memory-intensive AI applications. This presentation will address how to break through the memory wall with CXL-attached memory. Attendees will learn how popular use cases, such as in-memory databases and AI inferencing, are driving the need for more memory bandwidth and capacity. This session will also introduce how innovations in CXL technology can break through the memory wall, laying the foundation for AI, cloud, and enterprise data centers to accelerate computing performance.
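As a rough back-of-envelope illustration of the bandwidth argument (the figures below are assumed, approximate values, not numbers from the presentation): a DDR5-5600 channel moves about 44.8 GB/s, while a CXL memory expander on a PCIe 5.0 x8 link adds roughly 32 GB/s per direction, so a handful of CXL devices can add a meaningful fraction of a socket's native bandwidth along with extra capacity.

```python
# Back-of-envelope memory bandwidth arithmetic (approximate, illustrative values).
# Assumptions: 8 DDR5-5600 channels per socket, 8-byte-wide channels, and CXL
# expanders on PCIe 5.0 x8 links; encoding and protocol overheads are ignored.

ddr5_channel_gbs = 5600e6 * 8 / 1e9     # 5600 MT/s * 8 bytes  ~= 44.8 GB/s per channel
socket_channels = 8
native_bw = socket_channels * ddr5_channel_gbs

cxl_x8_gbs = 32e9 * 8 / 8 / 1e9         # 32 GT/s * 8 lanes / 8 bits ~= 32 GB/s per direction
cxl_devices = 4
added_bw = cxl_devices * cxl_x8_gbs

print(f"native socket bandwidth  : {native_bw:.1f} GB/s")
print(f"added via 4x CXL x8 links: {added_bw:.1f} GB/s (+{100 * added_bw / native_bw:.0f}%)")
```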
Prasad Venkatachar, Sr. Director Products and Solutions, Pliops
Sudhir Balasubramanian, Sr. Staff Solution Architect, VMware By Broadcom
Arvind Jagannath, Lead Platform Product Manager, VMware By Broadcom
VMware has been on an evolving journey of memory innovation, first with persistent memory, then with memory tiering, and is now extending that work with CXL. CXL provides an opportunity for VMware (by Broadcom) to further improve performance and deliver additional customer benefits such as TCO reduction, server consolidation, and even disaggregation, with increased capacity and bandwidth to run workloads like mission-critical databases, AI/ML, and analytics. The use of accelerators increases the number of use cases that can be supported across a larger variety of workloads with minimal configuration changes. This session aims to provide real-world application examples using memory tiering.
Bill Martin, Principal Engineer, SSD IO Standards, Samsung Electronics Co., Ltd.
Jason Molgaard, Principal Storage Solutions Architect, Solidigm
Paul McLeod, Product Director, Storage, Supermicro
Jeff White, CTO, Edge Product and Operations, Dell Technologies
AI and Edge are two areas that are accelerating and evolving to provide increased utility and transformational capabilities for the enterprise and public sector. AI has been a prevalent Edge workload since the inception of IoT, and now its role is expanding both in terms of application workloads and how it can enable Edge operations. This discussion will present the intersection of Edge and AI and how it will transform the enterprise. After this session, users will understand:
Andy Banta, Storage Janitor, Magnition IO
Creating a Sustainable Semiconductor Industry for the AI Era
Garima Desai, Sustainability Manager, Samsung Semiconductor
AI is driving dramatic increases in compute requirements, often combined with equally dramatic power increases. How can we ensure a sustainable approach without losing the innovation potential of AI? This presentation will cover sustainability in the context of semiconductors and AI, drawing on efforts today to reduce carbon emissions and other environmental harms through efficiency improvements, abatement, and recycling. This session will provide a background on sustainability and its relevance today, in addition to the environmental impact of semiconductors. Framing this in the context of the increased demand for semiconductors because of the new AI boom, this presentation will discuss what can be done to help the semiconductor industry move along the green transition.
A Practical Approach to Device-Level Analytical Offloads
Donpaul Stephens, Founder and CEO, AirMettle, Inc.
The odyssey toward enabling high-volume device-level computational storage has been going on since before today’s college graduates were born. But conflicting requirements of device vendors and application logic have continually kept practical computational storage just over the horizon, as if a mirage.
We will propose a groundbreaking approach that not only simplifies this integration but accomplishes it with ordinary user commands requiring no special privileges, while dramatically reducing the data returned to the host, by over an order of magnitude in many common scenarios. This technique ensures easy implementation on devices and seamless access for higher-level software, leveraging familiar tools like SQL for data analysis without adding complexity to device operations. We will demonstrate how our approach seamlessly aligns with current device APIs, offering a significant leap forward in computational storage.
Join us to explore how a reimagined approach to analytical offloads can transform storage devices from mere data repositories into powerful analytical processing engines, marking the end of the computational storage mirage.
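As a loose illustration of the data-reduction claim (this is not AirMettle's interface), the sketch below contrasts pulling an entire object to the host and filtering there with pushing a SQL predicate toward the storage layer, using the existing Amazon S3 Select API as a stand-in for a device-level analytical offload; the bucket, key, and downstream handler are hypothetical.

```python
# Illustration of pushing a SQL predicate toward storage, using Amazon S3 Select
# as a stand-in for a device-level analytical offload. Bucket, key, and the
# process() handler are hypothetical; assumes the third CSV column is a severity field.
import boto3

s3 = boto3.client("s3")


def process(chunk: bytes) -> None:
    """Hypothetical downstream handler for filtered rows."""
    print(chunk.decode("utf-8", errors="replace"))


# Host-side filtering: the entire object crosses the wire before being filtered.
whole = s3.get_object(Bucket="telemetry", Key="events.csv")["Body"].read()
error_rows = [line for line in whole.splitlines() if line.endswith(b",ERROR")]

# Pushdown: only rows satisfying the predicate are returned to the host.
resp = s3.select_object_content(
    Bucket="telemetry",
    Key="events.csv",
    ExpressionType="SQL",
    Expression="SELECT * FROM S3Object s WHERE s._3 = 'ERROR'",
    InputSerialization={"CSV": {}},
    OutputSerialization={"CSV": {}},
)
for event in resp["Payload"]:
    if "Records" in event:
        process(event["Records"]["Payload"])
```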
Mike Allison, Sr. Director NAND Product Planning - Standards, Samsung Semiconductor
To minimize any downtime in virtualized and cloud environments, a seamless migration of the Virtual Machine (VM) and associated resources needs to be completed without affecting the user experience in the case of any load balancing, system failures, or system maintenance. When a VM is migrated from one server to another server, the namespaces that the VM has access to also need to be seamlessly migrated. This presentation is an overview of capabilities being investigated by NVMe to support a host controlling the migration of a VM and the namespaces used by that VM to a different controller where that controller may exist in a different server.
Andy Walls, Chief Architect, CTO, IBM Fellow, IBM Corporation
Cyberattacks, including ransomware, are a huge concern for all organizations. There is significant attention in the industry on perimeter security improvements and on measures such as immutable copies to recover quickly from attacks. Unfortunately, attackers sometimes get in despite our best efforts. Detecting these intrusions quickly is vitally important. It had been thought that there was not much that block storage could do, since it may not know the context of the data it is processing. However, IBM, with its computational storage device, the FlashCore Module, has figured out how to do AI-based ransomware detection within its storage array. This talk will give insight into how it is done and a vision for where we can go from here.
Manoj Wadekar, Hardware Systems Technologist, Meta
In recent years, hyperscale data centers have been optimized for scale-out stateless applications and zettabyte storage, with a focus on CPU-centric platforms. However, as the infrastructure shifts towards next-generation AI applications, the center of gravity is moving towards GPU/accelerators. This transition from "millions of small stateless applications" to "large AI applications running across clusters of GPUs" is pushing the limits of accelerators, network, memory, topologies, rack power, and other components. To keep up with this dramatic change, innovation is necessary to ensure that hyperscale data centers can continue to support the growing demands of AI applications. This keynote speech will explore the impact of this evolution on Memory use cases and highlight the key areas where innovation is needed to enable the future of hyperscale data centers.
David McIntyre, Director, Product Planning and Business Enablement, Samsung Corporation
Sudhir Balasubramanian, Sr. Staff Solution Architect, VMware by Broadcom
Arvind Jagannath, Lead Platform Product Manager, VMware by Broadcom
CXL is mostly talked about in the memory expansion use-case context. However, VMware and Samsung are working together to bring unique value propositions by enabling newer use cases beyond just memory. A CXL Type-2 accelerator-based solution using a custom hardware-software co-design has the potential to leverage more of CXL’s capabilities, such as improving CapEx by reducing TCO, or providing dynamic memory usage and better workload migration performance, thereby improving OpEx. We will also describe the advantages provided by such an approach and cover real application benchmarks.
Larrie Carr, VP of Engineering, Rambus Inc.
As compute architectures expand beyond a single socket, the proprietary interconnect within the architecture becomes part of the solution’s innovation options. This presentation will look at the history of open interconnects that preceded CXL amid a sea of proprietary connectivity, and at how CXL will most likely co-exist with proprietary interconnects in the future.
Shyam Iyer, Chair, SNIA SDXI TWG; Distinguished Engineer, Dell Technologies
Shyam Iyer, Chair of the SNIA Smart Data Accelerator Interface (SDXI) Technical Work Group, provides an update on this SNIA standard for a memory-to-memory data movement and acceleration interface. Learn about applicable use cases for SDXI-based accelerators, including those in emerging areas like artificial intelligence, and how you can participate in future work in this growing ecosystem.
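For readers new to the idea of a memory-to-memory data mover, the sketch below shows the general shape of a descriptor-based copy engine driven from software. The structure, field names, and ring handling are illustrative assumptions only and are not the SDXI descriptor format, which is defined in the SNIA SDXI specification.

```python
# Purely illustrative sketch of a descriptor-based memory-to-memory data mover.
# Field names and layout are hypothetical and are NOT the SDXI descriptor
# format; see the SNIA SDXI specification for the actual interface.
from dataclasses import dataclass


@dataclass
class CopyDescriptor:
    src_addr: int        # source buffer offset/address
    dst_addr: int        # destination buffer offset/address
    length: int          # bytes to move
    completed: bool = False


def software_data_mover(memory: bytearray, ring: list[CopyDescriptor]) -> None:
    """Drain a descriptor ring the way a data-mover engine would, in software."""
    for desc in ring:
        memory[desc.dst_addr:desc.dst_addr + desc.length] = \
            memory[desc.src_addr:desc.src_addr + desc.length]
        desc.completed = True  # real hardware would post a completion instead


# Example: queue one copy and let the (software) mover execute it.
mem = bytearray(1024)
mem[0:4] = b"data"
software_data_mover(mem, [CopyDescriptor(src_addr=0, dst_addr=512, length=4)])
assert mem[512:516] == b"data"
```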
Eric Hibbard, CISSP, FIP, CISA, Director, Product Planning-Security, Samsung Semiconductor, Inc.
Artificial intelligence (AI) systems are creating numerous opportunities and challenges for many facets of society, including both security and privacy. For security, AI is proving to be a powerful tool for both adversaries and defenders. In addition, AI systems and their associated data must be defended against a wide range of attacks, some of which are unique to AI. The situation with privacy is similar, but the societal concerns are elevated to a point where laws and regulations are already being enacted. This session explores the AI landscape through the lenses of security and privacy.
Eric Hibbard, CISSP, FIP, CISA, Director, Product Planning-Security, Samsung Semiconductor, Inc.
Attacks against data (e.g., data breaches and ransomware) continue at a dizzying pace, so there is pressure to have storage systems and ecosystems play a more active role in defense. Over the past 9-12 months, there have been promising developments in several industry and standards development organizations that may enhance storage security capabilities. This session summarizes these recent storage security developments, highlights a few important interdependencies, and identifies a few activities that are still underway.
Paul Suhler, Principal Engineer, SSD Standards, KIOXIA
The need to eradicate recorded data on storage devices and media is well understood, but the technologies and methodologies to do it correctly can be elusive. A number of new standards build on ISO/IEC 27040 (Storage security) and IEEE 2883-2022 (Standard for Storage Sanitization), and provide clarity on how organizations can evaluate their security needs.
This session will also describe pending revisions of existing standards related to data sanitization, as well as the relationships between the standards developed by various organizations, such as IEEE-SA, ISO/IEC, and NIST.
Eric Hibbard, CISSP, FIP, CISA, Director, Product Planning-Security, Samsung Semiconductor, Inc.
The concept of zero trust (ZT)—no trust by default and assume you are operating in a hostile environment—is not new, but applying this concept requires a paradigm shift in the way an organization protects its data and resources. ZT security frameworks typically require users and entities to be authenticated, authorized, and continuously validated before being granted access to applications, systems, and data. Eliminating implicit trust can significantly reduce the exposures from successful attacks.
The U.S. Government has been spearheading adoption of ZT, which is having an impact on the offerings from the security vendor community. Internationally, ZT is gaining traction and is emerging in important security standards. This session highlights important aspects of ZT and provides an update on the state of international activities.
Steven Yuan, CEO, StorageX.ai
In an era characterized by exponential growth in data generation, leveraging new infrastructure to manage data-heavy workloads, especially Artificial Intelligence (AI), has become essential.
Today we are going to discuss the real-world deployment of a new compute and network infrastructure to handle vast amounts of data efficiently and effectively. AI algorithms, particularly those in the deep learning domain, are designed to process, analyze, and derive insights from large datasets, enabling organizations to make data-driven decisions with unprecedented speed and accuracy.
This topic explores infrastructure for streamlined data processing, real-time data analytics, and predictive modeling that significantly reduces the time and effort required to process extensive data volumes at lower latency. It demonstrates the transformative impact of AI on data-intensive tasks, highlighting improvements in compute efficiency for decision-making and in predictive accuracy.
By providing a new perspective on infrastructure that improves processing capabilities for data-heavy and latency-critical workloads, the session underscores the potential of AI to revolutionize data analytics practices and drive innovation across diverse industries.