Jul 14, 2022
Jun 27, 2022
The SNIA Networking Storage Forum kicked off its xPU webcast series last month with “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” where SNIA experts defined what xPUs are, explained how they can accelerate offload functions, and cleared up confusion around the many other names associated with xPUs, such as SmartNIC, DPU, IPU, APU, and NAPU. The webcast was highly rated by our audience and already has more than 1,300 views. If you missed it, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library.
The live audience asked some interesting questions and here are answers from our presenters.
Q. How can we have redundancy on an xPU?
A. xPUs are well suited to offloading and managing server/appliance and application redundancy schemes. Sitting at the heart of data movement and processing in the server, an xPU can expose parallel data paths and act as a reliable control point for server management. The fabric connecting xPUs across hosts can also provide its own redundancy and elasticity, so failover between xPU devices can be seamless, and a simplified redundancy and availability scheme can span the different entities in the xPU fabric connecting the servers over the network. Because xPUs don’t run user applications (or at most run some offload functions for them), they make a stable and reliable control point for such redundancy schemes. It’s also possible to put two (or potentially more) xPUs into each server to provide redundancy at the xPU level.
Q. More of a comment. I'm in the SSD space, and with the ramp-up in E1.L/E1.S and E3, space is being optimized for these SmartNICs, GPUs, DPUs, etc. It also makes better use of space inside a server/node and allows for serial interface placement on the PCB. Great discussion today.
A. Yes, it’s great to see servers and component devices evolving towards supporting cloud-ready architectures and composable infrastructure for data centers. We anticipate that xPUs will evolve into a variety of physical form factors within the server especially with the modular server component standardization work that is going on. We’re glad you enjoyed the session.
Q. How does CXL impact xPUs and their communication with other components such as DRAM? Will this eliminate DDR and not TCP/IP?
A. xPUs might use CXL as an enhanced interface to the host, to local devices connected to the xPU, or to a CXL fabric that acts as an extension of local devices and the xPU network, for example connected to an entity like a shared memory pool. CXL can provide an enhanced, coherent memory interface and can take a role in extending host or device access to slower tiers of memory through the CXL.mem interface. It can also provide a coherent interface through the CXL.cache interface that can create an extended compute interface and allow close interaction between host and devices. We think CXL will provide an additional tier for memory and compute that will live side by side with the current tiers of compute and memory, each having its own merits in different compute scenarios. Will CXL eliminate DDR? Local DDR for the CPU will always have a latency advantage and will provide better compute in some use cases, so CXL memory will add additional tiers of memory/PMEM/storage alongside that provided by DDR.
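As an illustrative aside, the tiering idea above can be sketched in software. The toy model below is purely hypothetical (the class name, capacities, and LRU policy are assumptions for illustration, not anything the CXL specification defines): hot data stays in a small fast tier standing in for local DDR, while cold data demotes to a larger, slower tier standing in for CXL-attached memory.

```python
# Toy two-tier memory model: local DDR stays the low-latency tier,
# CXL-attached memory adds a larger, slower tier beside it.
# Purely illustrative; real tiering is done by the OS/hardware.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, ddr_capacity):
        self.ddr = OrderedDict()      # fast tier (local DDR), limited capacity
        self.cxl = {}                 # slower, larger tier (CXL memory)
        self.ddr_capacity = ddr_capacity

    def write(self, addr, value):
        # New or updated data lands in the fast tier; cold data demotes.
        self.cxl.pop(addr, None)
        self.ddr[addr] = value
        self.ddr.move_to_end(addr)    # mark as most recently used
        while len(self.ddr) > self.ddr_capacity:
            cold_addr, cold_val = self.ddr.popitem(last=False)  # evict LRU
            self.cxl[cold_addr] = cold_val                      # demote, don't drop

    def read(self, addr):
        if addr in self.ddr:          # fast-tier hit
            self.ddr.move_to_end(addr)
            return self.ddr[addr]
        value = self.cxl.pop(addr)    # slow-tier hit: promote back to DDR
        self.write(addr, value)
        return value

mem = TieredMemory(ddr_capacity=2)
for a in ("x", "y", "z"):
    mem.write(a, a.upper())
print(sorted(mem.ddr))  # → ['y', 'z']  (two hottest entries stay in DDR)
print(sorted(mem.cxl))  # → ['x']      (cold entry demoted to the CXL tier)
```

The point of the sketch is simply that the tiers coexist: nothing is discarded when DDR fills, it just moves to the slower, larger tier, echoing the "additional tier living side by side" point above.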
Q. Isn't a Fibre Channel (FC) HBA very similar to a DPU, but for FC?
A. The NVMe-oF offloads make the xPU equivalent to an FC HBA, but the xPU can also host additional offloads and services at the same time. Both FC HBAs and xPUs typically accelerate and offload storage networking connections and can enable some amount of remote management. They may also offload storage encryption tasks. However, xPUs typically support general networking and might also support storage tasks, while FC HBAs always support Fibre Channel storage tasks and rarely support any non-storage functions.
Q. Were the old TCP Offload Engine (TOE) cards from Adaptec many years ago considered xPU devices, that were used for iSCSI?
A. They were not considered xPUs because, like FC HBAs, they only offloaded storage networking traffic, in this case iSCSI traffic over TCP. In addition, the terms “xPU,” “IPU,” and “DPU” were not in use at that time. However, TOE and equivalent cards laid the groundwork for the evolution to the modern xPU.
Q. For xPU sales to grow dramatically won't that happen after CXL has a large footprint in data centers?
A. The CXL market is focused on coherent device and memory-extension connections to the host, while the xPU market is focused on devices that handle data movement and processing offload for hosts connected over networks. As such, the CXL and xPU markets are complementary. Each has its own segments, use cases, and viability, independent of the other. As discussed above, the technical solutions complement each other, so the evolution of each market benefits the other. Broader adoption of CXL will enable faster and broader functionality for xPUs, but it is not required for rapid growth of the xPU market.
Q. What role will CXL play in these disaggregated data centers?
A. The ultimate future of CXL is a little hard to predict. CXL has a potential role in disaggregation of coherent devices and memory pools at the chassis/rack scale level with CXL switch devices, while xPUs have the role of disaggregating at the rack/datacenter level. xPUs will start out connecting multiple servers across multiple racks then extend across the entire data center and potentially across multiple data centers (and potentially from cloud to edge). It is likely that CXL will start out connecting devices within a server then possibly extend across a rack and eventually across multiple racks.
If you are interested in learning more about xPUs, I encourage you to register for our second webcast, “xPU Accelerator Offload Functions,” to hear what problems xPUs are coming to solve, where in the system they live, and the functions they implement.
Jun 23, 2022
Recently, SNIA On Storage sat down with David McIntyre of Samsung, the Summit chair, to hear his impressions of this 10th annual event.
SNIA On Storage (SOS): What were your thoughts on key topics coming into the Summit, and did they change based on the presentations?

David McIntyre (DM): We were excited to attract technology leaders to speak on the state of computational storage and persistent memory. Both mainstage and breakout speakers did a good job of encapsulating and summarizing what is happening today. Through the different talks, we learned more about infrastructure deployments supporting underlying applications and use cases. A new area where attendees gained insight was computational memory. I find it encouraging that as an industry we are moving forward on focusing on applications and use cases, and on supporting software and infrastructure that spans persistent memory and computational storage. And with computational memory, we are now getting more into system infrastructure concerns and making these technologies more accessible to application developers.

SOS: Any sessions you want to recommend to viewers?

DM: We had great feedback on our speakers during the live event. Several sessions I might recommend are Gary Grider of Los Alamos National Labs (LANL), who explained how computational storage is being deployed across his lab; Chris Petersen of Meta, who took an infrastructure view on considerations for persistent memory and computational storage; and Andy Walls of IBM, who presented a nice viewpoint on his vision of computational storage and the underlying benefits that make the overall infrastructure richer and more efficient, and on how to bring compute to the drives. For a summary, watch Dave Eggleston of In-Cog Computing, who led the Tuesday and Wednesday panels with the mainstage speakers in a wide-ranging discussion of the Summit’s key topics.

SOS: What do you see as the top takeaways from the Summit presenters?

DM: I see three:

Jun 10, 2022
Jun 9, 2022
Edge computing’s complex and changeable structure, together with its network connections, massive volumes of real-time data, challenging operating environments, distributed edge-cloud collaboration, and other characteristics, creates a multitude of security challenges. These challenges were the topic of our SNIA Networking Storage Forum (NSF) live webcast “Storage Life on the Edge: Security Challenges” where SNIA security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK debated whether existing security practices and standards are adequate for this emerging area of computing. If you missed the presentation, you can view it on-demand here.
It was a fascinating discussion and as promised, Eric and Thomas have answered the questions from our live audience.
Q. What complexities are introduced from a security standpoint for edge use cases?
A. The sheer number of edge nodes, the heterogeneity of the nodes, distributed ownership and control, the increased number of interfaces, fit-for-use versus designed solutions, etc. complicate the security aspects of these ecosystems. Performing risk assessments and/or vulnerability assessments across the full ecosystem can be extremely difficult; remediation activities can be even harder.
Q. How is data privacy impacted and managed across cloud to edge applications?
A. Movement of data from the edge to core systems could easily cross multiple jurisdictions that have different data protection/privacy requirements. For example, personal information harvested in the EU might find its way into core systems in the US; in such a situation, the US entity would need to deal with GDPR requirements or face significant penalties. The twist is that the operator of the core systems might not know anything about the source of the data.
Q. What are the priority actions that customers can undertake to protect their data?
A. Avoid giving out personal information. If you do, understand your rights (if any), as well as how the data will be used, protected, and ultimately eliminated.
This session is part of our “Storage Life on the Edge” webcast series. Our next session will be “Storage Life on the Edge: Accelerated Performance Strategies” where we will dive into the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center. Register here to join us on July 12, 2022. You can also access the other presentations we’ve done in this series at the SNIA Educational Library.
May 10, 2022
In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge” we’ve been examining the many ways the edge is impacting how data is processed, analyzed and stored. I encourage you to check out the sessions we’ve done to date:
On July 12, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies” where our SNIA experts will discuss the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center, covering:
We look forward to having you join us to cover all this and more. We promise to keep you on the edge of your virtual seat! Register today.
May 6, 2022
Our 10th annual Persistent Memory + Computational Storage Summit is right around the corner on May 24 and 25, 2022. We remain virtual this year, and hope this will offer you more flexibility to watch our live-streamed mainstage sessions, chat online, and catch our always popular Computational Storage birds-of-a-feather session on Tuesday afternoon without needing a plane or hotel reservation!
As David McIntyre of Samsung, the 2022 PM+CS Summit chair, says in his 2022 Summit Preview Video, “You won’t want to miss this event!”
This year, the Summit agenda expands knowledge on computational storage and persistent memory, and also features new sessions on computational memory, Compute Express Link™ (CXL), NVM Express, SNIA Smart Data Accelerator Interface (SDXI), and Universal Chiplet Interconnect Express (UCIe).
We thank our many dynamic speakers who are presenting an exciting lineup of talks over the two days, including:
Our full agenda is at www.snia.org/pm-summit.
We’ll have great networking opportunities, a virtual reception, and the ability to connect with leading companies including Samsung, MemVerge, and SMART Modular who are sponsoring the Summit.
Complimentary registration is now available at https://www.snia.org/events/persistent-memory-summit/pm-cs-summit-2022-registration. We will see you there!