Jun 21, 2021
Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. This rapid growth is creating storage- and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and in-storage computational processing to offload and accelerate data-intensive workloads.
On June 29, 2021, the SNIA Compute, Memory and Storage Initiative will host a lively webcast discussion on today’s storage challenges in an aggregated storage world and whether a disaggregated storage model could optimize data-intensive workloads. We’ll talk about the concept of a Data Processing Unit (DPU) and whether a DPU should be combined with a storage data processor to accelerate compute-intensive functions. We’ll also introduce the key-value concept and how it can be an enabler for solving storage problems.
Join moderator Tim Lustig, Co-Chair of the CMSI Marketing Committee, and speakers John Kim from NVIDIA and Kfir Wolfson from Pliops as we shift into overdrive to accelerate disaggregated storage. Register now for this free webcast.
Jun 2, 2021
The SNIA Swordfish™ specification and ecosystem are growing in scope to include full enablement and alignment for NVMe® and NVMe-oF client workloads and use cases. By partnering with other industry standards organizations, including DMTF®, NVM Express, and the OpenFabrics Alliance (OFA), SNIA’s Scalable Storage Management Technical Work Group has updated the Swordfish bundles, from version 1.2.1 onward, to cover an expanding range of NVMe and NVMe-oF functionality, including NVMe device management and storage fabric technology management and administration.
The Need
Large-scale computing designs are increasingly multi-node and linked together through high-speed networks. These networks may be composed of different technology types, and they are fungible and continually morphing. Over time, many different types of high-performance networking devices will evolve to participate in these modern, coupled computing platforms. New fabric management capabilities, orchestration, and automation will be required to deploy, secure, and optimally maintain these high-speed networks.
The NVMe and NVMe-oF specifications provide comprehensive management for NVMe devices at the individual device level; however, when you want to manage these devices at a system or data center level, DMTF Redfish and SNIA Swordfish are the industry’s gold standards. Together, Redfish and Swordfish enable a comprehensive view across the system, data center, and enterprise, with NVMe and NVMe-oF instrumenting the device-level view. This complete approach provides a way to manage your entire environment across technologies with standards-based management, making it more cost-effective and easier to operate.
The Approach
The expanded NVMe resource management within SNIA Swordfish consists of a mapping between the DMTF Redfish, Swordfish, and NVMe specifications, enabling developers to construct a standard implementation within a Redfish and Swordfish service for any NVMe or NVMe-oF managed device.
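To make that mapping concrete, here is a minimal sketch of how a client might walk a Redfish/Swordfish service and pick out NVMe drives. The host address, credentials, and exact resource paths are illustrative assumptions; an actual service publishes its own resource tree under /redfish/v1/ as defined by the Redfish and Swordfish schemas.

```python
# Minimal sketch of walking a Redfish/Swordfish service to find NVMe drives.
# The endpoint, credentials, and paths below are illustrative assumptions.
import requests

BASE = "https://192.0.2.10"      # hypothetical management endpoint
AUTH = ("admin", "password")     # hypothetical credentials


def get(path):
    """GET a Redfish/Swordfish resource and return its JSON body."""
    r = requests.get(BASE + path, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()


# Walk the storage collection and report drives whose protocol is NVMe.
storage_collection = get("/redfish/v1/Storage")
for member in storage_collection.get("Members", []):
    storage = get(member["@odata.id"])
    for drive_ref in storage.get("Drives", []):
        drive = get(drive_ref["@odata.id"])
        if drive.get("Protocol") == "NVMe":
            print(drive.get("Id"),
                  drive.get("CapacityBytes"),
                  drive.get("Status", {}).get("Health"))
```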
The architectural approach to creating the SNIA Swordfish 1.2.1 version of the standard began with a deep dive into the existing management models: systems and servers in Redfish, storage in Swordfish, and fabrics management within NVM Express. After evaluating each approach, we performed a step-by-step walkthrough to map the models. From that, we created mockups and a comprehensive mapping guide with examples of object- and property-level mapping between the standard ecosystems.
In addition, Swordfish profiles were created that provide a comprehensive representation of required properties for implementations. These profiles have been incorporated into the new Swordfish Conformance Test Program (CTP), to support NVMe capabilities. Through its set of test suites, the CTP validates that a company’s products conform to a specified version of the Swordfish specification. CTP supports conformance testing against multiple versions of Swordfish.
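As a rough illustration of what a profile-driven check does, the sketch below tests a Drive resource against a short list of required properties. The property list and the check are simplified, hypothetical examples; the actual Swordfish profiles and the CTP test suites define requirements in far more detail.

```python
# Illustrative sketch of a profile-style required-property check.
# The profile fragment below is a simplified, hypothetical example,
# not an actual SNIA Swordfish profile.
REQUIRED_DRIVE_PROPERTIES = ["Id", "Protocol", "CapacityBytes", "Status"]


def check_drive_conformance(drive_json):
    """Return the list of required properties missing from a Drive resource."""
    return [p for p in REQUIRED_DRIVE_PROPERTIES if p not in drive_json]


missing = check_drive_conformance({"Id": "NVMe0", "Protocol": "NVMe"})
print("Missing required properties:", missing)   # -> ['CapacityBytes', 'Status']
```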
What’s Next?
In 2021, the Swordfish specification will continue to be enhanced to fully capitalize on the fabrics model by extending fabric technology-specific use cases and creating more profiles for additional device types.
Want to learn more?
Watch SNIA’s on-demand webcast, “Universal Fabric Management for Tomorrow’s Data Centers,” where Phil Cayton, Senior Staff Software Engineer, Intel, and Richelle Ahlvers, Storage Technology Enablement Architect, Intel Corporation, and member of the SNIA Board of Directors, provide insights into how standards organizations are working together to improve and promote the vendor-neutral, standards-based management of open source fabrics and remote network services or devices in high-performance data center infrastructures.
May 26, 2021
Recently, the SNIA Compute, Memory, and Storage Initiative hosted a live webcast, “Data Movement and Computational Storage,” moderated by Jim Fister of The Decision Place, with Nidish Kamath of KIOXIA, David McIntyre of Samsung, and Eli Tiomkin of NGD Systems as panelists. We had a great discussion on new ways to look at storage, flexible computer systems, and how to put on your security hat.
During our conversation, we answered audience questions, and raised a few of our own! Check out some of the back-and-forth, and tune in to the entire video for customer use cases and thoughts for the future.
Q: What is the value of computational storage?
A: Computational storage addresses latency sensitivity: you can make decisions faster at the edge and can also distribute computing to process decisions anywhere.
Q: Why is it important to consider “data movement” with regard to computational storage?
A: Computational storage reduces the data movement in the system, brings higher efficiency to the data that is moved, and cuts power in ways users may not yet have considered.
Q: How does power use change when computational storage is brought in?
A: You want to “move” compute to that point in the system where operations can be accomplished where the data is “at rest”. In traditional systems, if you need to move data from storage to the host, there are power costs that may not even be currently measured. However, if you can now run applications and not move data, you will realize that power reduction, which is more and more important with the anticipation of massive quantities of data coming in the future.
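As a rough, back-of-the-envelope illustration of that point, the sketch below compares the energy spent moving data for a host-side filter versus an in-storage filter. All of the figures (dataset size, filter selectivity, energy per byte moved) are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope sketch of energy spent moving data vs. filtering in place.
# All figures are illustrative assumptions, not measurements.
DATASET_BYTES         = 10 * 2**40   # 10 TiB scanned per query (assumed)
SELECTIVITY           = 0.01         # 1% of records survive the filter (assumed)
ENERGY_PER_BYTE_MOVED = 50e-12       # ~50 pJ per byte moved to the host (assumed)

host_side  = DATASET_BYTES * ENERGY_PER_BYTE_MOVED                  # move everything, filter on host
in_storage = DATASET_BYTES * SELECTIVITY * ENERGY_PER_BYTE_MOVED    # filter on device, move results only

print(f"Host-side filtering : {host_side:8.1f} J spent on data movement")
print(f"In-storage filtering: {in_storage:8.1f} J spent on data movement")
```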
Q: Are the traditional processing/storage transistor counts the same with computational storage?
A: With computational storage, you can put the programming where it is needed, moving the compute to that point in the system where it can achieve the work with a limited amount of overhead and networking bandwidth. Compute moves to where the data sits at rest, which is especially important with the explosion of data sets.
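Below is a minimal sketch of what “moving compute to the data” can look like in practice: the host hands a small predicate to the device, and only the matching records cross the storage interface. The device class is a toy stand-in under assumed behavior, not a real computational storage API.

```python
# Toy stand-in for a computational storage device; not a real API.
class MockComputationalDrive:
    def __init__(self, records):
        self._records = records                 # data "at rest" on the device

    def read_all(self):
        return list(self._records)              # traditional path: every record moves to the host

    def query(self, predicate):
        return [r for r in self._records if predicate(r)]   # compute runs where the data sits


def wanted(r):
    return r % 100_000 == 0


drive = MockComputationalDrive(range(1_000_000))

host_side   = [r for r in drive.read_all() if wanted(r)]   # 1,000,000 records cross the interface
device_side = drive.query(wanted)                          # only 10 records cross the interface

assert host_side == device_side
print(f"records moved: {len(drive.read_all()):,} (host filter) vs {len(device_side):,} (device filter)")
```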
Q: Does computational storage play a role in data security and privacy?
A: Security threats don’t always happen at the same time, so you need to consider a top-down holistic perspective. It will be important both today and in the future to consider new security threats because of data movement.
There is always a security risk when data is moving; however, computational storage reduces data movement significantly, and it can serve as a more secure way to treat data because the data is not moving as much. Computational storage allows you to lock the data, for example medical data, and only process it when and if needed, in an authenticated and secure fashion. There’s no requirement to build a whole system around this.
Q: What are the computational storage opportunities at the edge?
A: We need to understand the ecosystem the computational storage device is going into. Computational storage sits at the front line of edge applications and management of edge infrastructure pieces in the cloud. It’s a great time to embrace existing cloud policies and collaborate with customers on how policies will migrate and change to the edge.
Q: In your discussions with customers, how dynamic do they expect the sets of code running on computational storage to be? With the extremes being code never changing (installed once/updated rarely) to being different for every query or operation. Please discuss how challenges differ for these approaches.
A: The heavy lift comes into play with the application and the system integration. To run flexible code, customers want a simple, straightforward, and seamless programming model that enables them to run as many applications as they need and change them easily without disrupting the system. Clients are using computational storage to speed up the processing of their data with dynamic reconfiguration in cutting-edge applications. We are putting a lot of effort toward this seamless and transparent model through our work in the SNIA Computational Storage Technical Work Group.
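To illustrate the kind of programming model described here, the sketch below lets the host install a named function on a mock device and hot-swap it later without changing how the host invokes it. The registry is a toy stand-in built on assumed behavior, not the SNIA Computational Storage API or any vendor SDK.

```python
# Toy sketch of a "deploy and hot-swap" programming model for device-side code.
# The class below is a stand-in, not a real computational storage SDK.
class MockProgrammableDrive:
    def __init__(self, records):
        self._records = records
        self._programs = {}                      # name -> callable installed on the device

    def deploy(self, name, fn):
        """Install or replace a device-side function under a stable name."""
        self._programs[name] = fn

    def run(self, name):
        """Apply the named function to every record stored on the device."""
        return [self._programs[name](r) for r in self._records]


drive = MockProgrammableDrive([1, 2, 3, 4])

drive.deploy("transform", lambda r: r * 2)       # initial program
print(drive.run("transform"))                    # [2, 4, 6, 8]

drive.deploy("transform", lambda r: r ** 2)      # swapped for a new workload, same entry point
print(drive.run("transform"))                    # [1, 4, 9, 16]
```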
Q: What does computational storage mean for data in the future?
A: The infrastructure of data and data movement will drastically change in the future as edge emerges and cloud continues to grow. Using computational storage will be extremely beneficial in the new infrastructure, and we will need to work together as an ecosystem and under SNIA to make sure we are all aligned to provide the right solutions to the customer.