
Dive – or Dip – into SNIA Persistent Memory + Computational Storage Summit Content

Marty Foltyn

Apr 29, 2021

SNIA’s 9th annual Summit was a success with a new name and an expanded focus – Persistent Memory + Computational Storage – from the data center to the edge. The Summit moved to a two-day virtual platform and drew twice as many attendees as the previous year. We experimented with 20-minute sessions to great success.  Attendees saw leading technology experts discussing real world applications and use cases, providing insights on technology trends and futures, and networking  in “live via the internet” panels and Birds-of-a-Feather sessions. The recap of our 2021 event – agenda – abstracts – speaker bios – links to videos and presentations – is summarized on the PM+CS Summit home page. But we know your time is precious – so here are a few ways to sample a lot of great content presented over two full days.
  1. Read our colleague Tom Coughlin’s Forbes blog on the event
  2. Not only did Tom and Jim Handy present on memory futures at the event, but they also provided the fastest sub-7-minute recaps of both Wednesday’s and Thursday’s sessions with their lively commentary.
  3. New to persistent memory and/or computational storage technologies?  Check out our tutorials featuring Persistent Memory and Computational Storage Special Interest Group leaders giving you what you need to know.
  4. Love the back and forth?  You’ll enjoy the recordings of our live panel sessions, where colleagues debate (and sometimes agree) on the topics of today.
  5. Is Persistent Memory your focus?  We’ve sorted the Persistent Memory Summit content for you in our SNIA Educational Library.
  6. Focused on Computational Storage?  Here is the list of all the Computational Storage content presented during the Summit, available to watch via our SNIA Educational Library.
  7. Want to get hands-on?  We have extended the opportunity to experience the  Persistent Memory Workshop and Hackathon with access to new cloud-based PM systems for more learning opportunities.
We extend a thank you and shout-out to our SNIA Compute, Memory, and Storage Initiative members and colleagues who presented in sessions and participated in panels. They represent these leading companies in the industry: AMD, Arm, Coughlin Associates, Dell, Eideticom, Facebook, Futurewei Technologies, G2M Communications, Hewlett Packard Enterprise, Intuitive Cognition Consulting, Intel, Lenovo, Los Alamos National Laboratory, MemVerge, Micron, Microsoft, MKW Ventures Consulting, NGD Systems, NVIDIA, Objective Analysis, Samsung, ScaleFlux, Silinnov Consulting, and SMART Modular Technologies. Finally, we thank you for your interest in SNIA Compute, Memory, and Storage Initiative outreach and education. We look forward to seeing you at upcoming SNIA events, including our Storage Developer Conferences in EMEA, India, and the U.S. Find out more details on SDC.


Q&A: Cloud Analytics Takes Flight

Jim Fister

Apr 28, 2021

Recently, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted a live webcast, “Cloud Analytics Drives Airplanes-as-a-Service,” with Ben Howard, CTO of KinectAir. It was a fascinating discussion on how analytics is making this new commercial airline business take off. Ben has a history of innovation with multiple companies working on new flight technology, analytics, and artificial intelligence. In this session, he provided several insights from his experience on how analytics can have a significant impact on every business. In the course of the conversation, we covered several questions, all of which were answered in the webcast. Here’s a preview of the questions along with some brief answers. Take an hour to listen to the entire presentation; we think you’ll enjoy it.

Q: What’s different about capturing data for Machine Learning?
A: There’s a need to ensure that the data you’re capturing is valid and that it will contribute to the bottom line. But AI/ML is less rigorous than some other analytics in that it can absorb a broader array of data formats.

Q: What are you gleaning from all the other data sources KinectAir is using?
A: KinectAir uses a variety of sources, including weather, other airlines’ schedules and flight plans, FAA data, customer preferences, and many other pieces of data. This allows it to make quick decisions on relocating aircraft to pick up potential passengers during a weather or mechanical delay by larger airlines. It also allows the company to make intelligent decisions on flight pricing that can make flight options more attractive to customers.

Q: How does predictive analytics impact the business?
A: By focusing on the passenger, and identifying the true origin and destination of each passenger, the airline can adjust for different potential airports as well as traffic and weather information to route the passenger. For example, the passenger can be routed to a regional airport slightly farther from his or her house so the airplane can pick up other passengers, making the flight less expensive. Airplanes can also be staged near large airports that typically have weather delays to pick up potential passengers whose flights have been cancelled.

Q: Explain how KinectAir is using a Monte Carlo model, and how that works.
A: The actual comment was: “So, essentially you’re gambling.” Ben explained how the company uses all the available information to make an informed bet on what passengers will pay to connect to a specific route. In this way, the company can weigh the odds and generate a price that will make a sale but also make a profit. This creates an environment to “always say yes” to a customer in a way that works for the customer and the company.

In the course of the discussion, we not only discussed KinectAir, but also talked about using analytics for other businesses. Ben discussed using visualization to improve farming, how to create an analytics strategy to run 100 miles, and how to listen to what customers want while providing what they actually need. We hope you enjoy watching this webcast as much as we did making it.
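To make the “informed bet” idea concrete, here is a minimal Monte Carlo pricing sketch. Everything in it is our own illustrative assumption, not anything KinectAir described: the function names, the normal willingness-to-pay model, and the candidate fare range are all hypothetical.

```python
import random

def expected_profit(price, cost, wtp_mean, wtp_sd, trials=10_000, seed=42):
    """Estimate expected profit at a given fare by simulating passengers'
    willingness to pay (a toy normal model, purely for illustration)."""
    rng = random.Random(seed)
    # A simulated passenger buys if their willingness to pay covers the fare.
    sales = sum(1 for _ in range(trials) if rng.gauss(wtp_mean, wtp_sd) >= price)
    return (price - cost) * sales / trials

def best_price(prices, cost, wtp_mean, wtp_sd):
    """Pick the candidate fare with the highest simulated expected profit."""
    return max(prices, key=lambda p: expected_profit(p, cost, wtp_mean, wtp_sd))

if __name__ == "__main__":
    candidates = range(200, 1001, 50)  # hypothetical candidate fares in dollars
    print(best_price(candidates, cost=300, wtp_mean=600, wtp_sd=150))
```

The point of the sketch is the trade-off it exposes: raising the fare increases margin per sale but lowers the simulated probability of a sale, and the simulation finds the price where the expected value peaks, which is one way to “always say yes” at a price that still works for the business.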



Protecting NVMe over Fabrics Data from Day One, The Armored Truck Way

John Kim

Apr 27, 2021


With ever-increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in a torrent of lost business data, eroded trust, significant penalties, and potential lawsuits. Potential vulnerabilities exist at every point when scaling out NVMe® storage, which requires data to be secured every time it leaves a server or the storage media, not just when leaving the data center. NVMe over Fabrics is poised to be one of the most dominant storage transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount.

Ensuring the security of that data will be the topic of our SNIA Networking Storage Forum (NSF) webcast “Security of Data on NVMe over Fabrics, the Armored Truck Way” on May 12, 2021. Join the webcast to hear industry experts discuss current and future strategies to secure and protect mission critical data.

You will learn:

  • Industry trends and regulations around data security
  • Potential threats and vulnerabilities
  • Existing security mechanisms and best practices
  • How to secure NVMe data in flight and at rest
  • Ecosystem and market dynamics
  • Upcoming standards

For those of you who follow the many educational webcasts that the NSF hosts, you may have noticed that we are discussing the important topic of data security a lot. In fact, there is an entire Storage Networking Security Webcast Series that dives into protecting data at rest, protecting data in flight, encryption, key management, and more. You might find it useful to check out some of the sessions before our May 12th presentation.

Register today! We hope you will join us on May 12th. And please bring your questions. Our experts will be ready to answer them.



Continuous Delivery Software Development Q&A

Alex McDonald

Apr 26, 2021

What’s the best way to make a development team lean and agile? It was a question we explored at length during our SNIA Cloud Storage Technologies Initiative live webcast “Continuous Delivery: Cloud Software Development on Speed.” During this session, continuous delivery expert, Davis Frank, Co-creator of the Jasmine Test Framework, explained why product development teams are adopting a continuous delivery (CD) model. If you missed the live event, you can watch it on-demand here. The webcast audience was highly engaged with this topic and asked some interesting questions. Here are Davis Frank’s answers: Q.  What are a few simple tests you can use to determine your team’s capability to deliver CD? A. I would ask the team three questions:
  1. Do you want to move to a Continuous Delivery model?
  2. Are you willing to meet to discover what is preventing your team from working in a CD manner, and then hold yourselves accountable to making improvement?
  3. Are you able to repeat the second step regularly?
If the answers to these questions are all yes, then you have the foundation necessary to get started.

Q. If you’re talking about multiple products from different companies, how do you ensure that you can deliver CD-type products?
A. When building cloud software today, you are going to have dependencies: on open-source frameworks, closed-source tooling, and web-based APIs. As these dependencies change, they can affect your products. Automated testing and continuous integration help here. They will catch issues before delivery, just like bugs or issues that come from your team. Finding issues early means the team can recover, amend, or work around these types of problems so they have minimal impact on the team’s delivery pace or the overall business.

Q. When you get to team rotation, how well do software engineers in your experience adapt to moving? As you move engineers to new areas of the code, do you find that they spend time rewriting what’s not broken because they didn’t write it in the first place?
A. Every person is different, but most engineers like new problems. Learning a new domain and applying their knowledge to solve new problems is often highly motivating for engineers. The urge to rewrite is often just to build understanding. As I mentioned in the webcast, the “rewrite risk” can be mitigated by pair programming sessions with engineers more familiar with the code and by well-written test suites. The test suites act as documentation on how the code actually works. These accelerate knowledge transfer, reducing some of the motivation to rewrite. There are other reasons to rewrite code. Sometimes it is for reuse, to make it easier to maintain, or to use newer patterns. These types of rewrites, or refactorings, are natural and happen every day. They improve the code. With good test coverage, the product risk of this type of work is low.

Q. Does the Lean Methodology work well in terms of delivering software, or is it still too heavy a process for rapid development by software engineering teams?
A. Any new process will feel heavyweight to a team. I recommend finding things that are not working and using new techniques, whatever their origin, to attempt to fix or optimize them. With short feedback loops, you can experiment, tweak, and improve (this is the Learn cycle of Build-Measure-Learn) until you have fixed a problem. And then pick a new one.

Q. Do you still see a need for a “product release” timeline, or does this move software completely to a place where a feature is enabled as soon as it’s ready? How do you cover “feature regression” if new code breaks an existing feature, or updates the way the feature is supported?
A. We touched on the first part of this question in the webcast. When a CD team is working well, they are just always delivering new features to production. Whether those features are available to users is a product decision and can be tied to a planned release timeline. Companies often use feature flags, or other similar technology, to hide functionality from users until it is available. Hiding functionality could be necessary due to public announcement or marketing concerns, or because partial functionality has been delivered and is waiting until the remaining functionality is ready and the feature flags are removed. As to “feature regression” or updating how a feature is supported, automated testing and continuous integration should detect and/or protect against these cases, which definitely do happen; they should just happen during the development process and thus before production.

Q. Do you have to do CD using open source, or does it work with closed-source products?
A. I see open-sourcing as a product feature around licensing, transparency, and community. It does not directly have to do with how the software is developed and delivered. So I see no conflicts with closed-source software. Said another way, does Amazon release their store platform as open source? Or Google for Gmail or Google Docs?
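The feature-flag pattern Davis mentions can be shown with a toy example. The `FeatureFlags` class and the `new-checkout` flag are hypothetical; real CD teams typically back this with a flag service or configuration store rather than an in-memory dict, but the shape is the same: deployed code checks a flag, and unreleased functionality stays hidden until the flag is flipped.

```python
class FeatureFlags:
    """Minimal in-memory feature-flag store (illustrative only)."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name):
        # Unknown flags default to off, so unreleased code stays hidden.
        return self._flags.get(name, False)

    def enable(self, name):
        self._flags[name] = True


flags = FeatureFlags({"new-checkout": False})

def render_checkout():
    # Production code branches on the flag; both paths are deployed.
    return "new checkout" if flags.is_enabled("new-checkout") else "classic checkout"

print(render_checkout())  # the new feature is live in production, but hidden
```

The release then becomes a configuration change (flip the flag), not a deployment, which is what lets a CD team keep shipping to production continuously while product decides when users actually see a feature.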



A Q&A on NVMe-oF Performance Hero Numbers

Alex McDonald

Apr 13, 2021

Last month, the SNIA Networking Storage Forum (NSF) hosted a live webcast, “NVMe-oF: Looking Beyond Performance Hero Numbers.” It was extremely popular; in fact, it has been viewed almost 800 times in just two weeks! If you missed it, it’s available on-demand, along with the presentation slides, at the SNIA Educational Library. Our audience asked several great questions during the live event, and our expert presenters, Erik Smith, Rob Davis, and Nishant Lodha, have kindly answered them all here.

Q. There are initiators for Linux but not for Windows? What are my options to connect NVMe-oF to Windows Server?
A. Correct. For many of the benchmarks, a standard Linux-based initiator was used, as it provided a consistent platform on which several of the compared NVMe fabrics are supported and available. Regarding which fabrics are available on Microsoft Windows, it is best to check with your Microsoft representative for the most current information. As far as we are aware, Microsoft does not natively support NVMe-oF, but there are third-party Windows drivers available from other vendors; just search the web for “Windows NVMe-oF Initiator”. In addition, some SmartNICs/DPUs/programmable NICs have the ability to terminate NVMe-oF and present remote NVMe devices as local; such options can be considered for your Windows deployment.

Q. Where do storage services like RAID, deduplication, and such come from in the compute-storage disaggregation model?
A. Storage services can run either on the compute node in software (leveraging server CPU cycles), offloaded to a DPU (Data Processing Unit, or SmartNIC) plugged into the compute node, or on the storage array node, or a combination, depending on the implementation and use case.

Q. The “emerging heterogeneous landscape” slide omits the fact that industry momentum, in both hyperscale and hyperconverged storage variants, has been and continues to be towards proprietary networked block storage protocols. Such providers have shunned both legacy and emerging (e.g., NVMe-oF) network block storage industry standards. Simple block storage does not suffice for them. Isn’t the shown “emerging landscape” a declining niche?
A. This was definitely the direction of the hyperscale and hyperconverged storage variants, but with the maturity and industry-wide support of NVMe and NVMe-oF, this is changing. These providers went the proprietary direction initially because no standardized, secure SAN protocols existed that met their cost, performance, and converged network requirements. With NVMe-oF on Ethernet at 100GbE or higher speeds, these requirements are now met. It is also important to note that many of the hyperscale datacenter companies were instrumental in driving the development of NVMe-oF standards, including NVMe/TCP and RDMA.

Q. At which hardware components does the TCP offloading take place? What TCP offload device was used to generate these numbers?
A. TCP offload for NVMe/TCP can occur either in a standard NIC (if it has TCP offload capabilities) or in a SmartNIC/DPU/programmable NIC (via a dedicated hardware block or compute engines on the SmartNIC). The performance charts depicted in the slides used a variety of standard NICs and InfiniBand devices, including those from NVIDIA and Marvell. To the question about TCP offload, those tests were conducted with a Marvell 25GbE NIC. We recommend checking with your NIC, SmartNIC, or DPU vendor about the specific NVMe-oF offloads they can provide.

Q. Which web and file applications use 4K random writes? Even for databases, 4K is very uncommon.
A. We use a common block size (i.e., 4K) across all fabric types for the sake of creating a fair and realistic comparison. Small-block I/Os (e.g., 512B) are great to use when you are trying to create hero numbers highlighting the maximum number of IOPS supported by an adapter or a storage system. Large-block I/Os (e.g., 128K and above) are great to use when you are trying to create hero numbers that highlight the maximum throughput supported by an adapter or a storage system. We used 4K random writes for this testing because we’ve found that, in general, the results obtained represent what end users can reasonably expect to observe at the application. We recommend looking at all-flash array vendors’ published I/O histograms to see the typical block size and I/O operation seen most commonly on fabrics (see Figure 4 here).

Q. Where could I get an overview of shipped ports, FC vs. FCoE, over time? Curious how impactful FCoE is after all.
A. For specific metrics, any of the market research firms like Crehan, IDC, or Dell’Oro are a good place to start looking. In general, considering the significantly longer time Fibre Channel technologies have been in the market, one would expect Fibre Channel ports to far outnumber FCoE ports in datacenters.

Q. You said there would be a follow-up webcast on this topic. When is that happening?
A. We’re glad you asked! Our next webcast in this series is “Security of Data on NVMe over Fabrics, the Armored Truck Way” on May 12, 2021. You can register for it here. If you’re reading this blog after May 12th, you can view it on-demand.
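The block-size answer above comes down to simple arithmetic: throughput is IOPS times block size, so the block size a benchmark chooses largely determines which hero number looks impressive. A quick sketch (the figures here are illustrative round numbers, not results from the webcast’s benchmarks):

```python
def throughput_mb_s(iops, block_size_bytes):
    """Throughput in MB/s (decimal) implied by an IOPS figure at a given block size."""
    return iops * block_size_bytes / 1_000_000

# 1M IOPS at 512 B blocks moves only ~512 MB/s, while a modest 100K IOPS
# at 128 KiB blocks is ~13 GB/s: small blocks flatter the IOPS number,
# large blocks flatter the throughput number, and 4K sits in between,
# closer to what applications actually issue.
print(throughput_mb_s(1_000_000, 512))
print(throughput_mb_s(1_000_000, 4096))
print(throughput_mb_s(100_000, 131_072))
```

This is why a single benchmark quoting both maximum IOPS and maximum throughput is almost always reporting two different block sizes, and why a mid-size block like 4K random is the fairer single point of comparison.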


Continuing to Refine and Define Computational Storage

SNIAOnStorage

Mar 31, 2021


The SNIA Computational Storage Technical Work Group (TWG) has been hard at work on the SNIA Technical Document Computational Storage Architecture and Programming Model. SNIAOnStorage recently sat down via Zoom with document editor Bill Martin of Samsung and TWG Co-Chairs Jason Molgaard of Arm and Scott Shadley of NGD Systems to understand the work included in the Model and why definitions of computational storage are so important.

SNIAOnStorage (SOS): Shall we start with the fundamentals?  Just what is the Computational Storage Architecture and Programming Model?

Scott Shadley (SS):  The SNIA Computational Storage Architecture and Programming Model (Model) introduces the framework of how to use a new tool to architect your infrastructure by deploying compute resources in the storage layer.

Bill Martin (BM): The Model enables architecture and programming of computational storage devices. These kinds of devices include those with storage physically attached, and also those with storage not physically attached but considered computational because the devices are associated with storage.

SOS: How did the TWG approach creating the Model and what does it cover?

SS:  SNIA is known for bringing standardization to customized operations; and with the Model, users now have a common way to identify the different solutions offered in computational storage devices and a standard way to discover and interact with these devices. Like the way NVMe brought common interaction to the wild west of PCIe, the SNIA Model ensures the many computational storage products already on the market can align to interact in a common way, minimizing the need for unique programming to use solutions most effectively.  

Jason Molgaard (JM):  The Model covers both the hardware architecture and software application programming interface (API) for developing and interacting with computational storage.

BM:  The architecture sections of the Model cover the components that make up computational storage and the API provides a programming interface to those components.

SOS:  I know the TWG members have had many discussions to develop standard terms for computational storage.  Can you share some of these definitions and why it was important to come to consensus?

BM:  The Model defines Computational Storage Devices (CSxs), which comprise Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs).

Each Computational Storage Device contains a Computational Storage Engine (CSE) and some form of host-accessible memory for that engine to utilize.

A Computational Storage Processor has a Computational Storage Engine but does not contain storage. A Computational Storage Drive contains a Computational Storage Engine and storage. And a Computational Storage Array contains an array with an array processor and a Computational Storage Engine.

Finally, the Computational Storage Engine executes Computational Storage Functions (CSFs), which are the entities that define a particular computation.

All of the computational storage terms can be found online in the SNIA Dictionary. 
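To make the hierarchy concrete, the relationships among these terms could be sketched in code. The classes below are purely illustrative, not part of any SNIA specification or API; they simply mirror the definitions above (a CSx always contains a CSE and host-accessible memory, a CSP has no storage, a CSD adds storage, a CSA fronts an array, and a CSE executes CSFs):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ComputationalStorageFunction:
    """A CSF: an entity that defines a particular computation."""
    name: str
    run: Callable[[bytes], bytes]

@dataclass
class ComputationalStorageEngine:
    """A CSE executes CSFs registered with it."""
    functions: Dict[str, ComputationalStorageFunction] = field(default_factory=dict)

    def execute(self, name: str, data: bytes) -> bytes:
        return self.functions[name].run(data)

@dataclass
class ComputationalStorageDevice:
    """Base CSx: every device contains a CSE and host-accessible memory."""
    engine: ComputationalStorageEngine
    memory_bytes: int

@dataclass
class ComputationalStorageProcessor(ComputationalStorageDevice):
    """A CSP has a CSE but does not contain storage."""

@dataclass
class ComputationalStorageDrive(ComputationalStorageDevice):
    """A CSD pairs a CSE with storage."""
    storage_bytes: int = 0

@dataclass
class ComputationalStorageArray(ComputationalStorageDevice):
    """A CSA adds an array processor in front of a CSE and storage."""
    drive_count: int = 0
```

A host could then register a CSF with a drive's engine and invoke it by name, which is the discovery-and-interact pattern the Model standardizes.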

SS: An architecture and programming model is necessary to allow vendor-neutral, interoperable implementations of this industry architecture, and clear, accurate definitions help to define how the computational storage hierarchy works.  The TWG spent many hours defining these standard nomenclatures for providers of computational storage products to use.

JM: It has been a work in progress over the last 18 months, and the perspectives of all the different TWG member companies have brought more clarity to the terms and refined them to better meet the needs of the ecosystem.

BM: One example is the change of what was called computational storage services to the more accurate and descriptive Computational Storage Functions.  The Model defines a list of potential functions such as compression/decompression, encoding/decoding, and database search.  These and many more are described in the document.
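To illustrate why a function like compression makes sense as a CSF: rather than pulling raw data to the host to compress it, the host asks the device-side engine to run the function in place, so only the (smaller) result crosses the bus. The `invoke_csf` helper below is a hypothetical stand-in that runs on the host; it is not a real SNIA or vendor API:

```python
import zlib

def invoke_csf(function_name: str, data: bytes) -> bytes:
    """Hypothetical stand-in for a device-side CSF invocation.

    On real hardware the computation would run in the device's
    Computational Storage Engine; here it simply runs on the host.
    """
    functions = {
        "compress": zlib.compress,
        "decompress": zlib.decompress,
    }
    return functions[function_name](data)

payload = b"log line repeated " * 1000
compressed = invoke_csf("compress", payload)
# Only len(compressed) bytes would need to cross the host interface,
# instead of the full len(payload).
assert invoke_csf("decompress", compressed) == payload
```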

SOS: Is SNIA working with the industry on computational storage standards?

BM:  SNIA has an alliance with the NVM Express® organization, which is working on computational storage. As other organizations (e.g., the CXL Consortium) develop computational storage for their interfaces, SNIA will pursue alliances with those organizations.  You can find details on SNIA alliances here.

SS:  SNIA is also monitoring other Technical Work Group activity inside SNIA such as the Smart Data Accelerator Interface (SDXI) TWG working on the memory layer and efforts around Security, which is a key topic being investigated now.

SOS:  Is a new release of the Computational Storage Architecture and Programming Model pending?

BM:  Stay tuned; the next release of the Model (v0.6) is coming very soon.  It will contain updates and an expansion of the architecture.

JM: We have also been working on an API document, which will be released at the same time as the v0.6 release of the Model.

SOS:  Who will write the software based on the Computational Storage Architecture and Programming Model?

JM:  Computational Storage TWG members will develop open-source software aligned with the API, and application programmers will use those libraries.

SOS: How can the public find out about the next release of the Model?

SS: We will announce it via our SNIA Matters newsletter. Version 0.6 of the Model as well as the API will be up for public review and comment at this link.  And we encourage companies interested in the future of computational storage to join SNIA and contribute to the further development of the Model and the API.  You can reach us with your questions and comments at askcmsi@snia.org.

SOS:  Where can our readers learn more about computational storage?

SS:  Eli Tiomkin, Chair of the SNIA Computational Storage Special Interest Group (CS SIG), Jason, and I sat down to discuss the future of computational storage in January 2021.  The CS SIG also has a series of videos that provide a great way to get up to speed on computational storage.  You can find them and “Geek Out on Computational Storage” here.

SOS:  Thanks for the update, and we’ll look forward to a future SNIA webcast on your computational storage work.

Olivia Rhye

Product Manager, SNIA
