
Protecting NVMe over Fabrics Data from Day One, The Armored Truck Way

John Kim

Apr 27, 2021

With ever-increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in a torrent of lost business data, eroded trust, significant penalties, and potential lawsuits. Potential vulnerabilities exist at every point when scaling out NVMe® storage, which requires data to be secured every time it leaves a server or the storage media, not just when leaving the data center. NVMe over Fabrics is poised to be one of the most dominant storage transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount.

Ensuring the security of that data will be the topic of our SNIA Networking Storage Forum (NSF) webcast “Security of Data on NVMe over Fabrics, the Armored Truck Way” on May 12, 2021. Join the webcast to hear industry experts discuss current and future strategies to secure and protect mission-critical data. You will learn:
  • Industry trends and regulations around data security
  • Potential threats and vulnerabilities
  • Existing security mechanisms and best practices
  • How to secure NVMe data in flight and at rest
  • Ecosystem and market dynamics
  • Upcoming standards
For those of you who follow the many educational webcasts that the NSF hosts, you may have noticed that we are discussing the important topic of data security a lot. In fact, there is an entire Storage Networking Security Webcast Series that dives into protecting data at rest, protecting data in flight, encryption, key management, and more. You might find it useful to check out some of the sessions before our May 12th presentation. Register today! We hope you will join us on May 12th. And please bring your questions. Our experts will be ready to answer them.

Olivia Rhye

Product Manager, SNIA


Continuous Delivery Software Development Q&A

Alex McDonald

Apr 26, 2021

What’s the best way to make a development team lean and agile? It was a question we explored at length during our SNIA Cloud Storage Technologies Initiative live webcast “Continuous Delivery: Cloud Software Development on Speed.” During this session, continuous delivery expert Davis Frank, co-creator of the Jasmine Test Framework, explained why product development teams are adopting a continuous delivery (CD) model. If you missed the live event, you can watch it on-demand here. The webcast audience was highly engaged with this topic and asked some interesting questions. Here are Davis Frank’s answers:

Q. What are a few simple tests you can use to determine your team’s capability to deliver CD?

A. I would ask the team three questions:
  1. Do you want to move to a Continuous Delivery model?
  2. Are you willing to meet to discover what is preventing your team from working in a CD manner, and then hold yourselves accountable to making improvements?
  3. Are you able to repeat the second step regularly?
If the answers to these questions are all yes, then you have the foundation necessary to get started.

Q. If you’re talking about multiple products from different companies, how do you ensure that you can deliver CD-type products?

A. When building cloud software today, you are going to have dependencies: on open-source frameworks, closed-source tooling, and web-based APIs. As these dependencies change, they can affect your products. Automated testing and continuous integration help here. They will catch issues before delivery, just like bugs or issues that come from your team. Finding the issues early means the team can recover, amend, or work around these types of problems so they have a minimal impact on the team’s delivery pace or the overall business.

Q. When you get to team rotation, how well do software engineers in your experience adapt to moving? As you move engineers to new areas of the code, do you find that they spend time rewriting what’s not broken because they didn’t write it in the first place?

A. Every person is different, but most engineers like new problems. Learning a new domain and applying their knowledge to solve new problems is often highly motivating for engineers. The urge to rewrite is often just to build understanding. As I mentioned in the webcast, the “rewrite risk” can be mitigated by pair programming sessions with engineers more familiar with the code and by well-written test suites. The test suites act as documentation on how the code actually works. These accelerate knowledge transfer, reducing some of the motivation to rewrite. There are other reasons to rewrite code. Sometimes it is for reuse, or to make it easier to maintain, or to use newer patterns. These types of rewrites, or refactorings, are natural and happen every day. They improve the code. With good test coverage, the product risk of this type of work is low.

Q. Does the Lean Methodology work well in terms of delivering software, or is it still too heavy a process for rapid development by software engineering teams?

A. Any new process will feel heavyweight to a team. I recommend finding things that are not working and using new techniques – whatever their origin – to attempt to fix or optimize them. With short feedback loops, you can experiment, tweak, and improve – this is the Learn cycle of Build-Measure-Learn – until you have fixed a problem. And then pick a new one.

Q. Do you still see a need for a “product release” timeline, or does this move software completely to a place where a feature is enabled as soon as it’s ready? How do you cover “feature regression” if new code breaks an existing feature, or updates the way the feature is supported?

A. We touched on the first part of this question in the webcast. When a CD team is working well, they are just always delivering new features to production. Whether those features are available to users is a product decision and can be tied to a planned release timeline. Companies often use feature flags, or other similar technology, to hide functionality from users until it is available. Hiding functionality could be necessary due to public announcement or marketing concerns, or because partial functionality has been delivered and is waiting until the remaining functionality is ready and the feature flags are removed. As to “feature regression,” or updating how a feature is supported, automated testing and continuous integration should detect and protect against these cases – which absolutely happen – they should just happen during the development process and thus before production.

Q. Do you have to do CD using open source, or does it work with closed-source products?

A. I see open-sourcing as a product feature around licensing, transparency, and community. It does not directly have to do with how the software is developed and delivered. So, I see no conflicts with closed-source software. Said another way, does Amazon release their store platform as open source? Or Google for Gmail or Google Docs?
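The feature-flag approach Davis describes can be sketched in a few lines of Python. This is an illustrative sketch only; the `FeatureFlags` class, the flag name, and the checkout example are hypothetical, not taken from any particular product or library:

```python
# Minimal feature-flag sketch: code ships to production continuously,
# but a feature stays hidden until its flag is switched on.
class FeatureFlags:
    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name):
        # Unknown flags default to off, so half-finished work stays hidden.
        return self._flags.get(name, False)

    def enable(self, name):
        self._flags[name] = True


def render_checkout(flags):
    # The code for both paths is already deployed; the flag decides.
    if flags.is_enabled("new-checkout"):
        return "new checkout flow"
    return "classic checkout flow"


flags = FeatureFlags({"new-checkout": False})
print(render_checkout(flags))   # classic checkout flow
flags.enable("new-checkout")    # a product decision, not a code deploy
print(render_checkout(flags))   # new checkout flow
```

The key point is that flipping the flag is a product decision that requires no new deployment, which is what lets delivery stay continuous while release timing stays under the product team’s control.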


A Q&A on NVMe-oF Performance Hero Numbers

Alex McDonald

Apr 13, 2021

Last month, the SNIA Networking Storage Forum (NSF) hosted a live webcast “NVMe-oF: Looking Beyond Performance Hero Numbers.” It was extremely popular; in fact, it has been viewed almost 800 times in just two weeks! If you missed it, it’s available on-demand, along with the presentation slides, at the SNIA Educational Library. Our audience asked several great questions during the live event, and our expert presenters, Erik Smith, Rob Davis and Nishant Lodha, have kindly answered them all here.

Q. There are initiators for Linux but not for Windows? What are my options to connect NVMe-oF to Windows Server?

A. Correct. For many of the benchmarks, a standard Linux-based initiator was used, as it provided a consistent platform on which several of the compared NVMe fabrics are supported/available. Regarding which fabrics are available on Microsoft Windows, it is best to check with your Microsoft representative for the most current information. As far as we are aware, Microsoft does not natively support NVMe-oF, but there are third-party Windows drivers available from other vendors; just search the web for “Windows NVMe-oF Initiator.” In addition, some SmartNICs/DPUs/programmable NICs have the ability to terminate NVMe-oF and present remote NVMe devices as local; such options can be considered for your Windows deployment.

Q. Where do storage services like RAID, deduplication, and such come from in the compute-storage disaggregation model?

A. Storage services can run either on the compute node in software (leveraging server CPU cycles), offloaded to a DPU (Data Processing Unit or SmartNIC) that is plugged into the compute node, or on the storage array node, or a combination, depending on the implementation and use case.

Q. The “emerging heterogeneous landscape” slide omits the fact that industry momentum, in both hyperscale and hyperconverged storage variants, has been and continues to be towards proprietary networked block storage protocols. Such providers have shunned both legacy and emerging (e.g., NVMe-oF) network block storage industry standards. Simple block storage does not suffice for them. Isn’t the shown “emerging landscape” a declining niche?

A. This was definitely the direction of the “hyperscale and hyperconverged storage variants,” but with the maturity and industry-wide support of NVMe and NVMe-oF this is changing. These providers went the “proprietary networked block storage protocols” direction initially for their deployments because no standardized, secure SAN protocols existed that met their cost, performance, and converged network requirements. With NVMe-oF on Ethernet at 100GbE or higher speeds, these requirements are now met. It is also important to note that many of the hyperscale datacenter companies were instrumental in driving the development of NVMe-oF standards, including NVMe/TCP and RDMA.

Q. At which hardware components does the TCP offloading take place? What TCP offload device was used to generate these numbers?

A. TCP offload for NVMe/TCP can occur either in a standard NIC (if it has TCP offload capabilities) or in a SmartNIC/DPU/programmable NIC (via a dedicated hardware block or compute engines on the SmartNIC). The performance charts depicted in the slides used a variety of standard NICs and InfiniBand devices, including those from NVIDIA and Marvell. To the question about TCP offload, those tests were conducted with a Marvell 25GbE NIC. We recommend checking with your NIC, SmartNIC, or DPU vendor about the specific NVMe-oF offloads they can provide.

Q. Which web and file applications use 4k random writes? Even for DBs, 4k is very uncommon.

A. We use a common block size (i.e., 4k) across all fabric types for the sake of creating a fair and realistic comparison. Small-block I/Os (e.g., 512B) are great to use when you are trying to create hero numbers highlighting the maximum number of IOPS supported by an adapter or a storage system. Large-block I/Os (e.g., 128k and above) are great to use when you are trying to create hero numbers that highlight the maximum throughput supported by an adapter or a storage system. We used 4k random writes for this testing because we’ve found (in general) the results obtained represent what end users can reasonably expect to observe at the application. We recommend looking at published I/O histograms from All-Flash Array vendors to see the typical block sizes and I/O operations seen most commonly on fabrics (see Figure 4 here).

Q. Where could I get an overview of shipped ports, FC vs. FCoE, over time? Curious how impactful FCoE is after all.

A. For specific metrics, any of the market research firms like Crehan, IDC or Dell’Oro are a good place to start looking. In general, considering the significantly longer time Fibre Channel technologies have been in the market, one would expect Fibre Channel ports to far outnumber FCoE ports in datacenters.

Q. You said there would be a follow-up webcast on this topic. When is that happening?

A. We’re glad you asked! Our next webcast in this series is “Security of Data on NVMe over Fabrics, The Armored Truck Way” on May 12, 2021. You can register for it here. If you’re reading this blog after May 12th, you can view it on-demand.
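The trade-off behind these hero numbers is simple arithmetic: throughput is IOPS multiplied by block size, so the same device limit looks like an “IOPS hero number” at small blocks and a “throughput hero number” at large ones. A quick sketch (the figures are illustrative, not measured results):

```python
def throughput_gbps(iops, block_bytes):
    """Line rate in Gb/s implied by an IOPS figure at a given block size."""
    return iops * block_bytes * 8 / 1e9  # bytes/s -> bits/s -> Gb/s

# 1 million IOPS at 512B sounds impressive, but is only ~4.1 Gb/s on the wire...
print(throughput_gbps(1_000_000, 512))    # 4.096
# ...while a modest 100k IOPS at 128k blocks already saturates a 100GbE link.
print(throughput_gbps(100_000, 131072))   # 104.8576
```

This is why a mid-range block size like 4k random writes gives a fairer picture of what applications actually observe than either extreme.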


Continuing to Refine and Define Computational Storage

SNIAOnStorage

Mar 31, 2021


The SNIA Computational Storage Technical Work Group (TWG) has been hard at work on the SNIA Technical Document Computational Storage Architecture and Programming Model.  SNIAOnStorage recently sat down via Zoom with document editor Bill Martin of Samsung and TWG Co-Chairs Jason Molgaard of Arm and Scott Shadley of NGD Systems to understand the work included in the Model and why definitions of computational storage are so important.

SNIAOnStorage (SOS): Shall we start with the fundamentals?  Just what is the Computational Storage Architecture and Programming Model?

Scott Shadley (SS):  The SNIA Computational Storage Architecture and Programming Model (Model) introduces the framework of how to use a new tool to architect your infrastructure by deploying compute resources in the storage layer.

Bill Martin (BM): The Model enables architecture and programming of computational storage devices. These kinds of devices include those with storage physically attached, and also those with storage not physically attached but considered computational because the devices are associated with storage.

SOS: How did the TWG approach creating the Model and what does it cover?

SS:  SNIA is known for bringing standardization to customized operations; and with the Model, users now have a common way to identify the different solutions offered in computational storage devices and a standard way to discover and interact with these devices. Like the way NVMe brought common interaction to the wild west of PCIe, the SNIA Model ensures the many computational storage products already on the market can align to interact in a common way, minimizing the need for unique programming to use solutions most effectively.  

Jason Molgaard (JM):  The Model covers both the hardware architecture and software application programming interface (API) for developing and interacting with computational storage.

BM:  The architecture sections of the Model cover the components that make up computational storage and the API provides a programming interface to those components.

SOS:  I know the TWG members have had many discussions to develop standard terms for computational storage.  Can you share some of these definitions and why it was important to come to consensus?

BM:  The model defines Computational Storage Devices (CSxs) which are composed of Computational Storage Processors (CSPs), Computational Storage Drives (CSDs), and Computational Storage Arrays (CSAs). 

Each Computational Storage Device contains a Computational Storage Engine (CSE) and some form of host accessible memory for that engine to utilize. 

The Computational Storage Processor is a device that has a Computational Storage Engine but does not contain storage. The Computational Storage Drive contains a Computational Storage Engine and storage.  And the Computational Storage Array contains an array with an array processor and a Computational Storage Engine.

Finally, the Computational Storage Engine executes Computational Storage Functions (CSFs) which are the entities that define the particular computation.  

All of the computational storage terms can be found online in the SNIA Dictionary. 
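As a rough illustration, the hierarchy of terms above can be modeled as a small Python type sketch. The class names mirror the Model’s terminology, but the fields and methods here are hypothetical illustrations of the relationships, not the API defined in the specification:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ComputationalStorageFunction:
    """CSF: a particular computation, e.g. compression or database search."""
    name: str


@dataclass
class ComputationalStorageEngine:
    """CSE: executes CSFs, using host-accessible memory on the device."""
    functions: List[ComputationalStorageFunction] = field(default_factory=list)

    def execute(self, name: str) -> str:
        if any(f.name == name for f in self.functions):
            return f"executing {name}"
        raise ValueError(f"no such CSF: {name}")


@dataclass
class ComputationalStorageDevice:
    """CSx: every variant contains a CSE."""
    engine: ComputationalStorageEngine


@dataclass
class ComputationalStorageProcessor(ComputationalStorageDevice):
    """CSP: has a CSE but no storage of its own."""


@dataclass
class ComputationalStorageDrive(ComputationalStorageDevice):
    """CSD: a CSE plus storage."""
    capacity_gb: int = 0


csd = ComputationalStorageDrive(
    engine=ComputationalStorageEngine(
        functions=[ComputationalStorageFunction("compression")]),
    capacity_gb=4096)
print(csd.engine.execute("compression"))  # executing compression
```

The sketch captures the containment rules the interview describes: every CSx contains a CSE, a CSP has no storage, and a CSD pairs its CSE with storage capacity.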

SS: An architecture and programming model is necessary to allow vendor-neutral, interoperable implementations of this industry architecture, and clear, accurate definitions help to define how the computational storage hierarchy works.  The TWG spent many hours defining these standard nomenclatures to be used by providers of computational storage products.

JM: It has been a work in progress over the last 18 months, and the perspectives of all the different TWG member companies have brought more clarity to the terms and refined them to better meet the needs of the ecosystem.

BM: One example has been the change of what was called computational storage services to the more accurate and descriptive Computational Storage Functions.   The Model defines a list of potential functions such as compression/decompression, encoding/decoding, and database search.  These and many more are described in the document.

SOS: Is SNIA working with the industry on computational storage standards?

BM:    SNIA has an alliance with the NVM Express® organization and they are working on computational storage. As other organizations (e.g., CXL Consortium) develop computational storage for their interface, SNIA will pursue alliances with those organizations.  You can find details on SNIA alliances here.

SS:  SNIA is also monitoring other Technical Work Group activity inside SNIA such as the Smart Data Accelerator Interface (SDXI) TWG working on the memory layer and efforts around Security, which is a key topic being investigated now.

SOS:  Is a new release of the Computational Storage Architecture and Programming Model pending?

BM:  Stay tuned, the next release of the Model – v0.6 – is coming very soon.  It will contain updates and an expansion of the architecture.

JM: We have also been working on an API document which will be released at the same time as the v0.6 release of the Model.

SOS:  Who will write the software based on the Computational Storage Architecture and Programming Model?

JM:  Computational Storage TWG members will develop open-source software aligned with the API, and application programmers will use those libraries.

SOS: How can the public find out about the next release of the Model?

SS: We will announce it via our SNIA Matters newsletter. Version 0.6 of the Model as well as the API will be up for public review and comment at this link.  And we encourage companies interested in the future of computational storage to join SNIA and contribute to the further development of the Model and the API.  You can reach us with your questions and comments at askcmsi@snia.org.

SOS:  Where can our readers learn more about computational storage?

SS:  Eli Tiomkin, Chair of the SNIA Computational Storage Special Interest Group (CS SIG), Jason, and I sat down to discuss the future of computational storage in January 2021.  The CS SIG also has a series of videos that provide a great way to get up to speed on computational storage.  You can find them and “Geek Out on Computational Storage” here.

SOS:  Thanks for the update, and we’ll look forward to a future SNIA webcast on your computational storage work.


Another Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized

David McIntyre

Mar 26, 2021


The SNIA Networking Storage Forum’s “Great Storage Debate” webcast series is back! This time, SNIA experts will be discussing the ongoing evolution of the data center, in particular how storage is allocated and managed. There are three competing visions about how storage should be done: Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage. Join us on May 4, 2021 for our live webcast Great Storage Debate: Hyperconverged vs. Disaggregated vs. Centralized.

IT architects, storage vendors, and industry analysts argue constantly over which is the best approach and even the exact definition of each. Isn’t Hyperconverged constrained? Is Disaggregated designed only for large cloud service providers? Is Centralized storage only for legacy applications?

Tune in to debate these questions and more:  

  • What is the difference between centralized, hyperconverged, and disaggregated infrastructure, when it comes to storage?
  • Where does the storage controller or storage intelligence live in each?
  • How and where can the storage capacity and intelligence be distributed?
  • What is the difference between distributing the compute or application and distributing the storage?
  • What is the role of a JBOF or EBOF (Just a Bunch of Flash or Ethernet Bunch of Flash) in these storage models?
  • What are the implications for data center, cloud, and edge?  

Register today as leading storage minds converge to argue the definitions and merits of where to put the storage and storage intelligence.

For anyone not familiar with the Great Storage Debates, it is important to note that this series isn't about winners and losers; it's about providing essential compare-and-contrast information between similar technologies. We won't settle any arguments as to which is better, but we will debate the arguments, point out advantages and disadvantages, and make the case for specific use cases.

To date, the SNIA NSF has hosted several great storage debates, including: File vs. Block vs. Object Storage, Fibre Channel vs. iSCSI, FCoE vs. iSCSI vs. iSER, RoCE vs. iWARP, and Centralized vs. Distributed. You can view them all on our SNIAVideo YouTube Channel.


Q&A on the Ethics of AI

Jim Fister

Mar 25, 2021

title of post
Earlier this month, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted an intriguing discussion on the Ethics of Artificial Intelligence (AI). Our experts, Rob Enderle, Founder of The Enderle Group, and Eric Hibbard, Chair of the SNIA Security Technical Work Group, shared their experiences and insights on what it takes to keep AI ethical. If you missed the live event, it is available on-demand along with the presentation slides at the SNIA Educational Library. As promised during the live event, our experts have provided written answers to the questions from this session, many of which we did not have time to get to.

Q. The webcast cited a few areas where AI as an attacker could make a potential cyber breach worse. Are there also some areas where AI as a defender could make cybersecurity or general welfare more dangerous for humans?

A. Indeed, we addressed several different scenarios where AI operates at a speed of thought and reaction much faster than a human's. Some that we didn't address are the impact of AI on general cybersecurity. Phishing attacks using AI are getting more sophisticated, and an AI that can compromise systems with cameras or microphones has the ability to pick up significant amounts of information from users. As we continue to automate responses to attacks, there could be situations where an attacker is misidentified and an innocent person is charged by mistake. AI operates at large scale, sometimes making decisions on data that is not apparent to humans looking at the same data. This might cause an issue where an AI believes a human is in the wrong in ways that we could not otherwise see. An AI might also overreact to an attack: for instance, noticing an attempt to hack into a company's infrastructure and shutting down that infrastructure in an abundance of caution could leave workers with no power, lights, or air conditioning. Some water-cooling systems, if shut down suddenly, will burst, which could cause both safety issues and severe damage.

Q. What are some of the technical and legal standards currently in place that try to regulate AI from an ethics standpoint? Are legal experts actually familiar enough with AI technology and bias training to make informed decisions?

A. The legal community is definitely aware of AI. As an example, the American Bar Association Science and Technology Law Section's (ABA SciTech) Artificial Intelligence & Robotics Committee has been active since at least 2008. ABA SciTech is currently planning its third National Institute on Artificial Intelligence (AI) and Robotics for October 2021, in which AI ethics will figure prominently. That said, case law on AI ethics/bias in the U.S. is still limited, but expected to grow as AI becomes more prevalent in business decisions and operations. It is also worth noting that international standards on AI ethics/bias either exist or are under development. For example, the IEEE 7000 Standards Working Groups are already developing standards for the future of ethical intelligent and autonomous technologies. In addition, ISO/IEC JTC 1/SC 42 is developing AI and machine learning standards that include ethics/bias as an element.

Q. The webcast talked a lot about automated vehicles and the work done by companies in terms of safety as well as liability protection. Is there a possibility that these two conflict?

A. In the webcast we discussed the fact that autonomous vehicle safety requires a multi-layered approach that could include connectivity in-vehicle, with other vehicles, with smart city infrastructure, and with individuals' schedules and personal information. This is obviously a complex environment, and the current liability process makes it difficult for companies and municipalities to work together without encountering legal risk. For instance, let's say an autonomous car sees a pedestrian in danger and could place itself between the pedestrian and that danger, but it doesn't because the resulting accident could expose the vehicle to liability. Or, hitting ice on a corner, it turns control over to the driver so the driver is clearly responsible for the accident, even though the autonomous system could be more effective at reducing the chance of a fatal outcome.

Q. You didn't discuss much on AI as a teacher. Is there a possibility that AI could be used to educate students, and what are some of the ethical implications of AI teaching humans?

A. An AI can scale to individually focused custom teaching plans far better than a human could. However, AIs aren't inherently unbiased, and where they're corrupted through their training they will perform consistently with that training. If the training promotes unethical behavior, that is what the AI will teach.

Q. Could an ethical issue involving AI become unsolvable by current human ethical standards? What is an example of that, and what are some steps to mitigate that circumstance?

A. Certainly. Ethics are grounded in rules, and those rules aren't consistent and are in flux. These two conditions make it virtually impossible to assure the AI is truly ethical because the related standard is fluid. Machines like immutable rules; ethics rules aren't immutable.

Q. I can't believe that nobody's brought up HAL from Arthur C. Clarke's 2001. Wasn't this a prototype of AI ethics issues?

A. We spent some time at the end of the session, where Jim mentioned that our "Socratic forebearers" were some of the early science fiction writers such as Clarke and Isaac Asimov. We discussed Asimov's Three Laws of Robotics and how Asimov and others later theorized how smart robots could get around the three laws. In truth, there have been decades of thought on the ethics of artificial intelligence, and we're fortunate to be able to build on that as we address what are now real-world problems.
