Confidential Computing FAQ

Jim Fister

Jun 25, 2021

Recently, the SNIA Cloud Storage Technologies Initiative (CSTI) hosted a lively panel discussion, “What is Confidential Computing and Why Should I Care?” It was the first in a three-part series of Confidential Computing security discussions; you can learn about the series here. The webcast featured three experts who are working to define the Confidential Computing architecture: Mike Bursell of the Enarx Project, David Kaplan of AMD, and Ronald Perez of Intel. The session served as an introduction to Confidential Computing and examined the technology and its initial uses. The audience asked several interesting questions. We’re answering some of the more basic ones here, as well as some that did not get addressed directly during the live event.

Q. What is Confidential Computing? How does it complement existing security efforts, such as the Trusted Platform Module (TPM)?

A. Confidential Computing is an architectural approach to security that uses virtualization to create a Trusted Execution Environment (TEE). This environment can run an arbitrary amount of code, though in practice the code placed in the protected environment is usually kept small and selective. This allows data to be completely protected, even from other code and data running on the same system.

Q. Is Confidential Computing only for a CPU architecture?

A. The current architecture is focused on delivering this capability via the CPU, but nothing prevents other system components such as GPUs, FPGAs, or the like from implementing a similar architecture.

Q. It was mentioned that with Confidential Computing, one only needs to trust their own code along with the hardware. With the prevalence of microarchitectural attacks that break down various isolation mechanisms, can the hardware really be trusted?

A. Most implementations create a TEE using fairly well-tested hardware and security infrastructure, so the threat profile is fairly low. However, any implementation in the market does need to follow proper protocol to best protect data. An example would be ensuring that data in the TEE is only used or accessed there and is not passed to non-trusted execution areas.

Q. Are there potential pitfalls in TEE implementations that might become security issues later, similar to speculative execution? Are there potential side-channel attacks using TEEs?

A. No security solution is 100% secure, and there is always a risk of vulnerabilities in any product. But perfect cannot be the enemy of good, and TEEs are a great defense-in-depth tool that provides an additional layer of isolation on top of existing security controls, making data that much more secure. Additionally, the recent trend has been to consider security much earlier in the design process and to perform targeted security testing to identify and mitigate issues as early as possible.

Q. Is this just a new technology, or is there a bigger value proposition? What’s in it for the CISO or the CIO?

A. There are a variety of answers to this. One is that running a TEE in the cloud protects vital workloads that otherwise could not run on a shared system. Another benefit is that key secrets can be secured while much of the rest of the code runs at a lower privilege level, which helps with costs. Among the many security initiatives competing for attention, Confidential Computing may also be one of the easier ones to explain to the management team.

Q. Anybody have a guess at what a regulation/law might look like? A certification test analogous to the FCC’s (obviously more complex)? Other approaches?

A. This technology responds to the need for stronger security and privacy, which includes legal compliance with regulations being passed by states like California, but it has not taken the form of certifications at this time. Individual vendors will retain the necessary functions of their virtualization products and may consider security as one of the characteristics within their certifications.

To hear answers to all the questions that our esteemed panel addressed during the live event, please watch this session on-demand.
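The access rule discussed above, that data in the TEE is only used or accessed there and never passed to non-trusted execution areas, can be sketched in code. This is an illustrative toy only: a real TEE is enforced by hardware features such as memory encryption and attestation, not by a Python class, and the names here are invented for the example.

```python
# Toy model of the TEE access rule: secret data is used *inside* the
# trusted boundary, and only derived results ever leave it.
# This is NOT a real TEE; hardware enforces the real boundary.
import hashlib

class ToyEnclave:
    """Holds a secret and exposes only computed results, never the secret."""

    def __init__(self, secret: bytes):
        self._secret = secret  # stays inside the "trusted" boundary

    def mac(self, message: bytes) -> str:
        # Only the result of a computation over the secret leaves the enclave.
        return hashlib.sha256(self._secret + message).hexdigest()

    def export_secret(self):
        # The rule from the Q&A: raw data in the TEE must not be passed
        # out to non-trusted execution areas.
        raise PermissionError("secret data may not leave the enclave")

enclave = ToyEnclave(b"key-material")
tag = enclave.mac(b"hello")       # allowed: a derived result crosses out
try:
    enclave.export_secret()       # disallowed: the raw secret stays inside
except PermissionError as err:
    print("blocked:", err)
```

The point of the pattern is that untrusted code can request work from the protected environment but can never read its contents directly.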



Accelerating Disaggregated Storage to Optimize Data-Intensive Workloads

SNIA CMS Community

Jun 21, 2021


Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. This rapid growth is causing storage and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and in-storage computational processing to offload and accelerate data-intensive workloads.

On June 29, 2021, the SNIA Compute, Memory and Storage Initiative will host a lively webcast discussion on today’s storage challenges in an aggregated storage world and whether a disaggregated storage model could optimize data-intensive workloads. We’ll talk about the concept of a Data Processing Unit (DPU) and whether a DPU should be combined with a storage data processor to accelerate compute-intensive functions. We’ll also introduce the concept of key-value storage and how it can be an enabler to solve storage problems.

Join moderator Tim Lustig, Co-Chair of the CMSI Marketing Committee, and speakers John Kim from NVIDIA and Kfir Wolfson from Pliops as we shift into overdrive to accelerate disaggregated storage. Register now for this free webcast.

The post Accelerating Disaggregated Storage to Optimize Data-Intensive Workloads first appeared on SNIA Compute, Memory and Storage Blog.


Storage Technologies & Practices Ripe for Refresh – Part 2

Alex McDonald

Jun 7, 2021

So much of what we discuss in SNIA is the latest emerging technology in storage. While it’s good to know all about the latest and greatest technologies, it’s also important to understand those technologies being sunsetted. In this SNIA Networking Storage Forum (NSF) webcast series, “Storage Technologies & Practices Ripe for Refresh,” we cover technologies that are at (or close to) being past their useful life.

On June 22, 2021, we’ll host the second installment of this series, Storage Technologies & Practices Ripe for Refresh – Part 2, where we’ll discuss obsolete hardware, protocols, interfaces and other aspects of storage. We’ll offer advice on how to replace these older technologies in production environments, as well as why these changes are recommended. We’ll also cover protocols that you should consider removing from your networks: either older versions of protocols where only newer versions should be used, or protocols that have been supplanted by superior options and should be discontinued entirely. Finally, we will look at physical networking interfaces and cabling that are popular today but face an uncertain future as networking speeds grow ever faster.

Join us on June 22nd to learn if there is anything ripe for refresh in your data center. And if you missed the first webcast in this series, you can view it on demand here.


How SNIA Swordfish™ Expanded with NVMe® and NVMe-oF™

Richelle Ahlvers

Jun 2, 2021


The SNIA Swordfish™ specification and ecosystem are growing in scope to include full enablement and alignment for NVMe® and NVMe-oF client workloads and use cases. By partnering with other industry-standard organizations including DMTF®, NVM Express, and OpenFabrics Alliance (OFA), SNIA’s Scalable Storage Management Technical Work Group has updated the Swordfish bundles from version 1.2.1 and later to cover an expanding range of NVMe and NVMe-oF functionality including NVMe device management and storage fabric technology management and administration.

The Need
Large-scale computing designs are increasingly multi-node and linked together through high-speed networks. These networks may be composed of different technology types, and they are fungible and continually morphing. Over time, many different types of high-performance networking devices will evolve to participate in these modern, coupled computing platforms. New fabric management capabilities, orchestration, and automation will be required to deploy, secure, and optimally maintain these high-speed networks.

The NVMe and NVMe-oF specifications provide comprehensive management for NVMe devices at an individual device level. However, when you want to manage these devices at a system or data center level, DMTF Redfish and SNIA Swordfish are the industry’s gold standards. Together, Redfish and Swordfish enable a comprehensive view across the system, data center, and enterprise, with NVMe and NVMe-oF instrumenting the device-level view. This complete approach provides a way to manage your entire environment across technologies with standards-based management, making it more cost-effective and easier to operate.
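As a rough sketch of this layered view, a Redfish/Swordfish service exposes the system- and storage-level picture as a tree of JSON resources, with per-device detail (here, an NVMe drive) hanging off it. The resource paths and property names below follow common Redfish/Swordfish conventions but are an illustrative mockup, not a conformant service.

```python
# Mockup of a tiny slice of a Redfish/Swordfish resource tree.
# Paths and properties are illustrative, not taken from the specifications.
mock_service = {
    "/redfish/v1/Storage/1": {
        "Name": "NVMe Subsystem",
        "Drives": ["/redfish/v1/Storage/1/Drives/NVMeDrive0"],
    },
    "/redfish/v1/Storage/1/Drives/NVMeDrive0": {
        "Name": "NVMe SSD 0",
        "Protocol": "NVMe",
        "CapacityBytes": 1920383410176,
        "Status": {"Health": "OK"},
    },
}

def nvme_drives(service: dict) -> list:
    """Collect every drive resource reporting the NVMe protocol."""
    return [res for res in service.values() if res.get("Protocol") == "NVMe"]

for drive in nvme_drives(mock_service):
    print(drive["Name"], drive["Status"]["Health"])
```

A management client walks the same tree whether the drives behind it are NVMe, NVMe-oF, or another technology, which is the system-level view the text describes.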

The Approach
The expanded NVMe resource management within SNIA Swordfish consists of a mapping between the DMTF Redfish, Swordfish, and NVMe specifications, enabling developers to construct a standard implementation within the Redfish and Swordfish service for any NVMe and NVMe-oF managed device.

The architectural approach to creating the SNIA Swordfish 1.2.1 version of the standard began with a deep dive into the existing management models of systems and servers for Redfish, storage for Swordfish, and fabrics management within NVM Express. After evaluating each approach, there was a step-by-step walkthrough to map the models. From that, we created mockups and a comprehensive mapping guide using examples of object and property level mapping between the standard ecosystems.

In addition, Swordfish profiles were created that provide a comprehensive representation of required properties for implementations. These profiles have been incorporated into the new Swordfish Conformance Test Program (CTP), to support NVMe capabilities. Through its set of test suites, the CTP validates that a company’s products conform to a specified version of the Swordfish specification. CTP supports conformance testing against multiple versions of Swordfish.
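The profile idea can be sketched minimally: a profile lists the properties an implementation must supply, and a conformance check walks a resource and reports anything missing. The required-property set below is invented for illustration; the real Swordfish profiles and the CTP test suites define the actual requirements.

```python
# Hypothetical required-property set standing in for a Swordfish profile.
PROFILE_REQUIRED = {"Name", "Protocol", "CapacityBytes", "Status"}

def missing_properties(resource: dict, required=frozenset(PROFILE_REQUIRED)) -> set:
    """Return the required properties a resource fails to implement."""
    return set(required) - resource.keys()

# A mock drive resource that omits one required property.
drive = {"Name": "NVMe SSD 0", "Protocol": "NVMe", "Status": {"Health": "OK"}}
print(missing_properties(drive))  # the drive omits CapacityBytes
```

A real CTP run is of course far richer (types, values, versioned schemas), but the shape of the check, required properties versus what an implementation exposes, is the same.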

What’s Next?
In 2021, the Swordfish specification will continue to be enhanced to fully capitalize on the fabrics model by extending fabric technology-specific use cases and creating more profiles for additional device types.

Want to learn more?
Watch SNIA’s on-demand webcast, “Universal Fabric Management for Tomorrow’s Data Centers,” where Phil Cayton, Senior Staff Software Engineer, Intel; and Richelle Ahlvers, storage technology enablement architect, Intel Corporation, Board of Directors, SNIA, provide insights into how standard organizations are working together to improve and promote the vendor-neutral, standards-based management of open source fabrics and remote network services or devices in high-performance data center infrastructures.




Q&A on Data Movement and Computational Storage

SNIA CMSI

May 26, 2021


Recently, the SNIA Compute, Memory, and Storage Initiative hosted a live webcast “Data Movement and Computational Storage”, moderated by Jim Fister of The Decision Place with Nidish Kamath of KIOXIA, David McIntyre of Samsung, and Eli Tiomkin of NGD Systems as panelists.  We had a great discussion on new ways to look at storage, flexible computer systems, and how to put on your security hat.

During our conversation, we answered audience questions, and raised a few of our own!  Check out some of the back-and-forth, and tune in to the entire video for customer use cases and thoughts for the future.

Q:  What is the value of computational storage?

A:  With computational storage, you have latency sensitivity – you can make decisions faster at the edge and can also distribute computing to process decisions anywhere.

Q:  Why is it important to consider “data movement” with regard to computational storage?

A:  Computational storage reduces data movement in the system, moves the remaining data more efficiently, and lowers power consumption in ways users may not yet have considered.

Q: How does power use change when computational storage is brought in?

A:  You want to “move” compute to that point in the system where operations can be accomplished where the data is “at rest”. In traditional systems, if you need to move data from storage to the host, there are power costs that may not even be currently measured.  However, if you can now run applications and not move data, you will realize that power reduction, which is more and more important with the anticipation of massive quantities of data coming in the future.

Q: Are the traditional processing/storage transistor counts the same with computational storage?

A:  With computational storage, you can put the programming where it is needed – moving the compute to that point in the system where it can achieve the work with limited amount of overhead and networking bandwidth. Compute moves to where the data sits at rest, which is especially important with the explosion of data sets.
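The data-movement argument above lends itself to a back-of-the-envelope comparison: filtering a large dataset host-side means reading every byte across the interconnect, while an on-device filter returns only the matching fraction. The dataset size and selectivity below are invented for illustration.

```python
# Assumed, illustrative numbers: 10 TiB at rest, query matches 0.1% of it.
DATASET_BYTES = 10 * 1024**4
SELECTIVITY = 0.001

def host_side_filter_bytes(dataset: int) -> int:
    # Traditional model: all data crosses storage -> host before filtering.
    return dataset

def device_side_filter_bytes(dataset: int, selectivity: float) -> int:
    # Computational storage model: only matching results cross to the host.
    return int(dataset * selectivity)

moved_host = host_side_filter_bytes(DATASET_BYTES)
moved_dev = device_side_filter_bytes(DATASET_BYTES, SELECTIVITY)
print(f"reduction: {moved_host / moved_dev:.0f}x less data moved")
```

Every byte that never crosses the storage-to-host link is interconnect bandwidth and power not spent, which is the reduction the panelists describe.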

Q:  Does computational storage play a role in data security and privacy?

A: Security threats don’t always happen at the same time, so you need to consider a top-down holistic perspective. It will be important both today and in the future to consider new security threats because of data movement.

There is always a risk for security when the data is moving; however, computational storage reduces the data movement significantly, and can play as a more secure way to treat data because the data is not moving as much. Computational storage allows you to lock the data, for example, medical data, and only process when needed and if needed in an authenticated and secure fashion.  There’s no requirement to build a whole system around this.

Q:  What are the computational storage opportunities at the edge? 

A:  We need to understand the ecosystem the computational storage device is going into. Computational storage sits at the front line of edge applications and management of edge infrastructure pieces in the cloud.  It’s a great time to embrace existing cloud policies and collaborate with customers on how policies will migrate and change to the edge.

Q: In your discussions with customers, how dynamic do they expect the sets of code running on computational storage to be? With the extremes being code never changing (installed once/updated rarely) to being different for every query or operation. Please discuss how challenges differ for these approaches.

A:  The heavy lift comes into play with the application and the system integration.  To run flexible code, customers want a simple, straightforward, and seamless programming model that enables them to run as many applications as they need and change them in an easy way without disrupting the system.  Clients are using computational storage to speed up the processing of their data with dynamic reconfiguring in cutting edge applications.  We are putting a lot of effort toward this seamless and transparent model with our work in the SNIA Computational Storage Technical Work Group.

Q:  What does computational storage mean for data in the future?

A: The infrastructure of data and data movement will drastically change in the future as edge emerges and cloud continues to grow. Using computational storage will be extremely beneficial in the new infrastructure, and we will need to work together as an ecosystem and under SNIA to make sure we are all aligned to provide the right solutions to the customer.  



The Confidential Computing Webcast Series

Jim Fister

May 25, 2021

The need for improved data security and privacy seems to grow bigger every day. Continuous attacks by hackers and rogue governments are increasing the demand from businesses and consumers alike to make stronger data protection a top priority. In the midst of this need, Confidential Computing has emerged as a solution for stronger data security and is gaining traction from a variety of start-ups and established companies.

The SNIA Cloud Storage Technologies Initiative (CSTI) will be presenting a series of new webcasts on Confidential Computing. This three-part series will provide an introduction to Confidential Computing and dive into its unique approach for protecting data in use, as well as its use cases.

I will be hosting the first discussion, “What is Confidential Computing and Why Should I Care?” on June 9, 2021, featuring Mike Bursell, Co-founder of the Enarx Project; David Kaplan of AMD; and Ronald Perez of Intel – all members of the Confidential Computing Consortium. This panel discussion will detail the need for Confidential Computing, explain the technology basics, how it’s used, and why you should consider deploying some of these new concepts. These industry-expert panelists are the architects of Confidential Computing, and they will be ready to take your questions. I encourage you to register today.

The second session, “Confidential Compute: Protecting Data in Use,” will follow two weeks later on June 23, 2021, with a focus on how Confidential Computing works in multi-tenant cloud environments and how sensitive data can be isolated from other privileged portions of the stack. It will also provide insight on applications in financial services, healthcare, and broader enterprise applications. Glyn Bowden of HPE will moderate this session with our expert presenters Paul O’Neill and Parviz Peiravi from Intel. You can register here for this session.

For the third session, we will delve deeper into use cases and provide more details on real-world opportunities for this new technology. Stay tuned at this blog and on Twitter @sniacloud_com for more details. CSTI is dedicated to helping educate on new technologies related to the cloud and how they affect data. Come join us as we explore Confidential Computing and how it will impact your business.



What is Confidential Computing?

Jim Fister

May 17, 2021

While data security in the enterprise has never been for the faint of heart, the move to a more contiguous enterprise/cloud workflow, as well as the increase in edge data processing, has significantly impacted the work (and the blood pressure) of security professionals. In the “arms race” of security, new defensive tactics are always needed. One significant approach is Confidential Computing: a technology that can isolate data and execution in a secure space on a system, taking the concept of security to new levels. This SNIA Cloud Storage Technologies Initiative (CSTI) webcast, “What is Confidential Computing and Why Should I Care?” will provide an introduction to and explanation of Confidential Computing and will feature a panel of industry architects responsible for defining Confidential Computing. It will be a lively conversation on topics including:
  • The basics of hardware-based Trusted Execution Environments (TEEs) and how they work to enable confidential computing
  • How to architect solutions based around TEEs
  • How this foundation fits with other security technologies
  • Adjacencies to storage technologies
Register here to join us on June 9, 2021 for a discussion that’s sure to be both informational and entertaining.

