
Keeping Edge Data Secure Q&A

David McIntyre

Jun 9, 2022

The complex and changeable structure of edge computing, together with its network connections, massive real-time data, challenging operating environment, and distributed edge-cloud collaboration, creates a multitude of security challenges. That was the topic of our SNIA Networking Storage Forum (NSF) live webcast “Storage Life on the Edge: Security Challenges,” where SNIA security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK debated whether existing security practices and standards are adequate for this emerging area of computing. If you missed the presentation, you can view it on-demand here. It was a fascinating discussion, and as promised, Eric and Thomas have answered the questions from our live audience.

Q. What complexities are introduced from a security standpoint for edge use cases?

A. The sheer number of edge nodes, the heterogeneity of those nodes, distributed ownership and control, the increased number of interfaces, and fit-for-use versus designed solutions all complicate the security aspects of these ecosystems. Performing risk assessments and/or vulnerability assessments across the full ecosystem can be extremely difficult; remediation activities can be even harder.

Q. How is data privacy impacted and managed across cloud-to-edge applications?

A. Movement of data from the edge to core systems could easily cross multiple jurisdictions that have different data protection/privacy requirements. For example, personal information harvested in the EU might find its way into core systems in the US; in such a situation, the US entity would need to deal with GDPR requirements or face significant penalties. The twist is that the operator of the core systems might not know anything about the source of the data.

Q. What are the priority actions that customers can undertake to protect their data?

A. Avoid giving personal information. If you do, understand your rights (if any) as well as how the information will be used, protected, and ultimately eliminated.

This session is part of our “Storage Life on the Edge” webcast series. Our next session will be “Storage Life on the Edge: Accelerated Performance Strategies,” where we will dive into the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center. Register here to join us on July 12, 2022. You can also access the other presentations we’ve done in this series at the SNIA Educational Library.


Storage Implications of Doing More at the Edge

Alex McDonald

May 10, 2022


In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge,” we’ve been examining the many ways the edge is impacting how data is processed, analyzed and stored. I encourage you to check out the sessions we’ve done to date in the SNIA Educational Library.

On July 12, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies” where our SNIA experts will discuss the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center, covering:

  • The rise of intelligent edge locations
  • Different solutions that provide faster processing or data movement at the edge
  • How computational storage can speed up data processing and transmission at the edge
  • Security considerations for edge processing

We look forward to having you join us to cover all this and more. We promise to keep you on the edge of your virtual seat! Register today.



Dynamic Speakers on Tap for the 2022 SNIA Persistent Memory + Computational Storage Summit

SNIA CMSI

May 6, 2022


Our 10th annual Persistent Memory + Computational Storage Summit is right around the corner on May 24 and 25, 2022.  We remain virtual this year, and hope this will offer you more flexibility to watch our live-streamed mainstage sessions, chat online, and catch our always popular Computational Storage birds-of-a-feather session on Tuesday afternoon without needing a plane or hotel reservation!

As David McIntyre of Samsung, the 2022 PM+CS Summit chair, says in his 2022 Summit Preview Video, “You won’t want to miss this event!”   

This year, the Summit agenda expands knowledge on computational storage and persistent memory, and also features new sessions on computational memory, Compute Express Link™ (CXL™), NVM Express, SNIA Smart Data Accelerator Interface (SDXI), and Universal Chiplet Interconnect Express (UCIe).

We thank our many dynamic speakers who are presenting an exciting lineup of talks over the two days, including:

  • Yang Seok Ki of Samsung on Innovation with SmartSSD for Green Computing
  • Charles Fan of MemVerge on Persistent Memory Breaks Through the Clouds
  • Gary Grider of Los Alamos National Labs on HPC for Science Based Motivations for Computation Near Storage
  • Alan Benjamin of the CXL Consortium on Compute Express Link (CXL): Advancing the Next Generation of Data Centers
  • Cheolmin Park of Samsung on CXL and The Universal Chiplet Interconnect Express (UCIe)
  • Stephen Bates and Kim Malone of NVM Express on NVMe Computational Storage - An Update on the Standard
  • Andy Walls of IBM on Computational Storage for Storage Applications

Our full agenda is at www.snia.org/pm-summit.

We’ll have great networking opportunities, a virtual reception, and the ability to connect with leading companies including Samsung, MemVerge, and SMART Modular who are sponsoring the Summit. 

Complimentary registration is now available at https://www.snia.org/events/persistent-memory-summit/pm-cs-summit-2022-registration.  We will see you there!



Storage Edge Use Cases Q&A

SNIAOnStorage

Apr 1, 2022


Our “Storage Life on the Edge” webcast series continued on March 22, 2022 where our expert panelists, Stephen Bates, Bill Martin, Mayank Saxena and Tong Zhang highlighted several real-world edge use cases, the implications for storage, and the benefits of computational storage standards. You can access the on-demand session and the presentation slides at the SNIA Educational Library. The panel answered several questions during the live event. We only had time to get to a handful. As promised, here are answers to all of them.

Q.  I have heard NVMe® is developing an open and vendor-neutral standard for computational storage devices. How important do you think standards like this one are for mass adoption of these types of devices on the edge and why?

A. Yes, NVMe is working to develop an architectural model for NVMe-based computational storage devices. The specifics are still under development, but the work will lead to new NVMe commands that pertain to computation. Standards like this are of vital importance to the adoption of computational storage at the edge, since they will lead to a rich ecosystem of software and allow for multi-sourcing of computational storage devices from multiple vendors.

Q.  Computational storage devices come in three main forms: the computational storage processor, the computational storage drive, and the computational storage array. How do you see each of these being deployed at the edge and why?

A. I think we can expect to see all three types of computational storage devices at the edge. Computational storage drives combine the storage of an SSD with compute power, which will be very useful at the edge, where physical space is a very real constraint. That said, computational storage processors will also have a role: they separate the compute element from the storage while providing peer-to-peer communication between the two at the edge, which can be desirable in certain instances. Finally, the computational storage array is appealing because it is a plug-and-play solution for computational storage that can be inserted into a 1U or 2U rack space and consumed via standards-based APIs.

Q. In your experience, what percentage of data at the edge is compressible? Can you provide examples of edge use cases with a high percentage of compressible data and some with a low percentage, and comment on the specific percentages? How does this affect the capacities of storage devices in these use cases?

A. Except for images and video, most data at the edge tends to be reasonably compressible; experience suggests a 2:1 to 4:1 compression ratio in general. Example edge use cases with highly compressible data are time-series data from various IoT devices and most edge database and data analytics systems. Low compressibility is typically caused by the use of special-purpose compression (e.g., JPEG and H.264) before the data is stored. By leveraging this runtime data compressibility, computational storage drives with built-in transparent compression can significantly lower the TCO and power consumption of edge infrastructure, and effectively increase the usable capacity of each storage device.
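As a rough illustration of why general edge data compresses while media does not, here is a small self-contained sketch (the data is synthetic and the exact ratios will vary by workload) comparing zlib on repetitive time-series-style JSON against an incompressible payload standing in for already-encoded media:

```python
import json, os, zlib

# Synthetic "time-series IoT" records: repetitive field names and slowly
# varying values compress very well.
records = [{"sensor": "temp-01", "t": 1650000000 + i, "value": 21.5 + (i % 10) / 100}
           for i in range(5000)]
timeseries = json.dumps(records).encode()

# Already-compressed media (stand-in for JPEG/H.264 output) does not:
# random bytes model a payload with no remaining redundancy.
precompressed = os.urandom(len(timeseries))

for name, data in [("time-series JSON", timeseries), ("pre-compressed media", precompressed)]:
    ratio = len(data) / len(zlib.compress(data))
    print(f"{name}: {ratio:.1f}:1")
```

On data like this the JSON stream compresses well beyond the 2:1 floor quoted above, while the random payload stays at roughly 1:1, which is why media-heavy workloads see little benefit from transparent compression.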

Q. Will applications push encrypt/decrypt keys to the computational storage processor? Or is there pre-configuration and storage of keys?

A. Both models can be supported. In the traditional PKI model, keys can be stored in a trusted platform module (TPM) at the edge server following the certificate signing request (CSR) process, tied to some root certificate that can be revoked when needed. There can also be an encryption key per IO; these keys can be managed and rotated by hosts. Notably, there is a lot of innovation happening in the field of data security that benefits edge security directly, e.g., ICN (Information-Centric Networking). With ICN, data can be secured at the packet level with ephemeral keys, agnostic to the transport protocol. Computational storage can perform such encryption near the data without involving the CPU, increasing both data protection and performance.
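To make the host-managed per-IO key model concrete, here is a minimal sketch. The function name and the HMAC-based derivation are illustrative assumptions, not part of any standard; a production design would use a vetted KDF (such as HKDF) and feed the derived key to an AEAD cipher.

```python
import hashlib, hmac, os

def derive_io_key(master_key: bytes, io_counter: int) -> bytes:
    # Derive a unique key for one IO from the host-managed master key.
    # Illustrative only: real systems would use a standardized KDF.
    return hmac.new(master_key, io_counter.to_bytes(8, "big"), hashlib.sha256).digest()

master = os.urandom(32)        # root key; could be sealed in a TPM
k1 = derive_io_key(master, 1)
k2 = derive_io_key(master, 2)
assert k1 != k2                # every IO gets its own key
# Rotating `master` invalidates every derived per-IO key at once.
```

The point of the sketch is the management model: the host only has to protect and rotate one master secret, while each IO still gets a distinct key.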

Q. Given the heterogeneous nature of edge data and systems, how can computational storage add value?

A. Heterogeneity is indeed central to the nature of the edge, and it is important to understand. At the edge, data is at the heart; everything else is peripheral. There are always two primary things to consider: TCO and compute for the data. It is becoming more apparent that, for edge servers and gateways, data processing should be done at the edge where the data is ingested. Offloading repetitive, data-intensive processing tasks can help reduce cost and improve the ecosystem for protocol processing and governance in these heterogeneous environments.

With this, one can have storage with a standard interface for everyday data-intensive tasks, serving varied use cases, that can be plugged into any compute entity, from a Raspberry Pi to a 1U server in a cloudlet data center. That’s powerful.
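The value of offloading a data-intensive task next to the data can be sketched with a toy model (all names here are hypothetical; a real drive would expose this through standard interfaces such as the NVMe work mentioned earlier) in which a filter runs in-drive so only matching records cross the bus:

```python
# Toy model of pushing a filter down to a computational storage drive:
# instead of shipping every record to the host, the predicate runs near
# the data and only matching records are transferred.

def host_side(records, predicate):
    moved = sum(len(r) for r in records)            # every byte crosses the bus
    return [r for r in records if predicate(r)], moved

def csd_side(records, predicate):
    matches = [r for r in records if predicate(r)]  # filter runs in-drive
    moved = sum(len(r) for r in matches)            # only matches transfer
    return matches, moved

records = [f"event,{i},{'ok' if i % 100 else 'ALERT'}".encode() for i in range(10_000)]
is_alert = lambda r: b"ALERT" in r

host_hits, host_bytes = host_side(records, is_alert)
csd_hits, csd_bytes = csd_side(records, is_alert)
assert host_hits == csd_hits                        # same answer either way
print(f"bytes moved: host={host_bytes:,} vs. csd={csd_bytes:,}")
```

The result is identical either way; what changes is how many bytes the peripheral interconnect and host CPU have to handle, which is exactly the TCO lever described above.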

Q.  Would it make sense to use a computational storage drive (CSD) for general-purpose programmable computation for edge? If yes, would using embedded CPUs inside CSDs be more efficient than using external CPUs?

A. Compared with external host CPUs, embedded processors inside computational storage drives tend to be much less powerful and have much less cache memory, so it does not make much sense to offload purely computation-intensive tasks onto them. However, because a computational storage drive can integrate customized hardware engines for functions like compression, encryption, and data filtering, it still makes sense to offload certain programmable computation to the drive when it involves pre- or post-processing that can leverage those engines.

Q. What could a computational storage drive do to seamlessly contribute to reducing power consumption in edge environments?

A. The low-hanging fruit here is for the computational storage drive to carry out internal transparent lossless data compression. By reducing the data volume through compression, we write a much smaller amount of data into NAND flash memory. Writing data into NAND flash memory is the most energy-consuming operation inside a storage drive, so in-storage transparent compression can seamlessly reduce power consumption.
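As a back-of-envelope sketch (the workload size is illustrative, and real savings depend on the drive's internals), the reduction in flash writes scales directly with the compression ratios quoted above:

```python
# Back-of-envelope arithmetic: with transparent in-drive compression,
# NAND writes (and the program/erase energy they cost) shrink roughly
# in proportion to the compression ratio.
host_writes_gb = 100.0                  # illustrative workload size
for ratio in (2.0, 4.0):                # ratios typical of general edge data
    nand_writes_gb = host_writes_gb / ratio
    saved_pct = 100 * (1 - 1 / ratio)
    print(f"{ratio:.0f}:1 -> {nand_writes_gb:.0f} GB reaches NAND "
          f"({saved_pct:.0f}% fewer flash writes)")
```

At 2:1 the drive programs half as much flash; at 4:1, a quarter, with a corresponding side benefit to write endurance.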

Remember, this is a series. If you missed the introduction, “Storage Life on the Edge: Managing Data from the Edge to the Cloud and Back,” you can view it on-demand here. I also encourage you to register for the next session in this series on April 27, 2022, “Storage Life on the Edge: Security Challenges,” where our security experts will discuss the multitude of security challenges created by the edge. I hope you’ll join us.


SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?

Alex McDonald

Mar 28, 2022


As applications continue to increase in complexity and users demand more from their workloads, there is a trend to again deploy dedicated accelerator chips to assist by offloading work from the main CPU. These new accelerators (xPUs) go by multiple names, such as SmartNIC (Smart Network Interface Card), DPU, IPU, APU, and NAPU. How are these different from GPUs, TPUs and the venerable CPU? xPUs can accelerate and offload functions including math, networking, storage, compression, cryptography, security and management.

It’s a topic that the SNIA Networking Storage Forum will spotlight in our 3-part xPU webcast series. The first webcast on May 19, 2022 “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” will cover key topics about, and clarify questions surrounding, xPUs, including…

  • xPU Definition: What is an xPU (SmartNIC, DPU, IPU, APU, NAPU), GPU, TPU, CPU? A focus on high-level architecture and definition of the xPU.
  • Trends and Workloads: What is driving the trend to use hardware accelerators again after years of software-defined everything? What types of workloads are typically offloaded or accelerated? How do cost and power factor in?
  • Deployment and Solutions: What are the pros and cons of dedicated accelerator chips versus running everything on the CPU?
  • Market Landscape: Who provides these new accelerators—the CPU, storage, networking, and/or cloud vendors?

Register here to join us on May 19th to get the answers to these questions. Part 2 of this series will take a deep dive on accelerator offload functions and Part 3 will focus on deployment and solutions. Keep an eye on this blog and follow us on Twitter @SNIANSF for details and dates for the future sessions.



Experts Discuss Key Edge Storage Security Challenges

David McIntyre

Mar 25, 2022


The complex and changeable structure of edge computing, together with its network connections, massive real-time data, challenging operating environment, distributed edge cloud collaboration, and other characteristics, create a multitude of security challenges. It’s a topic the SNIA Networking Storage Forum (NSF) will take on as our "Storage Life on the Edge" webcast series continues. Join us on April 27, 2022 for “Storage Life on the Edge: Security Challenges” where I’ll be joined by security experts Thomas Rivera, CISSP, CIPP/US, CDPSE and Eric Hibbard, CISSP-ISSAP, ISSMP, ISSEP, CIPP/US, CIPT, CISA, CDPSE, CCSK as they explore these challenges and wade into the debate as to whether existing security practices and standards are adequate for this emerging area of computing. Our discussion will cover:

  • Understand the key security issues associated with edge computing
  • Identify potentially relevant standards and industry guidance (e.g., IoT security)
  • Gain awareness of new security initiatives focused on edge computing

Register today and bring your questions; Eric and Thomas will be on hand to answer them. And if you’re interested in the other “Storage Life on the Edge” presentations we’ve done, you can find them in the SNIA Educational Library.


