
Are We at the End of the 2.5-inch Disk Era?

Jonmichael Hands

Jul 20, 2020

The SNIA Solid State Storage Special Interest Group (SIG) recently updated the Solid State Drive Form Factor page to provide detailed information on dimensions; mechanical, electrical, and connector specifications; and protocols. On our August 4, 2020 SNIA webcast, we will take a detailed look at one of these form factors – the Enterprise and Data Center SSD Form Factor (EDSFF) – challenging an expert panel to consider whether we are at the end of the 2.5-inch disk era. EDSFF is designed natively for data center NVMe SSDs to improve thermal, power, performance, and capacity scaling, and it has different variants for flexible and scalable performance, dense storage configurations, general-purpose servers, and improved data center TCO. At the 2020 Open Compute Virtual Summit, OEMs, cloud service providers, hyperscale data centers, and SSD vendors showcased products and their vision for how this new family of SSD form factors solves real data challenges. During the webcast, our SNIA experts from companies that have been involved in EDSFF since the beginning will discuss how they will use the EDSFF form factor:
  • Hyperscale data center and cloud service provider panelists Facebook and Microsoft will discuss how E1.S (SNIA specification SFF-TA-1006) helps solve performance scalability, serviceability, capacity, and thermal challenges for future NVMe SSDs and persistent memory in 1U servers.
  • Server and storage system panelists Dell, HPE, Kioxia, and Lenovo will discuss their goals for the E3 family and the newly updated version of the E3 specification (SNIA specification SFF-TA-1008).
We hope you can join us as we spend some time on this important topic.  Register here to save your spot.

Olivia Rhye

Product Manager, SNIA


Where Does Cyber Insurance Fit in Your Security Strategy?

Paul Talbut

Jul 17, 2020

Protection against cyber threats is recognized as a necessary component of an effective risk management approach, typically based on a well-known cybersecurity framework. A growing area to further mitigate risks and provide organizations with the high level of protection they need is cyber insurance. However, it’s not as simple as buying a pre-packaged policy. In fact, it’s critical to identify what risks and conditions are excluded from a cyber insurance policy before you buy. Determining what kind of cyber insurance your business needs or if the policy you have will really cover you in the event of an incident is challenging. On August 27, 2020 the SNIA Cloud Storage Technologies Initiative (CSTI) will host a live webcast, “Does Your Storage Need a Cyber Insurance Tune-Up?” where we’ll examine how cyber insurance fits in a risk management program. We’ll identify key terms and conditions that should be understood and carefully negotiated as cyber insurance policies may not cover all types of losses. Join this webcast to learn:
  • General threat tactics, risk management approaches, cybersecurity frameworks
  • How cyber insurance fits within an enterprise data security strategy
  • Nuances of cyber insurance – exclusions, exemptions, triggers, deductibles and payouts
  • Reputational damage considerations
  • Challenges associated with data stored in the cloud
There’s a lot to cover when it comes to this topic. In fact, we may need to offer a “Part Two” to this webcast, but hope you will register today to join us on August 27th.


Applied Cryptography Techniques and Use Cases

Alex McDonald

Jul 15, 2020

The rapid growth in infrastructure to support real-time and continuous collection and sharing of data to make better business decisions has led to an age of unprecedented information storage and easy access. While collection of large amounts of data has increased knowledge and allowed improved efficiencies for business, it has also made attacks upon that information (theft, modification, or holding it for ransom) more profitable for criminals and easier to accomplish. As a result, strong cryptography is often used to protect valuable data. The SNIA Networking Storage Forum (NSF) has recently covered several specific security topics as part of our Storage Networking Security Webcast Series, including Encryption 101, Protecting Data at Rest, and Key Management 101. Now, on August 5, 2020, we are going to present Applied Cryptography. In this webcast, our SNIA experts will present an overview of cryptography techniques for the most popular and pressing use cases. We'll discuss ways of securing data, the factors and trade-offs that must be considered, as well as some of the general risks that need to be mitigated. We'll be looking at:
  • Encryption techniques for authenticating users
  • Encrypting data—either at rest or in motion
  • Using hashes to authenticate information coding and data transfer methodologies
  • Cryptography for Blockchain
As the process for storing and transmitting data securely has evolved, this Storage Networking Security Series provides ongoing education for placing these very important parts into the much larger whole. We hope you can join us as we spend some time on this very important piece of the data security landscape. Register here to save your spot.


Notable Questions on NVMe-oF 1.1

Tim Lustig

Jul 14, 2020


At our recent SNIA Networking Storage Forum (NSF) webcast, "Notable Updates in NVMe-oF™ 1.1," we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what's new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF, and here are answers to them all:

Q. Is there an implementation of NVMe-oF with direct CMB access?

A. The Controller Memory Buffer (CMB) was introduced in NVMe 1.2 and first supported in the NVMe-oF 1.0 specification. It's supported if the storage vendor has implemented this within the hardware and the network supports it. We recommend that you ask your favorite vendor if they support the feature.

Q. What is the difference between PMR in an NVMe device and persistent memory in general?

A. The Persistent Memory Region (PMR) is a region within the SSD controller that is reserved for system-level persistent memory exposed to the host. Like a Controller Memory Buffer (CMB), the PMR may be used to store command data, but because it is persistent, its contents remain intact across power cycles and resets. Going further into this answer would require a follow-up webinar.

Q. Are any special actions required on the host side over Controller Memory Buffers to maintain the data consistency?

A. To prevent possible disruption and to maintain data consistency, the CMB address ranges must first be configured so that addresses do not overlap, as described in the latest specification. There is also a flush command so that persistent memory can be cleared (also described in the specification).

Q. Is there a field to know the size of the CMB and PMR supported by a controller? What is the general size of the CMB in current devices?

A. The general size of the CMB/PMR is vendor-specific, but both have a size register field defined in the specification.
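As an illustrative sketch (not from the webcast): the CMB size can be computed from the controller's CMBSZ register, which encodes a count of size units (SZ) together with a granularity selector (SZU). The register value below is a made-up example rather than one read from real hardware:

```python
# Illustrative sketch: decoding a CMBSZ register value per the NVMe
# base specification. The example value is invented, not from a device.

def cmb_size_bytes(cmbsz: int) -> int:
    """Return the Controller Memory Buffer size in bytes.

    CMBSZ layout: bits 31:12 = SZ (size, counted in units),
    bits 11:8 = SZU (unit granularity: 0 -> 4 KiB, 1 -> 64 KiB,
    each step up multiplies the unit by 16).
    """
    sz = cmbsz >> 12          # number of size units
    szu = (cmbsz >> 8) & 0xF  # granularity selector
    return sz * (4096 << (4 * szu))

# Made-up example: SZ = 256 units of 64 KiB each -> a 16 MiB CMB
example = (256 << 12) | (1 << 8)
print(cmb_size_bytes(example))  # 16777216
```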

Q. Does having PMR guarantee that write requests to the PMR region have been committed to media, even though they have not been acknowledged before the power fail? Is there a max time limit in the spec within which an NVMe drive should recover after a power failure?

A. The implementation must ensure that the previous write has completed and that it is persistent. Time limit is vendor-specific.

Q. What is the average latency of an unladen swallow using NVMe-oF 1.1?

A. Average latency will depend on the media, the network and the way the devices are implemented. It also depends on whether or not the swallow is African or European (African swallows are non-migratory).

Q. Doesn't RDMA provide an 'implicit' queue on the controller side, negating the need for a CMB for queues? Can the CMB also be used for data?

A. Yes, the CMB can be used to hold both commands and command data and the queues are managed by RDMA within host memory or within the adapter. By having the queue in the CMB you can gain performance advantages.

Q. What is a ballpark latency difference number between CMB and PMR access, can you provide a number based on assumption that both of these are accessed over RDMA fabric?

A. When using CMB, latency goes down but there are no specific latency numbers available as of this writing.

Q. What is the performance of NVMe/TCP in terms of IOPS as compared to NVMe/RDMA? (Good implementation assumed)

A. This is heavily implementation dependent as the network adapter may provide offloads for TCP. NVMe/RDMA generally will have lower latency.

Q. If there are several sequence-level errors, how can we correct the errors in an appropriate order?

Q. How could we control the right order for the error corrections in FC-NVMe-2?

A. These two questions are related, and the response below applies to both.

As mentioned in the presentation, Sequence-level error recovery provides the ability to detect and recover from lost commands, lost data, and lost status responses. For Fibre Channel, a Sequence consists of one or more frames: e.g., a Sequence containing an NVMe command, a Sequence containing data, or a Sequence containing a status response.

The order for error correction is based on information returned from the target on the given state of the Exchange compared to the state of the Exchange at the initiator. To do this, and from a high-level overview, upon sending an Exchange containing an NVMe command, a timer is started at the initiator. The default value for this timer is 2 seconds, and if a response is not received for the Exchange before the timer expires, a message is sent from the initiator to the target to determine the status of the Exchange.

Also, a response from the target may be received before the information on the Exchange is obtained from the target. If this occurs, the command continues as normal, the timer is restarted if the Exchange is still in progress, and all is good. Otherwise, if no response from the target has been received since sending the Exchange information message, then one of two actions usually takes place:

a) If the information returned from the target indicates the Exchange is not known, then the Exchange resources are cleaned up and released, and the Exchange containing the NVMe command is re-transmitted; or

b) If the information returned from the target indicates the Exchange is known and the target is still working on the command, then no error recovery is needed; the timer is restarted, and the initiator continues to wait for a response from the target.

An example of this behavior is a format command, where it may take a while for the command to complete, and the status response to be sent.

Other typical information returned from the target in response to the Exchange status query includes:

  1. If the information returned from the target indicates the Exchange is known, and a ready to receive data message was sent by the target (e.g., a write operation), then the initiator requests the target to re-transmit the ready-to-receive data message, and the write operation continues at the transport level;
  2. If the information returned from the target indicates the Exchange is known, and data was sent by the target (e.g., a read operation), then the initiator requests the target to re-transmit the data and the status response, and the read operation continues at the transport level; and
  3. If the information returned from the target indicates the Exchange is known, and the status response was sent by the target, then the initiator requests the target to re-transmit the status response and the command completes accordingly at the transport level.
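The initiator-side decision logic described above can be sketched as follows. This is an illustrative summary, not code from the FC-NVMe-2 standard, and the status strings are invented labels for the Exchange states discussed:

```python
# Illustrative sketch of the initiator's Sequence-level error recovery
# decisions in FC-NVMe-2, keyed on the Exchange state reported by the
# target. Status labels are invented for this example.

REC_TOV_SECONDS = 2  # default command timer mentioned in the presentation


def recovery_action(exchange_status: str) -> str:
    """Map the target's reported Exchange state to the initiator's next step."""
    if exchange_status == "unknown":
        # (a) target never saw the Exchange: clean up and re-send the command
        return "release resources, re-transmit NVMe command"
    if exchange_status == "in progress":
        # (b) target still working (e.g., a long-running format command)
        return "restart timer, wait for response"
    if exchange_status == "ready-to-receive sent":
        # write operation: ask for the ready-to-receive message again
        return "request re-transmit of ready-to-receive, continue write"
    if exchange_status == "data sent":
        # read operation: ask for the data and status again
        return "request re-transmit of data and status, continue read"
    if exchange_status == "status sent":
        return "request re-transmit of status response, complete command"
    raise ValueError(f"unrecognized status: {exchange_status}")


print(recovery_action("unknown"))
```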

For further information, detailed informative Sequence-level error recovery diagrams are provided in Annex E of the FC-NVMe-2 standard, available via INCITS.



Kubernetes Business Resiliency FAQ

Jim Fister

Jul 13, 2020


The SNIA Cloud Storage Technologies Initiative continued our webcast series on Kubernetes last month with an interesting look at the business resiliency of Kubernetes. If you missed "A Multi-tenant, Multi-cluster Kubernetes Datapocalypse is Coming," it's available along with the slide deck in the SNIA Educational Library here. In this Q&A blog, our Kubernetes expert, Paul Burt, answers some frequently asked questions on this topic.

Q: Multi-cloud: Departments might have their own containers; would they have their own cloud (i.e. Hybrid Cloud)?  Is that how multi-cloud might start in a company?

A: Multi-cloud or hybrid cloud is absolutely a result of different departments scaling containers in a deployment. Multi-cloud means multiple clusters, but those can be of various configurations. Different clusters and clouds need to be tuned for the needs of the organization.

Netflix is perhaps one of the most popular adopters of the cloud. Following their success, most organizations prefer a gradual shift towards cloud. That practice naturally results in different environments during the growth and exploration phase.

Q: Service Mesh and Kube Federation: From the perspective of tools, how is the development of the various federation tools progressing?

A: There are some tools (Istio and Linkerd) that are quite far along. The official community solution, KubeFed, is just on the cusp of moving into beta. We’re starting to see some standards or standard practices developing in this space. Companies like Snapchat and Uber are already sharing some of their benefits and challenges with multi-tenancy and multi-cluster at scale. More concrete recommendations should emerge as more organizations gain experience and share their findings.

Q: Defining cluster characteristics: Within a given container, does the service needed define or predict the statefulness of the application?

A: In Kubernetes a service defines how an app might be discovered by other apps and services that need to rely on it. Generally, services tend to push the state down. That means where the data is stored will be defined in the YAML manifest that gets pushed to Kubernetes, and stateful apps are pretty well-defined in that environment.

Stateful apps will naturally be harder to manage, because of the need for uptime. The good news is they are at least easy to identify when running on Kubernetes.
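As a hypothetical illustration of how state ends up "defined in the YAML manifest," here is a minimal StatefulSet whose volumeClaimTemplates make the app's storage explicit; the names, image, and sizes are invented for this sketch:

```yaml
# Hypothetical example: a StatefulSet whose manifest makes its state
# explicit, so the stateful app is easy to identify in Kubernetes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db        # headless Service used for discovery
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```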

Q: On how knowledge grows: It’s possible to start simply with containers and move to more complex environments. Does the knowledge gained from containers transfer to a multi-cluster, multi-tenant space?

A: Yes, it’s possible to start and grow. A lot of the knowledge will transfer. Think of it like math, with each course building on the last. We might learn trigonometry, and find that many of those trig problems simplify into algebra problems. Similarly, we might learn about Pods in Kubernetes, and find that they simplify down to many of the things we learned about containers.

In the end, your problems will likely resolve down to something familiar. Start simply, and then grow your complexity and mindset.

Q: Version control: Do we need coordinated version pinning when deploying homogeneous clusters? Does that allow us to write once and run everywhere?

A: Kubernetes is capable of that, but there are certainly some caveats. By having a homogeneous infrastructure and tightly maintaining versions, you are more likely to be successful and have fewer versioning and bug issues to track over time.

On the other hand, a machine learning team that creates a recommendation system has vastly different needs than a team building an e-commerce website.

Our goal should be to start with a well-defined base platform. With a well-defined base, it’s easier to test the compatibility of those common components. When specific teams have specific needs, we’ll inevitably need to adapt our platform. That base should mean it’s easier to add new components with confidence. Troubleshooting, and maintaining the resulting distinct version of the platform, should also be easier because the base platform ensures a lot of familiarity and common knowledge transfers.
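A well-defined base with pinned versions might look like this hypothetical fragment of a container spec; the registry, image names, and tags are invented:

```yaml
# Hypothetical fragment: pinning exact image versions in the base
# platform so every cluster runs the same, tested components.
containers:
  - name: web
    image: registry.example.com/shop/web:1.4.2   # exact tag, never "latest"
  - name: metrics-sidecar
    image: registry.example.com/platform/metrics:0.9.1
```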

Q: Interface and management of security: There needs to be a balance between customer demand for cluster storage and security. Is there an appropriate interface that can manage security problems and give users enough space to easily consume storage on clusters? What’s the right design in this case?

A: Yes, many of the current tools have really strong interfaces for managing security. Rook and Astra are two good examples. Most of these solutions are open source, but that does not necessarily mean they approach problems like security in the same way.

For any cluster storage solution, we're probably looking for a few common features: encryption of data at rest, RBAC/permissions, snapshots, and backups. For components that are defined by Kubernetes (like RBAC), we're more likely to see a consistent way of doing things amongst tools. For other items like encryption of data or backing up, it's more likely we'll see each solution tackle the problem in a slightly different way.

The implication is that the cluster storage solution you start with will likely be with you for a while, so make sure you’re picking the right tool for the overall corporate needs.

Q: So that means it’s really important to do the research early? Is it possible to move between tools?

A: It is possible to move between tools, but it’s likely that your first choices will be the ones you carry forward. Beginning to research this early can help reduce the panic and stress that might come later on, when your organization discovers a hard need and resulting deadline for cluster storage.

Remember, we said this webcast was part of a series? Watch all of the CSTI’s Kubernetes in the Cloud presentations to learn more.


Kubernetes Business Resiliency FAQ

Jim Fister

Jul 13, 2020

title of post
The SNIA Cloud Storage Technologies Initiative continued our webcast series on Kubernetes last month with an interesting look at the business resiliency of Kubernetes. If you missed “A Multi-tenant, Multi-cluster Kubernetes Datapocalypse in Coming” it’s available along with the slide deck in the SNIA Educational Library here. In this Q&A blog, our Kubernetes expert, Paul Burt, answers some frequently asked questions on this topic. Q: Multi-cloud: Departments might have their own containers; would they have their own cloud (i.e. Hybrid Cloud)?  Is that how multi-cloud might start in a company? A: Multi-cloud or hybrid cloud is absolutely a result of different departments scaling containers in a deployment. Multi-cloud means multiple clusters, but those can be of various configurations. Different clusters and clouds need to be tuned for the needs of the organization. Netflix is perhaps one of the most popular adopters of the cloud. Following their success, most organizations prefer a gradual shift towards cloud. That practice naturally results in different environments during the growth and exploration phase. Q: Service Mesh and Kube Federation: From the perspective of tools, how is the development of the various federation tools progressing? A: There are some tools (Istio and Linkerd) that are quite far along. The official community solution, KubeFed, is just on the cusp of moving into Beta. We’re starting to see some standards or standard practices developing in this space. Companies like SnapChat and Uber are already sharing some of their benefits and challenges with multi-tenancy and multi-cluster at scale. More concrete recommendations should emerge as more organizations gain experience and share their findings. Q: Defining cluster characteristics: Within a given container, does the service needed define or predict the statefulness of the application? 
A: In Kubernetes a service defines how an app might be discovered by other apps and services that need to rely on it. Generally, services tend to push the state down. That means where the data is stored will be defined in the YAML manifest that gets pushed to Kubernetes, and stateful apps are pretty well-defined in that environment. Stateful apps will naturally be harder to manage, because of the need for uptime. The good news is they are at least easy to identify when running on Kubernetes. Q: On how knowledge grows: It’s possible to start simply with containers and move to more complex environments. Does the knowledge gained from containers transfer to a multi-cluster, multi-tenant space? A: Yes, it’s possible to start and grow. A lot of the knowledge will transfer. Think of it like math, with each course building on the last. We might learn trigonometry, and find that many of those trig problems simplify into algebra problems. Similarly, we might learn about Pods in Kubernetes, and find that they simplify down to many of the things we learned about containers. In the end, your problems will likely resolve down to something familiar. Start simply, and then grow your complexity and mindset. Q: Version control: Do we need coordinated version pinning when deploying homogenous clusters?  Does that allow us to write once and run everywhere? A: Kubernetes is capable of that, but there are certainly some caveats. By having a homogenous infrastructure and tightly maintaining versions, you are more likely to be successful and have less versioning and bug issues that you have to track over time. On the other hand, a machine learning team that creates a recommendation system has vastly different needs than a team building an e-commerce website. Our goal should be to start with a well-defined base platform. With a well-defined base, it’s easier to test the compatibility of those common components. 
When specific teams have specific needs, we’ll inevitably need to adapt our platform. That base should mean it’s easier to add new components with confidence. Troubleshooting, and maintaining the resulting distinct version of the platform, should also be easier because the base platform ensures a lot of familiarity and common knowledge transfers. Q: Interface and management of security: There needs to be a balance between customer demand for cluster storage and security. Is there an appropriate interface that can manage security problems and give users enough space to easily consume storage on clusters? What’s the right design in this case? A: Yes. Many of the current tools have really strong interfaces for management of security.  Rook and Astra are two good examples. Most of these solutions are open-source, but that does not necessarily mean they approach problems like security in the same way. For any cluster storage solution, we’re probably looking for a few common features. Encryption of data at rest, RBAC / permissions, snapshots, and backups. For components that are defined by Kubernetes (like RBAC), we’re more likely to see a consistent way of doing things amongst tools. For other items like encryption of data or backing up, it’s more likely we’ll see each solution tackle the problem in a slightly different way. The implication is that the cluster storage solution you start with will likely be with you for a while, so make sure you’re picking the right tool for the overall corporate needs. Q: So that means it’s really important to do the research early? Is it possible to move between tools? A: It is possible to move between tools, but it’s likely that your first choices will be the ones you carry forward. Beginning to research this early can help reduce the panic and stress that might come later on, when your organization discovers a hard need and resulting deadline for cluster storage. This panic and stress might need products from CBD Shop. 
Remember, we said this webcast was part of a series. Watch all of the CSTI’s Kubernetes in the Cloud presentations to learn more.


Ready for a Lesson on Security & Privacy Regulations?

J Metz

Jul 10, 2020


Worldwide, regulations are being promulgated and aggressively enforced with the intention of protecting personal data. These regulatory actions are being taken to help mitigate exploitation of this data by cybercriminals and other opportunistic groups who have turned this into a profitable enterprise. Failure to meet these data protection requirements puts individuals at risk (e.g., identity theft, fraud, etc.), as well as subjecting organizations to significant harm (e.g., legal penalties).

The SNIA Networking Storage Forum (NSF) is going to dive into this topic at our Security & Privacy Regulations webcast on July 28, 2020. We are fortunate to have experts, Eric Hibbard and Thomas Rivera, share their expertise in security standards, data protection and data privacy at this live event. 

This webcast will highlight common privacy principles and themes within key privacy regulations. In addition, the related cybersecurity implications will be explored. We'll also probe a few of the recent regulations/laws to outline interesting challenges due to over and under-specification of data protection requirements (e.g., "reasonable" security).

Attendees will have a better understanding of:

  • How privacy and security are characterized
  • Data retention and deletion requirements
  • Core data protection requirements of sample privacy regulations from around the globe
  • The role that security plays with key privacy regulations
  • Data breach implications and consequences

This webcast is part of our Storage Networking Security Webcast Series, and I encourage you to watch the presentations we've done to date.

And I hope you will register today and join us on July 28th for what is sure to be an interesting look into the history, development and impact of these regulations.   



Your Questions Answered on CMSI and More

Marty Foltyn

Jun 29, 2020


The “new” SNIA Compute, Memory, and Storage Initiative (CMSI) was formed at the beginning of 2020 out of the SNIA Solid State Storage Initiative. The 45 companies that comprise the CMSI recognized the opportunity to combine storage, memory, and compute in novel and useful ways, and to bring together technology, alliances, education, and outreach to better understand new opportunities and applications.

To better explain this decision, and to talk about the various aspects of the Initiative, CMSI co-chair Alex McDonald invited CMSI members Eli Tiomkin, Jonmichael Hands, and Jim Fister to join him in a live SNIA webcast. 

If you missed the live webcast, we encourage you to watch it on demand, as it was highly rated by attendees. Our panelists answered questions on computational storage, persistent memory, and solid state storage during the live event; here are answers to those and to ones we did not have time to get to.

Q1: In terms of the overall definition of computational storage, how do computational storage and the older composable storage terms interact? Are they the same? Are SNIA and their Computational Storage Technical Work Group (CS TWG) working on expanding computational storage uses?

A1: The definitions that the CS TWG explores range from single-use storage functions, such as compression, to multiple services running in a more complex and programmable environment. The latter encompasses more of the thoughts around composable storage. So composable storage is a part of computational storage, and will continue to be incorporated as the definitions and programming models are developed. If you'd like to see the latest working document on the computational storage model, a draft can be found here.

Q2: In terms of some of the definitions of drive form factors, are the naming conventions completed?

A2: There are still opportunities to change definitions and naming. The work group is continuing to work on naming conventions for the latest specifications.

A2a: If you'd like to hear a great dialog on Alex's thoughts on naming conventions followed by Jim's notes on lunch menus, tune in at minute 48 of the webcast.

Q3: Is the E3 drive specification backward compatible with the existing E1.L or E1.S?

A3: The connector is the same, but the speeds are different. Existing testing infrastructure should work to test the drives. On a mainstream server, E3 is meant to be used in a backplane, while the prior standards fit either an orthogonal connector or a backplane. So the two are sometimes compatible.

Q4: Will E1.S be an alternative for workstation-class laptops, replacing M.2? Would it be useful for higher capacity drives?

A4: M.2 is the mainstream form factor for laptops, desktops, and workstations, but its low power profile (8W) can limit drive performance. E1.S specifies 25W, and may be much more effective for higher-end workstations and desktops. Both specifications are likely to remain in volume. You can check out the various SSD specs on our Solid State Drive Form Factor page.

Q5: Does SNIA use too many S's in its acronyms?

A5: Alex McDonald thinks so.

Q6: Can we talk more about computational and composable storage?

A6: Alex McDonald gave the order for a detailed future SNIA CMSI webcast. Stay tuned!

Q7: Is there a PMDK port for Oracle Solaris?

A7: Currently no, but someone should submit a pull request at PMDK.org, and the magic geeks might work their powers. Given the close similarities, there is a distinct possibility that it can be done.

Q8: Does deduplication technology come into play with computational storage?

A8: Not immediately. These are mostly fixed functions right now, available on many drives. If deduplication becomes an accelerator function, then it would be incorporated.

Q9: Is there that much difference between how software should handle magnetic drives, NVMe drives, and persistent memory?

A9: Yes. Any other questions?

