
An FAQ on the “Fine Print” of Cyber Insurance

Paul Talbut

Sep 30, 2020

Last month, the SNIA Cloud Storage Technologies Initiative convened experts Eric Hibbard and Casey Boggs for a webcast on cyber insurance, a growing area for further mitigating risks from cyber attacks. However, as our attendees learned, cyber insurance is not as simple as buying a pre-packaged policy. If you missed the live event, “Does Your Cyber Insurance Strategy Need a Tune-Up?” you can watch it on-demand. Determining where and how cyber insurance fits in a risk management program generates a lot of questions. Our experts have provided answers to them all here:

Q. Do “mega” companies buy cyber insurance or do they self-insure?

A. Many Fortune 500 companies do carry cyber insurance. The scope of coverage can vary significantly. Concerns over ransomware are often a driver. Publicly traded companies need to meet due care obligations, and cyber insurance is a way of demonstrating this.

Q. Insurance companies don’t like to pay out. I suspect making a claim is quite contentious?

A. It depends on the nature and the amount of the claim. Most policies have exemptions, triggers, caps, etc. that have to be navigated. That said, avoiding payouts is bad business for insurance companies, which are operating in a very competitive space.

Q. How much does cyber insurance cost? Either an example or hypothetical.

A. Due to all the factors involved (e.g., size of organization, market sector, location, type of organization, policy coverage/exemptions, and others) it is not possible to make general estimates. That said, many insurers have online quote capabilities that can be used to explore basic options and pricing.

Q. Do insurance companies audit actual practices (e.g., whether there are actual, versus claimed, controls on insider access to confidential data), either before issuing a policy or after an incident? If so, how are audits done?

A. It depends on the nature of the coverage. The organization may need to supply certain documents (security policies, incident response plan, etc.) and make assertions about its operations. Policy discounts can be dependent on audits (by the insurer or a third party). Also, a claim may trigger an investigation/audit.

Q. Is it possible that business executives see an insurance policy as simply a safeguard against a cyber attack?

A. Yes, it is possible, and the fear of ransomware could be a key motivation. However, such a simplistic view is not likely to be productive. Cyber insurance needs to be an element of your overall risk program and carefully matched to the organization’s needs. You don’t want to learn that you purchased the wrong kind of insurance after an incident. That is like being victimized twice.

Q. To what degree do businesses need to do risk assessment? Is it not just an IT/data security problem?

A. Assessing your risk and determining your risk appetite are critical prerequisites to purchasing cyber insurance. Without these insights there is no way for the organization to know what kind of coverage it should get. Such an activity should be driven by the CFO or someone with responsibility for the operations of the organization. IT (via the CIO) and data security (via the CISO) should play a supporting role, but they should not be the drivers.

Composable or Computational – Your Questions Answered!

Eli Tiomkin

Sep 24, 2020

Our recent webcast on Composable Infrastructure and Computational Storage raised some interesting questions. No need to compose your own answers – my co-presenter Philip Kufeldt and I answer them here! You can find the entire webcast video along with the slide PDF in the SNIA Educational Library. We also invite you and your colleagues to take 10 and watch three short videos on Computational Storage topics.

Q: I’m a little confused about moving data across, for example, NVMe-oF, as it consumes DDR bandwidth. Can you elaborate?

A: Any data moving in or out of a server consumes DDR bandwidth by virtue of the DMAs done. Consider a simple NFS file server, where I as a client write a 1 GiB file. That data arriving from the client first appears in the server as a series of TCP packets. These packets arrive at the NIC and are then DMA-ed across the PCIe bus into main memory, where the TCP/IP stack deciphers them and ultimately delivers them to the waiting NFS server software. If you have a smart NFS implementation, that copy from the PCIe NIC to main memory is the only copy in the process, but you have still consumed 1 GiB of DDR bandwidth. Now the NFS server software translates the request into a series of block I/O requests to an underlying storage device via a SATA/SAS/NVMe controller. This controller will again DMA the data from memory to the device, consuming another 1 GiB of DDR bandwidth. Traditionally this has not really been noticed, because the devices consuming this data have been slow enough to throttle how quickly the DDR bandwidth is consumed. Now that we have SSDs capable of consuming several GiB/s of bandwidth per device, you can easily design an unbalanced system where the DDR bus actually caps storage throughput.

Q: Some vendors are now running virtual machines within their arrays. Would these systems be considered computational storage systems?

A: SNIA and other working groups are defining computational storage at the storage device level, not the system level, and a solution requires at least some sort of computational storage processor (CSP). While these systems have intelligence, it is not at the storage device level; it is applied above that level, before the user sees it (still at the host CPU level).

Q: For Composable Infrastructure, you mentioned CXL as a more evolved PCIe fabric. When will it actually be released? How about using PCIe Gen4 as a fabric, as it’s available today?

A: PCIe 4 does not provide robust memory semantics, specifically the cache coherency needed by some of these devices. This is the exact purpose of CXL: to extend PCIe to better support load/store operations, including the cache coherency needed by memory and memory-like devices.

Q: Computational storage moves processing into storage. Isn’t that the opposite of disaggregation in composable infrastructure?

A: It is and it isn’t. As said in the presentation, the diagram of CI was quite simplistic. I doubt there will ever be just processors connected to a fabric. Just as processors have memory built into them (level 1-3 caches), you can envision CI processor elements having some amount of local RAM as part of the component, an external level 4 cache if you will. Imagine a small PCB with a processor and some small number of DIMMs; other memory resources might be across the fabric to complete the memory requirements of a composed system. Storage devices already have processor and memory components within them for processing I/O requests. Augmenting these resources to handle portions of the governing app’s processing allows cycles to migrate to the data: not the entire app, but some of its data-centric portions. This is exactly how TPU or GPU processing would work as well, migrating the computational portions of the app to the component.
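To put rough numbers on the DDR bandwidth argument above, here is a back-of-the-envelope sketch; the DDR channel and SSD bandwidth figures are illustrative assumptions, not numbers from the webcast.

```python
# Back-of-the-envelope sketch of the NFS example above: every byte a
# client writes crosses the DDR bus twice (NIC -> memory, memory -> SSD).
# The bandwidth figures below are illustrative assumptions.

FILE_SIZE_GIB = 1.0
ddr_traffic_gib = FILE_SIZE_GIB * 2          # two DMA crossings per write

DDR_CHANNEL_GIB_S = 25.6                     # e.g., one DDR4-3200 channel
SSD_GIB_S = 7.0                              # e.g., a fast PCIe Gen4 SSD

# Each SSD streaming at full speed demands 2 x 7 GiB/s of DDR bandwidth,
# so a single channel saturates after roughly:
drives_per_channel = DDR_CHANNEL_GIB_S / (2 * SSD_GIB_S)
print(f"{ddr_traffic_gib} GiB of DDR traffic per 1 GiB write")
print(f"~{drives_per_channel:.1f} SSDs saturate one DDR channel")
```

Under these assumed numbers, two fast drives are already enough to outrun a single channel, which is exactly the kind of unbalanced system the answer warns about.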

Security & Privacy Regulations: An Expert Q&A

J Metz

Sep 24, 2020

Last month the SNIA Networking Storage Forum continued its Storage Networking Security Webcast Series with a presentation on Security & Privacy Regulations. We were fortunate to have security experts Thomas Rivera and Eric Hibbard explain the current state of regulations related to data protection and data privacy. If you missed it, it’s available on-demand.

Q. Do you see the US working towards a national policy around privacy, or is it going to stay state-specified?

A. This probably will not happen anytime soon due to political reasons. Having a national policy on privacy is not necessarily a good thing, depending on your state. Such a policy would likely have a preemption clause and could be used to diminish requirements from states like CA and MA.

Q. Can you quickly summarize the IoT law? Does it force IoT manufacturers to continually support IoT devices (i.e., security patches) through their lifetime?

A. The California IoT law is vague, in that it states that devices are to be equipped with “reasonable” security feature(s) that are all of the following:
  • Appropriate to the nature and function of the device
  • Appropriate to the information it may collect, contain, or transmit
  • Designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure
This is sufficiently vague that it may be left to lawyers to determine whether requirements have been met. It is also important to remember that “IoT” is a nickname, because the law applies to all “connected devices” (i.e., any device or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address). The law also states that if a connected device is equipped with a means for authentication outside a LAN, it must have either a preprogrammed password that is unique to each device manufactured or a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.

Q. You didn’t mention Brexit. To date the plan is to follow GDPR, but it may change. Any thoughts?

A. British and European Union courts recognize a fundamental right to data privacy under Article 8 of the binding November 1950 European Convention on Human Rights (ECHR). In addition, Britain had to implement GDPR as a member nation. Post-Brexit, the UK will not have to continue implementing GDPR as the other member countries of the EU do. However, Britain will be subject to EU data transfer approval as a “third country,” like the US. Speculation has been that Britain would attempt a “Privacy Shield” agreement modeled after the arrangement between the United States and the European Union. However, the Court of Justice of the European Union recently issued a judgment declaring “invalid” the European Commission’s Decision (EU) 2016/1250 of 12 July 2016 on the adequacy of the protection provided by the EU-U.S. Privacy Shield (i.e., the EU-U.S. Privacy Shield Framework is no longer a valid mechanism to comply with EU data protection requirements when transferring personal data from the European Union to the United States), so such an approach is now unlikely. It is not clear what Britain will do at this point and, as with many elements of Brexit, Britain could find itself digitally isolated from the EU if data privacy is not handled as part of the separation agreement.

Q. In thinking of privacy, what are your thoughts on encryption being challenged, e.g., by the EARN IT Act or LAED Act? It seems like that is going against a nationwide privacy movement, if there is one.

A. The US Government (and many others) have a love/hate relationship with encryption. They want everyone to use it to protect sensitive assets, unless you are a criminal, and then they want you to do everything in the clear so they don’t have to work too hard to catch and prosecute you…or simply persecute you. The back-door argument is amusing because most governments don’t have the ability to prevent something like this from being exploited by attackers (non-government types). If the US Government can’t secure its own personnel records, which potentially exposes every civil servant along with his/her family and colleagues to attacks, how could it protect something as important as a back-door? If you want to learn more about encryption, watch the Encryption 101 webcast we did as part of this series.

Non-Cryptic Answers to Common Cryptography Questions

Alex McDonald

Sep 23, 2020

The SNIA Networking Storage Forum’s Storage Networking Security Webcast Series continues to examine the many different aspects of storage security. At our most recent webcast on applied cryptography, our experts dove into user authentication, data encryption, hashing, blockchain and more. If you missed the live event, you can watch it on-demand. Attendees of the live event had some very interesting questions on this topic, and here are answers to them all:

Q. Can hashes be used for storage deduplication? If so, do the hashes need to be 100% collision-proof to be used for deduplication?

A. Yes, hashes are often used for storage deduplication. It’s preferred that they be collision-proof, but it’s not required if the deduplication software does a bit-by-bit comparison of any files that produce the same hash in order to verify whether they really are identical. If the hash is 100% collision-proof then there is no need to run bit-by-bit comparisons of files that produce the same hash value.

Q. Do cloud or backup service vendors use blockchain proof of space to prove to customers how much storage space is available or has been reserved?

A. There are some vendors who are using proof of space to map or plot the device. Once the device is plotted you can generate a report which provides a summary of the storage space available. Some vendors use it today. Since mining is the most popular application today, mining users use this information to report available space for mining pool applications. Can you use it for enterprise cloud to monitor the available disk space? Absolutely.

Q. If a vendor provides a guarantee of space to a customer using blockchain, does something prevent them from filling up the space before the customer uses that space?

A. Once the disk is plotted there is no way for any other application to use it; any attempt will be flagged as an error. In fact, it’s a really great way to ensure that no attacks are occurring on the disk itself. Each block of space is mapped and indexed.

Q. I lost track during the explanation about proofs in blockchain. What are those algorithms used for?

A. There are two concepts which are normally discussed and which create the confusion. One is that blockchain can use different cryptographic hash algorithms, such as SHA-256 (one of the most popular), Whirlpool, RIPEMD (RACE Integrity Primitives Evaluation Message Digest), Dagger-Hashimoto and others. A Merkle tree is a blockchain construct which allows one to build a chain by using hashes and data blocks. Consensus protocols are protocols for decision making, such as Proof of Work, Proof of Space, Proof of Stake, etc. Each consensus protocol uses the distributed ledger to make a record for the block of data transferred. Using cryptographic hashes allows us to create a trustless model by encrypting data which is being transferred from point A to point B. The consensus protocol allows us to keep the record of the data blocks in distributed ledgers. This is a brief answer to the question; if you would like additional information, please contact olga@myactionspot.com and I will be happy to deliver a detailed session on this topic.

Q. How does encryption work in storage replication? Please advise whether this exists.

A. Yes, it exists. Encryption can be applied to data at rest, and that encrypted data can be replicated, and/or the replication process can encrypt the data temporarily while it’s in transit.
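Returning to the deduplication question at the top of this Q&A, here is a minimal sketch of hash-based deduplication with the bit-by-bit comparison fallback described in the answer (my illustration, not code from the webcast).

```python
import hashlib

store = {}          # SHA-256 digest -> list of stored file contents

def dedup_write(data: bytes) -> bytes:
    """Store data once per unique content; return its SHA-256 digest."""
    digest = hashlib.sha256(data).digest()
    candidates = store.setdefault(digest, [])
    # The bit-by-bit comparison guards against hash collisions, so the
    # hash does not need to be assumed 100% collision-proof.
    for existing in candidates:
        if existing == data:
            return digest            # duplicate: nothing new stored
    candidates.append(data)          # new content (or a rare collision)
    return digest

dedup_write(b"same bytes")
dedup_write(b"same bytes")           # deduplicated against the first write
print(sum(len(v) for v in store.values()))  # only 1 copy is stored
```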
Q. Regarding blockchain: assuming a new transaction (nobody has the information yet), is it possible that when sending the broadcast someone modifies part of the data (0.1% for example) and this data continues to travel over the network without being considered corrupted?

A. The first block of data which builds the first blockchain creates the authenticity. If the block and hash just created are originals, they will be accepted as originals, recorded in the distributed ledger and moved across the chain. BUT if you attempt to send a block which has already been authenticated on the blockchain, that block will not be authenticated and will be discarded once it’s on the chain.

Remember we said this was part of a series? We’ve already had a lot of great experts cover a wide range of storage security topics. You can access all of them at the SNIA Educational Library.
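Since that answer hinges on hashes chained over data blocks, here is a minimal sketch of the Merkle tree construction mentioned above (my illustration, not code from the webcast); it shows why even a 0.1% modification of a block is detectable.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Fold a list of data blocks into a single Merkle root hash."""
    level = [sha256(b) for b in blocks]        # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node if odd
            level.append(level[-1])
        # hash each adjacent pair into the next level up
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

original = merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"])
tampered = merkle_root([b"tx1", b"tx2", b"tX3", b"tx4"])  # one byte changed
# Any modification to any block changes the root, so the recomputed
# root no longer matches and the data is rejected as corrupted.
print(original.hex() != tampered.hex())        # True
```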

See You (Online) at SDC!

Marty Foltyn

Sep 15, 2020

We’re going virtual in 2020, and Compute, Memory, and Storage are important topics at the upcoming SNIA Storage Developer Conference. SNIA CMSI is a sponsor of SDC 2020, so visit our booth for the latest information and a chance to chat with our experts. With over 120 sessions available to watch live during the event and later on-demand, live Birds of a Feather chats, and a Persistent Memory Bootcamp accessing new PM systems in the cloud, we want to make sure you don’t miss anything! Register here to see sessions live – or on demand to your schedule. Agenda highlights include:

Computational Storage Talks
  • Deploying Computational Storage at the Edge – discussing the deployment of small form factor, ASIC-based solutions, including a use case.
  • Next Generation Datacenters require composable architecture enablers and deterministic programmable intelligence – explaining why determinism, parallel programming and ease of programming are important.
  • Computational Storage Birds of a Feather LIVE Session – ask your questions of our experts and see live demos of computational storage production systems. Tuesday, September 22, 2020, 3:00 pm – 4:00 pm PDT (UTC-7).

Persistent Memory Presentations
  • Caching on PMEM: an Iterative Approach – discussing Twitter’s approach to exploring in-memory caching.
  • Challenges and Opportunities as Persistence Moves Up the Memory/Storage Hierarchy – showing how and why memory at all levels will become persistent.
  • Persistent Memory on eADR System – describing how the SNIA Persistent Memory Programming Model will include the possibility of platforms where the CPU caches are considered permanent and need no flushing.
  • Persistent Memory Birds of a Feather LIVE Session – ask our experts about your bootcamp progress, how to program PM, or what PM is shipping today. Tuesday, September 22, 2020, 4:00 pm – 5:00 pm PDT (UTC-7).

Solid State Storage Sessions
  • Enabling Ethernet Drives – provides a glimpse into a new SNIA standard that enables SSDs to have an Ethernet interface, and discusses the latest management standards for NVMe-oF drives.
  • An SSD for Automotive Applications – details efforts under way in JEDEC to define a new Automotive SSD standard.

J Metz

Sep 14, 2020

Last month, the SNIA Cloud Storage Technologies Initiative was fortunate to have artificial intelligence (AI) expert Parviz Peiravi explore the topic of AI Operations (AIOps) at our live webcast, “IT Modernization with AIOps: The Journey.” Parviz explained why the journey to cloud native and microservices, and the complexity that comes along with it, requires a rethinking of enterprise architecture. If you missed the live presentation, it’s now available on demand together with the webcast slides. We had some interesting questions from our live audience. As promised, here are answers to them all:

Q. Can you please define the data lake and how it differs from other data storage models?

A. A data lake is another form of data repository, with specific capabilities that allow ingestion of data from different sources and of different types (structured, unstructured and semi-structured), stored as-is and not transformed. The Extract, Load, Transform (ELT) process follows a schema-on-read approach, versus the schema-on-write Extract, Transform, Load (ETL) process that has been used in traditional database management systems (a small sketch contrasting the two appears at the end of this post). See the definition of data lake in the SNIA Dictionary here. In 2005 Roger Mougalas coined the term Big Data; it refers to the large-volume, high-velocity data generated by the Internet and billions of connected intelligent devices that was impossible to store, manage, process and analyze with traditional database management and business intelligence systems. The need for high-performance data management systems and advanced analytics that can deal with a new generation of applications, such as Internet of Things (IoT), real-time applications and streaming apps, led to the development of data lake technologies. Initially, the term “data lake” referred to the Hadoop Framework and its distributed computing and file system, which bring storage and compute together and allow faster data ingestion, processing and analysis. In today’s environment, “data lake” can refer to both physical and logical forms: a logical data lake could include Hadoop, a data warehouse (SQL/NoSQL) and object-based storage, for instance.

Q. One of the aspects of replacing and enhancing a brownfield environment is that there are different teams in the midst of different budget cycles. This makes greenfield very appealing. On the other hand, greenfield requires a massive capital outlay. How do you see the percentages of either scenario working out in the short term?

A. I do not have an exact percentage, but the majority of enterprises using a brownfield implementation strategy have been in place for a long time. In order to develop and deliver new capabilities with velocity, greenfield approaches are gaining significant traction. Most new application development based on microservices/cloud native is being implemented in greenfield to reduce risk and cost, using cloud resources available today at smaller scale at first and adding more resources later.

Q. There is a heavy reliance upon mainframes in banking environments. Quite a bit of error has been eliminated through decades of best practices. How do we ensure that we don’t build in error because these models are so new?

A. The compelling reasons behind mainframe migration, besides cost, are the ability to develop and deliver new application capabilities and business services, and making data available to all other applications. There are four methods for mainframe migration:
  • Data migration only
  • Re-platforming
  • Re-architecting
  • Re-factoring
Each approach provides enterprises different degrees of risk and freedom. Applying best practices to both application design/development and operational management is the best way to ensure smooth application migration from a monolith to a new distributed environment such as microservices/cloud native. Data architecture plays a pivotal role in the design process, in addition to applying a Continuous Integration and Continuous Delivery (CI/CD) process.

Q. With the changes into a monolithic data lake, will we be seeing different data lakes with different security parameters, which just means that each lake is simply another data repository?

A. If we follow a domain-driven design principle, you could have multiple data lakes with specific governance and security policies appropriate to each domain. Multiple data lakes could be accessed through data virtualization to mimic a monolithic data lake; this approach is based on a logical data lake architecture.

Q. What’s the difference between multiple data lakes and multiple data repositories? Isn’t it just a matter of quantity?

A. Looking from a Big Data perspective, a data lake not only stores data but also provides capabilities to process and analyze it (e.g., the Hadoop framework/HDFS). New trends are emerging that separate storage and compute (e.g., disaggregated storage architectures), hence some vendors use the term “data lake” loosely and offer only storage capability, while others provide both storage and data processing capabilities as an integrated solution. What is more important than the definition of a data lake is your usage and specific application requirements, which determine which solution is a good fit for your environment.
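As promised above, here is a minimal sketch contrasting schema-on-write (ETL) with schema-on-read (ELT); the record fields and helper names are illustrative assumptions, not part of any particular product.

```python
import json

# --- Schema-on-write (ETL, traditional warehouse style) ---
# The schema is enforced before the data is stored: records are
# transformed at load time, so the table only ever holds typed rows.
def etl_load(raw_records, table):
    for rec in raw_records:
        row = {"user_id": int(rec["id"]),          # transform first...
               "amount": float(rec.get("amt", 0))}
        table.append(row)                          # ...then load

# --- Schema-on-read (ELT, data lake style) ---
# Raw payloads land untouched; each consumer applies its own schema
# only when it reads the data.
def elt_load(raw_payloads, lake):
    lake.extend(raw_payloads)                      # load as-is

def read_amounts(lake):
    for payload in lake:
        rec = json.loads(payload)                  # schema applied on read
        yield float(rec.get("amt", 0))

lake, table = [], []
etl_load([{"id": "7", "amt": "9.5"}], table)
elt_load(['{"id": "7", "amt": "9.5"}'], lake)
print(table, list(read_amounts(lake)))
```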

The Impact and Implications of Internet of Payments

Jim Fister

Sep 2, 2020

Electronic payments, once the purview of a few companies, have expanded to include a variety of financial and technology companies. Internet of Payment (IoP) enables payment processing over many kinds of IoT devices and has also led to the emergence of the micro-transaction. The growth of independent payment services offering e-commerce solutions, such as Square, and the entry of new ways to pay, such as Apple Pay, mean that a variety of devices and technologies have come into wide use. This is the topic that the SNIA Cloud Storage Technologies Initiative is going to examine at our live webcast on October 14, 2020, “Technology Implications of Internet of Payments.” Along with the rise and dispersal of the payment ecosystem, more of the assets we exchange for payment are becoming digitized as well. When digital ownership is equivalent to physical ownership, security and scrutiny of those digital platforms and methods take a leap forward in significance. Assets and funds are now widely distributed across multiple organizations. Physical asset ownership is even being shared among many stakeholders, resulting in more ownership opportunities for less investment, but in a distributed way. In this webcast we will look at the impact of all of these new principles across multiple use cases and how they affect not only the consumers driving this behavior but also the underlying infrastructure that supports and enables it. We’ll examine:
  • The cloud network, applications and storage implications of IoP
  • Use of emerging blockchain capabilities for payment histories and smart contracts
  • Identity and security challenges at the device in addition to point of payment
  • Considerations on architecting IoP solutions for future scale
Register today and please bring your questions for our expert presenters.

Achieving Data Literacy

Jim Fister

Aug 24, 2020

We’re all spending our days living in the pandemic and understanding the cultural changes on a personal level. That keening wail you hear is not some outside siren; it’s you staring out the window at the world that used to be. But with all that, have you thought about the insight you could be applying to your business? If the pandemic has taught data professionals one essential thing, it’s this: data is like water; when it escapes, it reaches every aspect of the community it inhabits. This fact becomes apparent when the general public has access to statistics, assessments, analysis and even medical journals related to the pandemic at a scale never seen before. But having access to data does not automatically grant the reader knowledge of how to interpret that data or the ability to derive insight. In fact, it can be quite challenging to judge the accuracy or value of that data. Gaining insight into information in context, insight that extends beyond just the facts presented and instead enables reasonable predictions and suppositions about new instances of that data, requires a skill called data literacy. Join us on September 17, 2020 for a live SNIA Cloud Storage Technologies Initiative webcast, “Using Data Literacy to Drive Insight,” where we will explore the importance of data literacy and examine:
  • How data literacy is defined by the ability to interpret and apply context
  • How a data scientist approaches new data sources and the questions they ask of it
  • How to seek out supporting or challenging data to validate its accuracy and value for providing insight
  • How this impacts underlying information systems
  • How data platforms need to adjust to this “purpose-plus” data ecosystem where data sources are no longer single use
Register today for what is sure to be an “insightful” look at this important topic.

Understanding the NVMe Key-Value Standard

John Kim

Aug 17, 2020

The storage industry has many applications that rely on storing data as objects. In fact, it’s the most popular way that unstructured data (for example, photos, videos, and archived messages) is accessed.

At the drive level, however, the devil is in the details. Normally, storage devices like drives or storage systems store information as blocks, not objects. This means that there is some translation that goes on between the data as it is ingested or consumed (i.e., objects) and the data that is stored (i.e., blocks).

Naturally, storing objects from applications as objects on storage is more efficient and brings performance boosts, and the added simplicity means there are fewer things that can go wrong. Moving towards storing key-value pairs, and away from the traditional block storage paradigm, makes it easier and simpler to access objects. But nobody wants a marketplace where each storage vendor has its own key-value API.
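As a rough illustration of that simplicity argument, here is a hypothetical sketch contrasting the two models; the class and method names are invented for illustration and are not the NVMe Key-Value Command Set or the SNIA KV API.

```python
# Hypothetical sketch only; these classes are illustrative and do not
# reflect the actual NVMe-KV Command Set or SNIA KV API.

class BlockDevice:
    """Traditional block storage: the host must map objects to LBAs."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}                       # LBA -> fixed-size bytes

    def write(self, lba: int, data: bytes):
        assert len(data) == self.block_size    # only whole blocks allowed
        self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

class KVDevice:
    """Key-value storage: the device stores variable-length values by key."""
    def __init__(self):
        self.store = {}                        # key bytes -> value bytes

    def store_kv(self, key: bytes, value: bytes):
        self.store[key] = value

    def retrieve(self, key: bytes) -> bytes:
        return self.store[key]

# With a block device, storing a 10 KiB photo means allocating three
# 4 KiB blocks, padding the last one, and keeping an external map from
# "photo123" to those LBAs. With a KV device it is a single operation:
kv = KVDevice()
kv.store_kv(b"photo123", b"\x89PNG...")        # illustrative payload
```

The point of the sketch is the missing machinery: the object-to-LBA mapping, the padding, and the allocation bookkeeping all disappear when the device itself speaks key-value.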

Both the NVM Express™ group and SNIA have done quite a bit of work in standardizing this approach:

  • NVM Express has completed standardization of the Key Value Command Set
  • SNIA has standardized a Key Value API
  • Spoiler alert: these two work very well together!

What does this mean? And why should you care? Find out on September 1, 2020 at our live SNIA Networking Storage Forum webcast, “The Key to Value: Understanding the NVMe Key-Value Standard,” when Bill Martin, SNIA Technical Council Co-Chair, will discuss the benefits of Key-Value storage, present the major features of the NVMe-KV Command Set, and explain how it interacts with the NVMe standards. He will also cover the SNIA KV-API and the open source work that is available to take advantage of Key-Value storage, and discuss:

  • How this approach is different than traditional block-based storage
  • Why doing this makes sense for certain types of data (and, of course, why doing this may not make sense for certain types of data)
  • How this simplifies the storage stack
  • Who should care about this, why they should care about this, and whether or not you are in that group

The event is live and, time permitting, Bill will be ready to answer questions on the spot. Register today to join us on September 1st.
