Understanding the Power of SNIA’s Storage Management Initiative

Don Deel

Jun 25, 2020

By Don Deel, SNIA SMI Governing Board Chair

The SNIA Storage Management Initiative (SMI) uses many acronyms that can cause confusion. SMI? That’s the name of the Initiative! SMI-S? That’s a storage management specification. CTP? That stands for Conformance Test Program, but soon there will be two! One already exists for SMI-S and the other is being developed for SNIA Swordfish. Swordfish is a storage management specification that doesn’t have an acronym.

So, other than coming up with confusing acronyms, what does the SMI do? The SMI is an active group with a mission to unify the storage industry to develop and standardize interoperable storage management technologies. The SMI supports the development of storage management solutions that are based upon standard interfaces instead of proprietary interfaces. This helps lower costs, makes integration efforts easier, and provides increased reliability, security and manageability.

You can learn more about the SMI on SNIA’s website, but to make things easier to digest, we created an infographic, available here. It provides a visualization of the programs SMI has to offer and how they all work together to provide SMI members maximum value.

If you’re interested in storage management and your company works with these technologies, I encourage you to join SNIA’s SMI and participate in the development of the next generation of storage management standards. Learn more about SMI membership here.

J Metz

Jun 18, 2020

Key management focuses on protecting cryptographic keys from threats and ensuring keys are available when needed. And it’s no small task. That’s why the SNIA Networking Storage Forum (NSF) invited key management and encryption expert Judy Furlong to present a “Key Management 101” session as part of our Storage Networking Security Webcast Series. If you missed the live webcast, I encourage you to watch it on-demand, as it was highly rated by attendees. Judy answered many key management questions during the live event; here are answers to those, as well as to the ones we did not have time to get to.

Q. How are the keys kept safe in local cache?

A. It depends on the implementation. Options include:

  • Only storing wrapped keys (each key individually encrypted with another key) in the cache.
  • Encrypting the entire cache contents with a separate encryption key.

In either case, one needs to properly protect/manage the wrapping key (KEK) or cache master key.
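
As a minimal sketch of the first option, the snippet below wraps a cached key with a key-encryption key (KEK) using AES Key Wrap (RFC 3394) from Python’s cryptography package. The key names and sizes are our own illustrative assumptions, not something prescribed by the webcast:

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kek = os.urandom(32)       # key-encryption key, protected outside the cache
    data_key = os.urandom(32)  # a data-encryption key we want to cache safely

    wrapped = aes_key_wrap(kek, data_key)  # only this wrapped form goes in the cache
    assert aes_key_unwrap(kek, wrapped) == data_key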

Q. Key rotation question: a self-encrypting drive (SED) requires a permanent encryption key, so how is rotation done?

A. It is the Authentication Encryption Key (AEK), used to access and protect the Data (Media) Encryption Key (DEK), that can be rotated. If you change/rotate the DEK, you destroy the data on the disk.

Q. You may want to point out that many people use “FIPS” for FIPS 140, which isn’t strictly correct, as there are numerous FIPS standards.

A. Yes, it is true that many people refer to FIPS 140 as just “FIPS,” which, as noted, is incorrect. There are many Federal Information Processing Standards (FIPS). That is why when I present or write something I am careful to always add the appropriate FIPS reference number (e.g. FIPS 140, FIPS 186, FIPS 201, etc.).

Q. So is the math for M of N key sharing the same as used for object store?

A. Essentially yes, it’s the same mathematical concepts that are being used. However, the object store approach uses a combination of data splitting and key splitting to allow encrypted data to be stored across a set of cloud providers.
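
For a taste of the key-splitting side of this, here is a hypothetical N-of-N XOR split in Python, where every share is needed to reconstruct the key. This is a simpler cousin of true M-of-N schemes such as Shamir’s secret sharing, which use polynomial interpolation over a finite field; it is a sketch of the concept, not the scheme used by any particular object store:

    import os

    def split_key(key, n):
        # n-1 random shares, plus one final share that XORs back to the key.
        shares = [os.urandom(len(key)) for _ in range(n - 1)]
        last = key
        for share in shares:
            last = bytes(a ^ b for a, b in zip(last, share))
        shares.append(last)
        return shares

    def combine(shares):
        # XOR all shares together; the random shares cancel, leaving the key.
        key = shares[0]
        for share in shares[1:]:
            key = bytes(a ^ b for a, b in zip(key, share))
        return key

    key = os.urandom(32)
    assert combine(split_key(key, 3)) == key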

Q. Should the key be sized according to the size of the data, i.e. for 1 TB of data should a 1 TB key be used? (Slide 12)

A. No, encrypting 1 TB of data doesn’t mean that the key has to be that long. Most data encryption (at rest and in flight) uses symmetric encryption like AES, which is a block cipher. In block ciphers, the data being encrypted is broken up into blocks of a specific size in order to be processed by the algorithm. For a good overview of block ciphers see the Encryption 101 webcast.
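
To make this concrete, here is a small sketch (our illustration, using Python’s cryptography package) in which a fixed 256-bit AES-GCM key encrypts an arbitrarily large buffer; the key stays 32 bytes no matter how much data passes through the cipher:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 32 bytes, regardless of data size
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                     # must be unique for every encryption

    plaintext = os.urandom(1024 * 1024)        # 1 MiB stands in for 1 TB here
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext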

Q. What is the maximum lifetime of a certificate?

A. Maximum certificate validity (i.e. certificate lifetime) varies based on regulations/guidance, organizational policies, the application or purpose for which the certificate is used, etc. Certificates issued to humans for authentication or digital signature, or to common applications like web browsers, web services, S/MIME email clients, etc., tend to have validities of 1-2 years. CA certificates have slightly longer validities, in the 3-5 year range.
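
If you want to inspect a certificate’s validity period yourself, a minimal sketch with Python’s cryptography package looks like the following; the file name is a placeholder:

    from cryptography import x509

    # Load a PEM-encoded certificate and print its validity window.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("valid from: ", cert.not_valid_before)
    print("valid until:", cert.not_valid_after)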

Q. In data center applications, why not just use AEK as DEK for SED?

A. Assuming that AEK is the Authentication Encryption Key: a defense-in-depth strategy is taken in the design of SEDs, where the DEK (or MEK) is a key that is generated on the drive and never leaves the drive. The MEK is protected by an AEK. This AEK is externalized from the drive and needs to be provided by the application/product that is accessing the SED in order to unlock the SED and take advantage of its capabilities.

Using separate keys follows the principle of only using a key for one purpose (e.g. encryption vs. authentication). It also reduces the attack surface for each key. If an attacker obtains an AEK, they also need to have access to the SED it belongs to, as well as the application used to access that SED.

Q. Does NIST require a “timeframe” for rotating keys?

A. NIST recommendations for the cryptoperiods of keys used for a range of purposes may be found in Section 5.3.6 of NIST SP 800-57 Part 1 Rev. 5.

Q. Does D@RE use symmetric or asymmetric encryption?

A. There are many Data at Rest (D@RE) implementations, but in the majority of D@RE implementations within the storage industry (e.g. controller-based encryption, self-encrypting drives (SEDs)), symmetric encryption is used. For more information about D@RE implementations, check out the Storage Security Series: Data-at-Rest webcast.

Q. In the TLS example shown, where does the “key management” take place?

A. There are multiple places in the TLS handshake example where different key management concepts discussed in the webinar are leveraged:

  • In steps 3 and 5, the client and server exchange their public key certificates (example of asymmetric cryptography/certificate management)
  • In steps 4 and 6, the client and server validate each other’s certificates (example of certificate path validation, part of key management)
  • In step 5, the client creates and sends a pre-master secret (example of key agreement)
  • In step 7, the client and server use this pre-master secret and other information to calculate the same symmetric key that will be used to encrypt the communication channel (example of key derivation; see the sketch below)
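
As an illustration of that last step, here is a sketch of key derivation using generic HKDF from Python’s cryptography package. Real TLS uses its own PRF/HKDF-Expand-Label construction, so treat the value names and parameters here as assumptions for illustration only:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Stand-ins for values exchanged during the handshake.
    pre_master_secret = os.urandom(48)
    client_random, server_random = os.urandom(32), os.urandom(32)

    def derive_session_key(secret, salt):
        # An HKDF object can only be used once, so build a fresh one per call.
        return HKDF(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            info=b"illustrative session key",
        ).derive(secret)

    # Both sides run the same derivation, so both arrive at the same key.
    client_key = derive_session_key(pre_master_secret, client_random + server_random)
    server_key = derive_session_key(pre_master_secret, client_random + server_random)
    assert client_key == server_key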

Remember I said this was part of the Storage Networking Security Webcast Series? Check out the other webcasts we’ve done to date, as well as what’s coming up.


Going Stir Crazy? Expand Your PM Resume at These Virtual Events!

Jim Fister

Jun 15, 2020

We here at SNIA know that everyone is getting a tad stir crazy sitting at home. However, there are still some great opportunities to learn while you’re trying to decide which wall of the home office to face tomorrow. SNIA Compute, Memory, and Storage Initiative (CMSI) member company Intel is offering some excellent resources for those interested in programming persistent memory using the open-source Persistent Memory Development Kit (PMDK).

Intel is hosting a virtual forum on PMDK, along with the Storage Performance Development Kit (SPDK) and vTune Profiler tools. This is a great opportunity to meet virtually with the teams who are developing the tools, as well as the community building applications. The Virtual Forum runs June 23-25, with a special focus on PMDK on June 25th. There are a variety of exciting sessions all three days.

Intel is also hosting two BrightTALK seminars on Persistent Memory. The first, Building Durable Storage Solutions with Intel Optane Persistent Memory on June 23rd, will focus on remote applications for persistent memory. Especially for those interested in networked storage solutions, this will be a great educational webinar. The second, Enabling Persistent Memory Usages in Cloud on June 30th, will cover how many of the most popular in-memory databases already take advantage of Persistent Memory.

In addition, SNIA is continuing to advance the Persistent Memory development conversation. We announced at the Persistent Memory Summit in January that SNIA would be exploring more opportunities for online development using Persistent Memory, as well as an Optane Memory Programming Challenge. Both of these will be active for the second half of this year, and you can watch this space for a formal announcement in the next month. Learn about our successful NVDIMM Programming Challenge journey here.

Please feel free to register for the above events to learn more and join the community.

And may we suggest the north office wall for tomorrow?

Note: This has also been cross-posted at the PIRL Blog, a collaborative effort of the UCSD Non-Volatile Systems Lab and SNIA. Go check out PIRL for some more Persistent Memory Development content.


A Q&A on the Impact of AI

Alex McDonald

Jun 15, 2020

It was April Fools’ Day, but the Artificial Intelligence (AI) webcast the SNIA Cloud Storage Technologies Initiative (CSTI) hosted on April 1st was no joke! We were fortunate to have AI experts, Glyn Bowden and James Myers, join us for an interesting discussion on the impact AI is having on data strategies. If you missed the live event, you can watch it here on-demand. The audience asked several great questions. Here are our experts’ answers:

Q. How does the performance requirement of the data change from its capture at the edge through to its use?

A. That depends a lot on what purpose the data is being captured for. For example, consider a video analytics solution capturing real-time activities. The data transfer will need to be low latency to get the frames to the inference engine as quickly as possible. However, there is less of a need to protect that data, as losing a frame or two is not a major issue. Resolution and image fidelity are already likely to have been sacrificed through compression. Now think of financial trading transactions. It may be we want to do some real-time work against them to detect fraud, or feed them back into a market prediction engine; however, we may just want to push them into an archive. In this case, as long as we can push the data through the acquisition function quickly, we don’t want to cause issues for processing new incoming data and have side effects like filling up caches, etc., so we don’t need to be too concerned with performance. However, we MUST protect every transaction. This means that each piece of data and its use will dictate the performance, protection and other requirements that apply as it passes through the pipeline.

Q. Don’t we need to think about security, i.e. who is seeing the data resource?

A. Security and governance are key to building a successful and flexible data pipeline. We can no longer assume that data will only have one use, or that we know in advance all the personas who will access it; hence we won’t know in advance how to protect the data. So, each step needs to consider how the data should be treated and protected. The security model is one where the security profile of the data is applied to the data itself and not to any individual storage appliance that it might pass through. This can be done with the use of metadata and signing to ensure you know exactly how a particular data set, or even object, can and should be treated. The upside to this is that you can also build very good data dictionaries using this metadata, and make discoverability and audit of use much simpler. And with that sort of metadata, the ability to couple data to locations through standards such as the SNIA Cloud Data Management Interface (CDMI) brings real opportunity.

Q. Great overview on the inner workings of AI. Would a company’s Blockchain have a role in the provisioning of AI?

A. Blockchain can play a role in AI. There are vendors with patents around Blockchain’s use in distributing training features so that others can leverage trained weights and parameters for refining their own models without the need to have access to the original data. Now, is blockchain a requirement for this to happen? No, not at all. However, it can provide a method to assess the provenance of those parameters and ensure you’re not being duped into using polluted weights.

Q. It looks like everybody is talking about AI, but thinking about pattern recognition / machine learning. The biggest differentiator for human intelligence is making a decision and acting on its own, without external influence. Little children are a good example. Can AI make decisions on its own right now?

A. Yes and no. Machine Learning (ML) today results in a prediction and a probability of its accuracy. So that’s only one stage of the cognitive pipeline that leads from observation, to assessment, to decision and ultimately action. Basically, ML on its own provides the assessment and decision capability. We then write additional components to translate that decision into actions. That doesn’t need to be a “Switch / Case” or “If this then that” situation. We can plug the outcomes directly into the decision engine so that the ML algorithm is selecting the desired outcome directly. Our extra code just tells it how to go about that. But today’s AI has a very narrow focus. It’s not general intelligence that can assess entirely new features without training and then infer from previous experience how it should interpret them. It is not yet capable of deriving context from past experiences and applying it to new and different experiences.

Q. Shouldn’t there be a path for the live data (or some cleaned-up version or output of the inference) to be fed back into the training data to evolve and improve the training model?

A. Yes, there should be. Ideally you will capture in a couple of places. One would be your live pipeline. If you are using something like Kafka to do the pipelining, you can split the data to two different locations, persisting one copy in a data lake or archive and processing the other through your live inference pipeline. You might also want your inference results pushed out to the archive, as this could be a good source of “training data”; it’s essentially labelled and ready to use. Of course, you would need to review this manually, since if there is inaccuracy in the model, a few false positives can reinforce that inaccuracy.
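
A minimal sketch of that split, assuming the kafka-python package and hypothetical topic names and broker address:

    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer("raw-events", bootstrap_servers="localhost:9092")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    # Fan each incoming record out to both destinations.
    for message in consumer:
        producer.send("inference-input", message.value)    # live inference pipeline
        producer.send("training-archive", message.value)   # data lake / training data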

Q. Can the next topic focus be on pipes and new options?

A. Great idea. In fact, given the popularity of this presentation, we are looking at a couple more webcasts on AI. There’s a lot to cover! Follow us on Twitter @sniacloud_com for dates of future webcasts.


Your IoT Questions Answered

Alex McDonald

Jun 11, 2020

The SNIA Cloud Storage Technologies Initiative (CSTI) webcast on IoT explored how the explosion of data generated from IoT devices creates unique challenges in the way we store, transmit and curate data. If you missed the webcast, you can watch it on-demand. This topic generated several interesting questions.  As promised during the live event, here are answers to them all:

Q. Do IoT devices consume as much data as they produce?

A. It really depends on the device. There are some, like sensors, that will only produce data and transmit it on; on the other hand, the more intelligence built into these devices, the more need there might be to consume data to drive that intelligence. In the future, it’s possible there will be much more device-to-device (or peer-to-peer) traffic between IoT devices, cutting out the leg back to the data center altogether for data that doesn’t need to be there.

Q. How can we educate manufacturers to start adding the needed security features to IoT devices?

A. This is being managed through legislation in places such as Europe. But it probably isn’t the manufacturers that need educating. They already know the need for security and the risk of having poor practices. The people that need to be educated are the users and consumers of the technology. This will mean the market will move to reward those that care about security and punish those that ignore it. Mostly, the lapses in security have been more about mistakes and unintended consequences of pushing a certain feature; however, an educated marketplace can make those judgements much better. From a manufacturing perspective, there needs to be a better, standardized way of reporting incidents and security flaws in such a way that organizations have time to respond with patches before the information can be exploited. We already have a pretty good model for this in software engineering, with principles such as lead time from discovery to public disclosure to enable time for a fix. This leads to bug bounties and other measures that encourage secure design. These same principles could map easily into the IoT world as well, and with that educated marketplace, there will be many more guardians.

Q. To have efficient IoT devices, WiFi plays a critical role in having proper synchronization with uninterrupted information. So, is WiFi 5 efficient enough to connect many devices, or would WiFi 6 be required to have many connected IoT devices?

A. WiFi 6 (the 802.11ax standard) provides faster speeds and better performance in congested areas. So yes, this can potentially bring benefits to an IoT implementation. Of course, there is a dependency on device availability, and not all vendors have adopted the standard fully yet. This is largely due to the certification only being issued in September 2019. We also have the emergence of 5G radio technology that will one day be the standard for wireless networks servicing mobile phones. This also provides higher speeds and better congestion management, as well as the power efficiency required when deploying many devices. In summary, the WiFi standards continually advance, and IoT traffic will absolutely be able to take advantage of that. We must also ensure that our data acquisition, persistence and management keep pace, and that if we are plugging this into real-time networks, our inference engines are deployed to cope with both the scale and volume of data the new technologies can deliver.

Q. In your camera example, is there an opportunity to do that inference at the edge, directly on the camera or in close proximity to the camera? Network connectivity then becomes less of a latency concern.

A. Absolutely. Cameras are becoming intelligent, in that inference engines can form part of the IP camera. This removes the latency issue for immediate inference, but there is a limited capacity, meaning that models will need to be pruned and optimized, potentially sacrificing accuracy. If the inference is happening near the camera, which is very often the case even if it’s not on the camera, then latency from the camera and video management system can impact the solution. However, the ability to improve accuracy and model complexity, as well as the ability to aggregate multiple data sources together, might mean this is a requirement. An example might be leveraging video analytics for social distancing. In order to resolve an object’s position in 3-dimensional space with reasonable accuracy, it becomes necessary to track from at least two cameras so that we can apply trigonometry to calculate angles and therefore position relative to known markers. An on-camera solution won’t help here, but a near-camera, edge-based solution would.
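
As a toy illustration of that trigonometry, the 2D sketch below (with made-up camera positions and angles, not a production video analytics algorithm) intersects the bearing rays reported by two cameras to locate an object:

    import math

    def triangulate(cam1, theta1, cam2, theta2):
        """Intersect two bearing rays (angles in radians from the +x axis)."""
        (x1, y1), (x2, y2) = cam1, cam2
        d1 = (math.cos(theta1), math.sin(theta1))
        d2 = (math.cos(theta2), math.sin(theta2))
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            raise ValueError("bearings are parallel; no unique intersection")
        t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
        return (x1 + t1 * d1[0], y1 + t1 * d1[1])

    # Cameras 10 m apart, each sighting the same object at 45 degrees inward:
    print(triangulate((0, 0), math.radians(45), (10, 0), math.radians(135)))
    # -> approximately (5.0, 5.0): the object is 5 m out, centered between them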

Q. Is there any collaboration between the IoT efforts at SNIA and another SNIA initiative around Computational Storage? Many IoT devices include some form of storage already, and the idea of localized processing where the data is created and stored may help solve some of the latency and security challenges mentioned.

A. Yes, there is synergistic work taking place, and the SNIA Computational Storage Special Interest Group is developing an extensive set of use cases for a variety of on-drive computational services that can help with the latency challenges. There is also work underway to define a set of security models based on threat challenges that CS shares with other systems and devices, and some that apply uniquely to them. There will be a number of overview and technical documents this year that address these issues, and as is usual for SNIA, they will be publicly available on the SNIA website.


The Latest on NVMe-oF 1.1

Tim Lustig

Jun 9, 2020

Since its introduction, NVMe over Fabrics (NVMe-oF™) has not been resting on any laurels. Work has been ongoing, and several updates are worth mentioning. And that’s exactly what the SNIA Networking Storage Forum will be doing on June 30th, 2020 at our live webcast, Notable Updates in NVMe-oF 1.1.

There is more to a technology than its core standard, of course, and many different groups have been hard at work improving upon, and fleshing out, many of the capabilities related to NVMe-oF. In this webcast, we will explore a few of these projects and how they relate to implementing the technology. In particular, this webcast will be covering:
  • A summary of new items introduced in NVMe-oF 1.1
  • Updates regarding enhancements to FC-NVMe-2
  • How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
  • Managing and provisioning NVMe-oF devices with SNIA Swordfish
Register today for a look at what’s new in NVMe-oF. We hope to see you on June 30th.


Alex McDonald

May 27, 2020

Ever wonder how encryption actually works? Experts Ed Pullin and Judy Furlong provided an encryption primer to hundreds of attendees at our SNIA NSF webcast Storage Networking Security: Encryption 101. If you missed it, it's now available on-demand. We promised during the live event to post answers to the questions we received. Here they are:

Q. When using asymmetric keys, how often do the keys need to be changed?

A. How often asymmetric (and symmetric) keys need to be changed is driven by the purpose the keys are used for, the security policies of the organization/environment in which they are used and the length of the key material. For example, the CA/Browser Forum has a policy that certificates used for TLS (secure communications) have a validity of no more than two years.

Q. In earlier slides there was a mention that information can only be decrypted via the private key (not the public key). So, was Bob's public key retrieved using the public key of the signing authority?

A. In asymmetric cryptography the opposite key is needed to reverse the encryption process. So, if you encrypt using Bob's private key (normally referred to as a digital signature), then anyone can use his public key to decrypt. If you use Bob's public key to encrypt, then his private key should be used to decrypt. Bob's public key would be contained in the public key certificate that is digitally signed by the CA, and can be extracted from the certificate to be used to verify Bob's signature.
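
As an added illustration (not from the webcast), here is a minimal sign/verify sketch using RSA-PSS with Python's cryptography package:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Bob signs with his private key; anyone with his public key can verify.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"a message from Bob"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(message, pss, hashes.SHA256())
    # verify() raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, message, pss, hashes.SHA256())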

Q. Do you see TCG Opal 2.0 or TCG for Enterprise as requirements for drive encryption? What about the FIPS 140-2 L2 with cryptography validated by 3rd party NIST? As NIST was the key player in selecting AES, their stamp of approval for a FIPS drive seems to be the best way to prove that the cryptographic methods of a specific drive are properly implemented.

A. Yes, the TCG Opal 2.0 and TCG for Enterprise standards are generally recognized in the industry for self-encrypting drives (SEDs)/drive level encryption. FIPS 140 cryptographic module validation is a requirement for sale into the U.S. Federal market and is recognized in other verticals as well. Validation of the algorithm implementation (e.g. AES) is part of the FIPS 140 (Cryptographic Module Validation Program (CMVP)) companion Cryptographic Algorithm Validation Program (CAVP).

Q. Can you explain Constructive Key Management (CKM) that allows different keys given to different parties in order to allow levels of credentialed access to components of a single encrypted object?

A. Based on the available descriptions of CKM, this approach uses a combination of key derivation and key splitting techniques. Both of these concepts will be covered in the upcoming Key Management 101 webinar. An overview of CKM can be found in this Computer World article (box at the top right).

Q. Could you comment on Zero Knowledge Proofs and Digital Verifiable Credentials based on Decentralized IDs (DIDs)?

A. A Zero Knowledge Proof is a cryptographic-based method for being able to prove you know something without revealing what it is. This is a field of cryptography that has emerged in the past few decades and has only more recently transitioned from theoretical research to a practical implementation phase with cryptocurrencies/blockchain and multi-party computation (privacy preservation).

Decentralized IDs (DIDs) are an authentication approach which leverages blockchain/decentralized ledger technology. Blockchain/decentralized ledgers employ cryptographic techniques and are an example of applying cryptography, using several of the underlying cryptographic algorithms described in this 101 webinar.

Q. Is Ed saying every block should be encrypted with a different key?

A. No. we believe the confusion was over the key transformation portion of Ed's diagram.  In the AES Algorithm a key transformation occurs that uses the initial key as input, and provides the AES rounds their own key.  This Key expansion is part of the AES Algorithm itself and is known as the Key Schedule.

Q. Where can I learn more about storage security?

A. Remember this Encryption 101 webcast was part of the SNIA Networking Storage Forum's Storage Networking Security Webcast Series. You can keep up with additional installments here and by following us on Twitter @SNIANSF.


Everyone Wants Their Java to Persist

Jim Fister

May 20, 2020

In this time of lockdown, I'm sure we're all getting a little off kilter. I mean, it's one thing to get caught up listening to tunes in your office to avoid going out and alerting your family to the fact that you haven't changed your shirt in two days. It's another thing to not know where a clean coffee cup is in the house so you can fill it and face the day starting sometime between 5AM and noon. Okay, maybe we're just talking about me, sorry. But you get the point. Wouldn't it be great if we had some caffeinated source that was good forever? I mean... persistence of Java? At this point, it's not just me.

Okay, that's not what this webinar will be talking about, but it's close. SNIA member Intel is offering an overview of the ways to utilize persistent memory in the Java environment. In my nearly two years here at SNIA, this has been one of the most-requested topics. Steve Dohrmann and Soji Denloye are two of the brightest minds in enabling persistence, and this is sure to be an insightful presentation.

Persistent memory application capabilities are growing significantly. Since the publication of the SNIA NVM Programming Model developed by the SNIA Persistent Memory Programming Technical Work Group, new language support seems to be happening every day. Don't miss the opportunity to see the growth of PM programming in such a crucial space as Java.

The presentation is on BrightTALK, and will be live on May 27th at 10am PST. You can see the details at this link. Now I just have to find a clean cup.

This post is also cross-posted at the PIRL Blog. PIRL is a joint effort by SNIA and UCSD's Non-Volatile Systems Lab to advance the conversation on persistent memory programming. Check out other entries here.
