The Latest on NVMe-oF 1.1

Tim Lustig

Jun 9, 2020

Since its introduction, NVMe over Fabrics (NVMe-oF™) has not been resting on any laurels. Work has been ongoing, and several updates are worth mentioning. That's exactly what the SNIA Networking Storage Forum will be discussing on June 30th, 2020 at our live webcast, Notable Updates in NVMe-oF 1.1.

There is more to a technology than its core standard, of course, and many different groups have been hard at work improving upon, and fleshing out, many of the capabilities related to NVMe-oF. In this webcast, we will explore a few of these projects and how they relate to implementing the technology. In particular, this webcast will be covering:
  • A summary of new items introduced in NVMe-oF 1.1
  • Updates regarding enhancements to FC-NVMe-2
  • How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
  • Managing and provisioning NVMe-oF devices with SNIA Swordfish
Register today for a look at what’s new in NVMe-oF. We hope to see you on June 30th.
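Related to the Swordfish item above: Swordfish builds on the DMTF Redfish REST model, so a management client typically starts by reading the service root and discovering the resource collections it advertises. Below is a minimal, hypothetical sketch of that first step in Python. The endpoint address, the lack of authentication, and the use of the third-party "requests" package are illustrative assumptions; the actual resource model is defined by the SNIA Swordfish specification.

```python
# A minimal, hypothetical sketch of the first step a Swordfish client takes:
# reading the standard Redfish/Swordfish service root and listing the resource
# collections it advertises. Endpoint address, lack of authentication, and the
# third-party "requests" package are illustrative assumptions.
import requests

BASE = "https://192.0.2.10"  # hypothetical management endpoint

# The Redfish/Swordfish service root lives at /redfish/v1/
root = requests.get(f"{BASE}/redfish/v1/", verify=False, timeout=10).json()

# Each top-level collection is referenced by an "@odata.id" link
for name, value in sorted(root.items()):
    if isinstance(value, dict) and "@odata.id" in value:
        print(f"{name}: {value['@odata.id']}")
```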


Alex McDonald

May 27, 2020


Ever wonder how encryption actually works? Experts Ed Pullin and Judy Furlong provided an encryption primer to hundreds of attendees at our SNIA NSF webcast Storage Networking Security: Encryption 101. If you missed it, it's now available on-demand. We promised during the live event to post answers to the questions we received. Here they are:

Q. When using asymmetric keys, how often do the keys need to be changed?

A. How often asymmetric (and symmetric) keys need to be changed is driven by the purpose the keys are used for, the security policies of the organization/environment in which they are used and the length of the key material. For example, the CA/Browser Forum has a policy that certificates used for TLS (secure communications) have a validity of no more than two years.
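As a small illustration of putting such a policy into practice, here is a hedged sketch that checks a certificate's validity period against a two-year limit using the third-party Python "cryptography" package. The file name and the 730-day threshold are illustrative assumptions, not part of the webcast material.

```python
# A hedged sketch of checking a certificate's validity period against a
# two-year policy, using the third-party "cryptography" package. The file
# name and the 730-day threshold are illustrative assumptions.
from datetime import timedelta

from cryptography import x509

MAX_VALIDITY = timedelta(days=730)  # roughly the CA/Browser Forum's two-year limit

with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

validity = cert.not_valid_after - cert.not_valid_before
if validity > MAX_VALIDITY:
    print(f"Validity of {validity.days} days exceeds the two-year policy; rotate the certificate")
else:
    print(f"Validity of {validity.days} days is within policy")
```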

Q. In earlier slides there was a mention that information can only be decrypted via private key (not public key). So, was Bob's public key retrieved using the public key of signing authority?

A. In asymmetric cryptography the opposite key is needed to reverse the encryption process. So, if you encrypt using Bob's private key (normally referred to as a digital signature), then anyone can use his public key to decrypt. If you use Bob's public key to encrypt, then his private key should be used to decrypt. Bob's public key would be contained in the public key certificate that is digitally signed by the CA and can be extracted from the certificate to verify Bob's signature.
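To make the private-key/public-key relationship concrete, here is a minimal sketch using the third-party Python "cryptography" package in which "Bob" signs a message with his private key and anyone holding his public key verifies it. The key size, padding choices and message are illustrative assumptions.

```python
# A minimal sketch of the asymmetric relationship described above: Bob signs
# with his private key, and anyone with his public key can verify. Key size,
# padding, and the message are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"A message from Bob"

# Bob's key pair (in practice his public key comes from his CA-signed certificate)
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

# Bob signs with his private key...
signature = bob_private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ...and anyone with his public key can verify (raises InvalidSignature on failure)
bob_public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified with Bob's public key")
```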

Q. Do you see TCG Opal 2.0 or TCG for Enterprise as requirements for drive encryption? What about the FIPS 140-2 L2 with cryptography validated by 3rd party NIST? As NIST was the key player in selecting AES, their stamp of approval for a FIPS drive seems to be the best way to prove that the cryptographic methods of a specific drive are properly implemented.

A. Yes, the TCG Opal 2.0 and TCG for Enterprise standards are generally recognized in the industry for self-encrypting drives (SEDs)/drive-level encryption. FIPS 140 cryptographic module validation is a requirement for sale into the U.S. Federal market and is recognized in other verticals as well. Validation of the algorithm implementation (e.g., AES) is handled by the Cryptographic Algorithm Validation Program (CAVP), the companion to the FIPS 140 Cryptographic Module Validation Program (CMVP).

Q. Can you explain Constructive Key Management (CKM) that allows different keys given to different parties in order to allow levels of credentialed access to components of a single encrypted object?

A. Based on the available descriptions of CKM, this approach uses a combination of key derivation and key splitting techniques. Both of these concepts will be covered in the upcoming Key Management 101 webinar. An overview of CKM can be found in this Computerworld article (box at the top right).
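For readers unfamiliar with those two building blocks, here is a minimal standard-library sketch of key derivation and key splitting in general terms. It is an illustration of the concepts, not an implementation of CKM itself, and the labels and key sizes are assumptions.

```python
# A minimal sketch of the two ideas mentioned above: key derivation and key
# splitting. This illustrates the general techniques, not CKM itself.
import hashlib
import hmac
import secrets

master_key = secrets.token_bytes(32)

# Key derivation: derive a purpose-specific key from the master key.
# (A production system would use a standard KDF such as HKDF.)
derived_key = hmac.new(master_key, b"component-A-encryption", hashlib.sha256).digest()

# Key splitting: split the derived key into two shares; both are required to
# reconstruct it, so each party alone learns nothing about the key.
share_1 = secrets.token_bytes(len(derived_key))
share_2 = bytes(a ^ b for a, b in zip(derived_key, share_1))

reconstructed = bytes(a ^ b for a, b in zip(share_1, share_2))
assert reconstructed == derived_key
print("Key reconstructed only when both shares are combined")
```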

Q. Could you comment on Zero Knowledge Proofs and Digital Verifiable Credentials based on Decentralized IDs (DIDs)?

A. A Zero Knowledge Proof is a cryptographic method for proving you know something without revealing what it is. This is a field of cryptography that has emerged in the past few decades and has only more recently transitioned from theoretical research to practical implementation with cryptocurrencies/blockchain and multi-party computation (privacy preservation).

Decentralized IDs (DIDs) are an authentication approach that leverages blockchain/decentralized ledger technology. Blockchain/decentralized ledgers employ cryptographic techniques and are an example of applied cryptography, using several of the underlying cryptographic algorithms described in this 101 webinar.
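As an illustration of how a zero-knowledge proof can work, here is a toy Schnorr-style sketch in which the prover demonstrates knowledge of a secret exponent without revealing it. The tiny group parameters are deliberately insecure, illustrative assumptions.

```python
# A toy, non-production sketch of a zero-knowledge proof of knowledge
# (Schnorr-style identification): the prover shows she knows x with
# y = g^x mod p without revealing x. Parameters are deliberately tiny.
import secrets

p, q, g = 23, 11, 2                 # small group: g has order q modulo p

x = secrets.randbelow(q - 1) + 1    # prover's secret
y = pow(g, x, p)                    # public value

# Commitment: prover picks a random nonce and sends t
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: verifier sends a random c
c = secrets.randbelow(q)

# Response: prover answers with s, which reveals nothing about x on its own
s = (r + c * x) % q

# Verification: the check passes only if the prover really knows x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted without revealing x")
```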

Q. Is Ed saying every block should be encrypted with a different key?

A. No. We believe the confusion was over the key transformation portion of Ed's diagram. In the AES algorithm, a key transformation occurs that uses the initial key as input and provides each AES round with its own key. This key expansion is part of the AES algorithm itself and is known as the key schedule.

Q. Where can I learn more about storage security?

A. Remember, this Encryption 101 webcast was part of the SNIA Networking Storage Forum's Storage Networking Security Webcast Series. You can keep up with additional installments here and by following us on Twitter @SNIANSF.


Everyone Wants Their Java to Persist

Jim Fister

May 20, 2020

In this time of lockdown, I'm sure we're all getting a little off kilter. I mean, it's one thing to get caught up listening to tunes in your office to avoid going out and alerting your family of the fact that you haven't changed your shirt in two days. It's another thing to not know where a clean coffee cup is in the house so you can fill it and face the day starting sometime between 5 AM and noon. Okay, maybe we're just talking about me, sorry. But you get the point.

Wouldn't it be great if we had some caffeinated source that was good forever? I mean... persistence of Java? At this point, it's not just me.

Okay, that's not what this webinar will be talking about, but it's close. SNIA member Intel is offering an overview of the ways to utilize persistent memory in the Java environment. In my nearly two years here at SNIA, this has been one of the most-requested topics. Steve Dohrmann and Soji Denloye are two of the brightest minds in enabling persistence, and this is sure to be an insightful presentation.

Persistent memory application capabilities are growing significantly. Since the publication of the SNIA NVM Programming Model developed by the SNIA Persistent Memory Programming Technical Work Group, new language support seems to be happening every day. Don't miss the opportunity to see the growth of PM programming in such a crucial space as Java.

The presentation is on BrightTALK and will be live on May 27th at 10 am Pacific time. You can see the details at this link.

Now I just have to find a clean cup.

This post is also cross-posted at the PIRL Blog. PIRL is a joint effort by SNIA and UCSD's Non-Volatile Systems Lab to advance the conversation on persistent memory programming. Check out other entries here.


The Power of Data Aggregation during a Pandemic

Glyn Bowden

May 19, 2020


The new coronavirus that has been ravaging countries and sending us all into lockdown is the most observed pandemic we've ever experienced. Data about the virus itself and, perhaps more appropriately, about the nations upon which it is having an impact has been shared from multiple sources. These include academic institutions such as Johns Hopkins University, national governments and international organisations such as the World Health Organisation. The data has been made available in many formats, from programmatically accessible APIs to downloadable comma-delimited files to prepared data visualisations. We've never been more informed about the current status of anything.

Data Aggregation

What this newfound wealth of data has also brought to light is the true power of data aggregation. There is really only a limited number of conclusions that can be drawn from the number of active and resolved cases per nation and region. Over time, this can show us a trend, and it also gives a very real snapshot of where we stand today. However, if we layer on additional data, such as when actions were taken, we can see clear pictures of the impact of each strategy over time. With each nation taking differing approaches based on their own perceived position, mixed with culture and other socio-economic factors, we end up with a good side-by-side comparison of the strategies and their effectiveness. This is helping organisations and governments make decisions going forward, but data scientists globally are urging caution. In fact, the data we are producing today by processing all of these feeds may turn out to be far more valuable for the next pandemic than it will for this one. It will be the analysis that helps create the "new normal."

So why the suggestions of caution? Surely the more data we have the better? The answer is an age-old problem relating to, well, age. Data fluctuates, particularly medical data, and the data set we have so far is both very recent and small relative to the global population. The challenge at the moment is that the data is often being presented to the public in a raw or semi-raw form with little regard for how it might be interpreted.

The Need for Context

Therefore, when we see a reduction in infections within a country, it could be interpreted as an improvement in the situation rather than the simple variation that is to be expected. Data has the most value when it is presented in context. On May 15, 2020, at the time of this writing, the total number of recorded deaths from the novel coronavirus stood at more than 300,000. This is a large number and is bound to increase, exponentially for a time, but it needs to be understood in context. It can be large or small depending on the time frame, the geographic scale, and the demographic composition of the population affected.

For example, one statistic that has not been clearly shown in the press covering the numbers is the number expressed as a percentage of the population. This is largely because it does not make such compelling reading, as in the early stages these numbers are thankfully very small. However, the context provided there is very important. If 5,000 people are sick in an area with a population density of 100,000 per square km, that is very different from 5,000 sick in an area with a density of 1,000 per square km. It describes the situation in the context of the population that is feeling the impact.
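As a concrete illustration of that normalisation point, here is a minimal sketch that expresses a raw count as a share of a population. The 300,000 figure comes from the paragraph above, while the world-population and city-scale values are approximate, illustrative assumptions.

```python
# A minimal sketch of expressing a raw count as a share of a population.
# The 300,000 deaths figure is taken from the text above; the world-population
# and city-scale values are approximate, illustrative assumptions.
def share_of_population(count: int, population: int) -> float:
    """Return `count` as a percentage of `population`."""
    return count / population * 100

recorded_deaths = 300_000
world_population = 7_800_000_000   # approximate 2020 figure, for illustration only
city_population = 1_000_000        # a hypothetical city, for comparison

print(f"{share_of_population(recorded_deaths, world_population):.4f}% of the world's population")
print(f"{share_of_population(recorded_deaths, city_population):.1f}% of a city of one million")
```

The same raw number reads very differently at the two scales, which is exactly why context matters.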

This is information presented not in its raw format but normalised, cleaned and presented alongside other influencing data. This is what data scientists have been doing and sharing with the community in order to derive valuable insights from the mass of data we have. The other work going on is around the augmentation of that data to provide new contexts and new insights, which are proving ever more valuable in how societies react to and even predict the impact of Covid-19.

One such effort is the inclusion of age and prosperity data. This leads to an understanding of how activities such as shuttering businesses, which disproportionately impact lower-paid workers, will cause poverty or other socio-economic challenges; how the distribution network will need to adapt to serve the new priorities of food and health items in the absence of luxury items; and how transport networks that provide the vital links for our health professionals and key workers can be maintained in such a way as to provide frequent links whilst still allowing for social distancing, without restricting transport so much that overcrowding is the unintended consequence.

Data Lessons Learned

What all of this work around data has taught eager data scientists and engineers is that there are really two personas when it comes to data science: producers and consumers. There are also multiple levels of consumers: the intended audience, often professionals who bring their own implied context, and passive observers who are exposed to the data through news reporting, internet searches and general discovery.

The data scientists and engineers are the producers; they have a specific view of that data and a firm understanding of how to interpret the data they are working with and which bits can be safely disregarded. A scatter plot, a hexbin map or another such visualisation can be intuitively processed and provide an immediate understanding to the viewer. The same can be said for the intended consumers. However, the general public do not have the experience and training required to make such judgements, so the way data is presented to the consumer must inform whilst taking care of all the assumptions and context. The skills the population learns in parsing and interpreting the data, along with the fine-tuning of data professionals' skills in presenting information in consumable packages, will coalesce to bring a data literacy never before seen. This same skill set can then be leveraged for presenting social, political, financial and many other verticals of data to a data-savvy populace.

The Tip of the Iceberg

Guidance and principles on how to get started with data assessment and make sense of the numbers are needed. They should be aimed at citizen data enthusiasts, journalists who might need help interpreting data presented at speed, and anyone consuming the data who would like to understand its context and provenance.

The work getting the focus in the news around Covid-19 data is the tip of a very large iceberg, and likely not the work that will have the most impact over the longer term. The ability to educate millions on the meaning of data when it is presented in context will drive new social conversations far into the future. It will allow us to understand how our societies and economies really work, and to fully understand what our priorities should be, so that when the next pandemic hits the world, we are ready and informed. The learning we are doing now will be the best defence for the next event, whilst helping us make immediate decisions to inform our reaction to this one.


Scaling Storage in Hybrid Cloud and Multicloud Environments

Pekon Gupta

May 13, 2020


As data growth in enterprises continues to skyrocket, striking a balance between cost and scalability becomes a challenge. Businesses face a key decision on how to deploy their cloud services: on premises, in a hybrid cloud or in multicloud deployments. So, what are enterprise IT organizations supposed to do, given that 'run anything anywhere' is becoming more important than ever?

Find out on June 11, 2020, when the SNIA Cloud Storage Technologies Initiative will host a live webcast, "Storage Scalability in Hybrid Cloud and Multicloud Environments." This webcast will help architects and consumers of hybrid cloud and multicloud storage solutions better understand:

  • Trends and benefits of hybrid cloud storage and multicloud deployments
  • The range of technologies and capabilities which will help enterprises, hyperscalers and cloud service providers (CSPs) serve their customers
  • How scalability differs in block vs. file workloads
  • Key factors to keep in mind when considering a 'run anything anywhere' objective

We hope to see you on June 11th. Our expert presenters will all be on hand to answer your questions. Register today.


Business Resiliency in a Kubernetes World

Jim Fister

May 12, 2020


At the 2018 KubeCon keynote, Monzo Bank explained the potential risk of running a single massive Kubernetes cluster. A minor conflict between etcd and Java led to an outage during one of their busiest business days, prompting questions like, "If a cluster goes down, can our business keep functioning?" Understanding the business continuity implications of multiple Kubernetes clusters is an important topic and a key area of debate.

It's an opportunity for the SNIA Cloud Storage Technologies Initiative (CSTI) to host "A Multi-tenant Multi-cluster Kubernetes 'Datapocalypse' is Coming," a live webcast on June 23, 2020, where Kubernetes expert Paul Burt will dive into:

  • The history of multi-cluster Kubernetes
  • How multi-cluster setups could affect data heavy workloads (such as multiple microservices backed by independent data stores)
  • Managing multiple clusters
  • Keeping the business functioning if a cluster goes down
  • How to prepare for the coming “datapocalypse”

Multi-cluster Kubernetes that provides robustness and resiliency is rapidly moving from a "best practice" to a "must have." Register today to save your spot on June 23rd to learn more and have your questions answered.

Need a refresher on Kubernetes? Check out the CSTI’s 3-part Kubernetes in the Cloud series to get up-to-speed.
