
Standards Watch: Blockchain Storage

Olga Buchonina

Jul 24, 2020


Is there a standard for blockchain today? Not really. What we are attempting to do is build one.

Since 2008, when Satoshi Nakamoto’s White Paper was published and Bitcoin emerged, we have been learning about a new solution using a decentralized ledger and one of its applications: Blockchain.

Wikipedia defines Blockchain as follows:  “A blockchain, originally block chain, is a growing list of records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree).”
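The linked-hash structure in that definition is easy to see in code. Here is a minimal sketch (field names and layout are our own illustration, not any production blockchain format):

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(transactions):
    """Pairwise-hash transaction hashes until a single root remains."""
    level = [sha256(tx.encode()) for tx in transactions] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash, transactions):
    block = {
        "prev_hash": prev_hash,                   # cryptographic link to the prior block
        "timestamp": time.time(),
        "merkle_root": merkle_root(transactions), # transaction data summarized as a Merkle tree
    }
    block["hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    return block

genesis = make_block("0" * 64, ["alice->bob:5"])
block1 = make_block(genesis["hash"], ["bob->carol:2", "carol->dave:1"])
```

Because each block's hash covers the previous block's hash, altering an earlier block invalidates every block that follows it; that is the immutability property the definition relies on.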

Several significant drawbacks hold back today’s blockchain solutions:

Reliability: Most of today’s solutions are not scalable and cannot be adopted by industry in their current form. The teams developing them need financial backing and support, and when that backing stops, the chain disappears even if it was technically excellent. Some solutions, once you analyze the underlying technical approach, are simply not viable for the industry at all. But their intentions are good, and trial and error is part of the process, so we are still learning what a sound approach to blockchain solutions looks like.

Interoperability: This is one of the major obstacles. The vast majority of chains have no interface for working across different chains, which effectively makes each one an internal company product. You cannot use these in real-world applications that need to exchange data and information across chains. There are some nascent solutions that try to address this problem, and our group will analyze and work with these companies and teams to see if we can create an exchange of best practices.

Data Accuracy: Whether data on a blockchain can be trusted ultimately rests on the immutability of the blockchain as a storage medium. A storage medium that makes careless or malicious actions visible and provably verifies the authenticity of data would allow false data to be removed and attempts to corrupt the chain to be flagged.

Latency: This is another aspect hindering adoption today. Regular transactions using conventional databases and code are currently faster than transactions on a blockchain.

The new SNIA Blockchain Storage Technical Work Group (TWG) will focus on understanding the existing technical solutions and best practices. Why redo something that has already been done? Instead, it’s better to add this to our knowledge of storage and networking and build a new open source specification for blockchain protocols that works across blockchain networks. The TWG will be utilizing aspects of Proof of Capacity and Proof of Space protocols to address storage bottlenecks.

The Blockchain Storage TWG is currently supported by companies such as Dell EMC, IBM, Marvell, Western Digital, ActionSpot and Bankex. We come from different backgrounds, but our common goal is to build something great. For more information about this new technical work group, please contact membershipservices@snia.org.


The Role of AIOps in IT Modernization

Alex McDonald

Jul 23, 2020

The almost overnight shift of resources toward remote work has introduced the need for far more flexible, dynamic and seamless end-to-end applications, putting us on a path that requires autonomous capabilities using AIOps – Artificial Intelligence for IT Operations. It’s the topic that the SNIA Cloud Storage Technologies Initiative is going to cover on August 25, 2020 at our live webcast, “IT Modernization with AIOps: The Journey.”

Our AI expert, Parviz Peiravi, will provide an overview of concepts and strategies to accelerate the digitalization of critical enterprise IT resources, and help architects rethink what applications and underlying infrastructure are needed to support an agile, seamless data centric environment. This session will specifically address migration from monolithic to microservices, transition to Cloud Native services, and the platform requirements to help accelerate AIOps application delivery within our dynamic hybrid and multi-cloud world.

Join this webcast to learn:

  • Use cases and design patterns: Data Fabrics, Cloud Native and the move from Request Driven to Event Driven
  • Foundational technologies supporting observability: how to build a more consistent scalable framework for governance and orchestration
  • The nature of an AI data centric enterprise: data sourcing, ingestion, processing, and distribution

This webcast will be live, so please bring your questions. We hope to see you on August 25th. Register today.



Are We at the End of the 2.5-inch Disk Era?

Jonmichael Hands

Jul 20, 2020


The SNIA Solid State Storage Special Interest Group (SIG) recently updated the Solid State Drive Form Factor page to provide detailed information on dimensions; mechanical, electrical, and connector specifications; and protocols. On our August 4, 2020 SNIA webcast, we will take a detailed look at one of these form factors - Enterprise and Data Center SSD Form Factor (EDSFF) – challenging an expert panel to consider if we are at the end of the 2.5-in disk era.

Enterprise and Data Center SSD Form Factor (EDSFF) is designed natively for data center NVMe SSDs to improve thermal, power, performance, and capacity scaling. EDSFF has different variants for flexible and scalable performance, dense storage configurations, general purpose servers, and improved data center TCO. At the 2020 Open Compute Virtual Summit, OEMs, cloud service providers, hyperscale data centers, and SSD vendors showcased products and their vision for how this new family of SSD form factors solves real data challenges.

During the webcast, our SNIA experts from companies that have been involved in EDSFF since the beginning will discuss how they will use the EDSFF form factor:

  • Hyperscale data center and cloud service provider panelists Facebook and Microsoft will discuss how E1.S (SNIA specification SFF-TA-1006) helps solve performance scalability, serviceability, capacity, and thermal challenges for future NVMe SSDs and persistent memory in 1U servers.
  • Server and storage system panelists Dell, HPE, Kioxia, and Lenovo will discuss their goals for the E3 family and the new updated version of the E3 specification (SNIA specification SFF-TA-1008).

We hope you can join us as we spend some time on this important topic.  Register here to save your spot.



Where Does Cyber Insurance Fit in Your Security Strategy?

Paul Talbut

Jul 17, 2020

Protection against cyber threats is recognized as a necessary component of an effective risk management approach, typically based on a well-known cybersecurity framework. A growing area to further mitigate risks and provide organizations with the high level of protection they need is cyber insurance. However, it’s not as simple as buying a pre-packaged policy. In fact, it’s critical to identify what risks and conditions are excluded from a cyber insurance policy before you buy. Determining what kind of cyber insurance your business needs, or whether the policy you have will really cover you in the event of an incident, is challenging.

On August 27, 2020 the SNIA Cloud Storage Technologies Initiative (CSTI) will host a live webcast, “Does Your Storage Need a Cyber Insurance Tune-Up?” where we’ll examine how cyber insurance fits in a risk management program. We’ll identify key terms and conditions that should be understood and carefully negotiated, as cyber insurance policies may not cover all types of losses. Join this webcast to learn:
  • General threat tactics, risk management approaches, cybersecurity frameworks
  • How cyber insurance fits within an enterprise data security strategy
  • Nuances of cyber insurance – exclusions, exemption, triggers, deductibles and payouts
  • Reputational damage considerations
  • Challenges associated with data stored in the cloud
There’s a lot to cover when it comes to this topic. In fact, we may need to offer a “Part Two” to this webcast, but hope you will register today to join us on August 27th.


Applied Cryptography Techniques and Use Cases

Alex McDonald

Jul 15, 2020

The rapid growth in infrastructure to support real-time and continuous collection and sharing of data to make better business decisions has led to an age of unprecedented information storage and easy access. While collection of large amounts of data has increased knowledge and allowed improved efficiencies for business, it has also made attacks upon that information — theft, modification, or holding it for ransom — more profitable for criminals and easier to accomplish. As a result, strong cryptography is often used to protect valuable data.

The SNIA Networking Storage Forum (NSF) has recently covered several specific security topics as part of our Storage Networking Security Webcast Series, including Encryption 101, Protecting Data at Rest, and Key Management 101. Now, on August 5, 2020, we are going to present Applied Cryptography. In this webcast, our SNIA experts will present an overview of cryptography techniques for the most popular and pressing use cases. We’ll discuss ways of securing data, the factors and trade-offs that must be considered, as well as some of the general risks that need to be mitigated. We’ll be looking at:
  • Encryption techniques for authenticating users
  • Encrypting data—either at rest or in motion
  • Using hashes to authenticate information coding and data transfer methodologies
  • Cryptography for Blockchain
As the process for storing and transmitting data securely has evolved, this Storage Networking Security Series provides ongoing education for placing these very important parts into the much larger whole. We hope you can join us as we spend some time on this very important piece of the data security landscape. Register here to save your spot.


Notable Questions on NVMe-oF 1.1

Tim Lustig

Jul 14, 2020


At our recent SNIA Networking Storage Forum (NSF) webcast, “Notable Updates in NVMe-oF™ 1.1,” we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what's new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all:

Q. Is there an implementation of NVMe-oF with direct CMB access?

A. The Controller Memory Buffer (CMB) was introduced in NVMe 1.2 and first supported in the NVMe-oF 1.0 specification. It's supported if the storage vendor has implemented this within the hardware and the network supports it. We recommend that you ask your favorite vendor if they support the feature.

Q. What is the difference between the PMR in an NVMe device and persistent memory in general?

A. The Persistent Memory Region (PMR) is a region within the SSD controller and it is reserved for system level persistent memory that is exposed to the host. Just like a Controller Memory Buffer (CMB), the PMR may be used to store command data, but because it's persistent it allows the content to remain even after power cycles and resets. To go further into this answer would require a follow up webinar.

Q. Are any special actions required on the host side over Controller Memory Buffers to maintain the data consistency?

A. To prevent possible disruption and to maintain data consistency, the control address range must first be configured so that addresses do not overlap, as described in the latest specification. There is also a flush command so that persistent memory can be cleared (also described in the specification).

Q. Is there a field to know the size of CMB and PMR supported by a controller? What is the general size of the CMB in current devices?

A. The general size of CMB/PMR is vendor-specific, but both have a size register field defined in the specification.
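For reference, the NVMe base specification exposes the CMB size through the CMBSZ property, where a size field (SZ) is scaled by a granularity field (SZU). A decoding sketch, with the bit layout as we read it from the specification (verify against the revision you target):

```python
def cmb_size_bytes(cmbsz: int) -> int:
    """Decode the CMBSZ register: SZ (bits 31:12) scaled by SZU (bits 11:8)."""
    szu = (cmbsz >> 8) & 0xF      # size units: 0 = 4 KiB, 1 = 64 KiB, ... 6 = 64 GiB
    sz = (cmbsz >> 12) & 0xFFFFF  # size, expressed in SZU units
    return sz * (4096 << (4 * szu))

# Example: SZ = 16 in 64 KiB units (SZU = 1) -> 1 MiB
assert cmb_size_bytes((16 << 12) | (1 << 8)) == 1 << 20
```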

Q. Does having PMR guarantee that write requests to the PMR region have been committed to media, even though they have not been acknowledged before the power fail? Is there a max time limit in spec, within which NVMe drive should recover after power fail?

A. The implementation must ensure that the previous write has completed and that it is persistent. Time limit is vendor-specific.

Q. What is the average latency of an unladen swallow using NVMe-oF 1.1?

A. Average latency will depend on the media, the network and the way the devices are implemented. It also depends on whether or not the swallow is African or European (African swallows are non-migratory).

Q. Doesn't RDMA provide an 'implicit' queue on the controller side (negating the need for CMB for queues)? Can the CMB also be used for data?

A. Yes, the CMB can be used to hold both commands and command data and the queues are managed by RDMA within host memory or within the adapter. By having the queue in the CMB you can gain performance advantages.

Q. What is a ballpark latency difference between CMB and PMR access? Can you provide a number based on the assumption that both are accessed over an RDMA fabric?

A. When using CMB, latency goes down but there are no specific latency numbers available as of this writing.

Q. What is the performance of NVMe/TCP in terms of IOPS as compared to NVMe/RDMA? (Good implementation assumed)

A. This is heavily implementation dependent as the network adapter may provide offloads for TCP. NVMe/RDMA generally will have lower latency.

Q. If there are several sequence-level errors, how can we correct the errors in an appropriate order?

Q. How could we control the right order for the error corrections in FC-NVMe-2?

A. These two questions are related, and the response below applies to both.

As mentioned in the presentation, Sequence-level error recovery provides the ability to detect and recover from lost commands, lost data, and lost status responses. For Fibre Channel, a Sequence consists of one or more frames: e.g., a Sequence containing an NVMe command, a Sequence containing data, or a Sequence containing a status response.

The order for error correction is based on information returned from the target on the given state of the Exchange compared to the state of the Exchange at the initiator. At a high level, upon sending an Exchange containing an NVMe command, a timer is started at the initiator. The default value for this timer is 2 seconds, and if a response is not received for the Exchange before the timer expires, a message is sent from the initiator to the target to determine the status of the Exchange.

Also, a response from the target may be received before the information on the Exchange is obtained from the target. If this occurs, the command simply continues as normal, the timer is restarted if the Exchange is still in progress, and all is well. Otherwise, if no response from the target has been received since sending the Exchange information message, then one of two actions usually takes place:

a) If the information returned from the target indicates the Exchange is not known, then the Exchange resources are cleaned up and released, and the Exchange containing the NVMe command is re-transmitted; or

b) If the information returned from the target indicates the Exchange is known and the target is still working on the command, then no error recovery is needed; the timer is restarted, and the initiator continues to wait for a response from the target.

An example of this behavior is a format command, where it may take a while for the command to complete, and the status response to be sent.

For some other typical information returned from the target per the Exchange status query:

  1. If the information returned from the target indicates the Exchange is known, and a ready to receive data message was sent by the target (e.g., a write operation), then the initiator requests the target to re-transmit the ready-to-receive data message, and the write operation continues at the transport level;
  2. If the information returned from the target indicates the Exchange is known, and data was sent by the target (e.g., a read operation), then the initiator requests the target to re-transmit the data and the status response, and the read operation continues at the transport level; and
  3. If the information returned from the target indicates the Exchange is known, and the status response was sent by the target, then the initiator requests the target to re-transmit the status response and the command completes accordingly at the transport level.

For further information, detailed informative Sequence level error recovery diagrams are provided in Annex E of the FC-NVMe-2 standard available via INCITS. 



Kubernetes Business Resiliency FAQ

Jim Fister

Jul 13, 2020


The SNIA Cloud Storage Technologies Initiative continued our webcast series on Kubernetes last month with an interesting look at the business resiliency of Kubernetes. If you missed “A Multi-tenant, Multi-cluster Kubernetes Datapocalypse is Coming” it’s available along with the slide deck in the SNIA Educational Library here. In this Q&A blog, our Kubernetes expert, Paul Burt, answers some frequently asked questions on this topic.

Q: Multi-cloud: Departments might have their own containers; would they have their own cloud (i.e. Hybrid Cloud)?  Is that how multi-cloud might start in a company?

A: Multi-cloud or hybrid cloud is absolutely a result of different departments scaling containers in a deployment. Multi-cloud means multiple clusters, but those can be of various configurations. Different clusters and clouds need to be tuned for the needs of the organization.

Netflix is perhaps one of the most popular adopters of the cloud. Following their success, most organizations prefer a gradual shift towards cloud. That practice naturally results in different environments during the growth and exploration phase.

Q: Service Mesh and Kube Federation: From the perspective of tools, how is the development of the various federation tools progressing?

A: There are some tools (Istio and Linkerd) that are quite far along. The official community solution, KubeFed, is just on the cusp of moving into Beta. We’re starting to see some standards or standard practices developing in this space. Companies like SnapChat and Uber are already sharing some of their benefits and challenges with multi-tenancy and multi-cluster at scale. More concrete recommendations should emerge as more organizations gain experience and share their findings.

Q: Defining cluster characteristics: Within a given container, does the service needed define or predict the statefulness of the application?

A: In Kubernetes a service defines how an app might be discovered by other apps and services that need to rely on it. Generally, services tend to push the state down. That means where the data is stored will be defined in the YAML manifest that gets pushed to Kubernetes, and stateful apps are pretty well-defined in that environment.

Stateful apps will naturally be harder to manage, because of the need for uptime. The good news is they are at least easy to identify when running on Kubernetes.
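As a rough illustration (a generic example of ours, not from the webcast): an app's statefulness surfaces in its manifest, since a StatefulSet declares the persistent volumes it needs while a stateless Deployment does not. Shown here as the JSON equivalent of the YAML:

```python
# JSON-equivalent of a minimal, hypothetical StatefulSet manifest; the
# volumeClaimTemplates entry is what marks the workload as stateful.
stateful_app = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",
        "replicas": 1,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{"name": "db", "image": "postgres:12"}]},
        },
        "volumeClaimTemplates": [{          # the persistent storage request
            "metadata": {"name": "data"},
            "spec": {"accessModes": ["ReadWriteOnce"],
                     "resources": {"requests": {"storage": "1Gi"}}},
        }],
    },
}

def is_stateful(manifest: dict) -> bool:
    """Quick heuristic: stateful workloads request volumes in the manifest."""
    return manifest.get("kind") == "StatefulSet" or \
        bool(manifest.get("spec", {}).get("volumeClaimTemplates"))

assert is_stateful(stateful_app)
```

This is why, as noted above, stateful apps are at least easy to identify on Kubernetes: the storage requirement is declared right in the pushed manifest.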

Q: On how knowledge grows: It’s possible to start simply with containers and move to more complex environments. Does the knowledge gained from containers transfer to a multi-cluster, multi-tenant space?

A: Yes, it’s possible to start and grow. A lot of the knowledge will transfer. Think of it like math, with each course building on the last. We might learn trigonometry, and find that many of those trig problems simplify into algebra problems. Similarly, we might learn about Pods in Kubernetes, and find that they simplify down to many of the things we learned about containers.

In the end, your problems will likely resolve down to something familiar. Start simply, and then grow your complexity and mindset.

Q: Version control: Do we need coordinated version pinning when deploying homogenous clusters?  Does that allow us to write once and run everywhere?

A: Kubernetes is capable of that, but there are certainly some caveats. By having a homogenous infrastructure and tightly maintaining versions, you are more likely to be successful and have less versioning and bug issues that you have to track over time.

On the other hand, a machine learning team that creates a recommendation system has vastly different needs than a team building an e-commerce website.

Our goal should be to start with a well-defined base platform. With a well-defined base, it’s easier to test the compatibility of those common components. When specific teams have specific needs, we’ll inevitably need to adapt our platform. That base should mean it’s easier to add new components with confidence. Troubleshooting, and maintaining the resulting distinct version of the platform, should also be easier because the base platform ensures a lot of familiarity and common knowledge transfers.

Q: Interface and management of security: There needs to be a balance between customer demand for cluster storage and security. Is there an appropriate interface that can manage security problems and give users enough space to easily consume storage on clusters? What’s the right design in this case?

A: Yes. Many of the current tools have really strong interfaces for management of security.  Rook and Astra are two good examples. Most of these solutions are open-source, but that does not necessarily mean they approach problems like security in the same way.

For any cluster storage solution, we’re probably looking for a few common features. Encryption of data at rest, RBAC / permissions, snapshots, and backups. For components that are defined by Kubernetes (like RBAC), we’re more likely to see a consistent way of doing things amongst tools. For other items like encryption of data or backing up, it’s more likely we’ll see each solution tackle the problem in a slightly different way.

The implication is that the cluster storage solution you start with will likely be with you for a while, so make sure you’re picking the right tool for the overall corporate needs.

Q: So that means it’s really important to do the research early? Is it possible to move between tools?

A: It is possible to move between tools, but it’s likely that your first choices will be the ones you carry forward. Beginning to research this early can help reduce the panic and stress that might come later on, when your organization discovers a hard need and resulting deadline for cluster storage.

Remember, we said this webcast was part of a series? Watch all of the CSTI’s Kubernetes in the Cloud presentations to learn more.

