
Understanding Kubernetes in the Cloud

Mike Jochimsen

Mar 25, 2019

Ever wonder why and where you would want to use Kubernetes? You’re not alone; that’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast on May 2, 2019, “Kubernetes in the Cloud.” Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization. In this webcast, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on storage aspects and best practices, discussing:
  • What is Kubernetes? Why would you want to use it?
  • How does Kubernetes help in a multi-cloud/private cloud environment?
  • How does Kubernetes orchestrate and manage storage?
  • Can Kubernetes use Docker?
  • How do we provide persistence and data protection?
  • Example use cases
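At the heart of the "simplified management at scale" promise is Kubernetes' control-loop model: you declare a desired state and controllers continuously reconcile the observed state toward it. The toy loop below mimics that pattern in plain Python; it is purely illustrative and is not the Kubernetes API.

```python
def reconcile(desired_replicas, running):
    """One pass of a toy control loop: converge the running pod list
    toward the declared replica count, recording each action taken."""
    actions = []
    while len(running) < desired_replicas:        # scale up
        running.append(f"pod-{len(running)}")
        actions.append(("start", running[-1]))
    while len(running) > desired_replicas:        # scale down
        actions.append(("stop", running.pop()))
    return actions

pods = []
print(reconcile(3, pods))  # -> [('start', 'pod-0'), ('start', 'pod-1'), ('start', 'pod-2')]
print(reconcile(1, pods))  # -> [('stop', 'pod-2'), ('stop', 'pod-1')]
```

In real Kubernetes the same declare-then-reconcile idea is what lets a Deployment heal itself when a node disappears: the observed state drifts, and the controller converges it back.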
We’re fortunate to have great experts for this session: Matt Baldwin, the founder and former CEO of Stackpoint Cloud, now with NetApp, and Ingo Fuchs, Chief Technologist, Cloud and DevOps at NetApp. I hope you will register today to join us on May 2nd. It’s live, which means our expert presenters will be on hand to answer your questions on the spot.

Olivia Rhye

Product Manager, SNIA


Innovating File System Architectures with NVMe

Marty Foltyn

Mar 20, 2019


It’s exciting to see the recent formation of the Solid State Drive Special Interest Group (SIG) here in the SNIA Solid State Storage Initiative. After all, everyone appreciates the ability to totally geek out about the latest drive technology and software for file systems. Right? Hey, where’s everyone going? We have vacation pictures with the dog stored that we want to show…

Solid state storage has long found its place with those seeking greater performance in systems, especially where smaller or more random block/file transfers are prevalent. The single-system opportunity with NVMe drives is broad, and pretty much unquestioned by those building systems for modern IT environments. Cloud, likewise, has found use of the technology where single-node performance makes a broader deployment relevant.

There have been many efforts to build the case for solid state in networked storage. Where storage and computation combine, for instance in a large map/reduce application, there’s been significant advantage, especially in the area of sustained data reads. This usually comes at a scalar cost, where additional systems are needed for capacity. Nonetheless, there are plenty of cases where non-volatile memory enhances infrastructure deployment for storage or analytics. Yes, analytics is infrastructure these days; deal with it.

Seemingly independent of the hardware trends, the development of new file systems has provided significant innovation.  Notably, heavily parallel file systems have the ability to serve a variety of network users in specialized applications or appliances.  Much of the work has focused on development of the software or base technology rather than delivering a broader view of either performance or applicability.  Therefore, a paper such as this one on building a Lustre file system using NVMe drives is a welcome addition to the case for both solid state storage and revolutionary file systems that move from specific applications to more general availability.

The paper shows how to build a small (half-rack) storage cluster to support the Lustre file system, and it also adds the Dell EMC VxFlex OS implemented as a software-defined storage solution. This has the potential to take an HPC-focused product like Lustre and drive broader market availability for a high-performance solution. The combination of read/write performance, easy adoption by the broad enterprise, and relatively small footprint shows new promise for innovation.

The opportunity for widespread delivery of solid state storage using NVMe and software innovation in the storage space is ready to move the datacenter to new and more ambitious levels.  The SNIA 2019 Storage Developer Conference  is currently open for submissions from storage professionals willing to share knowledge and experience.  Innovative solutions such as this one are always welcome for consideration.



Has Hybrid Cloud Reached a Tipping Point?

Michelle Tidwell

Mar 13, 2019


According to research from the Enterprise Strategy Group (ESG), IT organizations today are struggling to strike the right balance between public cloud and their on-premises infrastructure. Has hybrid cloud reached a tipping point? Find out on April 23, 2019 at our live webcast “The Hybrid Cloud Tipping Point” when the SNIA CSTI welcomes ESG senior analyst, Scott Sinclair, who will share research on current cloud trends, covering:

  • Key drivers behind IT complexity
  • IT spending priorities
  • Multi-cloud & hybrid cloud adoption drivers
  • When businesses are moving workloads from the cloud back on-premises
  • Top security and cost challenges
  • Future cloud projections

The research presentation will be followed by a panel discussion with Scott Sinclair and my SNIA cloud colleagues, Alex McDonald, Mike Jochimsen and Eric Lakin. We will be on-hand on the 23rd to answer questions.

Register today. We hope to see you there.



Scale-Out File Systems FAQ

John Kim

Mar 8, 2019

On February 28th, the SNIA Networking Storage Forum (NSF) took a look at what's happening in Scale-Out File Systems. We discussed general principles, design considerations, challenges, benchmarks and more. If you missed the live webcast, it's now available on-demand. We did not have time to answer all the questions we received at the live event, so here are answers to them all.

Q. Can scale-out file systems do erasure coding?

A. Indeed, erasure coding is a common method to improve resilience.

Q. How does one address the problem of a specific disk going down? Where does scale-out architecture provide redundancy?

A. Disk failures are typically covered by RAID software. Some scale-out software also uses replication to mitigate the impact of disk failures.

Q. Are there use cases where a hybrid of these two styles is needed?

A. Yes. For example, in some environments the foundation layer might use dedicated storage servers to form the large storage pool, which is the first style, and then export LUNs or virtual disks to the compute nodes (either physical or virtual) that run the applications, which is the second style.

Q. Which scale-out file systems are available on Windows and Linux platforms?

A. Some of the scale-out file systems provide native client software across multiple platforms. Another approach is to use Samba to build SMB gateways that make the scale-out file system available to Windows computers.

Q. Is Amazon Elastic File System (EFS) on AWS a scale-out file system?

A. Please see: https://docs.aws.amazon.com/efs/latest/ug/performance.html "Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data. The distributed design of Amazon EFS avoids the bottlenecks and constraints inherent to traditional file servers."

Q. Where are the most cost-effective price/performance uses of NVMe?

A. NVMe can support very high IOPS and very high throughput as well. The best use case would be to couple NVMe with high-performance storage software that would not limit the NVMe.
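The erasure-coding idea from the first answer can be illustrated with the simplest possible scheme, single-parity XOR (a toy example, not any particular file system's implementation): with one parity block computed over the data blocks, any single lost block can be rebuilt from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks plus one parity block (a toy 3+1 scheme).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose any single block, say data[1], and rebuild it by
# XOR-ing the parity with the surviving data blocks.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
```

Production systems use more general codes (e.g. Reed-Solomon) that tolerate multiple simultaneous failures, but the recovery principle, reconstructing lost data from redundancy spread across nodes, is the same.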



Composable Infrastructure Q&A

Alex McDonald

Mar 5, 2019


On February 13, 2019, the SNIA Cloud Storage Technologies Initiative (CSTI) presented a live webcast, Why Composable Infrastructure? Our goal was to clearly explain the reasoning behind, and the benefits of, composable infrastructure in an educational, vendor-neutral way. We believe our speakers, Philip Kufeldt and Mike Jochimsen, did just that. Now, as promised, Philip and Mike have answered the many interesting questions we received during the live event.

Q. Are composable infrastructure solutions incompatible with virtualized or containerized environments? Will these solutions only serve bare metal environments?

A. Composable infrastructure solutions will eventually work across any environment that supports the orchestration toolsets. There are no compatibility issues between virtualization/containerization and composable infrastructure, even if they fundamentally look at allocation of resources within a defined resource differently. For example, in a virtualized environment if the need for network bandwidth or storage capacity exceeds the capability of a given resource, a "larger" resource could be composed using the orchestration tools. It would then be managed within the virtualization layer like any other resource.

Q. Typically new technology adoption is slowed due to support within commercial operating systems. Are there changes needed in the major OS environments (Linux, Windows, VMware, etc.), that will need to be released before composable infrastructure solutions will be supported?

A. The PCIe- and Ethernet-based fabrics are already well established and have great OS support; the storage and networking worlds already deploy composable infrastructure. However, the newer standards such as Gen-Z, OpenCAPI, and CCIX will need both hardware and software support. ARM SoCs are showing up with CCIX hardware, and OpenCAPI is in the Power architecture. But this is just the early stage; switches, enclosures and components that support these standards are still in the offing. Furthermore, the OS support for these standards is also unavailable. And finally, the management mechanisms and composability software are also undefined. So we are still a good distance from the larger composable infrastructure being available.

Q. Are the data center orchestration tools currently on the market capable of building a composable infrastructure solution like you described?

A. The tools on the market are still in the early stages of providing this capability. Some are purpose built for specific environments while others encompass a wider set of environments, but lack some of the dynamic/automation capabilities. Also, there is work going on, or starting up, in standards bodies to define the APIs needed for orchestration to work in truly heterogeneous application, operating system and hardware environments.

Q. In composable environments, how does security scale with it, specifically, encryption? Encrypt everything? Or some subset of jobs that truly are only jobs requiring encryption?

A. Fabrics can be configured to be local and private, relieving the need for encrypted transfers. However, there will be new issues to contend with. For example, consider memory that was previously used in one configuration that was disassembled and then reused in another. Ensuring that memory is cleaned before reuse will be required to prevent information leakage.
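That hygiene rule can be sketched in a few lines: a toy allocator that zeroes every buffer before returning it to the free pool, so a later tenant can never read a previous tenant's data. The class and names below are illustrative only, not part of any composable-memory API.

```python
class ScrubbingPool:
    """Toy memory pool that scrubs (zeroes) buffers on release."""

    def __init__(self, block_size, count):
        self._free = [bytearray(block_size) for _ in range(count)]

    def allocate(self):
        return self._free.pop()

    def release(self, buf):
        buf[:] = bytes(len(buf))   # overwrite with zeroes before reuse
        self._free.append(buf)

pool = ScrubbingPool(block_size=8, count=2)
buf = pool.allocate()
buf[:4] = b"key1"       # tenant A writes something sensitive
pool.release(buf)       # scrubbed on the way back to the pool
buf2 = pool.allocate()  # tenant B receives a clean buffer
assert bytes(buf2) == bytes(8)
```

In a disaggregated fabric the scrub would happen in the memory appliance or its controller rather than in application code, but the invariant, no buffer leaves the pool carrying old data, is the same.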

Q. For Gen-Z, pooled memory, or memory from different racks, what about the latency issues? Local memories don’t have issues with latency?

A. Although Gen-Z supports longer distance interconnects, it does not mean that only long distance configurations will be utilized. Think of it as a set of tools in a toolbox. Some memories will be close for lower latencies and others will be farther to provide for 4th or 5th level caching.

Q. Is it more about declarative mapping of the components? At this point software and hardware are decoupled, so the messaging and logic are really the requirement for orchestration.

A. The orchestration layer provides a translation between a declarative and imperative state in composable infrastructure. It is responsible for gathering the requirements from the application (declarative - "this is what I want"), then identifying the capabilities of the components on the network and logically mapping them to create a virtual infrastructure (imperative - "this is how to do it").
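As a hypothetical illustration of that declarative-to-imperative translation (the inventory, resource names, and selection policy below are invented for this sketch, not a real orchestration API): the orchestrator takes a "what I want" spec and maps it onto concrete components that satisfy it.

```python
# Invented component inventory: (name, resource kind, capacity).
inventory = [
    ("nvme-shelf-1", "storage_tb", 20),
    ("nvme-shelf-2", "storage_tb", 50),
    ("nic-a", "network_gbps", 25),
    ("nic-b", "network_gbps", 100),
]

def compose(spec):
    """Translate a declarative spec into an imperative component list."""
    plan = []
    for resource, needed in spec.items():
        # Pick the smallest component that still satisfies the request.
        candidates = [c for c in inventory if c[1] == resource and c[2] >= needed]
        if not candidates:
            raise ValueError(f"cannot satisfy {resource} >= {needed}")
        plan.append(min(candidates, key=lambda c: c[2])[0])
    return plan

# Declarative: "this is what I want."
spec = {"storage_tb": 30, "network_gbps": 50}
# Imperative: "this is how to do it."
print(compose(spec))  # -> ['nvme-shelf-2', 'nic-b']
```

A real orchestration layer would also handle contention, teardown, and recomposition, but the gather-requirements / match-capabilities / emit-plan flow is the one described in the answer.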

Q. As apps start to be built from microservices, which may run across different physical nodes, I would think this further raises performance challenges on disaggregated infrastructure. How this will impact things would be an interesting next topic.

A. Agreed, although I believe microservices will actually be enhanced by composable infrastructure. Composable infrastructure in general will create smaller systems that more closely fit the needs of the service or classes of services that will run on them. Just as in a bin-packing problem, having smaller units tends to provide better utilization of the container.
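The bin-packing intuition in that answer can be made concrete with a first-fit sketch (a generic textbook heuristic, not tied to any product): the same total demand packs into fewer bins when the units are smaller, just as right-sized composed systems strand less capacity per physical resource.

```python
def first_fit(items, bin_size):
    """Pack items into bins using the first-fit heuristic."""
    bins = []
    for item in items:
        for b in bins:                     # reuse the first bin with room
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:                              # no bin fits: open a new one
            bins.append([item])
    return bins

# Same total demand (18 units), different granularity:
big = first_fit([6, 6, 6], bin_size=10)             # coarse units
small = first_fit([3, 3, 3, 3, 3, 3], bin_size=10)  # fine units
print(len(big), len(small))  # -> 3 2
```

The coarse items waste 4 units of every bin; the fine-grained items fill bins to 9 of 10, needing one fewer bin for identical demand.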

Got more questions? Feel free to comment on this blog and we’ll answer.



Got Questions on Container Storage? We’ve Got Answers!

Alex McDonald

Feb 27, 2019

Keeping up with changes in the world of container storage is not easy. That’s why the SNIA Cloud Storage Technologies Initiative invited expert Keith Hudgins of Docker for a live webcast, “What’s New in Container Storage.” I encourage you to watch it on-demand. It’s well worth the approximately half-hour investment to get up to speed on container storage. As promised during the live event, here are answers to the questions we received:

Q. How does the new Container Storage Interface fit in here?

A. Container Storage Interface (CSI) is one of the three persistent storage interfaces for Kubernetes. It’s also gaining a bit of traction for non-Kubernetes use: Pivotal and Mesos have announced their intention to use the API for volume support. You can learn more at the CSI main project page.

Q. Where does LXD/LXC fit into this discussion?

A. Not very well. LXC technology was used in earlier versions of Docker, prior to Docker Engine 1.10. There is some provision under LXC for both persistent volumes and overlays, but I’m honestly not that familiar with the pluggable APIs for that container tech. Here’s a link to some docs for the LXD persistent storage interface.

Q. How do hardware-RAID-created volumes play a role in Kubernetes? Do hardware RAID volumes need an out-of-tree plugin for Kubernetes persistent volumes?

A. Hardware RAID devices can provide volumes for containers running under Kubernetes. Like any installation, the method you use will depend on your requirements. You can use basic, in-tree drivers for most cases; Kubernetes has built-in support for NFS and iSCSI. Depending on needs, you can also build a custom driver using FlexVolume or CSI.

Q. Are there plans to add Docker support for persistent memory?

A. It’s a very new technology; we’re interested in the applications, but it’s new. We’re waiting to see how the market matures.

Q. Is FlexVolume persistent storage?

A. Yes, absolutely. FlexVolume is one of the three persistent storage APIs available for use with Kubernetes. For deeper info on how to build FlexVolume plugins, take a look at these links:

You can learn more about container storage in the other three container webcasts the CSTI has hosted:

If you have questions, please leave comments on this blog. Happy viewing!
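To make the FlexVolume answer a bit more concrete, here is a heavily simplified sketch of the driver call convention as I understand it: the kubelet execs the driver binary with a subcommand such as init, mount, or unmount, and expects a JSON status object on stdout. Everything below, including paths and option names, is illustrative; consult the Kubernetes FlexVolume docs for the real contract.

```python
import json

def flexvolume_driver(args):
    """Toy FlexVolume-style driver: dispatch a subcommand, return a status dict."""
    command = args[0]
    if command == "init":
        # Advertise capabilities; this toy driver has no attach/detach phase.
        return {"status": "Success", "capabilities": {"attach": False}}
    if command == "mount":
        mount_dir, options = args[1], json.loads(args[2])
        # A real driver would mount a filesystem at mount_dir using options.
        return {"status": "Success"}
    if command == "unmount":
        return {"status": "Success"}
    return {"status": "Not supported"}

# The kubelet would invoke the driver binary and read JSON from stdout:
print(json.dumps(flexvolume_driver(["init"])))
print(json.dumps(flexvolume_driver(["mount", "/mnt/vol", '{"size": "1Gi"}'])))
```

Note the contrast with CSI, which replaces this exec-a-binary model with a long-running gRPC service, one reason CSI has become the preferred out-of-tree interface.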
