Your Questions Answered – Applications Take Advantage of Persistent Memory Webcast

Marty Foltyn

Mar 27, 2019

We hope you had time to check out our recent webcast, Applications Take Advantage of Persistent Memory. Raghu Kulkarni of Viking Technology, a member of the SNIA Solid State Storage Initiative, did a great job laying the foundation for an understanding of Persistent Memory today, just in time for the SNIA Persistent Memory Summit. You can catch up on videos of Summit talks, along with the slides presented, here. During the webcast, we had many interesting questions. Now, as promised, Raghu provides the answers. Happy reading, and we hope to see you at one of our upcoming webcasts or events.

Q. Does NVDIMM-N encryption lower the performance levels that you presented?

A. It typically depends on the implementation and differs from vendor to vendor. Generally speaking, Save and Restore operations will increase by a small factor, less than 10%. Products from some vendors, like Viking, will not see a performance degradation, as it is offset by a faster transfer rate.

Q. What are the read/write bandwidth capabilities of NVDIMM-N? How does that compare to Intel’s Persistent Memory?

A. In Byte-addressable mode, NVDIMM-N in theory has the same high performance as DRAM, around 100ns. With the latest Linux drivers in DAX mode, NVDIMM-N is still expected to be better than Intel’s Persistent Memory.

Q. On the use cases, what are the use cases when Persistent Memory is attached to an accelerator chip compared to a processor-attached setup?

A. Mainly to accelerate performance by storing metadata, or even data, in Persistent Memory so that a request can be acknowledged immediately without having to wait for commits to SSD/HDD. It also saves the rebuild time that is commonly required with volatile memory.

Q. How does BIOS/MRC work when Persistent Memory is attached to an accelerator (ASIC/FPGA/GPU) over PCIe, when trying to extend/increase the memory for the processor?

A. System BIOS will not detect Persistent Memory sitting on PCIe; it only discovers Persistent Memory installed in DIMM slots. FPGAs, ASICs, etc. have to implement their own code from the ground up to detect and present the Persistent Memory on PCIe, depending on the use case.

Q. Do we need application changes to take advantage of Persistent Memory-aware file storage? How does that compare against DAX mode?

A. To take advantage of the low-latency, high-performance nature of Persistent Memory, it is beneficial to modify the applications. However, one can still leverage the existing I/O stack if modifying the application is not an option. Check out pmem.io for pre-built libraries that can be directly integrated into applications.

Q. Should Persistent Memory usage be compared against storage or memory? Which is the more relevant use case for Persistent Memory?

A. Typically, media that is Byte-addressable is called Persistent Memory (PM); however, you can also access it in Block mode. Depending on the application needs, use case, and other system-level factors, it can be used in either mode. However, you will find the best performance when accessing it in Byte-addressable/Load-Store mode.
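For readers who want to try the modify-the-application route mentioned above, here is a minimal sketch using libpmem, one of the pre-built PMDK libraries from pmem.io. The file path, mapping size, and message are illustrative assumptions; the sketch simply maps a file that lives on a DAX-mounted Persistent Memory file system and persists a store with cache flushes instead of going through the block I/O stack.

    /* Minimal libpmem sketch (PMDK, pmem.io): map a file on a DAX-mounted
     * file system and persist a write with CPU cache flushes instead of
     * block I/O. Build with: cc pmem_hello.c -lpmem
     * The path and size below are illustrative assumptions. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE (4 * 1024 * 1024)   /* 4 MiB example mapping */

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Create (or open) a file on a Persistent Memory-aware (DAX)
         * file system and map it directly into the address space. */
        char *addr = pmem_map_file("/mnt/pmem0/hello", POOL_SIZE,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        const char msg[] = "persisted via load/store, not block I/O";

        if (is_pmem) {
            /* True Persistent Memory: copy, then flush CPU caches to media. */
            pmem_memcpy_persist(addr, msg, sizeof(msg));
        } else {
            /* Fallback (e.g., regular storage): msync the range instead. */
            memcpy(addr, msg, sizeof(msg));
            pmem_msync(addr, sizeof(msg));
        }

        pmem_unmap(addr, mapped_len);
        return 0;
    }

Unmodified applications can still use the regular file API against the same file system, while the higher-level pmem.io libraries (libpmemobj, for example) build transactional abstractions on top of the same mapping approach shown here.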

Author of NVMe™/TCP Spec Answers Your Questions

J Metz

Mar 27, 2019

900 people have already watched our SNIA Networking Storage Forum webcast, “What NVMe™/TCP Means for Networked Storage?”, where Sagi Grimberg, lead author of the NVMe/TCP specification, and J Metz, Board Member for SNIA, explained what NVMe/TCP is all about. If you haven’t seen the webcast yet, check it out on-demand.

Like any new technology, there’s no shortage of areas for potential confusion or questions. In this FAQ blog, we try to clear up both.

Q. Who is responsible for updating NVMe Host Driver?

A. We assume you are referring to the Linux host driver (independent OS software vendors are responsible for developing their own drivers). Like any device driver and/or subsystem in Linux, the responsibility of maintenance is on the maintainer(s) listed under the MAINTAINERS file. The responsibility of contributing is shared by all the community members.

Q. What is the realistic timeframe to see a commercially available NVME over TCP driver for targets? Is one year from now (2020) fair?

A. Even this year, commercial products are coming to market. The work started even before the spec was fully ratified, but now that it has been, we expect wider NVMe/TCP support to become available.

Q. Does NVMe/TCP work with 400GbE infrastructure?

A. As of this writing, there is no reason to believe that upper-layer protocols such as NVMe/TCP will not work with faster Ethernet physical layers like 400GbE.

Q. Why is the NVMe CQ in the controller and not on the Host?

A. The example that was shown in the webcast assumed that the fabrics controller had an NVMe backend. So the controller’s backend NVMe device had a local completion queue, and on the host sat the “transport completion queue” (in the NVMe/TCP case, this is the TCP stream itself).

Q. So, SQ and CQ streams run asynchronously from each other, with variable ordering depending on the I/O latency of a request?

A. Correct. For a given NVMe/TCP connection, stream delivery is in-order, but commands and completions can arrive (and be processed by the NVMe controller) in any order.

Q. What TCP ports are used? Since we have many NVMe queues, I bet we need a lot of TCP ports.

A. Each NVMe queue will consume a unique source TCP port. Common NVMe host implementations will create a number of NVMe queues on the same order of magnitude as the number of CPU cores (see the sketch after this Q&A).

Q. What is the max size of Data PDU supported? Are there any restrictions on parallel writes?

A. The maximum size of an H2CData PDU (MAXH2CDATA) is negotiated and can be as large as 4GB. It is recommended that it be no less than 4096 bytes.

Q. Is immediate data negotiated between host and target?

A. The in-capsule data size (IOCCSZ) is negotiated at the NVMe level. In NVMe/TCP, the admin queue command capsule size is 8K by default. In addition, the maximum size of the H2CData PDU is negotiated during connection initialization.

Q. Is NVMe/TCP hardware infrastructure cost lower?

A. This can vary widely, but we assume you are referring to Ethernet hardware infrastructure. NVMe/TCP does not require an RDMA-capable NIC, so the variety of implementations is usually wider, which typically drives down cost.

Q. What are the plans for the major OS suppliers to support NVMe over TCP (Windows, Linux, VMware)?

A. Unfortunately, we cannot comment on their behalf, but Linux already supports NVMe/TCP, and that support should find its way into the various distributions soon. We are working with others to support NVMe/TCP, but suggest asking them directly.

Q. Where does the overhead occur for NVMe/TCP packetization? Is it dependent on the CPU, or does the network adapter offload that heavy lifting? And what is the impact of numerous, but extremely small, transfers?

A. Indeed, a software NVMe/TCP implementation will introduce overhead resulting from TCP stream processing. However, you are correct that common stateless offloads such as Large Receive Offload and TCP Segmentation Offload are extremely useful, both for large transfers and for small 4K transfers.

Q. What do you mean that absolute latency is higher than RDMA by “several” microseconds? <10us, tens of microseconds, or 100s of microseconds?

A. That depends on various aspects such as the CPU model, the network infrastructure, the controller implementation, the services running on top, etc. Remote access to raw NVMe devices over TCP was measured to add between 20 and 35 microseconds with Linux in early testing, but the degrees of variability will affect this.

Q. Will Wireshark support NVMe/TCP soon? Is an implementation in progress?

A. We most certainly hope so; it shouldn’t be difficult, but we are not aware of an implementation in progress.

Q. Are there any NVMe/TCP drivers out there?

A. Yes, Linux and SPDK both support NVMe/TCP out-of-the-box. See: https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

Q. Do you recommend a dedicated IP network for the storage traffic, or can you use the same corporate network with all other LAN traffic?

A. This really depends on the use case, the network utilization, and other factors. Obviously, if the network bandwidth is fully utilized to begin with, it won’t be very efficient to add the additional NVMe/TCP “load” to the network, but that alone might not be the determining factor. Otherwise, it can definitely make sense to share the same network, and we are seeing customers choose this route. It is useful to consider the best practices for TCP-based storage networks (iSCSI has taught valuable lessons), and we anticipate that many of the same principles will apply to NVMe/TCP. AQM, buffer, and similar tuning settings are very dependent on the traffic pattern and need to be developed based on the requirements; the base configuration is determined by the vendors.

Q. On slide 28: no, TCP needs the congestion feedback, but it doesn’t have to be a drop (it could be ECN, latency variance, etc.).

A. Yes, you are correct. The question refers to how that feedback is received, though, and in the most common (traditional) TCP methods it’s done via drops.

Q. How can you find out/check what TCP stack (drop vs. zero-buffer) your network is using?

A. The use/support of DCTCP is mostly driven by the OS. The network needs to support ECN and have it enabled and correctly configured for the traffic of interest. So the best way to figure this out is to talk to the network team. The use of ECN, etc. needs to be worked out between the server and network teams.

Q. On slide 33: a drop is a signal of an overloaded network; congestion onset is when there is a standing queue (latency already increases). The current state of the art is to always overload the network (switches).

A. ECN is used to signal before a drop happens, to make congestion handling more efficient.

Q. Is it safe to assume that most current switches on the market today support DCTCP/ECN and that we can mix/match switches from vendors across product families?

A. Most modern ASICs support ECN today. Mixing different product lines needs to be carefully planned and tested; AQM, buffers, etc. need to be fine-tuned across the platforms.

Q. Is there a substantial cost savings by implementing all of what is needed to support NVMe over TCP versus just sticking with RDMA? Much like staying with Fibre Channel instead of risking performance with iSCSI not being implemented correctly. Building a separately supported network just seems the best route.

A. By “sticking with RDMA” you mean that you have already selected RDMA, which means you have already made the investments to make it work for your use case. We agree that changing what currently works reliably and meets the targets might be an unnecessary risk. NVMe/TCP brings a viable option for Ethernet fabrics which is easily scalable and allows you to utilize a wide variety of both existing and new infrastructure while still maintaining low-latency NVMe access.

Q. It seems that with multiple flavors of TCP, and especially of congestion management (DCTCP, DCQCN?), is there a plan for commonality in the ecosystem to support a standard way to handle congestion management? Is that required in the switches or also in the HBAs?

A. DCTCP is an approach to L3-based congestion management, whereas DCQCN is a combination of PFC and ECN for RoCEv2 (UDP)-based communication. So these are two different approaches.

Q. Who are the major players in terms of marketing this technology among storage vendors?

A. The key organization to turn to for NVMe/TCP (or, in fact, all NVMe-related material) is NVM Express®.

Q. Can I compare NVMe over TCP to iSCSI?

A. Easily: you can download an upstream kernel and test both of the in-kernel implementations (iSCSI and NVMe/TCP). Alternatively, you can reach out to a vendor that supports either of the two to test it as well. You should expect NVMe/TCP to run substantially faster for pretty much any workload.

Q. Is network segmentation crucial as a “go-to” architecture, with a host-to-storage proximity objective, to accomplish the objective of managed/throttled, close-to-lossless connectivity?

A. There is a lot to unpack in this question. Let’s see if we can break it down a little. Generally speaking, best practice is to keep the storage as close to the host as possible (and as is reasonable). Not only does this reduce latency, but it reduces the variability in latency (and bandwidth) that can occur at longer distances. In cases where storage traffic shares bandwidth (i.e., links) with other types of traffic, the variable nature of different applications (some are bursty, others are more long-lived) can create unpredictability. Since storage, particularly block storage, doesn’t “like” unpredictability, different methods are used to regain some of that stability as scales increase. A common and well-understood best practice is to isolate storage traffic from “regular” Ethernet traffic. As different workloads tend to be not only “North-South” but increasingly “East-West” across the network topologies, this network segmentation becomes more important. Of course, it’s been a typical best practice for many years with protocols such as iSCSI, so this is not new. In environments where the variability of congestion can have a profound impact on storage performance, network segmentation will, indeed, become crucial as a “go-to” architecture. Proper techniques at L2 and L3 will help determine how close to a “lossless” environment can be achieved, of course, as will properly configured QoS mechanisms across the network. As a general rule of thumb, though, network segmentation is a very powerful tool to have for reliable storage delivery.

Q. How close are we to shared NVMe storage, either over Fibre Channel or TCP?

A. There are several shared storage products available on the market for NVMe over Fabrics, but as of this writing (only 3 months after the ratification of the protocol) no major vendors have announced NVMe over TCP shared storage capabilities. A good place to look for updates is the NVM Express website for interoperability and compliance products. [https://nvmexpress.org/products/]

Q. AQM: there is DualQ work in the IETF for coexisting L4S (DCTCP) and legacy TCP, and ongoing work at the chip merchants.

A. Indeed, there are many advancements aimed at making TCP evolve as the speeds and feeds increase. This is yet another example of why NVMe/TCP is, and will remain, relevant in the future.

Q. Are there any major vendors who are pushing products based on these technologies?

A. We cannot comment publicly on any vendor plans. You would need to ask a vendor directly for a concrete timeframe for the technology. However, several startups have made public announcements on supporting NVMe/TCP.
Lightbits Labs, to give one example, will have a high-performance low-latency NVMe/TCP-based software-defined-storage solution out very soon.
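
To make the earlier answer about queues and TCP ports more concrete (the sketch referenced there), here is a deliberately simplified C illustration of the queue-to-connection mapping: one TCP connection per NVMe queue, roughly one I/O queue per CPU core, with each connection consuming its own source port toward the target’s NVMe/TCP port (4420 by default). The target address is an illustrative assumption, and the sketch stops at the TCP connect; a real host, such as the Linux nvme-tcp driver, would continue with the ICReq/ICResp PDU exchange and an NVMe-oF Connect command on every queue.

    /* Toy illustration of NVMe/TCP queue-to-connection mapping: one TCP
     * connection per queue, roughly one I/O queue per CPU core. This is NOT
     * an NVMe/TCP implementation; a real host continues with ICReq/ICResp
     * PDUs and NVMe-oF Connect commands on every queue.
     * The target IP address below is an illustrative assumption. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define NVME_TCP_PORT 4420           /* IANA-assigned NVMe/TCP port */
    #define TARGET_ADDR   "192.0.2.10"   /* documentation address (assumed) */

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        long nqueues = ncpus + 1;        /* admin queue + one I/O queue per core */
        int *socks = calloc(nqueues, sizeof(int));

        struct sockaddr_in target = { 0 };
        target.sin_family = AF_INET;
        target.sin_port = htons(NVME_TCP_PORT);
        inet_pton(AF_INET, TARGET_ADDR, &target.sin_addr);

        for (long q = 0; q < nqueues; q++) {
            socks[q] = socket(AF_INET, SOCK_STREAM, 0);
            if (socks[q] < 0 ||
                connect(socks[q], (struct sockaddr *)&target,
                        sizeof(target)) < 0) {
                perror("queue connection");
                break;
            }
            /* Each connection gets its own ephemeral source port, which is
             * why an NVMe/TCP host consumes one TCP port per queue. */
            printf("queue %ld mapped to its own TCP connection (fd %d)\n",
                   q, socks[q]);
        }

        for (long q = 0; q < nqueues; q++)
            if (socks[q] > 0)
                close(socks[q]);
        free(socks);
        return 0;
    }

On a Linux host this mapping is handled automatically by the in-kernel nvme-tcp driver when a controller is connected, so the sketch is purely illustrative of where all those TCP ports go.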

Understanding Kubernetes in the Cloud

Mike Jochimsen

Mar 25, 2019

Ever wonder why and where you would want to use Kubernetes? You’re not alone; that’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast on May 2, 2019, “Kubernetes in the Cloud.”

Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization.

In this webcast, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on storage aspects and best practices, discussing:

  • What is Kubernetes? Why would you want to use it?
  • How does Kubernetes help in a multi-cloud/private cloud environment?
  • How does Kubernetes orchestrate and manage storage?
  • Can Kubernetes use Docker?
  • How do we provide persistence and data protection?
  • Example use cases

We’re fortunate to have great experts for this session: Matt Baldwin, founder and former CEO of Stackpoint Cloud and now with NetApp, and Ingo Fuchs, Chief Technologist, Cloud and DevOps at NetApp.

I hope you will register today to join us on May 2nd. It’s live, which means our expert presenters will be on hand to answer your questions on the spot.

Innovating File System Architectures with NVMe

Marty Foltyn

Mar 20, 2019

It’s exciting to see the recent formation of the Solid State Drive Special Interest Group (SIG) here in the SNIA Solid State Storage Initiative.  After all, everyone appreciates the ability to totally geek out about the latest drive technology and software for file systems.  Right? Hey, where’s everyone going? We have vacation pictures with the dog that we stored and want to show…

Solid state storage has long found its place with those seeking greater performance in systems, especially where smaller or more random block/file transfers are prevalent.  The single-system opportunity with NVMe drives is broad, and pretty much unquestioned by those building systems for modern IT environments. Cloud, likewise, has found use of the technology where single-node performance makes a broader deployment relevant.

There have been many efforts to build the case for solid state in networked storage.  Where storage and computation combine -- for instance in a large map/reduce application -- there’s been significant advantage, especially in the area of sustained data reads.  This has usually come at a scalar cost, where additional systems are needed for capacity. Nonetheless, there are cases where non-volatile memory enhances infrastructure deployment for storage or analytics.  Yes, analytics is infrastructure these days; deal with it.

Seemingly independent of the hardware trends, the development of new file systems has provided significant innovation.  Notably, heavily parallel file systems have the ability to serve a variety of network users in specialized applications or appliances.  Much of the work has focused on development of the software or base technology rather than delivering a broader view of either performance or applicability.  Therefore, a paper such as this one on building a Lustre file system using NVMe drives is a welcome addition to the case for both solid state storage and revolutionary file systems that move from specific applications to more general availability.

The paper shows how to build a small (half-rack) storage cluster to support the Lustre file system, and it also adds the Dell VFlex OS implemented as a software-defined storage solution.  This has the potential to take an HPC-focused product like Lustre and drive broader market availability for a high-performance solution. The combination of read/write performance, easy adoption in the broad enterprise, and relatively small footprint shows new promise for innovation.

The opportunity for widespread delivery of solid state storage using NVMe and software innovation in the storage space is ready to move the datacenter to new and more ambitious levels.  The SNIA 2019 Storage Developer Conference  is currently open for submissions from storage professionals willing to share knowledge and experience.  Innovative solutions such as this one are always welcome for consideration.

Has Hybrid Cloud Reached a Tipping Point?

Michelle Tidwell

Mar 13, 2019

According to research from the Enterprise Strategy Group (ESG), IT organizations today are struggling to strike the right balance between public cloud and their on-premises infrastructure. Has hybrid cloud reached a tipping point? Find out on April 23, 2019 at our live webcast “The Hybrid Cloud Tipping Point” when the SNIA CSTI welcomes ESG senior analyst, Scott Sinclair, who will share research on current cloud trends, covering:

  • Key drivers behind IT complexity
  • IT spending priorities
  • Multi-cloud & hybrid cloud adoption drivers
  • When businesses are moving workloads from the cloud back on-premises
  • Top security and cost challenges
  • Future cloud projections

The research presentation will be followed by a panel discussion with Scott Sinclair and my SNIA cloud colleagues, Alex McDonald, Mike Jochimsen and Eric Lakin. We will be on-hand on the 23rd to answer questions.

Register today. We hope to see you there.

Scale-Out File Systems FAQ

John Kim

Mar 8, 2019

On February 28th, the SNIA Networking Storage Forum (NSF) took a look at what's happening in Scale-Out File Systems. We discussed general principles, design considerations, challenges, benchmarks and more. If you missed the live webcast, it's now available on-demand. We did not have time to answer all the questions we received at the live event, so here are answers to them all.

Q. Can scale-out file systems do erasure coding?

A. Indeed, erasure coding is a common method to improve resilience (a toy illustration follows this Q&A).

Q. How does one address the problem of a specific disk going down? Where does scale-out architecture provide redundancy?

A. Disk failures typically are covered by RAID software. Some scale-out software also uses replication to mitigate the impact of disk failures.

Q. Are there use cases where a hybrid of these two styles is needed?

A. Yes. For example, in some environments the foundation layer might use dedicated storage servers to form the large storage pool, which is the first style, and then export LUNs or virtual disks to the compute nodes (either physical or virtual) that run the applications, which is the second style.

Q. Which scale-out file systems are available on Windows and Linux platforms?

A. Some of the scale-out file systems provide native client software across multiple platforms. Another approach is to use Samba to build SMB gateways that make the scale-out file system available to Windows computers.

Q. Is Amazon Elastic File System (EFS) on AWS a scale-out file system?

A. Please see: https://docs.aws.amazon.com/efs/latest/ug/performance.html "Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data. The distributed design of Amazon EFS avoids the bottlenecks and constraints inherent to traditional file servers."

Q. Where are the most cost-effective price/performance uses of NVMe?

A. NVMe can support very high IOPS as well as very high throughput. The best use case would be to couple NVMe with high-performance storage software that does not limit the NVMe devices.
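
As a follow-up to the erasure coding question above, here is a deliberately simplified C sketch of the idea, using a single XOR parity fragment (RAID-5 style) rather than the Reed-Solomon or similar codes that production scale-out file systems typically spread across disks and nodes. The fragment count, size, and contents are illustrative assumptions; the point is only that any one lost fragment can be rebuilt from the surviving ones.

    /* Toy single-parity erasure coding sketch (RAID-5-style XOR). Real
     * scale-out file systems generally use Reed-Solomon or similar codes
     * that tolerate multiple failures; this only shows the principle that a
     * lost fragment can be reconstructed from the survivors. */
    #include <stdio.h>
    #include <string.h>

    #define K          4      /* number of data fragments */
    #define FRAG_SIZE  8      /* bytes per fragment (tiny, for illustration) */

    /* XOR all fragments except 'skip' into 'out'. */
    static void xor_fragments(unsigned char frags[][FRAG_SIZE], int skip,
                              unsigned char out[FRAG_SIZE])
    {
        memset(out, 0, FRAG_SIZE);
        for (int i = 0; i <= K; i++) {      /* K data fragments + 1 parity */
            if (i == skip)
                continue;
            for (int b = 0; b < FRAG_SIZE; b++)
                out[b] ^= frags[i][b];
        }
    }

    int main(void)
    {
        /* K data fragments plus one parity fragment, as disks/nodes would hold. */
        unsigned char frags[K + 1][FRAG_SIZE] = {
            "frag-00", "frag-01", "frag-02", "frag-03"
        };

        /* Encode: parity fragment = XOR of all data fragments. */
        xor_fragments(frags, K, frags[K]);

        /* Simulate losing one data fragment (a failed disk or node). */
        int lost = 2;
        unsigned char rebuilt[FRAG_SIZE];
        memset(frags[lost], 0, FRAG_SIZE);

        /* Recover: XOR the surviving data fragments with the parity. */
        xor_fragments(frags, lost, rebuilt);
        memcpy(frags[lost], rebuilt, FRAG_SIZE);

        printf("recovered fragment %d: %s\n", lost, (const char *)frags[lost]);
        return 0;
    }

Spread across servers rather than just disks, the same principle lets a scale-out file system keep data available when a whole node fails, at a lower capacity overhead than full replication.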
