Join the Conversation at the Open Infrastructure Summit

Kristen Hauser

Apr 10, 2019

Thousands of IT decision makers, operators and developers will gather April 29 – May 1 at the Open Infrastructure Summit in Denver, Colorado, to collaborate across common use cases and solve real problems.

On Monday, April 29, from 2:50 p.m. – 3:30 p.m., members of the OpenSDS project and the Technical Working Group (TWG) that develops SNIA Swordfish™ are holding a Birds-of-a-Feather (BoF) session at the summit titled “Open Storage Management.”

To kick things off, Richelle Ahlvers, SNIA board member, chair of the Scalable Storage Management TWG, and storage management architect at Broadcom, will provide a brief overview of the SNIA Swordfish storage management specification. Swordfish is an extension to the DMTF Redfish® specification that provides a unified approach to managing storage equipment and services in converged, hyper-converged, hyperscale and cloud infrastructure environments. Swordfish uses a RESTful interface over HTTPS with JSON payloads, and also provides support for OpenAPI.
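To give a flavor of that interface, here is a minimal, hedged sketch of walking a Redfish/Swordfish service root with Python’s requests library. The management host and credentials below are hypothetical, and the exact resource tree depends on the implementation and spec version:

    # Minimal sketch of querying a Redfish/Swordfish service.
    # The endpoint and credentials below are hypothetical; real services
    # expose the service root at /redfish/v1/ and return JSON.
    import requests

    BASE = "https://storage-array.example.com"  # hypothetical management endpoint

    resp = requests.get(f"{BASE}/redfish/v1/",
                        auth=("admin", "password"),  # placeholder credentials
                        verify=False)                # lab use only; verify certs in production
    resp.raise_for_status()
    root = resp.json()

    # List the collections the service advertises (Storage, Systems, etc.);
    # Swordfish storage resources hang off this tree.
    for name, link in root.items():
        if isinstance(link, dict) and "@odata.id" in link:
            print(f"{name}: {link['@odata.id']}")

Any Redfish-aware client works the same way, which is precisely the point of extending Redfish rather than inventing a new protocol.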

Richelle will also discuss the lifecycle of creating consistent open standard interfaces, from definition to implementations, and how the open source ecosystem plays a role in open infrastructure management.

Xing Yang, principal architect at Huawei Technologies and project and architecture lead in OpenSDS, will explain how the open source community addresses storage integration challenges in scale-out, cloud-native environments and connects siloed data solutions.

The session will be interactive, and attendees will be encouraged to join in the conversation, get their questions answered and share their knowledge while making valuable new connections. Add the BoF to your conference schedule here.

While visiting the summit, stop by to see SNIA in booth #B13 in the Open Infrastructure Marketplace and pick up the latest Swordfish swag!

Everything You Wanted to Know about Memory

John Kim

Apr 9, 2019

Many followers (dare we say fans?) of the SNIA Networking Storage Forum (NSF) are familiar with our popular webcast series "Everything You Wanted To Know About Storage But Were Too Proud To Ask." If you've missed any of the nine episodes we've done to date, they are all available on-demand and provide a 101 lesson on a range of storage-related topics like buffers, storage controllers, iSCSI and more. Our next "Too Proud to Ask" webcast on May 16, 2019 will be "Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Taupe – The Memory Pod."

Traditionally, much of the IT infrastructure that we've built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components. The only purpose of storage is to persist the data between periods of processing it on a CPU. And the only purpose of memory is to provide a cache of fast, accessible data to feed the huge appetite of compute. Currently, we build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But fast memory has no persistence at all; it's only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.

Memory and storage differ in other ways. For example, we load from memory to registers on the CPU, perform operations there, and then store the results back to memory by loading from and storing to byte addresses. This load/store technology is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface). (The short sketch at the end of this post makes the distinction concrete.)

It's clear the lines between memory and storage are blurring as new memory technologies are challenging the way we build and use storage to meet application demands. New memory technologies look like storage in that they're persistent, if a lot faster than traditional disks or even Flash-based SSDs, but we address them in bytes, as we do memory like DRAM, if more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, our SNIA experts will discuss:
  • Fundamental terminology relating to memory
  • Traditional uses of storage and memory as a cache
  • How can we build and use systems based on PM?
  • Persistent memory over a network
  • Do we need a new programming model to take advantage of PM?
  • Interesting use cases for systems equipped with PM
  • How we might take better advantage of this new technology
Register today for this live webcast on May 16th. Our experts will be available to answer the questions that you should not be too proud to ask! And if you're curious to know why each of the webcasts in this series is associated with a different color (rather than a number), check out this SNIA NSF blog that explains it all.
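To make that load/store versus block-API distinction concrete, here is a minimal sketch, assuming nothing beyond the Python standard library and using an ordinary file as a stand-in for a persistent device (a real persistent memory setup would map a DAX-capable device, or use the pmem.io libraries, instead):

    # Contrast block-style I/O with byte-addressed (load/store-like) access.
    # An ordinary file stands in for the persistent medium here; real
    # persistent memory would be mapped from a DAX-capable device instead.
    import mmap
    import os

    path = "demo.bin"
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)          # one 4 KiB "block"

    # Block-style access: move a whole block through an API call,
    # modify it in a DRAM buffer, then write the whole block back.
    with open(path, "r+b") as f:
        block = bytearray(f.read(4096))
        block[42] = 0xFF
        f.seek(0)
        f.write(block)

    # Load/store-style access: address individual bytes in the mapping.
    with open(path, "r+b") as f:
        m = mmap.mmap(f.fileno(), 4096)
        m[43] = 0x7F                     # a one-byte "store"
        value = m[42]                    # a one-byte "load"
        m.flush()                        # push dirty data toward the medium
        m.close()

    print(hex(value))                    # 0xff
    os.remove(path)

The mapping makes every byte individually addressable, while the block path forces a round trip of the full block; that, in miniature, is the semantic gap persistent memory sits in.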

Trends in Media and Entertainment Storage – Your Questions Answered from Our Webcast

Marty Foltyn

Apr 1, 2019

Thanks to all who attended or listened on-demand to our recent SNIA Solid State Storage Initiative (SSSI) webcast on Trends in Worldwide Media and Entertainment Storage. Motti Beck of Mellanox Technologies and Tom Coughlin, SSSI Education Chair and analyst with Coughlin Associates, got rave reviews for their analysis of this important market. Feedback comments included “Good overview with enough details for me to learn something”; “Really appreciate the insight into the ME businesses”; and “Just in time for the upcoming NAB Show!”. We appreciate your interest and enthusiasm!

Important to every SNIA webcast are the questions – and we got quite a few on this one. Thanks in advance to Tom Coughlin, who provided the answers below. Send any more questions to us at asksssi@snia.org with the subject “M&E Webcast Questions.” Happy reading, and we hope to see you at one of our upcoming webcasts or events.

Q. What is the best form of storage for video, NAS or SAN?

A. Well, it depends. A SAN can directly access the data blocks that make up the video file, and these can be quickly transported to the workstation, where they are reassembled into the video file. A properly configured SAN can provide faster access to data, particularly if many users are accessing the same data in the storage system. SANs can be appropriate for a larger production facility. A NAS may provide somewhat slower access, but provides individual access to individual files. NAS storage can be an appropriate shared storage for smaller production facilities where there are fewer users or the users don’t access the same files at the same time.

Q. Are there any M&E-specific performance benchmarks or other qualification tools recommended for storage subsystem selection?

A. That is an interesting question. I know about several general storage performance benchmarks, such as SPC (https://spcresults.org/benchmarks). There are storage performance tests offered by some M&E industry suppliers, such as one from AJA System Test (https://www.aja.com/products/aja-system-test). This is probably an area that could use some additional development.

Q. Revenue share by type and use case – is this on the decline or rise? What are the YoY trends?

A. If I understand this right, you are asking about revenue growth for different media and entertainment use cases, or different parts of the workflow, on an annual basis. That information is in the 2018 Digital Storage for Media and Entertainment Report (https://tomcoughlin.com/tech-papers/).

Q. A question for Tom Coughlin. You said 66% will use private or public cloud for archiving in 2018. Do you have the breakdown between the two?

A. It is a combination, but given the concerns of the industry, I suspect this is mostly private cloud in 2018.

Q. When Tom says “post production” storage, is that primary, secondary/nearline, or both?

A. If I understand the question right, this is all storage used in post-production, which can use a primary and a secondary storage tier, particularly in a larger facility that is economizing on its storage costs.

Q. With regard to HDD storage, does the interface trend continue to be SATA/SAS? Does the back-end workload look to benefit from SMR or dual-actuator technology?

A. For the time being HDDs will be SATA and SAS. There are now some HDD storage systems with NVMe on the back end, and it will be interesting to see how this develops. I am sure that M&E users will benefit from SMR and dual-actuator HDDs. SMR will be good for active archiving in particular, and dual-actuator will allow faster access to HDD data, a benefit for video projects.

Q. Unless I missed it, you made no mention of software-defined storage as a viable method for storing the growing amount of data in M&E. Was that taken into consideration when you did your survey?

A. Software-defined storage can be an important element in media and entertainment storage and is finding increasing use in this and other applications.

Q. Is the cloud archive a primary copy or secondary (an insurance policy with limited to no access)?

A. It depends upon the organization, although I think many studios and larger organizations may keep content on tape, and even off-line tape, as well. Cloud archives do allow access to data; the usual issue is the cost of egressing that content.

Your Questions Answered – Applications Take Advantage of Persistent Memory Webcast

Marty Foltyn

Mar 27, 2019

We hope you had time to check out our recent webcast, Applications Take Advantage of Persistent Memory. Raghu Kulkarni of Viking Technology, a member of the SNIA Solid State Storage Initiative, did a great job laying the foundation for an understanding of Persistent Memory today, just in time for the SNIA Persistent Memory Summit. You can catch up on videos of Summit talks, along with the slides presented, here.

During the webcast, we had many interesting questions. Now, as promised, Raghu provides the answers. Happy reading, and we hope to see you at one of our upcoming webcasts or events.

Q. Does NVDIMM-N encryption lower the performance levels that you presented?

A. It typically depends on the implementation and differs for each vendor. Generally speaking, Save and Restore operations will increase by a small factor – less than 10%. Products from some vendors, like Viking, will not see a performance degradation, as it is offset by a faster transfer rate.

Q. What are the read/write bandwidth capabilities of NVDIMM-N? How does that compare to Intel’s Persistent Memory?

A. For Byte-addressable mode, NVDIMM-N in theory has the same high performance as DRAM, around 100 ns. With the latest Linux drivers in DAX mode, NVDIMM-Ns are still expected to be better than Intel’s Persistent Memory.

Q. What are the use cases where Persistent Memory is attached to an accelerator chip, compared to a processor-attached setup?

A. Mainly to accelerate performance by storing the metadata, or even data, in Persistent Memory, so that the request can be acknowledged immediately without having to wait for commits to SSD/HDD. It also saves the rebuild time, which is a common practice for volatile memory.

Q. How does BIOS/MRC work when Persistent Memory is attached to an accelerator (ASIC/FPGA/GPU) over PCIe, when trying to extend/increase the memory for the processor?

A. System BIOS will not detect Persistent Memory sitting on PCIe; it only discovers Persistent Memory installed in DIMM slots. FPGAs, ASICs, etc. have to build their own bottom-up code to detect and present the Persistent Memory on PCIe, depending on the use case.

Q. Do we need application changes to take advantage of Persistent Memory-aware file storage? How does that compare against DAX mode?

A. To take advantage of the low-latency/high-performance nature of Persistent Memory, it would be beneficial to modify the applications. However, one can still leverage the existing IO stack if modifying the application is not an option. Check out pmem.io for pre-built libraries that can be directly integrated into applications.

Q. Should Persistent Memory usage be compared against storage or memory? Which is the more relevant use case for Persistent Memory?

A. Typically, a medium that is Byte-addressable is called Persistent Memory (PM); however, you can also access it in Block mode. Depending on the application needs, use case, and other system-level factors, it can be used in either mode. However, you will find the best performance when accessing it in Byte-addressable/Load-Store mode.
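To illustrate the Block-mode versus Byte-addressable/Load-Store trade-off in that last answer, here is a rough sketch that times a one-byte update done both ways. An ordinary file stands in for persistent memory, so the absolute numbers reflect the page cache and disk rather than real PM latencies (a real PM stack would use a DAX mapping and the pmem.io flush primitives):

    # Time a tiny update done block-style (write + fsync) versus
    # store-style (mmap store + flush). An ordinary file stands in for
    # persistent memory, so treat the numbers as illustrative only.
    import mmap
    import os
    import time

    path = "pm-demo.bin"
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)

    with open(path, "r+b") as f:
        t0 = time.perf_counter()
        f.seek(0)
        f.write(b"\x01")             # block path: a syscall per update...
        os.fsync(f.fileno())         # ...plus an explicit persistence barrier
        t1 = time.perf_counter()

        m = mmap.mmap(f.fileno(), 4096)
        t2 = time.perf_counter()
        m[1] = 0x01                  # store path: a plain byte store
        m.flush()                    # flush the dirty page (PM uses CPU flush instructions)
        t3 = time.perf_counter()
        m.close()

    print(f"write+fsync: {(t1 - t0) * 1e6:.1f} us, store+flush: {(t3 - t2) * 1e6:.1f} us")
    os.remove(path)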

Author of NVMe™/TCP Spec Answers Your Questions

J Metz

Mar 27, 2019

900 people have already watched our SNIA Networking Storage Forum webcast, “What NVMe™/TCP Means for Networked Storage?” where Sagi Grimberg, lead author of the NVMe/TCP specification, and J Metz, Board Member for SNIA, explained what NVMe/TCP is all about. If you haven’t seen the webcast yet, check it out on-demand.

Like any new technology, there’s no shortage of areas for potential confusion or questions. In this FAQ blog, we try to clear up both.

Q. Who is responsible for updating NVMe Host Driver?

A. We assume you are referring to the Linux host driver (independent OS software vendors are responsible for developing their own drivers). Like any device driver and/or subsystem in Linux, the responsibility of maintenance is on the maintainer(s) listed under the MAINTAINERS file. The responsibility of contributing is shared by all the community members.

Q. What is the realistic timeframe to see a commercially available NVMe over TCP driver for targets? Is one year from now (2020) fair?

A. Even this year, commercial products are coming to market. The work started even before the spec was fully ratified, but now that it has been, we expect wider NVMe/TCP support to be available.

Q. Does NVMe/TCP work with 400GbE infrastructure?

A. As of this writing, there is no reason to believe that upper layer protocols such as NVMe/TCP will not work with faster Ethernet physical layers like 400GbE.

Q. Why is the NVMe CQ in the controller and not on the host?

A. The example that was shown in the webcast assumed that the fabrics controller had an NVMe backend. So the controller backend NVMe device had a local completion queue, and on the host sat the “transport completion queue” (in the NVMe/TCP case this is the TCP stream itself).

Q. So, SQ and CQ streams run asynchronously from each other, with variable ordering depending on the I/O latency of a request?

A. Correct. For a given NVMe/TCP connection, stream delivery is in-order, but commands and completions can arrive (and be processed by the NVMe controller) in any order.

Q. What TCP ports are used? Since we have many NVMe queues, I bet we need a lot of TCP ports.

A. Each NVMe queue will consume a unique source TCP port. Common NVMe host implementations will create a number of NVMe queues in the same order of magnitude as the number of CPU cores.

Q. What is the max size of Data PDU supported? Are there any restrictions on parallel writes?

A. The maximum size of an H2CData PDU (MAXH2CDATA) is negotiated and can be as large as 4GB. It is recommended that it be no less than 4096 bytes.

Q. Is immediate data negotiated between host and target?

A. The in-capsule data size (IOCCSZ) is negotiated at the NVMe level. In NVMe/TCP the admin queue command capsule size is 8K by default. In addition, the maximum size of the H2CData PDU is negotiated during connection initialization.

Q. Is NVMe/TCP hardware infrastructure cost lower?

A. This can vary widely, but we assume you are referring to Ethernet hardware infrastructure. NVMe/TCP does not require an RDMA-capable NIC, so the variety of implementations is usually wider, which typically drives down cost.

Q. What are the plans for the major OS suppliers to support NVMe over TCP (Windows, Linux, VMware)?

A. Unfortunately, we cannot comment on their behalf, but Linux already supports NVMe/TCP, which should find its way into the various distributions soon. We are working with others to support NVMe/TCP, but suggest asking them directly.

Q. Where does the overhead occur for NVMe/TCP packetization? Is it dependent on the CPU, or does the network adapter offload that heavy lifting? And what is the impact of numerous, but extremely small, transfers?

A. Indeed, a software NVMe/TCP implementation will introduce overhead resulting from the TCP stream processing. However, you are correct that common stateless offloads such as Large Receive Offload and TCP Segmentation Offload are extremely useful both for large and for small 4K transfers.

Q. What do you mean Absolute Latency is higher than RDMA by “several” microseconds? <10us, tens of microseconds, or 100s of microseconds?

A. That depends on various aspects such as the CPU model, the network infrastructure, the controller implementation, services running on top, etc. Remote access to raw NVMe devices over TCP was measured to add a range between 20-35 microseconds with Linux in early testing, but the degrees of variability will affect this.

Q. Will Wireshark support NVMe/TCP soon? Is an implementation in progress?

A. We most certainly hope so; it shouldn’t be difficult, but we are not aware of an ongoing implementation in progress.

Q. Are there any NVMe/TCP drivers out there?

A. Yes, Linux and SPDK both support NVMe/TCP out-of-the-box, see: https://nvmexpress.org/welcome-nvme-tcp-to-the-nvme-of-family-of-transports/

Q. Do you recommend a dedicated IP network for the storage traffic, or can you use the same corporate network with all other LAN traffic?

A. This really depends on the use case, the network utilization and other factors. Obviously if the network bandwidth is fully utilized to begin with, it won’t be very efficient to add the additional NVMe/TCP “load” on the network, but that alone might not be the determining factor. Otherwise it can definitely make sense to share the same network, and we are seeing customers choosing this route. It might be useful to consider the best practices for TCP-based storage networks (iSCSI has taught valuable lessons), and we anticipate that many of the same principles will apply to NVMe/TCP. The AQM, buffer and other tuning settings are very dependent on the traffic pattern and need to be developed based on the requirements; base configuration is determined by the vendors.

Q. On slide 28: no, TCP needs the congestion feedback, but it doesn’t need to be a drop (it could be ECN, latency variance, etc.).

A. Yes, you are correct. The question refers to how that feedback is received, though, and in the most common (traditional) TCP methods it’s done via drops.

Q. How can you find out/check what TCP stack (drop vs. zero-buffer) your network is using?

A. The use/support of DCTCP is mostly driven by the OS. The network needs to support ECN, and have it enabled and correctly configured, for the traffic of interest. So the best way to figure this out is to talk to the network team; the use of ECN, etc. needs to be worked out between the server and network teams. (The sketch at the end of this post shows a quick host-side check.)

Q. On slide 33: drop is a signal of an overloaded network; congestion onset is when there is a standing queue (latency already increases). The current state of the art is to always overload the network (switches).

A. ECN is used to signal before a drop happens, to make it more efficient.

Q. Is it safe to assume that most current switches on the market today support DCTCP/ECN, and that we can mix/match switches from vendors across product families?

A. Most modern ASICs support ECN today. Mixing different product lines needs to be carefully planned and tested. AQM, buffers, etc. need to be fine-tuned across the platforms.

Q. Is there a substantial cost savings by implementing all of what is needed to support NVMe over TCP, versus just sticking with RDMA? Much like staying with Fibre Channel instead of risking performance with iSCSI not being (and staying) implemented correctly. Building the separately supported network just seems the best route.

A. By “sticking with RDMA” you mean that you have already selected RDMA, which means you already made the investments to make it work for your use case. We agree that changing what currently works reliably and meets the targets might be an unnecessary risk. NVMe/TCP brings a viable option for Ethernet fabrics which is easily scalable and allows you to utilize a wide variety of both existing and new infrastructure while still maintaining low-latency NVMe access.

Q. It seems that with multiple flavors of TCP, and especially of congestion management (DCTCP, DCQCN?), is there a plan for commonality in the ecosystem to support a standard way to handle congestion management? Is that required in the switches or also in the HBAs?

A. DCTCP is an approach for L3-based congestion management, whereas DCQCN is a combination of PFC and ECN for RoCEv2 (UDP)-based communication. So these are two different approaches.

Q. Who are the major players in terms of marketing this technology among storage vendors?

A. The key organization to find out about NVMe/TCP (or all NVMe-related material, in fact) is NVM Express®.

Q. Can I compare NVMe over TCP to iSCSI?

A. Easily: you can download the upstream kernel and test both of the in-kernel implementations (iSCSI and NVMe/TCP). Alternatively, you can reach out to a vendor that supports either of the two to test it as well. You should expect NVMe/TCP to run substantially faster for pretty much any workload.

Q. Is network segmentation crucial as a “go-to” architecture, with host-to-storage proximity the objective, to accomplish managed/throttled, close-to-lossless connectivity?

A. There is a lot to unpack in this question. Let’s see if we can break it down a little. Generally speaking, best practice is to keep the storage as close to the host as possible (and is reasonable). Not only does this reduce latency, but it reduces the variability in latency (and bandwidth) that can occur at longer distances. In cases where storage traffic shares bandwidth (i.e., links) with other types of traffic, the variable nature of different applications (some are bursty, others are more long-lived) can create unpredictability. Since storage – particularly block storage – doesn’t “like” unpredictability, different methods are used to regain some of that stability as scales increase. A common and well-understood best practice is to isolate storage traffic from “regular” Ethernet traffic. As different workloads tend to be either “North-South” but increasingly “East-West” across the network topologies, this network segmentation becomes more important. Of course, it’s been used as a typical best practice for many years with protocols such as iSCSI, so this is not new. In environments where the variability of congestion can have a profound impact on storage performance, network segmentation will, indeed, become crucial as a “go-to” architecture. Proper techniques at L2 and L3 will help determine how close to a “lossless” environment can be achieved, of course, as well as properly configured QoS mechanisms across the network. As a general rule of thumb, though, network segmentation is a very powerful tool to have for reliable storage delivery.

Q. How close are we to shared NVMe storage, either over Fibre Channel or TCP?

A. There are several shared storage products available on the market for NVMe over Fabrics, but as of this writing (only 3 months after the ratification of the protocol) no major vendors have announced NVMe over TCP shared storage capabilities. A good place to look for updates is the NVM Express website for interoperability and compliance products. [https://nvmexpress.org/products/]

Q. AQM → DualQ work is under way in the IETF for coexisting L4S (DCTCP) and legacy TCP, with ongoing work at the chip merchants.

A. Indeed, there are a lot of advancements around making TCP evolve as the speeds and feeds increase. This is yet another example that shows why NVMe/TCP is, and will remain, relevant in the future.

Q. Are there any major vendors who are pushing products based on these technologies?

A. We cannot comment publicly on any vendor plans. You would need to ask a vendor directly for a concrete timeframe for the technology. However, several startups have made public announcements on supporting NVMe/TCP. Lightbits Labs, to give one example, will have a high-performance, low-latency NVMe/TCP-based software-defined-storage solution out very soon.
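On the question above about checking which TCP stack you are running: the host side of that answer is easy to inspect on Linux, as in this small sketch. The /proc paths are Linux-specific, and this tells you nothing about whether the switches are actually ECN-marking; that still has to be confirmed with the network team:

    # Quick host-side check of the TCP congestion-control and ECN settings
    # relevant to DCTCP deployments (Linux only). The network fabric must
    # separately be configured to ECN-mark rather than drop.
    from pathlib import Path

    def sysctl(name: str) -> str:
        # sysctl names map to files: net.ipv4.tcp_ecn -> /proc/sys/net/ipv4/tcp_ecn
        return Path("/proc/sys", *name.split(".")).read_text().strip()

    print("congestion control:", sysctl("net.ipv4.tcp_congestion_control"))
    print("available:", sysctl("net.ipv4.tcp_available_congestion_control"))
    print("tcp_ecn:", sysctl("net.ipv4.tcp_ecn"))  # 0=off, 1=request, 2=reply-only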

Understanding Kubernetes in the Cloud

Mike Jochimsen

Mar 25, 2019

Ever wonder why and where you would want to use Kubernetes? You’re not alone; that’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast on May 2, 2019, “Kubernetes in the Cloud.”

Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization.

In this webcast, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on storage aspects and best practices, discussing:

  • What is Kubernetes? Why would you want to use it?
  • How does Kubernetes help in a multi-cloud/private cloud environment?
  • How does Kubernetes orchestrate and manage storage?
  • Can Kubernetes use Docker?
  • How do we provide persistence and data protection? (see the short sketch after this list)
  • Example use cases
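Here is that sketch: a minimal PersistentVolumeClaim created with the official Kubernetes Python client (pip install kubernetes). It assumes a reachable cluster whose default StorageClass can dynamically provision volumes; all names are illustrative:

    # Minimal sketch: ask Kubernetes for persistent storage via a
    # PersistentVolumeClaim, using the official Python client.
    # Assumes a reachable cluster with a default StorageClass.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running in a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),  # illustrative name
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],              # read-write by one node
            resources=client.V1ResourceRequirements(
                requests={"storage": "1Gi"}              # ask the provisioner for 1 GiB
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

A pod then mounts the claim by name, which is how Kubernetes decouples the workload from whatever storage backend actually satisfies the claim.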

We’re fortunate to have great experts for this session: Matt Baldwin, the founder and former CEO of Stackpoint Cloud and now with NetApp, and Ingo Fuchs, Chief Technologist, Cloud and DevOps at NetApp.

I hope you will register today to join us on May 2nd. It’s live, which means our expert presenters will be on hand to answer your questions on the spot.
