Virtualization and Storage Networking Best Practices from the Experts

J Metz

Nov 26, 2018

Ever make a mistake configuring a storage array, or wonder if you’re maximizing the value of your virtualized environment? With all the different storage arrays and connectivity protocols available today, knowing best practices can help improve operational efficiency and ensure resilient operations. That’s why the SNIA Networking Storage Forum is kicking off 2019 with a live webcast, “Virtualization and Storage Networking Best Practices.” In this webcast, Jason Massae from VMware and Cody Hosterman from Pure Storage will share insights and lessons learned, as reported by VMware’s global storage services, discussing:
  • Common mistakes when setting up storage arrays
  • Why iSCSI is the number one storage configuration problem
  • Configuring adapters for iSCSI or iSER
  • How to verify your PSP matches your array requirements
  • NFS best practices
  • How to maximize the value of your array and virtualization
  • Troubleshooting recommendations
Register today to join us on January 17th. Whether you’ve been configuring storage for VMs for years or are just getting started, we think you’ll pick up some useful tips for optimizing your storage networking infrastructure.

RDMA for Persistent Memory over Fabrics – FAQ

John Kim

Nov 14, 2018

In our most recent SNIA Networking Storage Forum (NSF) webcast, “Extending RDMA for Persistent Memory over Fabrics,” our expert speakers, Tony Hurson and Rob Davis, outlined extensions to RDMA protocols that confirm persistence and can additionally order successive writes to different memories within the target system. Hundreds of people have seen the webcast and given it a 4.8 rating on a scale of 1-5! If you missed it, you can watch it on-demand at your convenience. The webcast slides are also available for download. We had several interesting questions during the live event. Here are answers from our presenters:

Q. For the RDMA Message Extensions, does the client have to qualify a WRITE completion with only an Atomic Write Response and not with a Commit Response?
A. If an Atomic Write must be confirmed persistent, it must be followed by an additional Commit Request. Built-in confirmation of persistence was dropped from the Atomic Request because it adds latency and is not needed for some application streams.

Q. Why do you need confirmation for writes? From my point of view, the only thing required is ordering.
A. Agreed, but only if the entire target system is non-volatile! Explicit confirmation of persistence is required to cover the “gap” between the Write completing in the network and the data reaching persistence at the target.

Q. Where are these messages being generated? Does the NIC know when the data is flushed or committed?
A. They are generated by the application that has reserved the memory window on the remote node. It can write to that window using RDMA Writes all it wants, but to guarantee persistence it must send a flush.

Q. How is RPM presented on the client host?
A. The application using it sees it as memory it can read and write.

Q. Does this RDMA Commit Response implicitly ACK any previous RDMA Sends/Writes to the same or a different MR?
A. Yes, the new Commit (and Verify and Atomic Write) Responses have the same acknowledgement coalescing properties as the existing Read Response. That is, a Commit Response is explicit (non-coalesced), but it coalesces/implies acknowledgement of prior Write and/or Send Requests.

Q. Does this one still have the current RDMA Write ACK?
A. See the previous answer. Yes, a Commit Response implicitly acknowledges prior Writes.

Q. With respect to the race hazard explained to show the need for an explicit completion response, wouldn’t this be the case even with non-volatile memory, if the data were to be stored in non-volatile memory? Why is this completion status required only in the non-volatile case?
A. Most networked applications that write over the network to volatile memory do not require explicit confirmation at the writer endpoint that data has actually reached there. If they do, additional handshake messages are usually exchanged between the endpoint applications. On the other hand, a writer to PERSISTENT memory across a network almost always needs assurance that data has reached persistence, hence the new extension.

Q. What if you are using multiple RNICs with multiple ports to multiple ports on a 100Gb fabric for server-to-server RDMA? How is order kept there…by CPU software or “NIC teaming plus”?
A. This would depend on the RNIC vendor and their implementation.

Q. What is the time frame for these new RDMA messages to be available in the verbs API?
A. This depends on the IBTA standards approval process, which is not completely predictable; roughly sometime in the first half of 2019.

Q. Where could I find more details about the three new verbs (what are the arguments)?
A. Please check with the IBTA and IETF organizations toward the end of calendar year 2018, when first drafts of the extension documents are expected to be available.

Q. Do you see this technology being used in a way similar to how hyperconverged systems now use storage, or could you see it used as a large shared memory subsystem in the network?
A. High-speed persistent memory, in either NVDIMM or SSD form factor, has enormous potential for speeding up hyperconverged write replication. It will, however, require a substantial rewrite of such storage stacks, moving for example from traditional three-phase block storage protocols (command/data/response) to an RDMA write/confirm model. More generally, the RDMA extensions are useful for distributed shared PERSISTENT memory applications.

Q. What would be the most useful performance metrics for debugging performance issues in such environments?
A. Within the RNIC, basic counts of the new message types would be a baseline. These, plus the total stall time the RNIC spends awaiting Commit Responses from the local CPU subsystem, would be useful. Within the CPU platform, basic counts of device write and read requests targeting persistent memory would be useful.

Q. Do all RDMA NICs have to update their firmware to support these new verbs? What is the expected performance improvement with the new Commit message?
A. Both answers depend on the RNIC vendor and their implementation.

Q. Will the three new verbs be implemented in the RNIC alone, or will they require changes in other places (processor, memory controllers, etc.)?
A. The new Commit Request requires the CPU platform and its memory controllers to confirm that prior write data has reached persistence. The new Atomic Write and Verify messages, however, may be executed entirely within the RNIC.

Q. What about the future of NVMe over TCP? It would be much simpler for people to implement. Is this a good option?
A. Again, this depends on the NIC vendor and their implementation. Different vendors have run various performance tests. We recommend readers do their own due diligence.
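To make the write-then-commit pattern discussed above concrete, here is a hedged sketch in C against the libibverbs posting model. The Commit operation is not in any shipping verbs API: the IBV_WR_COMMIT opcode below is a placeholder we invented to illustrate the flow described in the webcast, and its value and semantics are assumptions; only the work-request chaining and the ibv_post_send call are real API.

```c
/* Hypothetical sketch of the write-then-commit pattern from the webcast.
 * IBV_WR_COMMIT is NOT a real libibverbs opcode; it is a placeholder for
 * the proposed IBTA Commit extension. The surrounding setup (QP, SGE,
 * registered remote window) is assumed to exist. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Placeholder opcode value for the proposed Commit request (assumption). */
#define IBV_WR_COMMIT ((enum ibv_wr_opcode)0x40)

int write_then_commit(struct ibv_qp *qp, struct ibv_sge *sge,
                      uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_send_wr write_wr, commit_wr, *bad_wr = NULL;

    memset(&write_wr, 0, sizeof(write_wr));
    memset(&commit_wr, 0, sizeof(commit_wr));

    /* Step 1: ordinary RDMA Write. Its completion alone does not
     * confirm persistence at the target. */
    write_wr.opcode              = IBV_WR_RDMA_WRITE;
    write_wr.sg_list             = sge;
    write_wr.num_sge             = 1;
    write_wr.wr.rdma.remote_addr = remote_addr;
    write_wr.wr.rdma.rkey        = rkey;
    write_wr.next                = &commit_wr;   /* chain the Commit behind it */

    /* Step 2: proposed Commit request. Its (non-coalesced) response would
     * confirm that the prior Write has reached persistence, and would
     * implicitly acknowledge that Write as well. */
    commit_wr.opcode              = IBV_WR_COMMIT;     /* hypothetical */
    commit_wr.send_flags          = IBV_SEND_SIGNALED; /* we want its completion */
    commit_wr.wr.rdma.remote_addr = remote_addr;
    commit_wr.wr.rdma.rkey        = rkey;

    /* Post both work requests; the caller then polls the CQ for the Commit
     * completion before telling the application the data is durable. */
    return ibv_post_send(qp, &write_wr, &bad_wr);
}
```

Until the extension is standardized, that durability confirmation has to come from an application-level handshake, which is exactly the gap the presenters describe.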


How Scale-Out Storage Changes Networking Demands

Fred Zhang

Oct 23, 2018

Scale-out storage is increasingly popular for Cloud, High-Performance Computing, Machine Learning, and certain Enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines. But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other while also communicating with clients. Because of these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webcast, “Networking Requirements for Scale-Out Storage,” on November 14th. I hope you will join my NSF colleagues and me to learn about:
  • Scale-out storage solutions and what workloads they can address
  • How your network may need to evolve to support scale-out storage
  • Network considerations to ensure performance for demanding workloads
  • Key considerations for all flash scale-out storage solutions
Register today. Our NSF experts will be on hand to answer your questions.


Deciphering the Economics of Building a Cloud Storage Architecture

Eric Lakin

Oct 11, 2018

Building a cloud storage architecture requires storage vendors, cloud service providers, and large enterprises alike to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture. That’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast, “Create a Smart and More Economic Cloud Storage Architecture,” on November 7th.

From an economic perspective, cloud infrastructure is often procured in the traditional way: prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures, and cost recovery is slowed by fluctuating customer adoption. Giving large enterprises and cloud service providers flexibility in the procurement model for their storage allows them to align infrastructure expenditures more closely with cost recovery from customers, optimizing the use of both CapEx and OpEx budgets (a simple worked example of this trade-off appears after the list below).

From a technical perspective, clouds inherently require unpredictable scalability, both up and down. Building a storage architecture that can rapidly allocate resources for a specific customer need, and reallocate them as customer requirements change, allows for storage capacity optimization, creating performance pools in the data center without compromising responsiveness to changing needs. Such an architecture should also align with the data-center-level orchestration system to allow an even higher level of resource optimization and flexibility.

In this webcast, you will learn:
  • How modern storage technology allows you to build this infrastructure
  • The role of software defined storage
  • Accounting principles that impact CapEx and OpEx
  • How to model the cloud costs of new applications and/or of re-engineering existing applications
  • Performance considerations
Register today. Our CSTI experts will be on hand to answer your questions on the spot. We hope to see you on November 7th.
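As promised above, here is a small, self-contained sketch of the procurement arithmetic. Every number in it (prices, capacities, growth curve, term) is invented purely for illustration; the point is only that paying for capacity as demand materializes tracks cost recovery far better than a single up-front, over-provisioned purchase.

```c
/* Illustrative arithmetic only: compares a prepaid, over-provisioned
 * purchase against demand-aligned procurement. All figures are invented
 * for the example and carry no vendor pricing information. */
#include <stdio.h>

int main(void)
{
    const double price_per_tb_upfront = 100.0; /* $/TB, prepaid (assumed) */
    const double price_per_tb_month   = 4.0;   /* $/TB-month, flexible (assumed) */
    const int    months               = 36;    /* three-year horizon */

    /* Traditional model: buy on day one for the year-three peak. */
    const double provisioned_tb = 500.0;
    double capex = provisioned_tb * price_per_tb_upfront;

    /* Flexible model: pay each month for what is actually used,
     * assuming demand grows linearly from 100 TB to 500 TB. */
    double opex = 0.0;
    for (int m = 0; m < months; m++) {
        double used_tb = 100.0 + (500.0 - 100.0) * m / (months - 1);
        opex += used_tb * price_per_tb_month;
    }

    printf("Prepaid CapEx:       $%.0f\n", capex);  /* $50000 */
    printf("Demand-aligned OpEx: $%.0f\n", opex);   /* ~$43200, spread over 3 years */
    return 0;
}
```

The absolute totals matter less than their shape: the prepaid model concentrates spending before any customer revenue arrives, while the demand-aligned model spreads it across the same period in which adoption (and cost recovery) actually occurs.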


Introducing the Networking Storage Forum

John Kim

Oct 9, 2018

At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high-quality educational and informational material related to all kinds of storage.

From our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, to the absolutely phenomenal (and required viewing) “Storage Performance Benchmarking” series to the “Great Storage Debates” series, we’ve produced dozens of hours of material.

Technologies have evolved, and we’ve come to a point where there’s a need to understand how these systems and architectures work, beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, RDMA, and NVMe-oF, there is more to storage networking than just your favorite transport. For example, when we talk about NVMe™ over Fabrics, the protocol is broader than just one way of accomplishing what you need. When we talk about virtualized environments, we need to examine the nature of the relationship between hypervisors and all kinds of networks. When we look at “Storage as a Service,” we need to understand how we can create workable systems from all the tools at our disposal.

Bigger Than Our Britches

As I said, SNIA’s Ethernet Storage Forum has been working to bring these new technologies to the forefront, so that you can see (and understand) the bigger picture. To that end, we realized that we needed to rethink the way that our charter worked, to be even more inclusive of technologies that were relevant to storage and networking.

So… introducing the Networking Storage Forum. In this group we’re going to continue producing top-quality, vendor-neutral material related to storage networking solutions. We’ll be talking about:
  • Storage Protocols (iSCSI, FC, FCoE, NFS, SMB, NVMe-oF, etc.)
  • Architectures (Hyperconvergence, Virtualization, Storage as a Service, etc.)
  • Storage Best Practices
  • New and developing technologies
… and more! Generally speaking, we’ll continue to do the same great work that we’ve been doing, but now our name more accurately reflects the breadth of work that we do. We’re excited to launch this new chapter of the Forum. If you work for a vendor, are a systems integrator or a university, or manage storage yourself, we welcome you to join the NSF. We are an active group that honestly has a lot of fun. If you’re one of our loyal followers, we hope you will continue to keep track of what we’re doing. And if you’re new to this Forum, we encourage you to take advantage of the library of webcasts, white papers, and published articles that we have produced here. There’s a wealth of unbiased, educational information there that we don’t think you’ll find anywhere else! If there’s something you’d like to hear about, let us know! We are always looking to hear about headaches, concerns, and areas of confusion within the industry where we can shed some light. Stay current with all things NSF.


Oh What a Tangled Web We Weave: Extending RDMA for PM over Fabrics

John Kim

Oct 8, 2018

For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here Persistent Memory over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA data reads or writes from/to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target (a minimal code sketch of this gap follows at the end of this post). Join the Networking Storage Forum (NSF) on October 25, 2018 for our next live webcast, “Extending RDMA for Persistent Memory over Fabrics.” In this webcast, we will outline extensions to RDMA protocols that confirm such persistence and can additionally order successive writes to different memories within the target system. Learn:
  • Why we can’t treat PM just like traditional storage or volatile memory
  • What happens when you write to memory over RDMA
  • Which programming model and protocol changes are required for PMoF
  • How proposed RDMA extensions for PM would work
We believe this webcast will appeal to developers of low-latency and/or high-availability datacenter storage applications, and be of interest to datacenter developers, administrators, and users. I encourage you to register today. Our NSF experts will be on hand to answer your questions. We look forward to you joining us on October 25th.
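As promised above, here is a minimal sketch of the persistence gap, written in C against the existing libibverbs API. The helper name and the surrounding setup (queue pair, completion queue, registered buffers) are our assumptions for illustration; the verbs calls themselves (ibv_post_send, ibv_poll_cq) are standard.

```c
/* Minimal sketch, using only the existing libibverbs API, of why an RDMA
 * Write completion is not a persistence guarantee. The QP, CQ, and SGE
 * are assumed to have been set up and registered by the caller. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int rdma_write_no_persistence_guarantee(struct ibv_qp *qp, struct ibv_cq *cq,
                                        struct ibv_sge *sge,
                                        uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_send_wr wr, *bad_wr = NULL;
    struct ibv_wc wc;
    int n;

    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write to remote PM */
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
    wr.sg_list             = sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Busy-poll for the Write completion. */
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;
    if (n < 0 || wc.status != IBV_WC_SUCCESS)
        return -1;

    /* At this point the initiator knows only that the fabric delivered the
     * data toward the target RNIC. The data may still sit in a volatile
     * buffer on its way to persistent memory: this is the gap the proposed
     * RDMA extensions are designed to close. */
    return 0;
}
```

The completion says the fabric did its job; whether the bytes have reached persistent media on the target is exactly the question the proposed extensions answer.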

