Feb 21, 2014
Join SSSI members for an “open” call on March 10, 2014 at 7:00 pm ET/4:00 pm PT. SSSI member Jerome McFarland, Principal Product Marketer at Diablo Technologies, will talk about how memory channel storage technology works, how it’s deployed, and its advantages for applications.
Dial in at 1-866-439-4480 passcode 57236696. A WebEx will be available at http://snia.webex.com, meeting number 792 152 928, password sssipcie.
The SSSI PCIe SSD Committee is an SSSI member committee that provides guidance to the marketplace on SSDs. This can take the form of educational materials, best practices documents, and SNIA standards.
The “open” calls of the SSSI PCIe SSD Committee are designed to foster a broad understanding of technologies. All SNIA and SSSI members, and those who are simply interested in technology advances, are welcome to attend. Spread the word!
Feb 11, 2014
Why the FCoE – iSCSI Debate Continues
This is my first blog post for SNIA-ESF. As a Principal Storage Architect, I have spent the last several years doing extensive research on the factors driving the choice between FCoE and iSCSI. The more I dive into the topic, the more intriguing the debate becomes. In fact, this blog is a preview of an upcoming white paper I’m writing and a Webcast SNIA is hosting on February 18th. If you agree this debate is interesting, I encourage you to attend. Details on the Webcast are at the end of this post.
A Look Back at FCoE and iSCSI History
There are two entrenched standards for block storage protocols over Ethernet networks: iSCSI, ratified in 2004, and FCoE, ratified in 2009. Of course, various vendors and early adopters supported these protocols before ratification, so each protocol’s history is a couple of years longer than its ratification date suggests. While iSCSI simply encapsulates the SCSI protocol in TCP/IP, FCoE operates lower in the network stack, which required many enhancements to Ethernet. As a result, iSCSI runs on any IP network (mostly Ethernet these days), while FCoE requires Data Center Bridging and Converged Network Adapters, all running at 10 Gbps or faster.
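To make the encapsulation difference concrete, here is a back-of-envelope sketch comparing per-frame header overhead for the two stacks. The header sizes are the common textbook values (iSCSI Basic Header Segment per RFC 3720, FCoE encapsulation per FC-BB-5); the 2048-byte payload and one-PDU-per-frame assumption are mine for illustration, and real traffic will vary with TCP options, VLAN tags, digests, and partial frames. The point is that wire overhead is similar for both protocols; the bigger practical difference is host-side TCP/IP processing, which is where CPU utilization diverges.

```python
# Rough, illustrative comparison of per-frame encapsulation overhead for
# iSCSI vs. FCoE. Header sizes are textbook values; the payload size and
# one-PDU-per-frame assumption are simplifications for illustration only.

ETH_HDR, ETH_FCS = 14, 4        # Ethernet II header and frame check sequence
IPV4_HDR, TCP_HDR = 20, 20      # IPv4 and TCP headers, no options
ISCSI_BHS = 48                  # iSCSI Basic Header Segment (RFC 3720)
FCOE_HDR, FCOE_EOF = 14, 4      # FCoE encapsulation header and EOF/padding
FC_HDR, FC_CRC = 24, 4          # Fibre Channel frame header and CRC

def iscsi_overhead() -> int:
    # One iSCSI PDU in one TCP segment in one (jumbo) Ethernet frame
    return ETH_HDR + IPV4_HDR + TCP_HDR + ISCSI_BHS + ETH_FCS

def fcoe_overhead() -> int:
    # One Fibre Channel frame encapsulated in one Ethernet frame
    return ETH_HDR + FCOE_HDR + FC_HDR + FC_CRC + FCOE_EOF + ETH_FCS

if __name__ == "__main__":
    payload = 2048  # bytes of SCSI data carried per frame, for comparison
    for name, overhead in (("iSCSI", iscsi_overhead()), ("FCoE", fcoe_overhead())):
        efficiency = payload / (payload + overhead)
        print(f"{name}: {overhead} header bytes per frame, "
              f"~{efficiency:.1%} wire efficiency at {payload}-byte payloads")
```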
All of the Data Center Bridging enhancements that make FCoE possible, like lossless Ethernet, benefit all of the protocols using Ethernet as the transport protocol. DCB doesn’t just make FCoE possible, but it improves iSCSI at the same time (see the SNIA-ESF blog, How DCB Makes iSCSI Better). So given that modern servers, networks, and storage may all be connected by hardware capable of running FCoE, that same network is also able to run iSCSI, as well as other network traffic. Nothing precludes them from running simultaneously on the same network either. The leading storage vendors that offer both FCoE and iSCSI target systems allow administrators to present the same LUN over either protocol with little effort, so a transition from one protocol to the other is not difficult.
Strengths and Weaknesses
So which network protocol is the right choice?
Each protocol has strengths and weaknesses when judged relative to the other. FCoE delivers higher throughput at lower host CPU utilization than iSCSI, in part because it doesn’t have to process the TCP/IP stack as iSCSI does. iSCSI is relatively simple to set up and troubleshoot compared to FCoE, because zoning is not a factor and IP connectivity (although not optimized for storage traffic) is likely in place already. Also, while FCoE has a comprehensive set of existing tools available to ease troubleshooting, most enterprises don’t have as many qualified people to use them. Ease of use, plus the ability to use low-cost NICs and switches, gives iSCSI a cost advantage. (However, if you check out our SNIA-ESF webcast, “How VN2VN Will Help Accelerate Adoption of FCoE,” you’ll hear about new technologies that reduce the costs of deploying FCoE.) FC, and by extension FCoE, is perceived to be enterprise-grade and suitable for all workloads; and while iSCSI is being widely adopted at the enterprise level, it is still perceived by some not to be ready for Tier-1 applications. The graph below is excerpted from the report “Intel 10GbE Adapter Performance Evaluation,” prepared by Demartek for Intel in September 2010. This data is consistent with the rest of the report’s findings and is only intended to be representative of the results from comparative iSCSI and FCoE testing. The report is interesting reading and I recommend looking at it for more information. The graph shows IOPS and CPU utilization for JetStress tests running against NetApp storage over multi-path iSCSI and FCoE. Note that latencies were all similar, and running the tests against EMC storage showed similar results.
Many other factors must be considered, but according to industry pundits, as well as my own personal experience, in the majority of cases either protocol is adequate for the task at hand: effectively transferring block data across an Ethernet network.
Maximizing Throughput
The reality is, most servers, applications, and storage arrays simply won’t take full advantage of FCoE’s superior performance, or of any storage protocol running over 10GbE. iSCSI and NAS protocols are very fast and are typically sufficient to meet most application requirements. This is not meant to be a SAN vs. NAS post; years of history, thousands of happy end users, and billions of dollars of continued investment show that both work well enough to meet most business needs. Commonly deployed storage systems and hosts are simply not configured with enough hardware to saturate multiple 10 gigabit network links. While systems capable of saturating 10GbE pipes are rare today, they will become more common in the near future, especially as flash memory, whether in all-flash arrays or tiered storage systems, finds more application. (Hear more on the impact of flash in our SNIA-ESF webcast, “Flash – Plan for the Disruption.”) At least for spinning-media disk systems, network bandwidth increases faster than storage system throughput can keep up, so consider the storage system to be the bottleneck or limiting factor when evaluating storage network performance. After all, in most data center environments the ratio of servers and applications to storage systems is high, so it’s reasonable to expect the storage system to be the bottleneck. The absolute throughput of FCoE and iSCSI when pushing a storage system to its limits is not, by itself, a sufficient basis for the decision between the two protocols except in a few edge cases. Bottom line: whether the storage system or the network is the bottleneck, the performance relationship between FCoE and iSCSI does not change.
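To put some rough numbers behind the bottleneck argument, here is a minimal sketch. The per-drive throughput and IOPS figures are ballpark assumptions for 2014-era 15K RPM disks, not measurements, but they show why a modest spinning-media array struggles to fill even one 10GbE link under random I/O.

```python
# Rough illustration of why the storage system, not the network, is usually
# the bottleneck. Per-drive figures are ballpark assumptions, not benchmarks.

LINK_MB_PER_S = 10 * 1000 / 8       # 10GbE: ~1250 MB/s of raw wire bandwidth

SEQ_MB_PER_DRIVE = 175              # sequential streaming, 15K RPM HDD (assumed)
RAND_IOPS_PER_DRIVE = 180           # small-block random IOPS, 15K RPM HDD (assumed)
IO_SIZE_KB = 8                      # typical database page size

rand_mb_per_drive = RAND_IOPS_PER_DRIVE * IO_SIZE_KB / 1024

print(f"Drives needed to fill a 10GbE link (sequential): "
      f"{LINK_MB_PER_S / SEQ_MB_PER_DRIVE:.0f}")
print(f"Drives needed to fill a 10GbE link (8 KB random): "
      f"{LINK_MB_PER_S / rand_mb_per_drive:.0f}")
```

Under these assumptions, a handful of drives streaming sequentially can saturate the link, but hundreds are needed for small random I/O, which is part of why flash changes the picture.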
The edge cases mentioned above tend to be extremely IO-intensive database workloads and big data applications, such as Hadoop. As the graph above shows, FCoE is about 15-20% faster than iSCSI on identical hardware. Granted, this is a single graph of a single test, but the data is consistent across tests performed by IBM using Emulex network interfaces. If absolute throughput and efficiency (both network and CPU) were the only criteria when deciding between block protocols, FCoE would look like the choice. But since those cases are rare, and complexity, supportability, and even politics are almost always considered, the decision is not so obvious. Again, although beyond the scope of this article, NAS protocols should also be considered when determining the proper protocol for an application.
Is There a Clear Winner?
While FCoE can claim technical superiority, iSCSI has the edge in cost and supportability. The number and range of systems supporting iSCSI connectivity is greater, particularly at the entry level. What’s more, the availability of people who can troubleshoot end-to-end connectivity for iSCSI is also much greater. (The “ping” command diagnoses most iSCSI connectivity problems.) Also, do a resume search on Monster or LinkedIn and you’ll find that the number of people who can configure VLANs dwarfs the number who can properly zone a Fibre Channel network. Greater familiarity reduces the support and operating cost of iSCSI.
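As a small illustration of how simple iSCSI troubleshooting can be, the sketch below checks whether an initiator host can even reach a target portal on the standard iSCSI TCP port (3260). The portal address is a placeholder; in practice you would follow this with discovery and login using your initiator’s own tools.

```python
# Minimal reachability check for an iSCSI target portal: resolve the address
# and open a TCP connection to the default iSCSI port (3260).
# The portal address below is a placeholder; substitute your own target.

import socket

def check_iscsi_portal(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:
        print(f"Cannot reach {host}:{port} - {err}")
        return False

if __name__ == "__main__":
    if check_iscsi_portal("192.0.2.10"):  # placeholder address
        print("Portal reachable; next step is discovery and login.")
```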
IDC predicts that FCoE revenue will ramp very quickly through 2016. (If available to you, see the IDC Worldwide Enterprise Storage Systems 2012-2016 Forecast Update.) As customers decide to transition existing Fibre Channel networks to an Ethernet infrastructure, deploying FCoE would be a comfortable choice due to existing IT expertise and functional expectations of the Fibre Channel protocol.
Both iSCSI and FCoE are capable storage protocols, and choosing one over the other will likely depend on budget, IT skill set, and application requirements.
Don’t forget to join us on Feb. 18th
Again, I encourage you to attend our February 18th Webcast, “Use Cases for iSCSI and FCoE – Where Each Makes Sense.” Analysts from Dell’Oro Group will share their latest market research on this topic and I’ll dive into use cases for both iSCSI and FCoE. It’s a live event, so please come with your toughest questions. I hope you’ll join us!
Jan 28, 2014
Technology continues to advance rapidly. Making sense of it all can be a challenge. At the SNIA Ethernet Storage Forum, we focus on storage technologies and solutions enabled by and associated with Ethernet networks. Last year, we modified the charters of our two Special Interest Groups (SIGs) to address topics about file protocols and storage over Ethernet. The File Protocols SIG includes the prior focus on Network File System (NFS) related topics and adds discussions around Server Message Block (SMB/CIFS). We had our first webcast last November on the topic of SMB 3.0, and it was our best attended webcast ever. The Storage over Ethernet SIG focuses on general Ethernet storage topics as well as more information about technologies like FCoE, iSCSI, Data Center Bridging, and virtual networking for storage. I encourage you to check out other articles on these hot topics in this SNIA-ESF blog to hear from our member experts as well as guest posts from leading analysts.
2013 was a busy year and we are already kickin’ it in 2014. This should be an exciting year in IT. Data storage continues to be a hot sector, especially in the areas of All-Flash and Hybrid arrays. This year, we expect to see new standards coming out of the T11 committee for Fibre Channel and possibly FCoE, as well as progress in high speed Ethernet networks. Lower cost network interconnects will facilitate adoption of high speed networks in the small to midsize business segment. And a new conversation around “Software Defined…” should push a lot of ink in trade rags and other news sources. Oh, and don’t forget about the “Internet of Things”, mobile solutions, and all things Cloud.
The ESF will be addressing the impact on Ethernet storage solutions from these hot technologies. Next month, on February 18th, experts from the ESF, along with industry analysts from Dell’Oro Group, will speak to the benefits and best practices of deploying FCoE and iSCSI storage protocols. This presentation, “Use Cases for iSCSI and Fibre Channel: Where Each Makes Sense,” will be part of an upcoming BrightTalk Summit on Storage Networking. I encourage you to register for this session. Additionally, we will be publishing a couple of white papers on file-based storage and a review of FCoE and iSCSI in storage applications.
Finally, SNIA will be kicking off the first year of its new user conference, the Data Storage Innovation Conference. This will be one of the few storage-focused user conferences in the market and should be quite interesting.
We’re excited about our growing membership and our plans for 2014. Our goal is to advance application of innovative technologies and we encourage you to send us mail or comment below with topics that are of interest to you.
Here’s to an exciting 2014!
Jan 28, 2014
SNIA has just announced a new special interest group around NVDIMM to:
Initial members of the NVDIMM SIG include vendors AgigA Tech, IDT, Inphi, Intel, Micron, Microsoft Corporation, Netlist, Pericom, Samsung, SK Hynix, SMART Modular Technologies, and Viking Technology.
A new webpage under the Solid State Storage Technology Community on the SNIA website at www.snia.org/nvdimm provides a knowledge resource for presentations, white papers, FAQs, and webcasts on NVDIMM contributed by SIG companies. Those interested in joining the NVDIMM SIG should contact nvdimmsigchair@snia.org.
Jan 24, 2014
Interested in the latest information on SSD technology? Join the SNIA Solid State Storage Initiative Monday January 27 for lunch and an afternoon of the latest on:
Lunch begins at noon, with presentations from 1:00 pm – 4:00 pm. There is no charge to attend this session at the Sainte Claire Hotel in downtown San Jose, CA. You can attend in person (register at www.snia.org/events/symp2014) or by WebEx (click here for details and the agenda).
Jan 15, 2014
Over 150 individuals participated in the BrightTALK Enterprise Storage Summit NVDIMM webcast. If you are eager for more information on NVDIMM, you will want to attend an upcoming SNIA event – the Storage Industry Summit on Non-Volatile Memory.
This Summit will take place at the Sainte Claire Hotel in San Jose, CA on January 28th as part of the SNIA Annual Members’ Symposium, and will offer critical insights into NVM, including NVDIMMs, and the future of computing. This event is complimentary to attend and you can register here.
The Summit will take place from 8:15 AM to 5:30 PM and speakers currently include:
Visit http://www.snia.org/nvmsummit for more information and we hope you will join us in San Jose!
Jan 10, 2014