
iSCSI over DCB: Reliability and Predictable Performance

Gary Gumanow

Jul 21, 2010

Welcome back. Following up on the previous blog post on iSCSI over DCB, this post highlights just some of the benefits that DCB can deliver.

DCB extends Ethernet with a network infrastructure that virtually eliminates packet loss, enabling improved data networking and management within the DCB environment through priority flow control (P802.1Qbb), enhanced transmission selection (P802.1Qaz), congestion notification (P802.1Qau), and capability discovery. The result is more deterministic network behavior. DCB is enabled through enhanced switches, server network adapters, and storage targets.

DCB delivers a "lossless" network and makes network performance extremely predictable. Standard Ethernet performs very well, but its performance varies slightly (see graphic); with DCB, the maximum performance is the same, but performance varies very little. This is extremely beneficial for data center managers, enabling them to better predict performance levels and deliver smooth traffic flows. In fact, under test, DCB eliminated packet re-transmissions caused by dropped packets, making not only the network more efficient, but the host servers as well.

SEGREGATING AND PRIORITIZING TRAFFIC

In the past, storage networking best practices recommended physically separating data network traffic from storage network traffic. Today's servers commonly have quad-port GbE adapters to ensure sufficient bandwidth, so segregating traffic has been easy - for example, two ports can be assigned to the storage network and two to the data network. In some cases these Ethernet adapters aggregate ports together to deliver even greater throughput for servers.

With the onslaught of virtualization in the data center, consolidated server environments face a different situation. Virtualization software can simplify connectivity by using 10 GbE server adapters - consolidating bandwidth instead of distributing it among multiple ports and a tangle of wires. These 10 GbE adapters handle all the traffic - database, web, management, and storage - improving infrastructure utilization rates. But with traffic consolidated onto fewer, larger connections, how does IT segregate the storage and data networks, prioritize traffic, and guarantee service levels?

Data Center Bridging includes prioritization functionality, which improves management of traffic flowing over fewer, larger pipes. In addition to setting priority queues, DCB can allocate portions of bandwidth. For example, storage traffic can be configured at a higher priority than web traffic, and the administrator can allocate 60% of the bandwidth to storage traffic and 40% to web traffic, ensuring smooth operations and predictable performance for all.

Thanks again for checking back here at this blog. Hopefully you find this information useful and a good use of your last four minutes. If you have any questions or comments, please let me know.

Regards, Gary
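To make the bandwidth-allocation example concrete, here is a minimal Python sketch of the arithmetic behind an ETS-style weighted split of a single 10 GbE link. The traffic classes and the 60/40 split are taken from the example above; this is a reasoning aid, not an actual DCB switch or adapter configuration.

```python
# ETS-style bandwidth allocation on a consolidated 10 GbE link.
# The classes and percentages below are the illustrative 60/40 example
# from the post, not a real DCB configuration.
LINK_GBPS = 10
allocation = {"storage (iSCSI)": 0.60, "web/data": 0.40}

for traffic_class, share in allocation.items():
    print(f"{traffic_class}: guaranteed at least {share * LINK_GBPS:.0f} Gb/s")

# Under enhanced transmission selection these are minimum guarantees;
# bandwidth left idle by one class may be used by the others, so the
# link is never artificially capped.
```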


SNIA Cloud Activities for 2010

mac

Jul 9, 2010

title of post

Given that it’s the middle of summer it may be hot where you are, but the SNIA Cloud activities are heating up for the remainder of this year, and you don’t want to be left out.

SNIA Summer Symposium

At the end of July every year SNIA hosts a Symposium in San Jose for all the groups. The Cloud Storage TWG will be meeting from Monday afternoon through Thursday morning. The agenda is posted publicly and non-SNIA members are encouraged to attend.

Also at the Symposium, on Monday night, is a Birds of a Feather (BoF) session where we will demo CDMI and OCCI working together in a common infrastructure. There will be time for details on the implementation and for discussion afterward.

Thursday morning will feature a special session to update folks on the SNIA Cloud activities for the remainder of the year. Besides the in-person session at the Symposium, the session will also be broadcast as an online webinar for folks who cannot make it in person. More information and a registration link are available on the SNIA website.

Storage Developer Conference

In September comes the annual Storage Developer Conference (SDC), and this year Cloud is a big part of the agenda. There will be a CDMI Plugfest throughout the week, a Cloud Hands-on Lab for developers, and cloud tracks all week, including some big cloud-related keynotes. But *wait*, there's more. Following SDC, at the same hotel on Thursday, September 23rd, will be the…

SNIA Cloud Burst Event

This event is squarely focused on Cloud Storage and brings together end users, cloud providers, and storage vendors for a unique experience, including demos, a showcase, and in-depth sessions on this part of the overall cloud industry. More information is available on the Cloud Burst page.

Storage Networking World

For the past two SNWs, the Cloud Pavilion has drawn great traffic and interest from attendees for the companies that participate. At this fall's SNW in Dallas, we will repeat this successful program with a limited number of slots. In addition, we will again have a hands-on cloud lab, which is always well attended (by end users only). If you are looking for a speaking opportunity, please consider sponsoring the cloud summit at SNW, where end users come to learn about the cloud and the offerings that are available.

SNW Europe

Last year SNW Europe was a huge success for the SNIA Cloud participants, with record attendance up year over year. This year will see an increasing set of activities around the cloud, including a new Cloud Pavilion and Hands-on Labs. There are a limited number of slots for these, and they will sell out early. A speaking engagement opportunity is included as well.

“Membership has its privileges”

Many of these opportunities are open only to Cloud Storage Initiative (CSI) member companies. The membership fees help to fund these activities for the members and augment the work of the volunteers with paid resources. If you can help get your company involved, please contact Marty Foltyn (marty@bitsprings.com) for more information.


SSS Performance Test Specification Coming Soon

Eden Kim

Jul 9, 2010

A new Performance Test Specification (PTS) for solid state storage is about to be released for Public Technical Review by the SNIA SSSI and SSS TWG. The SNIA PTS is a device-level performance specification for solid state storage testing that sets forth standard terminologies, metrics, methodologies, tests, and reporting for NAND flash-based SSDs. SNIA plans to release the final PTS v1.0 later this year as a SNIA Architecture document, tracking toward INCITS and ANSI standards treatment.

Why do we need a Solid State Storage Performance Test Specification?

There has been no industry standard test methodology for measuring solid state storage (SSS) device performance. As a result, each SSS manufacturer has used different measurement methodologies to derive performance specifications for its products, making it difficult for purchasers to fairly compare the performance specifications of SSS products from different manufacturers. The SNIA Solid State Storage Technical Working Group (SSS TWG), working closely with the SNIA Solid State Storage Initiative (SSSI), has developed the Solid State Storage Performance Test Specification (SSS PTS) to address these issues. The SSS PTS defines a suite of tests and test methodologies that effectively measure the performance characteristics of SSS products. When executed in a specific hardware/software environment, the SSS PTS provides performance measurements that may be fairly compared to those of other SSS products measured in the same way in the same environment.

Key Concepts

Some of the key concepts of the PTS include proper pre-test preparation, setting the appropriate test parameters, running the prescribed tests, and reporting results consistent with the PTS protocol. For all testing, the Device Under Test (DUT) must first be purged (to ensure a repeatable test start point) and preconditioned (by writing a prescribed access pattern of data to ensure measurements are taken when the DUT is in a steady state), with measurements taken in a prescribed steady state window (defined as a range of five rounds of data that stay within a prescribed excursion range of the data averages).

Standard Tests

The PTS sets forth three standard tests for client and enterprise SSDs - IOPS, Throughput, and Latency - measured in I/Os per second, MB per second, and average milliseconds, respectively. The test loop rounds consist of a random data pattern stimulus applied across a matrix of read/write mixes and block sizes at a prescribed demand intensity (outstanding I/Os - queue depth and thread count). The user can extract performance measurements from this matrix that relate to workloads of interest. For example, 4K random writes can represent the small-block I/O workloads typical of OLTP applications, while 128K reads can represent the large-block sequential workloads typical of video-on-demand or media streaming applications.

Reference Test Environment

The SNIA PTS is hardware and software agnostic. This means that the specification does not require any specific hardware, OS, or test software to be used to run the PTS. However, SSD performance is greatly affected by the system hardware, OS, and test software (the test environment). Because SSDs can be 100 to 1,000 times faster than HDDs, care must be taken not to introduce performance bottlenecks from the test environment into the test measurements. The PTS addresses this by setting forth basic test environment requirements and listing a suggested Reference Test Platform (RTP) in an informative annex. This RTP was used by the TWG in developing the PTS. Other hardware and software can be used, and the TWG is actively seeking industry feedback on results from the RTP and from other test environments.

Standard Reporting

The PTS also includes an informative annex with a recommended test reporting format. This sample format reports all of the PTS-required test and result information to aid in comparing test data for solid state storage performance.

Facilitate Market Adoption of Solid State Storage

The SSS PTS will facilitate broader market adoption of solid state storage technology within both the client and enterprise computing environments. SSS PTS version 0.9 will be posted very shortly at http://www.snia.org/publicreview for public review. The public review phase is a 60-day period during which the proposed specification is publicly available and feedback is gathered (via http://www.snia.org/tech_activities/feedback/) across the worldwide storage industry. Upon completion of the public review phase, the SSS TWG will remove the SSS PTS from the web site, consider all submitted feedback, make modifications, and ultimately publish version 1.0 of the ratified SSS PTS.

PTS Press Release

Watch for the press release on or about July 12, and keep an eye on http://www.snia.org/forums/sssi for updates.
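To make the steady-state rule concrete, here is a minimal Python sketch of that kind of check on a series of per-round averages. The five-round window follows the description above, but the 10% excursion band is an assumed, illustrative parameter rather than the normative value from the PTS.

```python
def in_steady_state(round_averages, window=5, excursion=0.10):
    """Return True if the last `window` rounds stay within +/- `excursion`
    of their own mean. Illustrative only; not the normative PTS algorithm."""
    if len(round_averages) < window:
        return False
    recent = round_averages[-window:]
    mean = sum(recent) / window
    return all(abs(value - mean) <= excursion * mean for value in recent)

# Example: per-round IOPS averages settling after purge and preconditioning
rounds = [41000, 38500, 36200, 35800, 35600, 35500, 35450]
print(in_steady_state(rounds))  # True once the last five rounds have converged
```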


Combining HDD and Flash in Computers Cures Many Issues

Tom Coughlin

Jul 2, 2010

SSDs have tried to displace HDDs in computers for a few years, but the higher cost of flash memory has been a major barrier to widespread adoption. Lower flash memory prices will help adoption, but HDDs decrease in $/GB at about the same rate as SSDs, so the relative price ratio doesn't improve much in SSDs' favor. At the same time, many computers with HDDs have serious performance issues associated with the slower access time of HDDs.

There have been attempts to combine the advantages of HDDs and flash memory in the past, such as Intel's Turbo Memory and the hybrid hard disk alliance, but these were mostly dependent upon the operating system to provide performance advantages. The latest initiatives to combine flash memory and hard disk drives to create tiered storage systems in computers are known as Blink Boot, hyperHDD, and the solid state hybrid hard drive. Most of these approaches (hyperHDD and the solid state hybrid hard drive) don't depend upon the operating system to manage the use of the flash memory and the HDDs. In the case of the recent solid state hybrid hard drive from Seagate, the 4 GB of flash memory on the drive's PCB is used to store the most recently accessed data that the computer is using. This is done internally by the hard drive, providing a boost in access speed for this content without any special requirements on the computer's operating system.

Adding a little flash memory to a hard disk drive for frequently accessed data, or even for OS and application booting, while keeping the HDD for inexpensive mass storage, makes a lot of sense. Computer storage tiering with flash memory and HDDs could finally help flash memory become mainstream in computers.
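The drive-internal tiering described above amounts to keeping recently accessed blocks in a small flash cache in front of the disk. Here is a minimal Python sketch of that general idea using a simple least-recently-used policy; the cache size and eviction policy are illustrative assumptions, not a description of Seagate's (or anyone else's) firmware.

```python
from collections import OrderedDict

class HybridDrive:
    """Toy model of flash-in-front-of-HDD tiering with LRU eviction."""

    def __init__(self, flash_blocks=4):
        self.flash = OrderedDict()       # block id -> data, in LRU order
        self.flash_blocks = flash_blocks

    def read(self, block, hdd_read):
        if block in self.flash:          # flash hit: fast path
            self.flash.move_to_end(block)
            return self.flash[block]
        data = hdd_read(block)           # flash miss: slow mechanical access
        self.flash[block] = data
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)   # evict least recently used block
        return data

# Repeated reads of the same hot blocks (e.g. OS and application files)
# are served from flash; cold data still comes from the inexpensive HDD.
drive = HybridDrive()
print(drive.read(7, lambda b: f"data-{b}"))   # first read goes to the HDD
print(drive.read(7, lambda b: f"data-{b}"))   # second read is a flash hit
```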


Kaminario - A New Name for Solid State Storage

Jim Handy

Jun 28, 2010

An Israeli start-up named Kaminario is attacking Texas Memory Systems' home turf with a DRAM SSD that offers speeds as fast as 1.5 million IOPS. While TMS has built itself a comfortable niche business using custom hardware, Kaminario's K2 SSD, announced on June 16, is made using standard off-the-shelf low-profile blade servers from Dell. Only the software is proprietary.

DRAM SSDs are an interesting product that serves niches which flash SSDs are unlikely to penetrate. Objective Analysis' new report on Enterprise SSDs explores the price and speed dynamics that separate these two technologies. See the Objective Analysis Reports page for more information.

Some of the K2's internal servers are dedicated to handling I/O and are called "io Directors." The bandwidth of the storage system scales linearly with the number of io Directors used - a pair of io Directors provides 300K IOPS, and ten io Directors will support 1.5M IOPS. Below the io Directors are other servers called "Data Nodes," which manage the storage. Capacity scales linearly with the addition of Data Nodes. Today's limit is 3.5TB, but this number will increase over time.

Redundancy is a key feature of the Kaminario K2: there are at least two of any device - io Directors, Data Nodes, and HDDs per Data Node - since the DRAM-based data is stored onto HDDs in the event of an unexpected power failure. The system can communicate with the host through a range of interfaces, with FCoE offered at introduction. Kaminario's K2 boasts a significantly smaller footprint and price tag than HDD-based systems with competing IOPS levels.

To find out more about Kaminario, visit Kaminario.com.
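The linear-scaling figures above work out to roughly 150K IOPS per io Director; a quick back-of-the-envelope check in Python, using only the numbers quoted in the post:

```python
# Back-of-the-envelope check of the K2 scaling figures quoted above.
iops_per_pair = 300_000
iops_per_director = iops_per_pair / 2     # ~150K IOPS per io Director

for directors in (2, 6, 10):
    print(f"{directors} io Directors -> {int(directors * iops_per_director):,} IOPS")
# 10 io Directors -> 1,500,000 IOPS, matching the 1.5M figure above
```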


Violin Memory Wants to Replace Your Storage Array

Jim Handy

Jun 21, 2010

Violin Memory introduced their 3000 series memory appliance in mid-May. This million-plus-IOPS device piles 10-20 terabytes of NAND flash storage into a single 3U cabinet at a price that Violin's management claims is equivalent to that of high-end storage arrays. The system, introduced at $20/GB, or $200,000, is intended to provide enough storage at a low enough price to eliminate any need to manage hot data into and out of a limited number of small solid state drives. Instead, Violin argues, the appliance's capacity is big enough and cheap enough that an entire database can be economically stored within it, giving lightning-fast access to the entire database at once.

Note that Violin acquired Gear6 a month later, in mid-June. This seems to reveal that the company is hedging its bets, taking advantage of a distressed caching company's expertise to assure a strong position in architectures based upon a smaller memory appliance managed by caching software. There is a good bit of detail about how and why both of these approaches make sense in Objective Analysis' newest Enterprise SSD report. See the Objective Analysis Reports page for more information.

But in regard to the Series 3000, CIOs whose databases are even larger than 10TB will be comforted to hear that Violin will be introducing appliances with as much as 60TB of storage by year-end. Violin's 3000 series can be configured through a communications module to support nearly any interface - Fibre Channel, 10Gb Ethernet, FCoE, PCIe - with Violin offering to support "Even InfiniBand, if asked." Inside are 84 modules, each built of a combination of DRAM and both SLC and MLC NAND flash, configured to assure data and pathway redundancy. This high level of redundancy and fault management is one of Violin's hallmarks.

Violin's website is Violin-Memory.com.


Ethernet Storage Market Momentum Continues

David Dale

Jun 18, 2010

title of post

Earlier this month IDC released their Q1 2010 Worldwide Storage Systems Hardware Tracker, a well-established analysis of revenue and capacity shipments for the quarter. For the purposes of classification, IDC calls networked storage (as opposed to direct-attached storage) "Fabric Attached Storage" - which consists of Fibre Channel SAN, iSCSI SAN and NAS.

In Q1, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 43%, up from 39% in 2009, 32% in 2008 and 28% in 2007 - demonstrating continued market momentum. A more detailed breakdown is:

Revenue share    2007    2008    2009    Q1 2010
FC SAN            72%     68%     61%     57%
iSCSI SAN          6%     10%     13%     14%
NAS               22%     29%     26%     29%

In terms of capacity market share, Ethernet Storage was 51% of the total PB shipped, up from 48% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

Capacity share   2007    2008    2009    Q1 2010
FC SAN            62%     58%     53%     49%
iSCSI SAN          8%     13%     15%     17%
NAS               29%     29%     32%     34%
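As a quick sanity check on the latest quarter, the Ethernet Storage shares quoted above are simply the iSCSI SAN and NAS rows of each table added together:

```python
# Q1 2010 Ethernet Storage share = iSCSI SAN + NAS (figures from the tables above).
revenue_iscsi, revenue_nas = 14, 29
capacity_iscsi, capacity_nas = 17, 34

print(f"Revenue share:  {revenue_iscsi + revenue_nas}%")    # 43%
print(f"Capacity share: {capacity_iscsi + capacity_nas}%")  # 51%
```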

So, the evidence is that the gains seen in the trough of the recession in 2008 and 2009 are continuing into the recovery. There seem to be three major factors driving this:

· Continuing maturity and acceptance of the technology for enterprise applications

· Companies' willingness to try something new to reduce costs

· The continued rapid growth of unstructured data driving NAS capacity.

But that's just my opinion. What's your take?




Jim Handy

Jun 16, 2010

San Francisco's Nimbus Data Systems launched a solid state storage system in late April that is intended to replace all of the HDDs in a system except for the slow disks used in nearline storage. Nimbus holds the viewpoint that solid state drives eliminate the need for fast disk storage, and that in the future all data centers will be built using only SSDs for speed and capacity drives (slow HDDs) for mass storage. This viewpoint is gaining a growing following.

Nimbus' S-Class Enterprise Flash Storage System uses a proprietary 6GB SAS flash module, rather than off-the-shelf SSDs, to keep system costs low. Storage capacity is 2.5-5.0TB per 2U enclosure and can be scaled up to 100TB. Throughput is claimed to be 500K IOPS through 10Gb Ethernet connections. Prices are roughly $8/GB.

Although Nimbus previously sold systems based on a mix of SSDs and HDDs, they have moved away from using HDDs and expect data center managers to adopt this new approach. There's merit to this argument, but it will probably take a few years before CIOs agree on the role of NAND flash vs. enterprise HDDs vs. capacity HDDs in the data center. There's a lot more detail on the approaches being considered for flash in the enterprise data center in Objective Analysis' new Enterprise SSD report. See the Objective Analysis Reports page for more information.

You can find out more at NimbusData.com.


New Article: Solid State Drives for Energy Savings

Jim Handy

Jun 7, 2010

A new article, co-authored by Tom Coughlin and me, can now be read on the SNIA Europe website. "Solid State Drives for Energy Savings" explains the energy benefits that IT managers are discovering as they start to bring SSDs into their data centers. The article is a quick two-pager, and it introduces SNIA's new TCO (Total Cost of Ownership) Calculator, a clever tool that helps estimate the power, rack space, and other savings that come along with converting fast storage from enterprise HDDs to SSDs.

[Update: After clicking on the above link, it will be necessary to download the April 2010 edition of Storage Networking Times in order to read the article.]
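To give a flavor of the kind of estimate such a calculator produces, here is a minimal Python sketch of an HDD-versus-SSD power comparison. All of the figures are made-up illustrative inputs, and the simple formula below is not the SNIA TCO Calculator's actual model.

```python
# Toy power-cost comparison of the kind a TCO calculator automates.
# All inputs are illustrative assumptions, not SNIA's model or real prices.
def annual_power_cost(drives, watts_per_drive, kwh_price=0.10, cooling_factor=1.5):
    kwh_per_year = drives * watts_per_drive * 24 * 365 / 1000
    return kwh_per_year * kwh_price * cooling_factor   # cooling roughly tracks IT load

hdd_tier = annual_power_cost(drives=48, watts_per_drive=12)  # many 15K-RPM HDDs for IOPS
ssd_tier = annual_power_cost(drives=8, watts_per_drive=6)    # fewer SSDs for the same IOPS
print(f"HDD tier: ${hdd_tier:,.0f}/yr   SSD tier: ${ssd_tier:,.0f}/yr")
```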

