
New Solid State Storage Performance Test Specification Available for Public Review

Marty Foltyn

Jul 9, 2013


A new revision of the Enterprise Solid State Storage Performance Test Specification (PTS-E 1.1) is now available for public review. The PTS is an industry-standard test methodology and test suite for comparing SSD performance at the device level. The PTS-E 1.1 updates the PTS-E 1.0 released in 2011 and adds tests for specific types of workloads common in the enterprise environment. The PTS-E 1.1 may be downloaded at http://www.snia.org/publicreview.

“The PTS-Enterprise v1.1 provides both standard testing (IOPS, Throughput, Latency, and Write Saturation) as well as new tests for specific workloads commonly found in Enterprise environments,” said Eden Kim, Chair of the SSS Technical Work Group. “These new tests also allow the user to insert workloads into the new tests while maintaining the industry-standard methodology for preconditioning and steady-state determination.”
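
The steady-state determination Kim mentions is the heart of the PTS methodology: a device is only measured once its performance has settled after preconditioning. As a rough illustration (not the specification's reference code), the sketch below checks a PTS-style criterion over a five-round measurement window, assuming the rules as commonly summarized for PTS 1.1: the data excursion within the window stays within 20% of the window average, and the excursion of the least-squares best-fit line stays within 10% of the average. Consult the specification itself for the normative definitions.

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define WINDOW 5 /* measurement window, in test rounds (assumed) */

/* Check a PTS-style steady-state criterion over a window of per-round
 * results (e.g. IOPS). Assumed criteria: max-min data excursion within
 * 20% of the window average, and the best-fit line's rise across the
 * window within 10% of the average. */
static bool steady_state(const double y[WINDOW])
{
    double avg = 0.0, min = y[0], max = y[0];
    for (int i = 0; i < WINDOW; i++) {
        avg += y[i];
        if (y[i] < min) min = y[i];
        if (y[i] > max) max = y[i];
    }
    avg /= WINDOW;

    /* Least-squares slope of y against round index 0..WINDOW-1. */
    double xbar = (WINDOW - 1) / 2.0, num = 0.0, den = 0.0;
    for (int i = 0; i < WINDOW; i++) {
        num += (i - xbar) * (y[i] - avg);
        den += (i - xbar) * (i - xbar);
    }
    double slope = num / den;

    bool data_ok  = (max - min) <= 0.20 * avg;
    bool slope_ok = fabs(slope) * (WINDOW - 1) <= 0.10 * avg;
    return data_ok && slope_ok;
}

int main(void)
{
    double rounds[WINDOW] = { 41200, 40800, 40950, 41100, 40700 };
    printf("steady state: %s\n", steady_state(rounds) ? "yes" : "no");
    return 0;
}
```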

The new tests target workloads common to OLTP, VOD, VM, and other enterprise applications, while paying special attention to the optimization of drives for varying demand intensity, maximum IOPS, and minimal response times and latencies.

For more information, visit www.snia.org/forums/sssi.



How Many IOPS Is Enough?

Marty Foltyn

May 22, 2013


SNIA’s SSSI channel webcast of “How Many IOPS Is Enough?” was a smash success! Now you can listen to an on-demand rebroadcast.

There are lots of SSDs on the market today offering IOPS (I/Os per second) performance in the thousands to hundreds of thousands, with indications that future models will offer speeds in the million-IOPS range, and HDDs that support from tens to hundreds of IOPS depending on spindle speed and interface. Even so, not every application can use the extreme performance of high-end SSDs, and some may not benefit from high IOPS at all.
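
A quick back-of-the-envelope calculation shows where those HDD numbers come from: a random I/O on a disk costs roughly one average seek plus half a rotation. For a 15,000 RPM drive with a 3.5 ms average seek, that is about 3.5 ms + 2 ms = 5.5 ms per I/O, or roughly 180 IOPS; a 7,200 RPM drive with longer seeks lands closer to 75-100 IOPS. SSDs have no mechanical latency to amortize, which is why their IOPS figures can be several orders of magnitude higher.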

Since performance is tied to cost, users can save money if they understand how many IOPS the system really needs. “How Many IOPS Is Enough?” draws from the recent study by Coughlin Associates and Objective Analysis that examined what makes an application require high IOPS and profiled applications according to their needs.

In the webcast, you will also learn how to take part in an exciting SSSI project: the Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications’ I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.). The WIOCP helps users to identify “hot spots” where storage performance is creating bottlenecks. SNIA SSSI hopes that users will help the association collect real-use statistics on workloads by uploading their results to the SNIA website. Details on WIOCP can be found at tinyurl.com/tryWIOCP.
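
To make the numbers concrete, here is a minimal sketch of what a device-level random-read IOPS measurement looks like. This is not the WIOCP tool or the PTS harness, just an illustration: it issues 4 KiB reads at random aligned offsets for a fixed interval and reports operations per second. It assumes a Linux system and a pre-existing test file of at least 1 GiB; O_DIRECT bypasses the page cache, and the queue depth here is 1, whereas real test tools drive many outstanding I/Os.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    const size_t bs = 4096;                  /* 4 KiB I/O size */
    const off_t span = 1024L * 1024 * 1024;  /* file region to hit: 1 GiB */
    const double interval = 5.0;             /* measurement interval, seconds */

    int fd = open(argv[1], O_RDONLY | O_DIRECT); /* bypass the page cache */
    if (fd < 0) { perror("open"); return 1; }

    void *buf = NULL;
    if (posix_memalign(&buf, bs, bs) != 0) return 1; /* O_DIRECT needs alignment */

    srand((unsigned)time(NULL));
    struct timespec t0, t;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long ops = 0;
    double elapsed = 0.0;
    while (elapsed < interval) {
        /* Random block-aligned offset within the first 1 GiB. */
        off_t off = (off_t)(rand() % (long)(span / bs)) * (off_t)bs;
        if (pread(fd, buf, bs, off) != (ssize_t)bs) { perror("pread"); break; }
        ops++;
        clock_gettime(CLOCK_MONOTONIC, &t);
        elapsed = (t.tv_sec - t0.tv_sec) + (t.tv_nsec - t0.tv_nsec) / 1e9;
    }
    if (ops > 0 && elapsed > 0.0)
        printf("%.0f IOPS (4 KiB random read, queue depth 1)\n", ops / elapsed);
    free(buf);
    close(fd);
    return 0;
}
```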


pNFS and Future NFSv4.2 Features

Alex McDonald

Apr 30, 2013


In this third and final blog post on NFS (see previous blog posts Why NFSv4.1 and pNFS are Better than NFSv3 Could Ever Be and The Advantages of NFSv4.1) I’ll cover pNFS (parallel NFS), an optional feature of NFSv4.1 that improves the bandwidth available for NFS protocol access, and some of the proposed features of NFSv4.2 – some of which are already implemented in commercially available servers, but will be standardized with the ratification of NFSv4.2 (for details, see the IETF NFSv4.2 draft documents).

Finally, I’ll point out where you can get NFSv4.1 clients with support for pNFS today.

Parallel NFS (pNFS) and Layouts

Parallel NFS (pNFS) represents a major step forward in the development of NFS. Ratified in January 2010 and described in RFC 5661, pNFS depends on the NFS client understanding how a clustered filesystem stripes and manages data. It’s not an attribute of the data, but an arrangement between the server and the client, so data can still be accessed via non-pNFS and other file access protocols. pNFS benefits workloads with many small files, or very large files, especially those run on compute clusters requiring simultaneous, parallel access to data.

[Image: NFS 3 image 1]

Clients request information about data layout from a Metadata Server (MDS) and receive layouts that describe the location of the data. (Although often shown as separate, the MDS may or may not be a standalone node in the storage system, depending on a particular storage vendor’s hardware architecture.) The data may be on many data servers, and is accessed directly by the client over multiple paths. Layouts can be recalled by the server, as in the case for delegations, if there are multiple conflicting client requests.

By allowing the aggregation of bandwidth, pNFS relieves performance issues that are associated with point-to-point connections. With pNFS, clients access data servers directly and in parallel, ensuring that no single storage node is a bottleneck. pNFS also ensures that data can be better load balanced to meet the needs of the client.

The pNFS specification also accommodates support for multiple layouts, defining the protocol used between clients and data servers. Currently, three layouts are specified: files, as supported by NFSv4; objects, based on the Object-based Storage Device Commands (OSD) standard (INCITS T10) approved in 2004; and block layouts (either FC or iSCSI access). The layout choice in any given architecture is expected to make a difference in performance and functionality. For example, pNFS object-based implementations may perform RAID parity calculations in software on the client, to allow RAID performance to scale with the number of clients and to ensure end-to-end data integrity across the network to the data servers.

Although pNFS is new to the NFS standard, the experience of users with proprietary precursor protocols shows that high-bandwidth access to data with pNFS will be of considerable benefit.

The potential performance of pNFS is superior to that of NFSv3 for similar configurations of storage, network, and server. Management is also easier, as NFSv3 automounter maps and hand-created load-balancing schemes are eliminated; and by providing a standardized interface, pNFS ensures fewer issues in supporting multi-vendor NFS server environments.

Some Proposed NFSv4.2 features

NFSv4.2 promises many features that end users have been requesting, and that make NFS relevant not only as an “everyday” protocol, but as one with applications beyond the data center. As the requirements document for NFSv4.2 puts it, there are requirements for:

  • High efficiency and utilization of resources such as capacity, network bandwidth, and processors.
  • Solid-state flash storage, which promises faster throughput and lower latency than magnetic disk drives, and lower cost than dynamic random access memory.

Server Side Copy

Server-Side Copy (SSC) removes one leg of a copy operation. Instead of reading entire files or even directories of files from one server through the client and then writing them out to another, SSC permits the destination server to communicate directly with the source server, without client involvement, removing the limitation of server-to-client bandwidth and the possible congestion it may cause.
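
On Linux clients, this capability surfaces through the copy_file_range() system call (glibc 2.27 and later): when source and destination live on an NFSv4.2 mount whose server implements the COPY operation, the kernel can delegate the transfer to the server, so the data never makes the round trip through the client. A minimal sketch follows; whether the copy is actually offloaded depends on the kernel version, the mount, and the server, and the call falls back to an ordinary in-kernel copy otherwise.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s SRC DST\n", argv[0]); return 1; }

    int in = open(argv[1], O_RDONLY);
    if (in < 0) { perror("open src"); return 1; }
    struct stat st;
    if (fstat(in, &st) < 0) { perror("fstat"); return 1; }

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open dst"); return 1; }

    /* Ask the kernel to copy; on an NFSv4.2 mount this can become a
     * server-side COPY, so the bytes need not pass through the client. */
    off_t remaining = st.st_size;
    while (remaining > 0) {
        ssize_t n = copy_file_range(in, NULL, out, NULL, remaining, 0);
        if (n < 0) { perror("copy_file_range"); return 1; }
        if (n == 0) break;
        remaining -= n;
    }
    close(in);
    close(out);
    return 0;
}
```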

Application Data Blocks (ADB)

ADB allows definition of the format of a file; for example, a VM image or a database. This feature will allow initialization of data stores; a single operation from the client can create a 300GB database or a VM image on the server.

Guaranteed Space Reservation & Hole Punching

As storage demands continue to increase, various efficiency techniques can be employed to give the appearance of a large virtual pool of storage on a much smaller storage system. Thin provisioning (where space appears available and reserved, but is not committed) is commonplace, but often problematic to manage in fast-growing environments. The guaranteed space reservation feature in NFSv4.2 will ensure that, regardless of the thin provisioning policies, individual files will always have space available for their maximum extent.

[Image: NFS 3 image 2]

While such guarantees are a reassurance for the end user, they don’t help storage administrators in their desire to fully utilize all available storage. In support of better storage efficiency, NFSv4.2 will introduce support for sparse files. Commonly called “hole punching”, deleted and unused parts of files are returned to the storage system’s free space pool.
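
Both features correspond to calls applications already use on local filesystems. On Linux, fallocate() covers the two cases: mode 0 preallocates (reserves) space for a range, while FALLOC_FL_PUNCH_HOLE together with FALLOC_FL_KEEP_SIZE returns a range's blocks to the free pool without changing the file size. On an NFSv4.2 mount these are expected to map to the ALLOCATE and DEALLOCATE operations; the sketch below is illustrative, and support depends on the kernel and server.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h> /* FALLOC_FL_* flags */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve 1 GiB up front: with NFSv4.2 space reservation, later
     * writes into this range should not fail for lack of space. */
    if (fallocate(fd, 0, 0, 1L << 30) < 0)
        perror("fallocate (reserve)");

    /* Punch a 64 MiB hole at offset 128 MiB: the blocks go back to the
     * free pool but the file size is unchanged (a sparse file). */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  128L << 20, 64L << 20) < 0)
        perror("fallocate (punch hole)");

    close(fd);
    return 0;
}
```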

Obtaining Servers and Clients

With this background on the features of NFS, there is considerable interest in the end-user community for NFSv4.1 support from both servers and clients. Many Network Attached Storage (NAS) vendors now support NFSv4, and in the last 12 months, there has been a flurry of activity and many developments in server support of NFSv4.1 and pNFS.

For NFS server vendors, there are NFSv4.1 implementations of pNFS available with file-based, block-based, and object-based layouts; refer to the vendor websites for the latest information.

On the client side, Red Hat Enterprise Linux 6.4 includes full support for NFSv4.1 and pNFS (see www.redhat.com), Novell SUSE Linux Enterprise Server 11 SP2 supports NFSv4.1 and pNFS based on the 3.0 Linux kernel (see www.suse.com), and Fedora is available at fedoraproject.org.

Conclusion

NFSv4.1 includes features intended to enable its use in global wide area networks (WANs). Its advantages include:

  • Firewall-friendly single port operations
  • Advanced and aggressive cache management features
  • Internationalization support
  • Replication and migration facilities
  • Optional cryptographic-quality security, with access control facilities that are compatible across UNIX® and Windows®
  • Support for parallelism and data striping

The goal for NFSv4.1 and beyond is to define how you get to storage, not what your storage looks like. That has meant inevitable changes. Unlike earlier versions of NFS, the NFSv4 protocol integrates file locking, strong security, operation coalescing, and delegation capabilities to enhance client performance for data sharing applications on high-bandwidth networks.

NFSv4.1 servers and clients provide even more functionality, such as wide striping of data to enhance performance. NFSv4.2 and beyond promise further enhancements that increase the standard’s applicability to today’s application requirements. It was due to be ratified in August 2012, and we can expect to see server and client implementations that provide NFSv4.2 features soon after ratification; in some cases, the features are already shipping as vendor-specific enhancements.

With careful planning, migration to NFSv4.1 (and NFSv4.2 when it becomes generally available) from prior versions can be accomplished without modification to applications or the supporting operational infrastructure, for a wide range of applications: home directories, HPC storage servers, backup jobs, and a variety of others.

FOOTNOTE: Parts of this blog were originally published in USENIX ;login:, February 2012, under the title The Background to NFSv4.1. Used with permission.

 




New Performance Test Service Launched for Solid State Drives

Marty Foltyn

Apr 20, 2013


The SNIA Solid State Storage Initiative (SNIA SSSI) announces a testing service where interested parties may submit their SSD products for testing to the SSS Performance Test Specification.

Drive Requirements

Any mSATA, SATA, SAS and PCIe SSDs can be tested. The tested device must be recognized as a logical device by CentOS 6.3 and must support Purge (via Security Erase, Format Unit, or equivalent proprietary method of Purge).

Available Tests

Testing is based on the SSS PTS version 1.1.  Visit the SSS Performance Test Service page for more information on the tests.

Testing Process

Testing will be conducted by Calypso Systems, a certified SSS PTS testing facility.  Participants must submit two (2) samples of the SSD to be tested and provide prepaid return express shipment bills (FedEx, DHL or UPS). Testing will take approximately 3-4 weeks to complete.

Any failed test, or test that will not complete, will be tested twice and error logs will be provided.  All product test result data will be kept confidential.

Test results are provided in standard SNIA Report Format as specified in the SSS Performance Test Specification.

For more details, contact ptstest@snia.org.


10 Gigabit Ethernet – 2H12 Results and 2013 Outlook

Seamus Crehan

Apr 18, 2013


Seamus Crehan, President, Crehan Research Inc.

2H12 results

2012 turned out to be another very strong growth year for 10 Gigabit Ethernet (10GbE), with the data center switch market and the server-class adapter and LAN-on-Motherboard (LOM) market both growing more than 50%. Broad long-term trends such as virtualization, convergence, data center network traffic growth, cloud deployments, and price declines were helped further by more specific demand drivers, many of which materialized in the latter half of 2012. These included the adoption of Romley servers, expanded 10GBASE-T product offerings for both switches and servers, 10GbE LOM solutions for volume rack servers (which drive the majority of server shipments), and the public cloud’s migration to 10GbE for mainstream server networking access. (The SNIA Ethernet Storage Forum wrote about many of these in its July 2012 whitepaper titled 10GbE Comes of Age).

However, despite another stellar growth year, 10GbE still remained a minority of the overall data center and server shipment mix (see Figure 1).  

[Figure 1: Crehan Research, 10GbE share of data center switch and server shipments]

Furthermore, its adoption hit some turbulence in the latter half of 2012, mostly related to the initial high prices and the learning curve associated with the new Modular LOM form factor, resulting in some inventory issues. Another drag on 2H12 10GbE growth was the lack of comprehensive 10GBASE-T offerings from many market participants. Although we saw a very significant step-up in 10GBASE-T shipments in 2012, limited product offerings throughout much of 2012 capped its adoption at less than 10% of total 10GbE shipments.

But these 2H12 issues were more than offset by 10GbE entering its next major stage of volume server adoption during this time period.  Crehan Research reported a near-50% increase in 2H12 10GbE results as many public cloud, Web 2.0, and massively scalable data center companies deployed 10GbE servers and server-access data center switches. We believe this is the second of three major stages of mainstream 10GbE server adoption, the first of which was driven by blade servers. The third will be driven by the upgrade of the traditional enterprise segment’s large installed base of 1GbE rack and tower server ports to 10GbE.

2013 expectations

As we move through 2013, Crehan Research expects the following factors to have positive impacts on the 10GbE market, driving it closer to becoming the majority data center networking interconnect:

Better pricing and understanding of Modular LOMs. Initial pricing on 10GbE Modular LOMs has been relatively high, contributing to slower adoption and inventory issues. In the past, end customers were given the higher-speed LOM for free (for example, during the 1GbE and blade-server 10GbE transitions). The Modular LOM is a new product form factor, and it takes time for buyers and sellers to get comfortable with and fully understand it. During 2013, we should see lower pricing for this class of product, driving a higher server attach rate.

Comprehensive 10GBASE-T product offerings. 2013 should finally bring complete 10GBASE-T product offerings from the major server and switch OEMs, helping drive stronger 10GBASE-T adoption and growth. More specifically, we should see more 10GBASE-T LOMs in addition to top-of-rack and end-of-row data center switches. Furthermore, we expect many of these products to be attractively priced, in order to entice the large installed base of 1GBASE-T customers to upgrade to 10GbE.

Higher-speed uplink, aggregation, and core data center switches. Servers and server-access switches likely won’t see volume deployments to 10GbE without robust and cost-effective higher-speed uplink, aggregation, and core networking options. These have now begun to arrive with 40GbE, and we are starting to see a strong ramp for this technology. Crehan Research expects 2013 to bring the advent of many 40GbE data center switches, and foresees all of the major switch vendors rolling out offerings in 2013. In contrast with the early days of 10GbE, 40GbE prices are already close to parity on a bandwidth basis with 10GbE and have settled on a single interface form factor (QSFP), which should propel 40GbE data center switches to a much stronger start than that seen by 10GbE data center switches.

Continued traction of 10GbE for storage applications. We expect that 2013 will see a continuation of the broader adoption of 10GbE as a storage protocol, in both the public cloud and traditional enterprise segments.  Although Fibre Channel remains a very important data center storage networking technology, Fibre Channel switch and Host Bus Adapter (HBA) shipments each declined slightly in 2012 and have seen flat compound annual growth rates over the past four years (see Figure 2). We expect this gradual Fibre Channel decline to continue in 2013 as more customers run Ethernet-based protocols such as NAS, iSCSI and FCoE, especially over 10GbE, for their storage needs and deployments.

[Figure 2: Crehan Research, Fibre Channel switch and HBA shipments]
