
Join the SSSI at Flash Memory Summit August 12-15 in Santa Clara CA!

Marty Foltyn

Aug 7, 2013

title of post

SSSI returns to the Flash Memory Summit in booth 808, featuring updates on new tests in the SNIA Solid State Storage Performance Test Specification-Enterprise 1.1, NVM programming, and Workload I/O Capture Program (WIOCP) activities; new tech notes and white papers, including a PTS User Guide Tech Note, a PCIe SSD 101 Whitepaper, and a Performance Primer Whitepaper; and PCIe SSD demonstrations from SSSI members Bitmicro, Fastor, and Micron.

All current SSSI members attending FMS, and individuals from companies interested in the SSSI and its activities, are cordially invited to the SSSI Solid State Storage Reception on Monday evening, August 12, from 5:30 pm to 7:00 pm in Room 209-210 at the Santa Clara Convention Center. At the reception, SSSI Education Chair Tom Coughlin of Coughlin Associates will provide an overview of the SSD market, and SSSI Chair Paul Wassenberg of Marvell will discuss SSD performance. SSSI Vice Chair Walt Hubis of Fusion-io will discuss SSSI programs, including PTS, NVM Programming, Workload I/O Capture, and PCIe SSD. Refreshments, table displays, and an opportunity drawing for SSDs provided by SSSI members Intel, Micron, and OCZ will be featured.

FMS conference activities begin August 13; the full agenda is available on the Flash Memory Summit website. SSSI members speaking and chairing panels include:

Tuesday, August 13

4:35 pm – Paul Wassenberg of Marvell – Standards

Wednesday, August 14

8:30 am – Eden Kim and Easen Ho of Calypso Testers – PCIe Power Budgets, Performance, and Deployment

9:50 am – Eden Kim and Easen Ho of Calypso Testers – SNIA Solid State Storage Performance Test Specification

3:10 pm – Walt Hubis of Fusion-io – Revolutionizing Application Development Using NVM Storage Software

3:10 pm – Easen Ho of Calypso Testers – SSD Testing Challenges

4:30 pm – Paul von Behren of Intel – SNIA Tutorial: SNIA NVM Programming Model: Optimizing Software for Flash

Thursday, August 15

3:10 pm – Jim Pappas of Intel – PCI Express and Enterprise SSDs

3:10 pm – Jim Handy of Objective Analysis – Market Research

An open “Chat with the Experts” roundtable session on Tuesday, August 13 at 7:00 pm will feature Jim Pappas of Intel at a Standards table, Eden Kim of Calypso Testers at an SSD Performance table, Easen Ho of Calypso Testers at a Testing table, and Paul Wassenberg of Marvell at a SATA Express table.

The Media Entertainment and Scientific Storage (MESS) group will hold its August “Meetup” at the open “Chat with the Experts” session, and will also be in SSSI Booth 808 for further discussions.

Exhibit admission is complimentary until August 8.  SNIA and SSSI members and colleagues can receive a $100 discount on either the 3-day conference or the 1-day technical program using the code SNIA at www.flashmemorysummit.com.

 


PCI Express Coming to an SSD Near You

Team_SSSI

Aug 2, 2013

title of post

There’s been a lot of press recently about the use of PCIe as a storage device interface.  Of course, PCIe has been around a long time as a system bus, while SATA and SAS have served as storage device interfaces.  But with SSDs getting faster with every new product release, it’s become difficult for the traditional interfaces to keep up.

Some folks figure that PCIe is the solution to that problem.  PCIe 3.0 operates at roughly 1GB/s per lane, faster than 600MB/s SATA.  And with PCIe, it’s possible to add lanes to increase the overall bandwidth.  The SATA Express spec from SATA-IO defines a client PCIe device as having up to 2 lanes of PCIe, which brings the speed up to 2GB/s.  Enterprise SSDs will have up to 4 lanes of PCIe, providing 4GB/s of bandwidth.
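The lane math is simple enough to sketch. The figures below use the post's round numbers (1 GB/s per PCIe 3.0 lane, 600 MB/s for SATA); real-world usable throughput is somewhat lower after encoding and protocol overhead.

```python
# Rough PCIe bandwidth scaling, using the round per-lane figures from the post.
# Actual usable throughput is lower after 128b/130b encoding and protocol overhead.

PCIE3_PER_LANE_GBPS = 1.0   # ~1 GB/s per PCIe 3.0 lane (round figure)
SATA3_GBPS = 0.6            # 600 MB/s SATA

def pcie_bandwidth(lanes: int) -> float:
    """Aggregate one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * PCIE3_PER_LANE_GBPS

# x1 client devices up through x4 enterprise SSDs:
for lanes in (1, 2, 4):
    print(f"x{lanes}: {pcie_bandwidth(lanes):.1f} GB/s "
          f"({pcie_bandwidth(lanes) / SATA3_GBPS:.1f}x SATA 6Gb/s)")
```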

Some work was also needed on the software side to support PCIe devices, including NVM Express and SCSI over PCIe (SOP), both of which are well underway.

If you are interested in knowing more about PCIe SSDs, keep an eye on our Education page, where, sometime during the week of August 5, we will be posting a new white paper on this topic.


New Solid State Storage Performance Test Specification Available for Public Review

Marty Foltyn

Jul 9, 2013

title of post

A new revision of the Enterprise Solid State Storage Performance Test Specification (PTS–E 1.1) is now available for public review. The PTS is an industry standard test methodology and test suite for the comparison of SSD performance at the device level. The PTS–E 1.1 updates the PTS–E 1.0 released in 2011 and adds tests with specific types of workloads common in the enterprise environment. The PTS–E 1.1 may be downloaded at http://www.snia.org/publicreview.

“The PTS–Enterprise v1.1 provides standard testing (IOPS, Throughput, Latency, and Write Saturation) as well as new tests for specific workloads commonly found in Enterprise environments,” said Eden Kim, Chair of the SSS Technical Work Group. “These new tests also allow the user to insert workloads into the new tests while maintaining the industry standard methodology for preconditioning and steady state determination.”
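The steady-state idea can be illustrated with a small sketch: a device is considered settled when recent per-round results stay near their mean and show little trend. Note that the thresholds and procedure below are illustrative only; the normative definitions live in the PTS specification itself.

```python
# Illustrative steady-state check over a window of per-round results (e.g. IOPS).
# Thresholds are examples; the normative limits are defined in the PTS spec.

def is_steady_state(window, max_excursion=0.10, max_slope=0.10):
    """True if values stay near their mean and show little trend."""
    n = len(window)
    mean = sum(window) / n
    # Excursion check: every point within +/- max_excursion of the mean.
    if any(abs(v - mean) > max_excursion * mean for v in window):
        return False
    # Trend check: least-squares slope across the window, scaled by the mean.
    xs = range(n)
    x_mean = sum(xs) / n
    slope = sum((x - x_mean) * (v - mean) for x, v in zip(xs, window)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return abs(slope * n) <= max_slope * mean

print(is_steady_state([100, 102, 99, 101, 100]))   # flat window: steady
print(is_steady_state([100, 120, 140, 160, 180]))  # rising window: not steady
```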

The new tests target workloads common to OLTP, VOD, VM, and other enterprise applications, while paying special attention to the optimization of drives for varying demand intensity, maximum IOPS, and minimal response times and latencies.

For more information, visit www.snia.org/forums/sssi.


How Many IOPS Is Enough?

Marty Foltyn

May 22, 2013

title of post

SNIA’s SSSI channel webcast of “How Many IOPS Is Enough?” was a smash success! Now you can listen to an on-demand rebroadcast.

There are lots of SSDs on the market today offering IOPS (I/Os Per Second) performance in the thousands to hundreds of thousands, with indications that future models will reach the million-IOPS range, while HDDs support from tens to hundreds of IOPS depending on spindle speed and interface. Even so, not every application can use the extreme performance of high-end SSDs, and some may not benefit from high IOPS at all.
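IOPS and throughput are two views of the same workload: throughput is just IOPS times transfer size. A quick back-of-the-envelope illustration (the device figures below are hypothetical round numbers, not benchmark results):

```python
# throughput (MB/s) = IOPS * transfer size. Device figures are hypothetical.

def iops_to_mbps(iops: float, block_size_kib: float) -> float:
    """Convert an IOPS rate at a given transfer size (KiB) to MB/s."""
    return iops * block_size_kib * 1024 / 1_000_000

# e.g. an HDD doing ~150 random 4 KiB IOPS vs an SSD doing 50,000:
print(f"HDD: {iops_to_mbps(150, 4):.2f} MB/s")
print(f"SSD: {iops_to_mbps(50_000, 4):.1f} MB/s")
```

The same arithmetic run in reverse is how one sizes a device: knowing an application's required MB/s and typical transfer size gives the IOPS it actually needs.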

Since performance is tied to cost, users can save money if they understand how many IOPS the system really needs. “How Many IOPS Is Enough?” draws from the recent study by Coughlin Associates and Objective Analysis that examined what makes an application require high IOPS and profiled applications according to their needs.

In the webcast, you will also learn how to take part in an exciting SSSI project - the Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications’ I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.). The WIOCP helps users identify “hot spots” where storage performance is creating bottlenecks. SNIA SSSI hopes that users will help the association collect real-use statistics on workloads by uploading their results to the SNIA website. Details on the WIOCP can be found at tinyurl.com/tryWIOCP.


pNFS and Future NFSv4.2 Features

AlexMcDonald

Apr 30, 2013

title of post

In this third and final blog post on NFS (see previous blog posts Why NFSv4.1 and pNFS are Better than NFSv3 Could Ever Be and The Advantages of NFSv4.1) I’ll cover pNFS (parallel NFS), an optional feature of NFSv4.1 that improves the bandwidth available for NFS protocol access, and some proposed features of NFSv4.2 – several of which are already implemented in commercially available servers, and will be standardized with the ratification of NFSv4.2 (for details, see the IETF NFSv4.2 draft documents).

Finally, I’ll point out where you can get NFSv4.1 clients with support for pNFS today.

Parallel NFS (pNFS) and Layouts

Parallel NFS (pNFS) represents a major step forward in the development of NFS. Ratified in January 2010 and described in RFC-5661, pNFS depends on the NFS client understanding how a clustered filesystem stripes and manages data. It’s not an attribute of the data, but an arrangement between the server and the client, so data can still be accessed via non-pNFS and other file access protocols.  pNFS benefits workloads with many small files, or very large files, especially those run on compute clusters requiring simultaneous, parallel access to data.


Clients request information about data layout from a Metadata Server (MDS), and get returned layouts that describe the location of the data. (Although often shown as separate, the MDS may or may not be standalone nodes in the storage system depending on a particular storage vendor’s hardware architecture.) The data may be on many data servers, and is accessed directly by the client over multiple paths. Layouts can be recalled by the server, as in the case for delegations, if there are multiple conflicting client requests. 

By allowing the aggregation of bandwidth, pNFS relieves performance issues that are associated with point-to-point connections. With pNFS, clients access data servers directly and in parallel, ensuring that no single storage node is a bottleneck. pNFS also ensures that data can be better load balanced to meet the needs of the client.

The pNFS specification also accommodates support for multiple layouts, defining the protocol used between clients and data servers. Currently, three layouts are specified: files as supported by NFSv4; objects based on the Object-based Storage Device Commands (OSD) standard (INCITS T10) approved in 2004; and block layouts (either FC or iSCSI access). The layout choice in any given architecture is expected to make a difference in performance and functionality. For example, pNFS object-based implementations may perform RAID parity calculations in software on the client, to allow RAID performance to scale with the number of clients and to ensure end-to-end data integrity across the network to the data servers.

So although pNFS is new to the NFS standard, the experience of users with proprietary precursor protocols to pNFS shows that high bandwidth access to data with pNFS will be of considerable benefit.

The potential performance of pNFS is definitely superior to that of NFSv3 for similar configurations of storage, network and server. Management is easier, as NFSv3 automounter maps and hand-created load balancing schemes are eliminated; and by providing a standardized interface, pNFS ensures fewer issues in supporting multi-vendor NFS server environments.

Some Proposed NFSv4.2 features

NFSv4.2 promises many features that end-users have been requesting, and that make NFS relevant not only as an “every day” protocol, but as one with application beyond the data center. As the requirements document for NFSv4.2 puts it, there are requirements for:

  • High efficiency and utilization of resources such as capacity, network bandwidth, and processors.
  • Solid state flash storage, which promises faster throughput and lower latency than magnetic disk drives and lower cost than dynamic random access memory.

Server Side Copy

Server-Side Copy (SSC) removes one leg of a copy operation. Instead of reading entire files or even directories of files from one server through the client, and then writing them out to another, SSC permits the destination server to communicate directly to the source server without client involvement, and removes the limitations on server to client bandwidth and the possible congestion it may cause.

Application Data Blocks (ADB)

ADB allows definition of the format of a file – for example, a VM image or a database. This feature will allow initialization of data stores: a single operation from the client can create a 300GB database or a VM image on the server.

Guaranteed Space Reservation & Hole Punching

As storage demands continue to increase, various efficiency techniques can be employed to give the appearance of a large virtual pool of storage on a much smaller storage system.  Thin provisioning (where space appears available and reserved, but is not committed) is commonplace, but often problematic to manage in fast-growing environments. The guaranteed space reservation feature in NFSv4.2 will ensure that, regardless of the thin provisioning policies, individual files will always have space available for their maximum extent.
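From an application's point of view, this kind of guarantee corresponds to `posix_fallocate()`: once the call succeeds, writes within the reserved range should not fail for lack of space. A small sketch (the path is illustrative; whether the reservation is honored end-to-end over NFS depends on the server):

```python
# Reserve space for a file up front with posix_fallocate (POSIX systems).
# After a successful call, writes within the range won't run out of space.
import os

path = "/tmp/reserve_demo.bin"   # illustrative path
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
try:
    os.posix_fallocate(fd, 0, 10 << 20)   # reserve 10 MiB starting at offset 0
    print(os.fstat(fd).st_size)           # 10485760: the extent is allocated
finally:
    os.close(fd)
```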


While such guarantees are a reassurance for the end-user, they don’t help the storage administrator in his or her desire to fully utilize all available storage. In support of better storage efficiencies, NFSv4.2 will introduce support for sparse files. With “hole punching”, as it is commonly called, deleted and unused parts of files are returned to the storage system’s free space pool.
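Sparse files themselves are easy to observe from user space: seek past data without writing, and on a filesystem with sparse-file support (ext4, XFS, and most others) the hole consumes no allocated blocks. The punch-hole operation proper is `fallocate(FALLOC_FL_PUNCH_HOLE)` on Linux, which has no direct Python stdlib wrapper; the sketch below just demonstrates the hole-versus-allocation distinction:

```python
# Create a file with a 1 MiB hole and compare its logical size to the
# space actually allocated. Requires a filesystem with sparse-file support.
import os

path = "/tmp/sparse_demo.bin"    # illustrative path
with open(path, "wb") as f:
    f.seek(1 << 20)              # skip 1 MiB without writing: becomes a hole
    f.write(b"x")                # one real byte at offset 1 MiB

st = os.stat(path)
print("logical size:", st.st_size)          # 1048577 bytes
print("allocated:   ", st.st_blocks * 512)  # far less: the hole uses no blocks
```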

Obtaining Servers and Clients

With this background on the features of NFS, there is considerable interest in the end-user community for NFSv4.1 support from both servers and clients. Many Network Attached Storage (NAS) vendors now support NFSv4, and in the last 12 months, there has been a flurry of activity and many developments in server support of NFSv4.1 and pNFS.

For NFS server vendors, NFSv4.1 and file-based, block-based, and object-based implementations of pNFS are available; refer to the vendor websites for the latest up-to-date information.

On the client side, Red Hat Enterprise Linux 6.4 includes full support for NFSv4.1 and pNFS (see www.redhat.com), Novell SUSE Linux Enterprise Server 11 SP2 supports NFSv4.1 and pNFS based on the 3.0 Linux kernel (see www.suse.com), and Fedora is available at fedoraproject.org.

Conclusion     

NFSv4.1 includes features intended to enable its use in global wide area networks (WANs).  These advantages include:

  • Firewall-friendly single port operations
  • Advanced and aggressive cache management features
  • Internationalization support
  • Replication and migration facilities
  • Optional cryptography quality security, with access control facilities that are compatible across UNIX® and Windows®
  • Support for parallelism and data striping

The goal for NFSv4.1 and beyond is to define how you get to storage, not what your storage looks like. That has meant inevitable changes. Unlike earlier versions of NFS, the NFSv4 protocol integrates file locking, strong security, operation coalescing, and delegation capabilities to enhance client performance for data sharing applications on high-bandwidth networks.

NFSv4.1 servers and clients provide even more functionality, such as wide striping of data to enhance performance.  NFSv4.2 and beyond promise further enhancements to the standard that increase its applicability to today’s application requirements. Once NFSv4.2 is ratified, we can expect to see server and client implementations that provide its features soon after; in some cases, the features are already being shipped now as vendor-specific enhancements.

With careful planning, migration to NFSv4.1 (and NFSv4.2 when it becomes generally available) from prior versions can be accomplished without modification to applications or the supporting operational infrastructure, for a wide range of applications: home directories, HPC storage servers, backup jobs, and a variety of other applications.

FOOTNOTE: Parts of this blog were originally published in Usenix ;login: February 2012 under the title The Background to NFSv4.1. Used with permission.

 
