New Webpage about SSD Form-Factors

Team_SSSI

Oct 3, 2011

There's a new page on the SSSI website that describes the wide range of SSD form-factors (physical formats) on the market today. SSSI defines three major categories - Solid State Drive, Solid State Card, and Solid State Module - and the new page provides descriptions and examples of each. Take a look.



What’s Old is New Again: Storage Tiering

Larry.Freeman

Oct 3, 2011


Storage tiering is nothing new, but then again it's all new. Traditionally, tiering meant that you'd buy fast (Tier One) storage arrays, based on 15K Fibre Channel drives, for your really important applications. Next you'd buy some slower (Tier Two) storage arrays, based on SATA drives, for your not-so-important applications. Finally you'd buy a (Tier Three) tape library or VTL to house your backups. This is how most people have accomplished storage tiering for the past couple of decades, with slight variations. For instance, I've talked to some companies that had as many as six tiers when they added their remote offices and disaster recovery sites – these were very large users with very large storage requirements who could justify breaking the main three tiers into sub-tiers.

Whether you categorized your storage into three or six tiers, the basic definition of a tier has historically been a collection of storage silos with particular cost and performance attributes that made them appropriate for certain workloads. Recent developments, however, have changed this age-old paradigm:

1) The post-recession economy has driven IT organizations to look for ways to cut costs by improving storage utilization
2) The introduction of the SSD offers intriguing performance but a higher cost than most can afford
3) Evolving storage array intelligence now automates the placement of “hot” data without human intervention

These three events led to a rebirth of sorts in tiering, in the form of Automated Storage Tiering. This style of tiering allows the use of new components like SSD without breaking the bank. Assuming that for any given workload a small percentage of data is accessed very frequently, automated tiering allows the use of high-performance components for that data only, while the less-frequently accessed data can be automatically stored on more economical media.
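The placement idea above is easy to sketch: rank blocks by how often they're accessed and promote only the hottest fraction to the fast tier. The few lines below are an illustrative model only (the function and names are invented for this post, not taken from any vendor's implementation):

```python
from collections import Counter

def plan_tiers(access_counts, hot_fraction=0.1):
    """Split blocks into a small 'hot' set for the SSD tier and a 'cold'
    remainder for cheaper media, ranked by access frequency.
    Illustrative only - names are not from any vendor's product."""
    ranked = [blk for blk, _ in Counter(access_counts).most_common()]
    hot_n = max(1, int(len(ranked) * hot_fraction))
    return set(ranked[:hot_n]), set(ranked[hot_n:])

# A skewed workload where a couple of blocks get most of the I/O:
counts = {"b0": 900, "b1": 850, "b2": 12, "b3": 9, "b4": 7,
          "b5": 5, "b6": 4, "b7": 3, "b8": 2, "b9": 1}
hot, cold = plan_tiers(counts, hot_fraction=0.2)
print(sorted(hot))  # ['b0', 'b1'] - only the hottest 20% earn SSD space
```

A real array does this continuously, at sub-volume granularity, and without human intervention, but the ranking idea is the same.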

As with any new technology, or in this case a new technique, vendors are approaching automated tiering from different angles. This is good for consumers in the long run (the best implementations will eventually win out) but in the short run creates some confusion when determining which vendor you should align yourself and your data with.

As a result, automated storage tiering is getting quite a bit of press from vendors and industry analysts alike. For example, here are two pieces that appeared recently:

Information Week Storage Virtualization Tour – All About Automated Tiering
Business Week – Auto Tiering Crucial to Storage Efficiency

SNIA is also interested in helping clear any confusion around automated storage tiering. This week the DPCO committee will host a live webcast panel of tiering vendors to discuss the pros and cons of tiering within the scope of their products. You can register for it here: Sign up

Join this session and learn more about similarities and differences in various tiering implementations. We hope to see some “lively” interaction, so join the tiering discussion and get your questions answered.

See you there!

Larry

PS – If you can't make this week's webcast, we'll also be recording it, and you'll be able to view it from the DPCO website.


Update to SSSI Glossary Now Available

Team_SSSI

Sep 8, 2011

Just posted to the SSSI site is the latest version of the SSSI Glossary. New to this edition is a complete set of terms from the SSS Performance Test Specification. The new glossary can be downloaded here.



Plan to Attend Cloud Burst and SDC

mac

Aug 25, 2011


Cloud storage developers will be converging on Santa Clara in September for the Storage Developer Conference and the Cloud Burst Event

Cloud Burst Event

There are a multitude of events dedicated to cloud computing, but where can you go to find out specifically about cloud storage? The 2011 SNIA Cloud Burst Summit educates and offers insight into this fast-growing market segment. Come hear from industry luminaries, see live demonstrations, and talk to technology vendors about how to get started with cloud storage.

The audience for the SNIA Cloud Burst Summit is IT storage professionals and related colleagues who are looking to cloud storage as a solution for their IT environments. The day’s agenda will be packed with presentations from cloud industry luminaries, the latest cloud development panel discussions, a focus on cloud backup, and a cocktail networking opportunity in the evening.

Check out the Agenda and Register Today…

 

Storage Developer Conference

The SNIA Storage Developer Conference is the premier event for developers of cloud storage, filesystems and storage technologies. This year there is a full cloud track on the Agenda, as well as some great speakers. Some examples include:

Programming the Cloud

CDMI for Cloud IPC

David Slik
Technical Director,
Object Storage
NetApp

Open Source Droplet Library with CDMI Support

Giorgio Regni
CTO,
Scality

CDMI Federations, Year 2

David Slik
Technical Director,
Object Storage,
NetApp

CDMI Retention Improvements

Priya Nc
Principal Software Engineer,
EMC Data Storage Systems

CDMI Conformance and Performance Testing

David Slik
Technical Director,
Object Storage,
NetApp

Use of Storage Security in the Cloud

David Dodgson
Software Engineer,
Unisys

Authenticating Cloud Storage with Distributed Keys

Jason Resch
Senior Software Engineer,
Cleversafe

Resilience at Scale in the Distributed Storage Cloud

Alma Riska
Consultant Software Engineer,
EMC

Changing Requirements for Distributed File Systems in Cloud Storage

Wesley Leggette
Cleversafe, Inc

Best Practices in Designing Cloud Storage Based Archival Solution

Sreenidhi Iyangar
Senior Technical Lead,
EMC

Tape’s Role in the Cloud

Chris Marsh
Market Development Manager,
Spectra Logic



A Successful Flash Memory Summit for SSSI

Team_SSSI

Aug 13, 2011

SSSI had a busy booth at FMS. Show attendees stopped by to discuss the newly released Client Performance Test Specification, and to see a demonstration of the high performance SSD product from member Texas Memory Systems.

We were honored to win a best-of-show award from FMS. The Enterprise PTS won the Best Enterprise Application category. The judges were impressed by the industry-wide cooperative effort that went into creating the specification.

It was exciting to hear from companies who have implemented or are planning to implement the PTS. Watch for some case studies to be posted to the SSSI site. A successful show in all respects.


Client Performance Test Specification Released

Team_SSSI

Aug 9, 2011

Today, SSSI released the Client PTS. Client refers to a single user / few tasks environment, as opposed to Enterprise, which implies multiple users / many tasks. What are the differences between the Client and Enterprise PTS?

The Enterprise PTS calls out a Write Saturation test, where the SSD is written to continuously over the entire drive capacity 4 times or for 24 hours, whichever comes first. This test provides a good idea of the robustness of the drive in an enterprise environment. This test is not applicable to Client environments, and was not included in the Client PTS.

The other three main types of tests measure IOPS, throughput (MB/sec), and latency (how quickly a drive responds to commands) and are included in both the Enterprise and Client PTS. Here the Client PTS differs in that tests may be performed on smaller segments of the drive, not all of the portions of the drive being tested need to be preconditioned, and different types of test stimulus are applied.

These changes were based on the testing of literally dozens of different SSDs, as well as data provided by manufacturers of client SSDs. The Client and Enterprise PTS documents can be downloaded at www.snia.org/pts.
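The Write Saturation stopping rule mentioned above is concrete enough to express directly: stop once 4x the drive's capacity has been written, or 24 hours have elapsed, whichever comes first. Here is a minimal sketch of that rule (parameter names are illustrative, not taken from the PTS text):

```python
def saturation_done(bytes_written, elapsed_hours, capacity_bytes,
                    capacity_passes=4, max_hours=24):
    """Enterprise-PTS-style stop rule for a Write Saturation run:
    end once the drive has been written capacity_passes times over,
    or once max_hours have elapsed, whichever comes first."""
    return (bytes_written >= capacity_passes * capacity_bytes
            or elapsed_hours >= max_hours)

cap = 256 * 10**9  # a hypothetical 256 GB drive
print(saturation_done(3 * cap, 10, cap))  # neither limit reached yet
print(saturation_done(4 * cap, 10, cap))  # 4x capacity written: stop
print(saturation_done(1 * cap, 24, cap))  # time limit reached: stop
```

A fast enterprise drive typically hits the 4x-capacity limit first; a slower client-class drive under the same stimulus would more likely run out the clock, which is one reason the test doesn't carry over to the Client PTS.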


Two Storage Trails on the 10GbE Convergence Path

SteveAbbott

Aug 9, 2011


As the migration to 10Gb Ethernet moves forward, many data centers are looking to converge network and storage I/O to fully utilize a ten-fold increase in bandwidth. Industry discussions continue regarding the merits of 10GbE iSCSI and FCoE. Some of the key benefits of both protocols were presented by Maziar Tamadon and Jason Blosil in an iSCSI SIG webcast on July 19th: Two Storage Trails on the 10Gb Convergence Path.

It’s a win-win solution as both technologies offer significant performance improvements and cost savings.  The discussion is sure to continue.

Since there wasn’t enough time to respond to all of the questions during the webcast, we have consolidated answers to all of them in this blog post from the presentation team.  Feel free to comment and provide your input.

Question: How is multipathing changed or affected with FCoE?

One of the benefits of FCoE is that it uses Fibre Channel in the upper layers of the software stack where multipathing is implemented.  As a result, multipathing is the same for Fibre Channel and FCoE.

Question: Are the use of CNAs with FCoE offload getting any traction?  Are these economically viable?

The adoption of FCoE has been slower than expected, but is gaining momentum.  Fibre Channel is typically used for mission-critical applications so data centers have been cautious about moving to new technologies.   FCoE and network convergence provide significant cost savings, so FCoE is economically viable.

Question: If you run the software FCoE solution would this not prevent boot from SAN?

Boot from SAN is not currently supported when using FCoE with a software initiator and NIC.  Today, boot from SAN is only supported using FCoE with a hardware converged networked adapter (CNA).

Question: How do you assign priority for FCoE vs. other network traffic? Doesn't it still make sense to have a dedicated network for data-intensive network use?

Data Center Bridging (DCB) standards that enable FCoE allow priority and bandwidth to be assigned to each priority queue or link. Each link may support one or more data traffic types. Support for this functionality is required between two end points in the fabric, such as between an initiator at the host and the first network connection at the top-of-rack switch. The DCBX standard facilitates negotiation between devices to enable supported DCB capabilities at each end of the wire.
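As a rough illustration of the bandwidth-assignment side of DCB, the sketch below splits a 10GbE link across traffic classes in proportion to configured weights, the way an Enhanced Transmission Selection (ETS) configuration is usually expressed. This is a simplified model with invented names, not the actual 802.1Qaz scheduler, which among other things lets a busy class borrow bandwidth a quiet class isn't using:

```python
def ets_shares(weights, link_gbps=10.0):
    """Split link bandwidth across traffic classes in proportion to
    ETS-style percentage weights. Illustrative model only: real DCB
    hardware schedules per-frame and redistributes unused allocation."""
    total = sum(weights.values())
    return {cls: link_gbps * w / total for cls, w in weights.items()}

# e.g. 50% to FCoE storage traffic, 30% to LAN, 20% to management
shares = ets_shares({"fcoe": 50, "lan": 30, "mgmt": 20})
print(shares)  # {'fcoe': 5.0, 'lan': 3.0, 'mgmt': 2.0}
```

The point of the model is that storage traffic gets a guaranteed minimum share of the converged link rather than a dedicated wire.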

Question: Category 6A uses more power than twin-ax or OM3 cable infrastructures, which in large build-outs is significant.

Category 6A does use more power than twin-ax or OM3 cables.  That is one of the trade-offs data centers should consider when evaluating 10GbE network options.

Question: Don’t most enterprise storage arrays support both iSCSI and FC/FCoE ports?  That seems to make the “either/or” approach to measuring uptake moot.

Many storage arrays today support either the iSCSI or FC storage network protocol. Some arrays support both at the same time. Very few support FCoE. And some others support a mixture of file and block storage protocols, often called Unified Storage. But, concurrent support for FC/FCoE and iSCSI on the same array is not universal.

Regardless, storage administrators will typically favor a specific storage protocol based upon their acquired skill sets and application requirements. This is especially true with block storage protocols since the underlying hardware is unique (FC, Ethernet, or even Infiniband). With the introduction of data center bridging and FCoE, storage administrators can deploy a single physical infrastructure to support the variety of application requirements of their organization. Protocol attach rates will likely prove less interesting as more vendors begin to offer solutions supporting full network convergence.

Question: I am wondering what the sample size of your poll results was. How many people voted?

We had over 60 live viewers of the webcast and over 50% of them participated in the online questions. So, the sample size was about 30+ individuals.

Question: Tape? Isn’t tape dead?

Tape as a backup methodology is definitely further down the slope of its life than it was 5 or 10 years ago, but it still has a pulse. Expectations are that disk-based backup, DR, and archive solutions will be common practice in the near future. But many companies still use tape for archival storage. Like most declining technologies, tape will likely have a long tail as companies continue to modify their IT infrastructure and business practices to take advantage of newer methods of data retention.

Question: Do you not think 10 Gbps will fall off after 2015 as the adoption of 40 Gbps to blade enclosures will start to take off in 2012?

10GbE was expected to ramp much faster than what we have witnessed. Early applications of 10GbE in storage were introduced as early as 2006. Yet, we are only now beginning to see more broad adoption of 10GbE. The use of LOM and 10GBaseT will accelerate the use of 10GbE.

Early server adoption of 40GbE will likely be with blades. However, recognize that rack servers still outsell blades by a pretty large margin. As a result, 10GbE will continue to grow in adoption through 2015 and perhaps 2016. 40GbE will become very useful to reduce port count, especially at bandwidth aggregation points, such as inter-switch links. 40Gb ports may also be used to save on port count with the use of fanout cables (4x10Gb). However, server performance must continue to increase in order to be able to drive 40Gb pipes.

Question: Will you be making these slides available for download?

These slides are available for download at www.snia.org/?

Question: What is your impression of how convergence will change data center expertise?  That is, who manages the converged network?  Your storage experts, your network experts, someone new?

Network Convergence will indeed bring multiple teams together across the IT organization: server team, network team, and storage team to name a few. There is no preset answer, and the outcome will be on a case by case basis, but ultimately IT organizations will need to figure out how a common, shared resource (the network/fabric) ought to be managed and where the new ownership boundaries would need to be drawn.

Question: Will there be or is there currently a NDMP equivalent for iSCSI or 10GbE?

There is no equivalent to NDMP for iSCSI. NDMP is a management protocol used to backup server data to network storage devices using NFS or CIFS. SNIA oversees the development of this protocol today.

Question: How does the presenter justify the statement of "no need for specialized" knowledge or tools? Given that iSCSI uses new protocols and concepts not found in a traditional LAN, how could he say that?

While it’s true that iSCSI comes with its own concepts and subtleties, the point being made centered around how pervasive and widespread the underlying Ethernet know-how and expertise is.

Question: FC vs. IP storage: what does IDC count if an array has both FC and IP storage? Which group does it go in? If a customer buys an array but does not use one of the two protocols, will that show up in IDC numbers? This info conflicts with SNIA's numbers.

We can’t speak to the exact methods used to generate the analyst data. Each analyst firm has their own method for collecting and analyzing industry data. The reason for including the data was to discuss the overall industry trends.

Question: I noticed in the high-level overview that FCoE appeared not to be a ‘mesh’ network. How will this deal w/multipathing and/or failover?

The diagrams only showed a single path for FCoE to simplify the discussion on network convergence.  In a real-world, best-practices deployment there would be multiple paths with failover.   FCoE uses the same multipathing and failover capabilities that are available for Fibre Channel.

Question: Why are you including FCoE in IP-based storage?

The graph should indeed have read Ethernet storage rather than IP storage. This was fixed after the webinar and before the presentation got posted on SNIA’s website.
