January 29 NVM Summit At the SNIA Symposium Brings Experts Together

Marty Foltyn

Jan 23, 2013

The January 29th Summit on Non-Volatile Memory, held in San Jose, California as part of the SNIA Winter Symposium, delivers a comprehensive one-day deep dive into the issues you need to consider about this technology that has changed the way storage devices can be used. Join the 150 colleagues with products, strategies, or simply an interest in NVM who have already signed up for this complimentary event. Speakers from companies leading the way in NVM will offer critical insights into NVM and the future of computing in an exciting day-long agenda:
  • Keynotes from Mark Peters, Senior Analyst, ESG on the Storage Industry Landscape and David Alan Grier, President, The Computer Society on The Future of Computing with NVM Inflection Point
  • Industry Analyst Perspectives from Jeff Janukowicz, Research Director, IDC
  • Presentations from:
    • Andy Rudoff, Senior Software Engineer, Intel on the problems being solved
    • Ric Wheeler, Manager, Software Engineering, Red Hat on Linux and NVM
    • Dr. Garret Swart, Database Architect, Oracle on killer apps benefiting from this new architecture
    • Jim Pinkerton, Partner Architect, Microsoft on design considerations when implementing NVM
    • Steven Peters, Principal Engineer, LSI on what’s nice to have in this new stack
    • Danny Cobb, CTO, EMC on the workings of subsystem speeds and feeds
    • Kaladhar Voruganti, Technical Director, NetApp on tools for performance modeling and measurement
Remember, this Summit is COMPLIMENTARY to attend, but you must register to guarantee your seat at www.snia.org/nvmsummit-reg. See you there!



Introducing SNIA’s Workload I/O Capture Program

Jim Handy

Jan 17, 2013


SNIA’s Solid State Storage Initiative (SSSI) recently rolled out its new Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications’ I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.).

The WIOCP helps users to identify “Hot Spots” where storage performance is creating bottlenecks.  SNIA hopes that users will help the association to collect real-use statistics on workloads by uploading their results to the SNIA website.

Using this information, SNIA member companies will be able to improve the performance of their solid state storage solutions, including SSDs and flash storage arrays.

How it Works

The WIOCP software is a safe, thoroughly tested tool that runs unobtrusively in the background, constantly capturing a large set of SSD and HDD I/O metrics useful to both the computer user and to SNIA.

Users simply enter the drive letters for the drives whose I/O operation metrics are to be collected. The program does not record anything that might be sensitive, including details of your actual workload (for example, files you’ve accessed). Results are presented in clear and accessible report formats.
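As a rough illustration of the arithmetic behind metrics like these, the sketch below derives IOPS and MB/s from two snapshots of a drive’s cumulative counters. The field names and numbers are invented for the example; the actual WIOCP collects a much richer set, including response times and queue depths.

```python
from dataclasses import dataclass

@dataclass
class DiskSample:
    """One snapshot of a drive's cumulative I/O counters (hypothetical fields)."""
    t: float      # seconds since monitoring started
    ops: int      # cumulative read + write operations
    nbytes: int   # cumulative bytes transferred

def rates(a: DiskSample, b: DiskSample) -> tuple[float, float]:
    """Return (IOPS, MB/s) over the interval between two samples."""
    dt = b.t - a.t
    iops = (b.ops - a.ops) / dt
    mbps = (b.nbytes - a.nbytes) / dt / 1e6
    return iops, mbps

# Two samples taken 10 seconds apart:
s0 = DiskSample(t=0.0, ops=0, nbytes=0)
s1 = DiskSample(t=10.0, ops=4500, nbytes=250_000_000)
iops, mbps = rates(s0, s1)   # 450.0 IOPS, 25.0 MB/s
```

A real collector would take such snapshots on a timer per drive letter and aggregate the interval rates into the report.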

How can WIOCP Help You?

Users can collect (and optionally display in real time) information reflecting their current environment and operations with the security of a tool delivered with digital authentication for their protection.

The collected I/O metrics will provide information useful to evaluate an SSD system environment.

Statistics from a wide range of applications will be collected and can be used with the SSS Performance Test Specification to help users determine which SSD should perform best for them.

How can Your Participation Help SNIA and the SSSI?

The WIOCP provides unique, raw information that can be analyzed by SNIA’s Technical Work Groups (TWGs) including the IOTTA TWG to gain insights into workload characteristics, key performance metrics, and SSD design tradeoffs.

The collected data from all participants will be aggregated and publicly available for download and analysis. No personally identifiable information is collected – participants will benefit from this information pool without compromising their privacy or confidentiality.

Downloading the WIOCP

Help SNIA get started on this project by clicking HERE and using the “Download Key Code”: SSSI52kd9A8Z.

The WIOCP tool will be delivered to your system with a unique digital signature.  The tool only takes a few minutes to download and initialize, after which users can return to the task at hand!

If you have any questions or comments, please contact: SSSI_TechDev-Chair@SNIA.org


Object Storage is a Big Deal (and Ethernet Matters)

Ingo Fuchs

Jan 14, 2013


A significant challenge in managing large amounts of data (or Big Data) is a lack of what I like to call “total data awareness”. It’s a situation where you know (or suspect) that you have data – you just can’t find it. When you think about many current IT environments, they are often not built for total data awareness. This starts with core elements of the IT infrastructure, such as file systems. Traditional file systems and access methods were not designed to store hundreds of millions or billions of files in a single namespace. This leads to admins storing data in multiple file systems, multiple shares, complex directory structures – not because the data should be logically organized in that way, but simply because of limitations in file system architectures. This issue becomes even more pressing when data sits in multiple locations, maybe even across on-premise and off-premise, cloud-based storage.

Is object-based storage the answer?

Think about how you find data on your computer. Do you navigate complex directory structures, trying to remember the file name of the file that hopefully has the data you are looking for – or have you moved on and just use search tools like Spotlight? Imagine you have hundreds of millions of files, scattered across dozens or hundreds of sites. How about just searching across these sites and immediately finding the data you are looking for? With object storage technology you have the ability to store data in objects, along with metadata that describes the object. Now you can just search for your data based on metadata tags (like a filename – or even better an account number and document type) – as well as manage data based on policies that leverage that metadata.
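To make the idea concrete, here is a minimal, purely illustrative sketch of metadata-tagged objects and a tag-based search. The store, field names, and tags are invented for the example; production object stores index metadata across sites and at far larger scale.

```python
# A toy in-memory "object store": each object is data plus a metadata dict.
store = []

def put(data: bytes, **metadata) -> None:
    """Store an object together with its descriptive metadata tags."""
    store.append({"data": data, "meta": metadata})

def search(**query) -> list:
    """Find objects whose metadata matches every key/value in the query."""
    return [o for o in store
            if all(o["meta"].get(k) == v for k, v in query.items())]

put(b"...", account="10472", doc_type="invoice", site="london")
put(b"...", account="10472", doc_type="contract", site="tokyo")
put(b"...", account="99301", doc_type="invoice", site="london")

# Retrieval by metadata, no directory paths involved:
hits = search(account="10472", doc_type="invoice")
```

The same tags that drive search can drive policy: for example, applying a retention rule to every object where doc_type is "invoice".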

However, this often means that you have to consider interfacing with your storage system through APIs, as opposed to NFS and CIFS – so your applications need to support whatever API your storage vendor offers.

CDMI to the rescue?

Today, storage vendors often use proprietary APIs. This means that application vendors would have to support a plethora of APIs from a number of different vendors, leading to a lack of commitment from application vendors to support more innovative, object-based storage architectures.

A key path to solve this issue is to leverage technology and standards that have been specifically developed to provide this idea of a single namespace for billions of data sets and across locations and even managed services that might reside off-premise.

Relatively new on the standards side you have CDMI (http://www.snia.org/cdmi), the Cloud Data Management Interface. CDMI is a standard developed by SNIA (http://www.snia.org), the Storage Networking Industry Association, with heavy involvement from a number of leading storage vendors. CDMI not only introduces a standard interface to ingest and retrieve data into and out of a large-scale repository, it also enables applications to easily manage this repository and where the data sits.
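As a sketch of what a CDMI-style request looks like, the snippet below assembles a PUT for an object with metadata as plain data, without making any network call. The application/cdmi-object media type and the X-CDMI-Specification-Version header follow the published CDMI specification; the repository path, version number, and metadata keys are invented for the example.

```python
import json

def cdmi_put_object(path: str, value: str, metadata: dict) -> dict:
    """Assemble a CDMI-style PUT request as a plain dict (illustrative only)."""
    return {
        "method": "PUT",
        "path": path,
        "headers": {
            "Content-Type": "application/cdmi-object",
            "X-CDMI-Specification-Version": "1.0.2",
        },
        "body": json.dumps({
            "value": value,
            "metadata": metadata,   # searchable, policy-driving tags
        }),
    }

req = cdmi_put_object(
    "/cdmi/repository/invoices/inv-10472.pdf",
    value="...payload...",
    metadata={"account": "10472", "doc_type": "invoice"},
)
```

Because the interface is plain HTTP and JSON, any application that can issue a REST call can ingest, retrieve, and manage objects in the repository.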

CDMI is the new NFS

Forgive the provocation, but when it comes to creating and managing large, distributed content repositories it quickly becomes clear that NFS and CIFS are not ideally suited for this use case. This is where CDMI shines, especially with an object-based storage architecture behind it that was built to support multi-petabyte environments with billions of data sets across hundreds of sites and accommodates retention policies that can reach to “forever”.

CDMI and NFS have something in common – Ethernet

One of the key commonalities between CDMI and NFS is that they both are ideally suited to be deployed in an Ethernet infrastructure. CDMI, specifically, is a RESTful HTTP interface, so it runs on standard Ethernet networks. Even for object storage deployments that don’t support CDMI, practically all of these multi-site, long-term repositories support HTTP (and thus Ethernet) through proprietary APIs based on REST or SOAP.

Why does this matter?

Ethernet infrastructure is a great foundation to run any number of workloads, including access to data that sits in large, multi-site content repositories that are based on object storage technologies. So if you are looking at object storage, chances are that you will be able to leverage existing Ethernet infrastructure.



How is 10GBASE-T Being Adopted and Deployed?

David Fair

Jan 8, 2013


For nearly a decade, the primary deployment of 10 Gigabit Ethernet (10GbE) has been using network interface cards (NICs) supporting enhanced Small Form-Factor Pluggable (SFP+) transceivers. The predominant transceivers for 10GbE are Direct Attach (DA) copper, short range optical (10GBASE-SR), and long-range optical (10GBASE-LR). The Direct Attach copper option is the least expensive of the three. However, its adoption has been hampered by two key limitations:

- DA’s range is limited to 7m, and

- because of the SFP+ connector, it is not backward-compatible with existing 1GbE infrastructure using RJ-45 connectors and twisted-pair cabling.

10GBASE-T addresses both of these limitations.

10GBASE-T delivers 10GbE over Category 6, 6A, or 7 cabling terminated with RJ-45 jacks. It is backward-compatible with 1GbE and even 100 Megabit Ethernet. Cat 6A and 7 cables will support up to 100m. The advantages for deployment in an existing data center are obvious. Most existing data centers have already installed twisted pair cabling at Cat 6 rating or better. 10GBASE-T can be added incrementally to these data centers, either in new servers or via NIC upgrades “without forklifts.” New 10GBASE-T ports will operate with all the existing Ethernet infrastructure in place. As switches get upgraded to 10GBASE-T at whatever pace, the only impact will be dramatically improved network bandwidth.

Market adoption of 10GBASE-T accelerated sharply when the first single-chip 10GBASE-T controllers hit production. This integration became possible because of Moore’s Law advances in semiconductor technology, which also enabled the rise of dense commercial switches supporting 10GBASE-T. Integrating PHY and MAC on a single piece of silicon significantly reduced power consumption, which made fan-less 10GBASE-T NICs possible for the first time. Also, switches supporting 10GBASE-T are now available from Cisco, Dell, Arista, Extreme Networks, and others, with more to come. You can see the early market impact single-chip 10GBASE-T had by mid-year 2012 in this analysis of shipments in numbers of server ports from Crehan Research:

 

[Figure: Server-class Adapter & LOM 10GBASE-T Shipments (Crehan Research)]

Note that Crehan believes that by 2015, over 40% of all 10GbE adapters and controllers sold that year will be 10GBASE-T.

Early concerns about the reliability and robustness of 10GBASE-T technology have all been addressed in the most recent silicon designs. 10GBASE-T meets the bit-error rate (BER) requirements of all the Ethernet and storage-over-Ethernet specifications. As I addressed in an earlier SNIA-ESF blog, the storage networking market is a particularly conservative one. But there appear to be no technical reasons why 10GBASE-T cannot support NFS, iSCSI, and even FCoE. Today, Cisco is in production with a switch, the Nexus 5596T, and a fabric extender, the 2232TM-E, that support “FCoE-ready” 10GBASE-T. It’s coming – with all the cost-of-deployment benefits of 10GBASE-T.
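To put the BER requirement in perspective, a quick back-of-the-envelope calculation shows how rare bit errors are at that rate. The 1e-12 figure used here is the BER target commonly cited for 10GbE links; treat it as an illustrative assumption and consult IEEE 802.3 for the normative value.

```python
# Expected bit errors at line rate for an assumed 1e-12 bit-error rate.
line_rate_bps = 10e9    # 10 Gb/s
ber = 1e-12             # assumed errors per bit

errors_per_second = line_rate_bps * ber       # ~0.01 errors/s
seconds_per_error = 1 / errors_per_second     # ~100 s between bit errors
```

Even at full line rate, that is roughly one bit error every hundred seconds, well within what upper-layer protocols like TCP and FCoE's CRC checks are designed to handle.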




SSSI Highlighting PCIe SSDs at the Storage Visions Conference

Marty Foltyn

Jan 3, 2013

Join the SSSI at the Storage Visions Conference, January 6-7, 2013 at the Riviera Hotel in Las Vegas, NV. With a theme of “Petabytes are the new Terabytes,” the 2013 conference will explore the convergent needs of digital storage to support cloud content distribution and sharing, user-generated content capture and use, and professional media and entertainment applications.

The SSSI booth is #6 on the exhibit floor and will showcase a PCIe SSD display of drives from SSSI members BitMicro, Fusion-io, IDT, Marvell, Micron, STEC, and Virident, plus a live demonstration by Fusion-io. The latest information gathered by the WIOCP Project will also be presented.

Featured SSSI member speakers at Storage Visions include Jim Handy of Objective Analysis, who will examine how new storage developments are driving new storage systems with panelists Jim Pappas of Intel, Paul Wassenberg of Marvell, Mike Fitzpatrick of Toshiba, Paul Luse of Intel, and Sumit Puri of LSI; and Jim Pappas of Intel, who will moderate a panel on new frontiers in storage software with SSSI member panelists Walt Hubis of Fusion-io, Doug Voigt of HP, and Bob Beauchamp of EMC.

Follow our activities on Twitter at twitter.com/#!/sniasolidstate



Ethernet Storage Forum – 2012 Year in Review and What to Expect in 2013

Jason Blosil

Dec 20, 2012


As we come to a close of the year 2012, I want to share some of our successes and briefly highlight some new changes for 2013. Calendar year 2012 has been eventful and the SNIA-ESF has been busy. Here are some of our accomplishments:

  • 10GbE – With virtualization and network convergence, as well as the general availability of LOM and 10GBASE-T cabling, we saw this is a “breakout year” for 10GbE. In July, we published a comprehensive white paper titled “10GbE Comes of Age.” We then followed up with a Webcast “10GbE – Key Trends, Predictions and Drivers.” We ran this live once in the U.S. and once in the U.K. and combined, the Webcast has been viewed by over 400 people!
  • NFS – has also been a hot topic. In June we published a white paper “An Overview of NFSv4” highlighting the many improved features NFSv4 has over NFSv3. A Webcast to help users upgrade, “NFSv4 – Plan for a Smooth Migration,” has also been well received with over 150 viewers to date.  A 4-part Webcast series on NFS is now planned. We kicked the series off last month with “Reasons to Start Working with NFSv4 Now” and will continue on this topic during the early part of 2013. Our next NFS Webcast will be “Advances in NFS – NFSv4.1 and pNFS.” You can register for that here.
  • Flash – The availability of solid state devices based on NAND flash is changing the performance efficiencies of storage. Our September Webcast “Flash – Plan for the Disruption” discusses how Flash is driving the need for 10GbE and has already been viewed by more than 150 people.

We have also expanded our membership, welcoming Tonian and LSI to the ESF. With this new charter, we expect to see an increase in membership participation as we drive incremental value and establish ourselves as a leadership voice for Ethernet storage.

As we move into 2013, we expect two hot trends to continue – the broader use of file protocols in datacenter applications, and the continued push toward datacenter consolidation with the use of Ethernet as a storage network. In order to better address these two trends, we have modified our charter for 2013. Our NFS SIG will be renamed the File Protocol SIG and will focus on promoting not only NFS, but also SMB / CIFS solutions and protocols. The iSCSI SIG will be renamed to the Storage over Ethernet SIG and will focus on promoting data center convergence topics with Ethernet networks, including the use of block and file protocols, such as NFS, SMB, FCoE, and iSCSI, over the same wire. This modified charter will allow us to have a richer conversation around storage trends relevant to your IT environment.

So, here is to a successful 2012, and excitement for the coming year.

