Storage and Storage Management

Jump straight to an abstract:

The Abstracts

Storage Performance Management (SPM)
Brett Allison

This is an introduction to storage performance management, explaining why this new discipline is now emerging. The cost and performance benefits of storage performance management are covered, as well as the required building blocks, including the necessary processes, tools, and skills. This session is appropriate for management as well as technologists.

Learning Objectives

  • Understand the key processes required in order to manage storage performance. 
  • Understand the characteristics of the tools required to manage storage performance.
  • Understand the key skills required to manage storage performance.

Writing Storage RFPs in 2011
John Webster

The data storage industry has changed dramatically over the past few years.  The traditional Request for Proposal (RFP) does not address these changes or the challenges faced by IT organizations.  Therefore, a new way of defining storage requirements to leverage this evolving technology is needed. This tutorial defines the “must-have” criteria that should be included in any data storage RFP in the current era.  Performance, scalability, and resiliency are givens, but new challenges will arise around server and desktop virtualization, power consumption, space requirements, and overall cost containment.  Accommodations for developing technologies must be designed into the RFP.  Attendees of this session will receive an RFP template designed specifically to take advantage of current storage and IT technologies.

Learning Objectives

  • Learn which storage features are the most desirable in virtual server environments. 
  • Learn how to write a storage-related RFP that delivers differentiated responses. 
  • Learn new ways to approach and deal with preferred storage vendors.

Understanding High Availability in the SAN
Mark Fleming

This session will appeal to those seeking a fundamental understanding of High Availability (HA) configurations in the SAN. Modern SANs have developed numerous methods using hardware and software to assure high availability of storage to customers. The session will explore basic concepts of HA; move through a sample configuration from end-to-end; investigate HA and virtualization, converged networks and the cloud; and discuss some of the challenges and pitfalls faced in testing HA configurations. Real customer experiences will be shared to drive it all home!

Storage Optimization for Virtual Desktops
Russ Fellows

We will explore the critical storage features needed for Virtual Desktop Initiatives (VDI). The focus will be on the storage system features required to meet the performance requirements and the TCO levels critical to a successful VDI project. The recommendations are drawn from hands-on experience implementing VDI projects in mid-sized and large IT environments. Particular attention will be placed on understanding the storage implications of different VDI implementations and how these impact performance and overall project cost.
The emphasis will be on practical configuration options and how best to leverage specific storage features in order to maximize storage performance and efficiency.

Learning Objectives

  • Learn what impact different VDI choices have on storage
  • Understand the critical storage features required for a successful VDI deployment
  • Learn how VDI configuration options impact storage efficiency and performance 

SAS & SATA Combine to Change the Storage Market
Harry Mason, Marty Czekalski

Serial Attached SCSI (SAS) has become the backbone of enterprise storage deployments.  Functioning as both a device-level interface and a tiered storage interconnect, SAS has preserved the usability and cost-effectiveness of the SCSI architecture while rapidly evolving through new features, capabilities, and performance enhancements.  This combination of legacy and evolution makes it possible to realize extremely high throughput with standard high-volume components, extend to new technologies of the customer’s choice, and build systems that accommodate large numbers of SAS and/or SATA hard disk drives. 

Intended for OEMs, system builders, and end users, this tutorial describes the capabilities of the SAS interface, how it is designed to interoperate with SATA drives, and how, combined, these technologies can deliver some very compelling storage solutions.  The presenters will look at the evolution of SAS and how it has expanded beyond traditional DAS usage, discuss the significance of 6Gb/s SAS to SSDs, examine the effect of SAS on bandwidth aggregation, and show a detailed comparison of connector types. 

To keep attendees current on the most recent technology developments, the tutorial will also provide an up-to-the-minute recap of the latest additions to the SAS standard and roadmaps.  It will detail applications and storage deployments requiring Non-volatile memory, faster RAID performance, and enhanced connectivity. The discussion will include an update on the status of 12Gb/s SAS development/standardization efforts, demonstrating how SAS continues to innovate and ultimately protect Enterprise storage investments.

Learning Objectives

  • Attendees will learn how SAS is growing and thriving based on its evolving capabilities and performance enhancements, due in part to its Advanced Connectivity roadmap, and how it allows systems to be built that accommodate large numbers of SAS and/or SATA hard disk drives. 
  • Attendees will learn how SAS provides high-performance, time-to-market solutions for SSDs and other low-latency, non-volatile memory solutions, including the emerging capability offered by MultiLink SAS. 
  • The latest development status, capabilities and design guidelines for 12Gb/s SAS will be revealed and additional details of the ongoing standardization efforts designed to preserve SAS and SATA storage investments will be discussed.

Fundamentals and Futures of Long Term Storage Media
Linda Kempster

Capacities of media today are reported as numbers to be simply compared to other numbers.  The amazement at what the numbers represent is missing, or perhaps no longer necessary, because the audiences are different now.  Audiences of the 80s, who could barely comprehend what a gigabyte was when the first 12” optical disk was introduced, did understand that it was the equivalent of 4 four-drawer file cabinets of paper. To help them understand a terabyte, I explained that the printed 500B ASCII pages would stretch around the earth 11.5 times, and I included the fact that it would take 42,500 trees to generate the paper.  When NASA announced it wanted to eventually capture 2.5TB per day, that was unheard of.  The larger numbers were so difficult to comprehend that we had to explain them in physical terms.  In the early 90s, when the guess was made that it would take 10.5TB to hold the entire Library of Congress, I laughed at the thought that the LOC could ever become a unit of measurement.  Who today realizes that we can store that capacity on just over two of the 5TB tapes currently available?

Audiences may be aware that Sony has printed its last audio compact disc, but do they know why it was invented at the size and capacity it was, and how it was introduced 30 years ago by Dr. Toshi Doi?  Or why it held 74 minutes and 44 seconds of recorded music?

Paper was the enemy of the 80s just as data is the challenge of today. When the US Navy replaced operational manuals with CDs, the average aircraft carrier was able to shed 37 tons of paper. According to Admiral Tuttle, the paper weighed just 3 tons less than the planes the carrier held.

The age-old question is even more timely today: do the storage requirements drive the state of the art, or does the state of the art drive the requirements?  Which is more important: access time or data reliability?  Legal restraints or cost objectives?

Ten years ago, future technologies were predicted that do not exist today.  It is interesting to see which ones survived and are predicted to take us into the future to meet expanding demands driven by social media and long-term data retention requirements.  In this session I will touch on why some technologies did not make it into the current solution offering.  What happened to the predictions from a decade ago, and how safe are the predictions being made today?  As a final trivia question: which technology improved by 25,000 times in 20 years?  Storage is still amazing!

Finally, when we look over the horizon at the future of the remaining technologies, what are the roadblocks to their development?  What are the physical, chemical, or practical limitations?  Are there new candidates capturing the imagination of engineers and developers that have yet to come to light?  Will leapfrog solutions take us to the long-term data preservation capability we need to have available?

Learning Objectives

  • The genesis of past storage technologies. The beginning point often defines the road map to the future. 
  • Current and near term technologies with long term capabilities 
  • Horizon technologies - what happened to the predicted solutions, and which ones are most promising today.

OS Storage Performance Analysis
Robert Smith

This session covers storage performance analysis at the OS level, using tools either included with the OS or available for free download.

Learning Objectives

  • How to capture and analyze storage performance data for long-term analysis 
  • How to capture and analyze storage performance data for short-term in-depth analysis 
  • Learn the metrics that distinguish great, good, and poor storage performance.
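The session above does not prescribe specific tools, but the workflow it describes (capture OS-level metrics, then analyze them against latency thresholds) can be sketched. The snippet below is a minimal, illustrative Python parser, assuming Linux-style `iostat -x` output; the sample data, the 20 ms threshold, and the `parse_iostat` helper are hypothetical examples, not part of the session, and real `iostat` field layouts vary by sysstat version.

```python
# Hypothetical sketch: flag devices whose average I/O wait (await, in ms)
# exceeds a threshold, from iostat-like extended-statistics output.

SAMPLE = """\
Device  r/s   w/s   rkB/s  wkB/s   await  %util
sda     12.0  3.5   480.0  56.0    4.20   21.5
sdb     0.5   88.0  8.0    7040.0  35.60  97.8
"""

def parse_iostat(text, threshold_ms=20.0):
    """Return (device, await_ms) pairs whose latency exceeds threshold_ms."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    i_await = header.index("await")   # locate the column by name, not position
    slow = []
    for line in lines[1:]:
        fields = line.split()
        await_ms = float(fields[i_await])
        if await_ms > threshold_ms:
            slow.append((fields[0], await_ms))
    return slow

print(parse_iostat(SAMPLE))  # → [('sdb', 35.6)]
```

In practice, the same pattern applies whether the data comes from `iostat` on Linux or Perfmon counters on Windows: capture at a fixed interval for long-term trending, capture at a fine interval for short-term deep dives, and compare per-device latency and utilization against thresholds appropriate to the workload.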