Material on this page is intended solely for the purpose of content review by SNIA members. Tutorial material may be read and commented upon by any SNIA member, but may not be saved, printed, or otherwise copied, nor may it be shared with non-members of the SNIA. Tutorial managers are responsible for responding to all comments made during the open review period. No responses will be given to comments made outside the open review period.

The Abstracts

Scaling Data Center Application Infrastructure
Gary Orenstein 

Data center managers must support ever-increasing application workloads for up to tens of thousands of users. The demands placed on the underlying infrastructure require proper planning and architecture to scale efficiently. Application managers can choose to deploy application infrastructure internally using readily available technology solutions, or extend it with cloud computing offerings such as Amazon Web Services and Google App Engine. Even if application managers do not use these cloud computing offerings directly, their architectures provide an excellent reference model for private infrastructure deployment. In all cases, application managers need to know what tools and resources are available to help scale infrastructure to support an ever-growing user base.

Application Areas Covered

Enterprise applications
  • Traditional heavy-workload enterprise applications that power the business
  • Scale application performance without over-provisioning infrastructure
  • MINI-CASE STUDY: Scale and performance for Oracle and Oracle RAC

Performance computing applications
  • Supporting simultaneous access by hundreds to thousands of parallel application servers
  • Locating and quickly retrieving files within deep directories and explosive file counts
  • MINI-CASE STUDY: Speeding bioinformatics applications

Web-scale applications
  • Serving hundreds of thousands to millions of users
  • Storing near-infinite amounts of content and an explosive number of individual objects from user-generated sources
  • MINI-CASE STUDY: Web-scale applications for photo and video hosting and delivery

What Application Managers Need to Know
  • Advantages and disadvantages of different approaches to scaling infrastructure
  • Key technologies to deploy for scaling application infrastructure

Key Application Infrastructure Technologies

Focus on data center technologies:
  • Client layer: options for virtualization, parallelization, and clustering
  • Network layer: options for file acceleration, load balancing, and file access optimization
  • Storage layer: options for highly scalable file systems with near-infinite capacity
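The network-layer load balancing option mentioned above can be illustrated with a minimal round-robin sketch. This is a toy illustration only, not a production technique; the server names are hypothetical placeholders, and real deployments use a dedicated hardware or software load balancer.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin load balancer: spreads incoming requests
    evenly across a fixed pool of application servers."""

    def __init__(self, servers):
        self._servers = cycle(servers)  # endless rotation over the pool

    def pick(self):
        """Return the next server in rotation for an incoming request."""
        return next(self._servers)

# Hypothetical three-server application tier
lb = RoundRobinBalancer(["app-01", "app-02", "app-03"])
assignments = [lb.pick() for _ in range(6)]
# Six requests are spread evenly, two per server
```

Round-robin is the simplest policy; commercial load balancers also weight servers by capacity or route on observed load, but the goal is the same: no single application server becomes the bottleneck as the user base grows.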

Learning Objectives

  • How to assess application infrastructure requirements
  • What deployment models work best for which applications
  • Options for scaling within public and private cloud computing infrastructure

Running Database Applications On NAS:  How and Why?
Stephen Daniel

The use of NFS as a storage interconnect protocol for serious enterprise-class databases began around 1995, and the number of businesses running production databases over NAS protocols has grown steadily since then. This talk will discuss the benefits and risks of using NFS as a database storage protocol, review historical examples of subtle NFS client bugs that caused problems for databases, and show performance measurements comparing NFS-based database performance with databases using more traditional interconnects such as Fibre Channel and iSCSI. The talk will conclude with some future directions for this technology.
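As a concrete sketch of what running a database over NFS involves on the client side, the fragment below shows a commonly cited set of Linux NFSv3 mount options for database data files. The host name, export, and mount point are hypothetical placeholders, and the exact option values are vendor-specific; consult your database and NAS vendor documentation before using anything like this.

```shell
# Hypothetical NFSv3 mount for database data files.
# hard + actimeo=0 matter for correctness: I/O must not silently fail
# or be served from stale client-side attribute caches; the database,
# not the NFS client cache, arbitrates concurrent access.
mount -t nfs \
  -o rw,bg,hard,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 \
  nas-server:/export/oradata /u02/oradata
```

The key point the talk's "correct operation" objective alludes to is visible here: default NFS client caching behavior, fine for general file serving, must be constrained for database workloads.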

Learning Objectives

  • Major benefits and costs of running a database over a NAS protocol
  • Key requirements of NAS implementations required to support correct operation of a database
  • Performance differences between NAS and SAN implementations of databases

Fundamental Approaches to WAN Optimization
Joshua Tseng

Fast and convenient access to data hosted in central data centers has been a continuous challenge for application users in remote branch offices. WAN-related performance problems associated with bandwidth and latency often pressure IT managers to deploy file and application servers in the branch offices themselves in order to maintain application performance and end-user productivity. But maintaining and backing up remote server and storage assets outside of the data center is not only expensive, it also creates significant security risks. This session explores new approaches involving disk-based deduplication, TCP protocol optimization, and application-level protocol chattiness mitigation to address this long-standing productivity vs. cost-efficiency issue. We will compare and contrast the strengths and weaknesses of these new approaches with more traditional methods of addressing this problem.
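The disk-based deduplication idea mentioned above can be sketched in a few lines: split data into blocks, fingerprint each block, and store or transfer each unique block only once. This is a deliberately simplified illustration, assuming fixed-size blocks; real WAN optimization products use variable-length, content-defined chunking and persistent on-disk stores.

```python
import hashlib

BLOCK_SIZE = 8  # toy block size; real systems use KB-scale, variable chunks

def deduplicate(data: bytes):
    """Split data into fixed-size blocks and keep each unique block once.

    Returns (store, recipe): store maps a block's SHA-256 digest to the
    block itself; recipe is the ordered digest list needed to rebuild
    the original stream. Only blocks new to the store would need to
    cross the WAN.
    """
    store, recipe = {}, []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store/transfer only if unseen
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original byte stream from digests, losslessly."""
    return b"".join(store[d] for d in recipe)

# A highly redundant payload: 101 blocks, only 2 of them unique
data = b"ABCDEFGH" * 100 + b"12345678"
store, recipe = deduplicate(data)
```

On redundant traffic such as repeated file transfers or backups, only the handful of unique blocks plus a compact recipe of fingerprints crosses the WAN, which is the source of deduplication's bandwidth savings.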

Learning Objectives

  • Understand why many applications perform poorly over the WAN in an environment with high latency and/or limited bandwidth
  • Examine the pros and cons of compression, caching, and adding WAN bandwidth, and understand why these measures usually fail to address the entire underlying cause of the performance issue
  • Explore how TCP protocol optimization, application-level protocol chattiness mitigation, and disk-based deduplication can dramatically improve application performance over the WAN