SPDEcon Abstracts



BIG DATA


Introduction to Analytics and Big Data - Hadoop

Rob Peglar, CTO, Americas, EMC Isilon

Abstract

This tutorial serves as a foundation for the field of analytics and Big Data, with an emphasis on Hadoop. It presents an overview of current data analysis techniques, the emerging science around Big Data, and Hadoop itself. Storage techniques and file system design for the Hadoop Distributed File System (HDFS) and its implementation trade-offs will be discussed in detail. This tutorial is a blend of non-technical and introductory-level technical material.
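As a minimal illustration of the kind of introductory HDFS usage the tutorial covers, the Python sketch below drives the standard hdfs dfs command-line tool to copy a file into HDFS and read it back; the paths and file names are placeholders, and a configured Hadoop client is assumed.

    import subprocess

    def hdfs(*args):
        # Invoke the standard Hadoop CLI; assumes `hdfs` is on PATH and the
        # client is already configured to reach the cluster's NameNode.
        subprocess.run(["hdfs", "dfs", *args], check=True)

    # Hypothetical paths, for illustration only.
    hdfs("-mkdir", "-p", "/user/demo")
    hdfs("-put", "local_events.log", "/user/demo/events.log")  # file is split into blocks and replicated across DataNodes
    hdfs("-cat", "/user/demo/events.log")                      # read back through the NameNode/DataNode path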

Learning Objectives

  • Gain a further understanding of the field and science of data analytics
  • Comprehend the essential differences surrounding Big Data and why it represents a change in traditional IT thinking
  • Understand introductory-level technical detail around Hadoop and the Hadoop Distributed File System (HDFS)


Hadoop Distributed File System (HDFS), to Centralize or not to Centralize?!

Sreev Doddabalapur, Applications Performance Manager, Mellanox

Abstract

Hadoop deployments traditionally use a distributed file system to store massive amounts of data. However, some applications see benefits in using a centralized storage approach. This presentation covers the different architectures used to centralize the storage portion of Hadoop. We show the different setups used with distributed storage systems such as Lustre, OrangeFS and traditional NFS, and the presentation includes benchmarking results, settings and configuration recommendations to achieve the best results, along with the pitfalls encountered along the way. We also cover the benefits of using high-performance networking architectures and the advantages of RDMA-based interconnects, which help storage systems deliver the highest throughput and lowest latency, at rates above 1 million IOPS.
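One simplified way to point Hadoop at a centralized, POSIX-mounted store (such as a Lustre or NFS mount) is to change the default filesystem URI in core-site.xml. The Python sketch below generates such a configuration; the mount path is hypothetical, and real deployments also tune locality, striping and connector settings that this sketch omits.

    # Write a minimal core-site.xml pointing Hadoop's default filesystem at a
    # shared POSIX mount (the Lustre path below is an assumption).
    lines = [
        '<?xml version="1.0"?>',
        "<configuration>",
        "  <property>",
        "    <name>fs.defaultFS</name>",
        "    <value>file:///mnt/lustre/hadoop</value>",
        "  </property>",
        "</configuration>",
    ]
    with open("core-site.xml", "w") as f:
        f.write("\n".join(lines) + "\n")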

Learning Objectives

  • Understand the benefits of using a centralized storage system such as Lustre with the MapReduce framework
  • Understand the architectural differences between deploying Hadoop on distributed and non-distributed file systems
  • Explore novel approaches to bringing centralized storage capabilities to the Hadoop framework

Learning Objectives

  • Participants will understand the unique challenges of managing and protecting "Big Data" repositories.
  • Participants will understand the various technologies available for protecting "Big Data" repositories.
  • Participants will understand the data protection considerations for "Big Data" repositories across various environments, including disaster recovery/replication, capacity optimization, etc.
 


CLOUD STORAGE


Combining SNIA Cloud, Tape and Container Format Technologies for the Long Term Retention of Big Data

Gene Nagle, Senior Systems Engineer, Nirvanix

Abstract

Generating and collecting very large data sets is becoming a necessity in many domains that also need to keep that data for long periods. Examples include astronomy, atmospheric science, genomics, medical records, photographic archives, video archives, and large-scale e-commerce. While this presents significant opportunities, a key challenge is providing economically scalable storage systems to efficiently store and preserve the data, as well as to enable search, access, and analytics on that data in the far future. Both cloud and tape technologies are viable alternatives for storage of big data and SNIA supports their standardization. The SNIA Cloud Data Management Interface (CDMI) provides a standardized interface to create, retrieve, update, and delete objects in a cloud. The SNIA Linear Tape File System (LTFS) takes advantage of a new generation of tape hardware to provide efficient access to tape using standard, familiar system tools and interfaces. In addition, the SNIA Self-contained Information Retention Format (SIRF) defines a storage container for long term retention that will enable future applications to interpret stored data regardless of the application that originally produced it. This tutorial will present advantages and challenges in long term retention of big data, as well as initial work on how to combine SIRF with LTFS and SIRF with CDMI to address some of those challenges. SIRF with CDMI will also be examined in the European Union integrated research project ENSURE – Enabling kNowledge, Sustainability, Usability and Recovery for Economic value.
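As a hedged illustration of the CDMI piece described above, the Python sketch below creates a data object over CDMI's RESTful interface using the requests library; the endpoint URL, container path and credentials are placeholders, not part of any cited deployment.

    import json
    import requests

    # Hypothetical CDMI endpoint and container, for illustration only.
    url = "https://cloud.example.com/cdmi/archive/record001.txt"
    headers = {
        "X-CDMI-Specification-Version": "1.0.2",
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }
    body = {"mimetype": "text/plain", "value": "long-term retention payload"}

    resp = requests.put(url, headers=headers, data=json.dumps(body), auth=("user", "secret"))
    resp.raise_for_status()
    print(resp.json().get("objectID"))  # CDMI returns a globally unique object ID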

Learning Objectives

  • Recognize the challenges and value in the long-term preservation of big data, and the role of new cloud and tape technologies to assist in addressing them.
  • Identify the need, use cases, and proposed architecture of SIRF. Also, review the latest activities in SNIA LTR technical working group to combine SIRF with LTFS and SIRF with CDMI for long term retention and mining of big data.
  • Discuss the usage of SIRF with CDMI in the ENSURE project that draws on actual commercial use cases from health care, clinical trials, and financial services.


Hybrid Clouds in the Data Center - The End State

Michael Elliott, Enterprise Cloud Evangelist, Dell

Abstract

In this presentation, I will define how to build clouds in a heterogeneous, open, and secure environment to take advantage of the benefits that hybrid cloud provides. I will cover the concepts of the cloud and detail:

  • Data Protection and Archive In the Cloud
  • Components of building a Private Cloud
  • Integration of Private and Public to form Hybrid Clouds

The presentation will include case studies to highlight how Fortune 500 and global companies are utilizing cloud infrastructure to gain agility and efficiency in their data centers.

 

Learning Objectives

  • Gain a common understanding of the three cloud constructs: Private, Public and Hybrid
  • Understand the approach to working in a Hybrid Cloud environment, how it can positively affect both the agility and efficiency of IT, and what it means to the future of their data
  • View real-life examples of how customers are taking advantage of the cloud.


Windows Azure Storage - Speed and Scale in the Cloud

Joe Giardino, Senior Development Lead - Windows Azure Storage, Microsoft

Abstract

In today's world, increasingly dominated by mobile and cloud computing, application developers require durable, scalable, reliable, and fast storage solutions like Windows Azure Storage. This talk will cover the internal design of the Windows Azure Storage system and how it is engineered to meet these ever-growing demands, with a particular focus on performance, scale, and reliability. In addition, we will cover patterns and best practices for developing high-performance solutions on storage that optimize for cost, latency, and throughput. Windows Azure Storage is currently leveraged by clients to build big data and web-scale services such as Bing, Xbox Music, SkyDrive, Halo 4, Hadoop, and Skype.
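For orientation, a minimal upload sketch is shown below using the current azure-storage-blob Python SDK (not necessarily the client library available at the time of this talk); the connection string, container and blob names are placeholders.

    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string and names, for illustration only.
    service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
    container = service.get_container_client("telemetry")

    # The SDK uploads large files as blocks and commits them; block size and
    # concurrency are the usual knobs for throughput versus cost and latency.
    with open("metrics.json", "rb") as data:
        container.upload_blob(name="2013/05/metrics.json", data=data, overwrite=True)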

Learning Objectives

  • Windows Azure Storage Fundamentals
  • Patterns and best practices for cloud storage
  • How to write applications that scale


Interoperable Cloud Storage with the CDMI Standard

Mark Carlson, Principal Member of Technical Staff, Oracle

Abstract

The Cloud Data Management Interface (CDMI) is an industry standard with ISO ratification. There is now an open source reference implementation available from SNIA as well. Storage vendors and Cloud providers have started announcing their implementations of the CDMI standard, demonstrating the reality of interoperable cloud storage. This talk will help you understand how to keep from getting locked into any given vendor by using the standard. Real world examples will help you understand how to apply this to your own situation.
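As a small, hedged example of the interoperability point, the sketch below lists the children of a CDMI container with plain HTTP via the Python requests library; the endpoint and credentials are placeholders.

    import requests

    # Hypothetical endpoint; a CDMI container listing returns its children in a
    # vendor-neutral JSON form, which is what makes moving data between
    # CDMI-conformant clouds practical.
    resp = requests.get(
        "https://cloud.example.com/cdmi/archive/",
        headers={
            "X-CDMI-Specification-Version": "1.0.2",
            "Accept": "application/cdmi-container",
        },
        auth=("user", "secret"),
    )
    resp.raise_for_status()
    for child in resp.json().get("children", []):
        print(child)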

Learning Objectives

  • Walk away with an understanding of the specific CDMI features, such as a standard cloud interchange format, that enable interoperable cloud storage
  • Gain a deeper understanding of the outstanding issues of cloud storage interoperability and how the standard helps ease this pain.
  • Now that the standard is being implemented, understand what to put in an RFP for cloud storage that prevents vendor lock-in.


Tape Storage for the Uninitiated

David Pease, IBM Distinguished Engineer, IBM Almaden Research Center

Abstract

This talk provides a complete overview of modern tape as a storage medium. It includes a little history, a description of how modern tape works (not as obvious as it looks), a discussion of tape automation, and details on why tape is intrinsically more reliable than disk, where its capacity growth curve is headed and why, what it's well suited for, how LTFS makes it easier to use, and a cost and environmental comparison with other media.
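To illustrate the LTFS point, the hedged sketch below mounts an LTFS-formatted cartridge and then treats it like any other file system from Python; the device name, mount point and exact ltfs options vary by vendor implementation and are assumptions here.

    import shutil
    import subprocess

    # Mount the cartridge (device path, mount point and options are assumptions).
    subprocess.run(["ltfs", "-o", "devname=/dev/sg3", "/mnt/ltfs"], check=True)

    # Once mounted, standard file tools work -- bearing in mind the medium
    # underneath is still sequential-access tape.
    shutil.copy("archive_2013.tar", "/mnt/ltfs/archive_2013.tar")
    listing = subprocess.run(["ls", "-l", "/mnt/ltfs"], capture_output=True, text=True)
    print(listing.stdout)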

Learning Objectives

  • Understand the fundamentals of tape as a storage medium.
  • Learn what types of tape media and automation are available in the market.
  • Understand some of the reasons why tape is a fundamentally reliable storage platform.
  • Understand how the Linear Tape File System (LTFS) provides standardization and ease-of-use for tape systems.


The Changing Role of Data Protection in a Virtualized World

Thomas Rivera, Sr. Technical Associate, File, Content & Cloud Solutions, Hitachi Data Systems

Abstract

Backup administrators who have grown accustomed to traditional batch-oriented backup using a single local system are seeing their world change very quickly these days. Virtualized data centers with cloud computing and resilient, self-healing local and cloud storage combined with existing in-house systems are demanding completely new strategies for data protection. And even the functions and responsibilities of data protection administrators are changing at a fast pace. This tutorial presents the rapid changes and challenges in data protection brought about by storage, both local and cloud, that combines protection such as "time machines" (snapshots and versioning) with primary and secondary repositories. Following from those changes, the discussion will include methods being used by backup administrators as both temporary and long-term solutions to cope with these changes and to maximize their potential benefits.

Learning Objectives

  • Understand the differences between data protection in traditional environments versus in virtualized environments.
  • Consider multiple examples of data protection challenges in environments where virtualization and cloud-based storage and computing are increasingly used.
  • Come away with some useful best practices for coping with the challenges brought on by new responsibilities and functions in data protection.


Retention Issues Within the World of Healthcare

Gary Woodruff, Enterprise Infrastructure Architect, Sutter Health

Abstract

Healthcare is the "last" big data organization to truly enter the digital world. Until just a few years ago, computers were traditionally used only to make workflows easier rather than to provide storage and archiving functionality. In the hurry to implement the Electronic Medical Record, most of the health industry has forgotten that it needs to maintain patient information for up to 30 years. Compare this with the traditional data management tools the banking industry has used for many years, where the question was "do you want to keep the data for 7 years?" This brings us to the dilemma: 1) how do we store the data, and 2) how much will it cost to store it? I will paint the current situation, describe what technology exists to help manage this dilemma, and explain what I feel the industry needs to do in order to provide us with the tools to solve the problem.
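The cost half of that dilemma can at least be framed with simple arithmetic. The sketch below is a back-of-the-envelope Python calculation over a 30-year retention horizon; every figure in it is a hypothetical placeholder, not a healthcare benchmark.

    # All figures are assumptions, for illustration of the calculation only.
    initial_tb = 500            # data under retention today, in TB
    annual_growth = 0.25        # assumed 25% growth per year
    cost_per_tb_month = 20.0    # assumed blended $/TB/month

    total_cost = 0.0
    data_tb = initial_tb
    for year in range(30):      # the 30-year retention horizon from the abstract
        total_cost += data_tb * cost_per_tb_month * 12
        data_tb *= 1 + annual_growth

    print(f"Data under retention after 30 years: {data_tb:,.0f} TB")
    print(f"Cumulative storage spend: ${total_cost:,.0f}")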

Learning Objectives

  • Understand the history of data growth in the healthcare industry
  • Describe the differences between healthcare and other industries
  • Describe the current solutions for these problems
  • Identify what solutions we need in order to build for the future


Wrangling Big Data Through IT Infrastructure Redesign

Ryan Childs, VP Solutions Architecture, Actifio

Abstract

By virtualizing the management and retention of data, Actifio Copy Data Storage captures data once and reuses it for multiple applications. This eliminates the need for multiple data silos and point tools, replacing them with one SLA-driven, virtualized Copy Data Storage platform. As a result, data is copied, stored and moved less, driving capital expenses down by up to 90 percent. Additionally, it enables data to be instantly protected and instantly recoverable from any point in time. Using the console, the administrator defines a service level agreement (SLA) for each protected system or application based on data protection requirements and business policies. Actifio Copy Data Storage captures changed blocks as they occur and stores the captured data on the storage device or devices as defined by the SLA.



Flash Memory & Virtualization: Choosing Your Path

Rich Peterson, Director Software Marketing, SanDisk

Abstract

Among the benefits of enterprise flash, its ability to improve the performance and scalability of virtual computing environments is paramount. A plethora of solutions have appeared in the market, and among the success stories we have learned that applying flash to virtualization is not a "one size fits all" proposition. The challenge for the IT strategist is to understand the advantages flash provides in different types of implementations, and to assess options in terms of the organization's specific requirements. Additionally, rather than seeing different types of flash implementations as mutually exclusive, it's important to recognize complementary opportunities and focus on the fundamentals of IO efficiency in virtualization platforms.

Learning Objectives

  • Discuss the fast growing system I/O performance and efficiency demands in a virtual environment, and what they mean to server and storage system developers
  • Learn about best storage practices in virtual environments and how to take full advantage of enterprise flash technologies
  • Understand other technologies and recognize complementary opportunities to fully leverage the power and performance enhancements of enterprise flash technologies in virtualization platforms


Active Archive Strategies and Media Choices for Long Term Data Retention

Stacy Schwarz-Gardner, Strategic Technical Architect, Spectra Logic

Abstract

Data growth and global data access have forced organizations to rethink how they manage the data life cycle. Data management cloud platforms leveraging active archive technologies and principles are designed for data access, scalability, and long-term data management regardless of the storage medium utilized. Tiered storage is just one element of a data management cloud. Active archive principles enable data life cycle management across tiers and locations while enforcing retention and other compliance attributes. Choosing the appropriate media type for long-term storage of data assets, based on access profile and criticality to the organization, will be key to future-proofing data management strategies.

Learning Objectives

  • Learn how active archive technologies work and how companies are using them to develop data management cloud platforms.
  • Learn the differentiators for using spinning disk vs. tape technologies as short and long-term storage medium.
  • Learn about tape file systems and specialized archive storage systems.


Storage Validation at Go Daddy - Best Practices from the World's #1 Web Hosting Provider

Philippe Vincent, President and CEO, SwiftTest

Abstract

Times are good for storage professionals. A flurry of new technologies promise faster, cheaper, and better storage solutions. Storage-as-a-service offers a new blueprint for flexible, optimized storage operations. Go Daddy is taking full advantage of these opportunities with continual innovation made possible by SwiftTest.

Attend this presentation to hear how Go Daddy established best practices for storage technology validation that produced a winning mix of technologies to manage their 28 PB of data with 99.999% uptime. The new process empowers Go Daddy with the insight they need to control storage costs and optimize service delivery.


FILE STORAGE


SMB Remote File Protocol (including SMB 3.0)

SW Worth, Sr. Standards Program Manager, Microsoft

Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors including most major Operating Systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk will cover improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part of the talk covers the latest changes in the SMB 3.0 version, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.
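For readers who want to see the connect / negotiate / operate / disconnect sequence in code, the hedged sketch below uses the third-party pysmb library (which speaks the SMB1/SMB2 dialects rather than SMB 3.0); the server name, address, credentials and share are placeholders.

    from smb.SMBConnection import SMBConnection

    # Placeholders for illustration only.
    conn = SMBConnection("user", "secret", "myclient", "fileserver1",
                         use_ntlm_v2=True, is_direct_tcp=True)
    assert conn.connect("192.0.2.10", 445)       # TCP connect + dialect negotiation

    for share in conn.listShares():              # enumerate available shares
        print(share.name)
    for entry in conn.listPath("public", "/"):   # execute operations on a share
        print(entry.filename)

    conn.close()                                 # disconnect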

Learning Objectives

  • Understand the basic architecture of the SMB protocol.
  • Enumerate the main capabilities introduced with SMB 2.0.
  • Describe the main capabilities introduced with SMB 3.0.


Building a Successful Storage Product with Samba

Jeremy Allison, Engineer, Google

Abstract

CIFS/SMB/SMB2/SMB3 server software is now a commodity. Your product has to have it, but it certainly isn't where your customers will perceive the value in your product.

Enter Samba. We've been creating a Free Software/Open Source CIFS/SMB/SMB2/SMB3 server for twenty years, with mostly the same engineers still involved. We're funded by many industry heavyweights such as Google and IBM, and we are used in a wide range of storage products.

Learn how to integrate Samba into your product to provide the needed gateway service into your backend storage, how to navigate the rocky waters of Open Source licensing without releasing copyrighted code or trade secrets you want to keep, and where Samba is going on a technical level.

Learning Objectives

  • CIFS/SMB/SMB2/SMB3
  • Open Source, Samba


HOT SPARES


Storage Technology Adoption Lessons of the Past, Applied to the Future, a Data-driven Analysis

Hubbert Smith, Consultant CEO, Hubbert Smith LLC

Abstract

Technology adoption is not guaranteed. This talk starts with a data-driven history of the adoption of enterprise Serial ATA, enterprise SSDs, and cloud storage, supported by first-hand insights into the underlying drivers of storage technology adoption. These history lessons are then applied to give insights into the adoption of tomorrow's storage technologies, and, in the spirit of open source, the talk concludes with a blueprint for a cloud-scale storage architecture acceptable to mainstream IT.


OPEN SOURCE AND MANAGEMENT


SMB Remote File Protocol (including SMB 3.0)

Jose Barreto, Principal Program Manager, Microsoft

Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors including most major Operating Systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk will cover improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part of the talk covers the latest changes in the SMB 3.0 version, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.

Learning Objectives

  • Understand the basic architecture of the SMB protocol.
  • Enumerate the main capabilities introduced with SMB 2.0.
  • Describe the main capabilities introduced with SMB 3.0.


SMI-S: Manage All the Things!

Chris Lionetti, NetApp

Abstract

A chronicle of the development and evolution of the SMI-S protocol, which manages multi-vendor environments. Topics covered will include exposing and modifying storage directly for clients. SMI-S allows discovery and control of such things as RAID groups, primordial disks, and thin provisioning - tools that storage consumers need to manage datacenters of ever-increasing complexity and capacity. Complying with SNIA's SMI-S is more than a check box.
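As a hedged taste of what SMI-S discovery looks like from a client, the sketch below uses the pywbem library to ask a provider which profiles it registers and to enumerate volumes; the provider URL, credentials and the implementation namespace are assumptions.

    import pywbem

    # Placeholder provider URL, credentials and namespaces.
    conn = pywbem.WBEMConnection("https://array.example.com:5989",
                                 ("admin", "secret"),
                                 default_namespace="interop")

    # Which SMI-S profiles (Array, Block Services, ...) does this provider claim?
    for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
        print(profile["RegisteredName"], profile["RegisteredVersion"])

    # Enumerate volumes; the implementation namespace is vendor-specific.
    for vol in conn.EnumerateInstances("CIM_StorageVolume", namespace="root/cimv2"):
        print(vol["ElementName"], vol["BlockSize"] * vol["NumberOfBlocks"], "bytes")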


SECURITY


Implementing Kerberos Authentication in the Large-Scale Production NFS Environment

Gregory Touretsky, Solutions Architect, Intel IT

Abstract

Intel's design environment is heavily dependent on NFS. It includes hundreds of NAS servers and tens of thousands of mostly Linux clients. Historically, this environment has relied on the AUTH_SYS security mode. While this is a typical setup for most NFSv3 shops, it imposes various limitations on a large enterprise; the 16-groups-per-user limit is one such fundamental limitation. Intel IT is working to provide global data access capabilities and to simplify data sharing between multiple design teams and geographies. As part of this program we decided to switch to RPCSEC_GSS (Kerberos) security in our NFS environment. This decision required modifications to multiple components in our distributed environment. How can we ensure Kerberos ticket distribution across multiple compute servers, for both batch and interactive workloads? How can we provide tickets for faceless accounts and cron jobs? How can NFS with Kerberos authentication be accessed via Samba translators? How do we make the Kerberos authentication experience in the Linux environment as seamless as it is in the Windows one? These changes can't be performed overnight - how do we support a mix of Kerberized and non-Kerberized filesystems over a long transition period? The presentation will cover these and other challenges we're dealing with as part of this journey to a more secure global network file system environment.
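The batch and cron-job case mentioned above can be sketched in a few lines: a faceless account obtains a ticket from a keytab before touching a krb5-secured export. The principal, keytab path and mount point below are assumptions, not Intel's actual configuration.

    import subprocess

    # Obtain a Kerberos ticket non-interactively from a keytab
    # (keytab path and principal are placeholders).
    subprocess.run(
        ["kinit", "-k", "-t", "/etc/keytabs/build_svc.keytab", "build_svc@CORP.EXAMPLE.COM"],
        check=True,
    )

    # With a valid ticket in the credential cache, ordinary file access to a
    # sec=krb5 NFS export now succeeds for this faceless account.
    with open("/nfs/projects/design/results.log", "a") as f:
        f.write("nightly job completed\n")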

Learning Objectives

  • Implementing Kerberos authentication in the large scale production NFS environment
  • Kerberos configuration on compute and file servers
  • Off-the-shelf capabilities and missing components - in-house development to bridge the gaps
  • Planning and executing gradual transition from legacy to Kerberos-based NFS environment


Consumerization of Trusted Computing

Dr. Michael Willett, Storage Security Strategist, Samsung

Abstract

State, federal, and international legislation mandates the use of strong security measures to protect confidential and personal information. Businesses and governments react through due diligence by implementing security best practices; in fact, being secure in their management of information provides a competitive advantage and enhances the trust that consumers of products and services place in business and government. The modern consumer also manages confidential and personal data, as well as sensitive applications. The net: the consumer, especially in this highly interconnected world, requires equivalent security best practices. The difference is the broad range of technical expertise in the consumer population (all of us!). The security functionality must be easy to use, transparent, robust, inexpensive, and a natural part of the computing infrastructure. Enter trusted computing, as defined and standardized by the Trusted Computing Group (TCG). The tenets of the TCG include robust security functions in hardware, transparency, and integration into the computing infrastructure - a perfect match with the consumer requirements. The TCG, an industry consortium with broad industry, government, and international membership, has developed technical specifications for a number of trusted elements, including integrated platform security, network client security and trust, mobile device security, and trusted storage - all key components of the consumer computing experience. For example, the storage specifications define the concept of Self-Encrypting Drives (SEDs). SEDs integrate encryption into the drive hardware electronics, transparently encrypting all data written to the drive with no loss in drive performance. The SED protects against loss or theft, whether in a laptop or a data center drive - and both business professionals and rank-and-file consumers lose a significant number of laptops, according to the FBI. The robust protection afforded the consumer is transparent, inexpensive, and easy to use. Combining the performance, longevity, quietness, and ruggedness of a solid-state drive (SSD) with the SED function equips the consumer with a winning combination, all integrated into the infrastructure.

Learning Objectives

  • Overview of the security challenges facing the consumer
  • Introduction to the tenets of the Trusted Computing Group, especially the integration of security into the computing infrastructure
  • Description of the TCG/SED technology, as a relevant example of trusted computing


Server, App, Disk, Switch, or OS - Where Is My Encryption?

Chris Winter, Director Product Management, SafeNet

Abstract

Encryption of critical data is being mandated by compliance regulations and is becoming increasingly utilized for isolation or segregation of important sensitive data that is not yet regulated. There are many different technologies available to encrypt business critical data that can be used in different physical and logical locations in an organization’s production environment. Each location has its advantages and disadvantages depending on factors such as currently deployed infrastructure, compliance demands, sensitivity of data, vulnerability to threats, and staffing, amongst others. To make things even more problematic, the various locations typically fall under the management of different groups within an organization’s operational and IT departments – storage administration, desktop administration, server administration, networking, and application administration. This session will illustrate how to identify the most cost effective location and understand how that meets the needs of the organization while introducing as little operational management overhead as possible.
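To make one of those locations concrete, the hedged sketch below shows application-layer encryption in Python using the cryptography library's Fernet construction; the record content is invented, and the in-process key handling is deliberately naive (a real deployment would use a key manager).

    from cryptography.fernet import Fernet

    # Naive key handling, for illustration only; production keys belong in a key manager.
    key = Fernet.generate_key()
    f = Fernet(key)

    record = b"customer_id=1234;card_token=redacted"   # hypothetical sensitive record
    ciphertext = f.encrypt(record)                     # this is what reaches the storage stack
    assert f.decrypt(ciphertext) == record             # only key holders can recover the data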

Learning Objectives

  • Why security testing of storage systems needs to be done
  • The impact of not doing security testing on these systems
  • Some of the easiest ways to assess the current state of security on your storage systems
  • Why attackers focus on the data held on storage systems
  • How to remediate once security issues are found


Benefits of Flash in Enterprise Storage Systems

Alex McDonald, CTO Office, NetApp

Abstract

This is the latest update to a very popular SNIA tutorial. Targeted primarily at an IT audience, it presents a brief overview of the discontinuity represented by the flash technologies being integrated into enterprise storage systems today, including the technologies, benefits, and price/performance. It then goes on to describe how flash fits into typical enterprise storage architectures today, with descriptions of specific use cases. Finally, the presentation speculates briefly on what the future will bring, including post-flash and DRAM-replacement non-volatile memory technologies.



Can SSDs Achieve HDD Price, Capacity and Performance Parity?

Radoslav Danilak, Co-founder & CEO, Skyera

Abstract

Only a system-level approach to flash memory management can meet increasing storage performance demands while bringing the price of all-solid-state storage arrays to a level equal to that of high-performance HDDs. Innovations are necessary to make 19/20 nm and below MLC flash memory usable in mainstream enterprise storage. In addition, reducing the high costs that keep flash memory from serving as primary storage (as compared to caches or tiers) is very important. This presentation will highlight flash technology trends and offer system-level solutions that have advanced flash memory in the enterprise storage market.



Enhanced SSD with I/O Management

Jiho Hwang, Senior Engineer, Samsung

Abstract

Currently most SSDs work as mere block devices and still face performance stagnation due to the interface bottleneck. The enhanced SSD is designed for interconnection between host and device with multiple APIs enabled, so that the host can control the SSD's behavior according to its I/O workloads. This interconnection ability can improve the total performance level not only in a single-device system but also in a distributed storage system. This presentation will cover the concepts of the enhanced SSD and open a discussion on how to gather and manage the features needed for this concept.



Properly Testing SSDs For Steady State Performance

Doug Rollins, Senior Applications Engineer, Micron Technology

Abstract

As the demand for SSDs increases, so does the need to ensure the best drive is selected for each deployment. Essential to the selection process are the ability to validate manufacturers’ claims about SSD performance and a thorough understanding of how published performance is measured for different drive designs, markets, and usage models. While each SSD type will exhibit its own unique behaviors, most of the drives currently in these markets will exhibit similar behavior characteristics: as the drive fills, performance will decrease non-linearly. In this class we will focus on Consumer and Enterprise class SSDs designed for primary data storage.
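A hedged sketch of one way to automate that measurement is shown below: precondition the drive, then repeat timed 4K random-write rounds until the recent results settle within a tolerance. It assumes the fio benchmark is installed, uses a hypothetical (and destructively overwritten) test device, and simplifies the full steady-state methodology.

    import json
    import subprocess

    DEVICE = "/dev/nvme0n1"   # hypothetical test device -- this workload destroys its contents

    def fio(extra):
        # Thin wrapper around the fio benchmark tool (assumed installed).
        out = subprocess.run(
            ["fio", "--output-format=json", f"--filename={DEVICE}",
             "--ioengine=libaio", "--direct=1", "--name=job"] + extra,
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["jobs"][0]

    # 1. Precondition: sequentially fill the drive twice to leave the fresh-out-of-box state.
    fio(["--rw=write", "--bs=128k", "--iodepth=32", "--loops=2"])

    # 2. Measure 4K random-write IOPS in repeated rounds until the last five rounds
    #    vary by less than 20% of their average (a simplified steady-state check).
    history = []
    for rnd in range(25):
        job = fio(["--rw=randwrite", "--bs=4k", "--iodepth=32",
                   "--numjobs=4", "--group_reporting", "--time_based", "--runtime=60"])
        history.append(job["write"]["iops"])
        recent = history[-5:]
        if len(recent) == 5 and (max(recent) - min(recent)) / (sum(recent) / 5) < 0.20:
            print(f"Steady state after round {rnd + 1}: ~{recent[-1]:,.0f} IOPS")
            break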

Learning Objectives

  • Be able to measure SSD steady state performance using common benchmark tools.
  • Ensure the SSD is in the proper state before beginning a test run.
  • Control the granularity of the data reported.
  • Explain how the usage model for which the drive was designed will affect its performance.
  • Adjust test parameters to get a clearer picture of SSD performance, regardless of the market segment for which it was designed.


Designing in Flash in the data center (or How I learned to Love Flash)

Wen Yu, Storage Architect, Nimble Storage

Abstract

Most hardware vendors are incorporating flash memory in their products because it has higher performance compared to hard disk and higher density compared to DRAM. One of the driving forces behind this adoption is the virtualization of servers and desktops, which increases the need for serving random I/O at high speeds. However, the architecture around how flash is leveraged varies dramatically from product to product—some are optimized for performance, some for cost of capacity, and some for reliability and data protection. This session will tell users how they can get a good blend of performance, capacity, and data protection.

Learning Objectives

  • Find out more about the changes in the data center around the use of flash
  • Learn about the different approaches to storage, including pure flash and hybrid systems mixing flash and disk
  • Hear about the trends around converged compute and storage appliances and architectures


Flash vs. DRAM - It's Not Always about Storage

Jim Handy, Director, Objective Analysis

Abstract

Sys admins have been deploying SSDs and flash PCIe storage in increasing numbers as an alternative to maxing out the system memory with more DRAM. After an explanation of how this trade-off works, this presentation will share five case studies in which flash has been used to reduce memory requirements by accelerating storage. The net result is a reduction in cost, power, and cooling. A wrap-up will suggest potential problem areas and will present a variety of solutions aimed at solving some of these issues.

Learning Objectives

  • How flash storage can reduce system memory and power/cooling
  • What must be done to use flash in this way


SAN Protocol Analysis and Performance Management

Robert Finlay, Business Development Manager, JDSU

Abstract

This presentation is for the Storage Manager, Administrator or Architect looking to increase their understanding of storage protocols for performance analysis and issue resolution. We will show examples of how traffic analysis can be used to identify storage-related performance issues from the initiator, switch and target points of view.

Learning Objectives

  • Gain an understanding of storage protocol analysis as it applies to the performance of your storage plumbing
  • Learn how packet analytics can be used for both proactive and reactive purposes


Storage Performance Analysis

Lee Donnahoo, Storage Architect, Microsoft Global Foundation Services

Abstract

An overview of the tools and methodologies available to measure and analyze storage performance issues. Methodologies include speeds/feeds, bottlenecks, how to find issues such as the infamous "FC slow drain", and long-term planning.



Architecting Flash-based Caching for Storage Acceleration

Cameron Brett, Director of Solutions Marketing, QLogic

Abstract

Advancements in server technologies continue to widen the I/O performance gap between mission critical servers and I/O subsystems. Flash-based caching technology is proving to be particularly effective in addressing the gap in storage I/O performance. But, getting maximum benefit from caching depends on how and where it is deployed in the storage subsystem. Strong arguments exist both for and against the placement of cache in three specific locations in the storage subsystem: within storage arrays, as network appliances and within servers. It is essential to look for an approach that delivers the accelerated application performance benefits of flash-based caching with support for clustered and virtualized applications, transparent integration (without architecture changes) into the current environment which preserves the SAN data protection model while extending the life of the existing infrastructure, and which features scalable and efficient cache allocation. This session provides a solid, how-to approach to evaluating how best to boost I/O performance taking into consideration the performance and complexity trade-offs inherent to where you place caching technology – in storage arrays, appliances or servers.

Learning Objectives

  • Understanding the current landscape of I/O performance & acceleration solutions
  • What is currently being deployed
  • What is currently being developed
  • Pitfalls and advantages of each solution


Overview of Data Center Networks

Dr. Joseph White, Distinguished Engineer, Juniper Networks

Abstract

With the completion of the majority of the various standards used within the Data Center plus the wider deployment of I/O consolidation and converged networks, a solid comprehension of how these networks will behave and perform is essential. This tutorial covers technology and protocols used to construct and operate Data Center Networks. Particular emphasis will be placed on clear and concise tutorials of the IEEE Data Center Bridging protocols (PFC, DCBX, ETS, QCN, etc), data center specific IETF protocols (TRILL, etc), fabric based switches, LAG, and QoS. QoS topics will address head of line blocking, incast, microburst, sustained congestion, and traffic engineering.



PCI Express and Its Interface to Storage Architectures

Ron Emerick, Principal HW Engineer, Oracle

Abstract

PCI Express Gen2 and Gen3, IO Virtualization, FCoE, SSD, and PCI Express Storage Devices are here. What are PCIe Storage Devices, and why do you care? This session describes PCI Express, Single Root IO Virtualization and their implications for FCoE, SSD, and PCIe Storage Devices, and the impact of all these changes on storage connectivity and storage transfer rates. The potential implications for the storage industry and data center infrastructures will also be discussed.

Learning Objectives

  • Knowledge of PCI Express Architecture, PCI Express Roadmap, System Root Complexes and IO Virtualization.
  • Expected Industry Roll Out of latest IO Technologies and required Root Complex capabilities.
  • Implications and Impacts of FCoE, SSD and PCIe Storage Devices to Storage Connectivity. What does this look like to the Data Center?
  • IO Virtualization connectivity possibilities in the Data Center (via PCI Express).



PCIe-based Storage Technology Architectural Design Considerations

David Deming, CCO, Solution Technology

Abstract

The combination of solid state technology and PCIe connectivity will power enterprise applications into the next decade and forever alter the topography of all servers and storage arrays. This seminar guides you through the landscape surrounding PCIe-based storage and how SCSIe, SATAe, and NVMe will increasingly impact your storage designs. SSS is a disruptive technology with compelling performance, power, and form factor benefits that has sparked the creative genius of today's leading storage architects. However, every aspect of the server or array will be impacted, from the operating system, to the data center application, to the physical hardware. Don't miss the opportunity to hear a seasoned storage professional's predictions on storage array design and how SSS will propel a new class of PCIe-based storage products.
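For a first hands-on look at an NVMe device, the hedged sketch below queries the Identify Controller data through the standard nvme-cli tool; the device path is an assumption, and only a few representative fields are printed.

    import json
    import subprocess

    # Query the controller (device path is a placeholder; requires nvme-cli and root).
    out = subprocess.run(["nvme", "id-ctrl", "/dev/nvme0", "-o", "json"],
                         capture_output=True, text=True, check=True).stdout
    ctrl = json.loads(out)

    print("Model:   ", ctrl["mn"].strip())
    print("Serial:  ", ctrl["sn"].strip())
    print("Firmware:", ctrl["fr"].strip())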

Learning Objectives

  • PCIe Architectural Basics
    • SCSIe
    • SATAe
    • NVMe
  • Hardware implications - internal and external
  • Array and Application software impact
  • Operating system design decisions
  • Future storage array architecture
    • Lo RASC (reliability, availability, serviceability, and connectability) challenges
  • What about HDDs and HHDs?


SAS: The Emerging Storage Fabric

Marty Czekalski, President, SCSI Trade Association
Gregory McSorley, Vice President, SCSI Trade Association; Technical Business Development Manager, Amphenol

Abstract

SAS is the backbone of nearly every enterprise storage deployment. It is rapidly evolving, adding new features and enhanced capabilities while offering "no compromise" system performance. SAS not only excels as a device-level interface; its versatility, reliability and scalability have made it the connectivity standard of choice for creating new enterprise storage architectures.

This presentation covers the advantages of using SAS as a device interface and how its capabilities as a connectivity solution are changing the way data centers are being deployed. Taking advantage of 12 Gb/s transfer rates, bandwidth aggregation, SAS fabrics (including switches), active connections, and multi-function connectors (connectors that support SAS as well as PCIe-attached storage devices) allows data center architects to create sustainable storage solutions that scale well into the future.



RDMA Interconnects for Storage: Technology Update and Use Scenarios

Kevin Deierling, VP Marketing, Mellanox

Abstract

As storage technologies have adapted to accommodate the exponential growth of data worldwide, RDMA interconnect technology has been keeping pace and gaining momentum. With InfiniBand and Ethernet connectivity options, RDMA provides the features needed to keep data flowing to and from applications efficiently and effectively. This session will review the basics of RDMA transports and features that make RDMA ideal for storage and converged networking, the latest updates to the technology, and use cases for storage deployment.

Learning Objectives

  • Understand the basic principles of RDMA networking.
  • Review the latest developments in RDMA transports and technologies.
  • Understand the use cases for RDMA use in storage networking.

Learning Objectives

  • Learn to better define software defined storage and separate the reality from the marketing buzz
  • Learn about the benefits of software defined storage, and how it can bring new flexibility to your datacenter
  • Take a look at the future of software defined storage and how this movement is changing the game


The Time for 16Gb Fibre Channel is Now

Scott Shimomura, Director, Product Marketing, Brocade
Gregory McSorley, Vice President, SCSI Trade Association; Technical Business Development Manager, Amphenol

Abstract

The rapid expansion of business opportunities based on transactional data is reshaping IT investment priorities. Data center teams must now ensure that their storage network is capable of delivering high levels of performance and availability as well as supporting more advanced features. By incorporating 16Gb Fibre Channel, the newest and fastest network standard, IT managers can make certain the organization they are serving can scale rapidly without jeopardizing processing performance. This session will discuss how 16Gb Fibre Channel helps customers in their cloud and virtualization environments and take a look at the latest developments in Gen6 Fibre Channel standards for next generation storage networks.



Extending SAS Connectivity in the Data Center

Bob Hansen, Technical Director, NetApp

Abstract

Serial Attached SCSI (SAS) is the connectivity solution of choice for disk drives and JBODs in the data center today. SAS connections are getting faster while storage solutions are getting larger and more complex. Data center configurations and disaster recovery solutions are demanding longer cable distances. This is making it more and more difficult or impossible to configure systems using passive copper cables. This presentation discusses the application, limitations and performance of passive copper, active copper and optical SAS cabling options available today and those likely to be available in the next few years.

Learning Objectives

  • Extending SAS Connectivity in the Data Center
  • Review SAS network topologies for data center applications
  • Understand SAS connectivity options, limitations and performance

Learning Objectives

  • The audience will gain a general understanding of the concept of using a Data Center type Ethernet for the transmission of Fibre Channel protocols without the need for an FCoE Forwarder (FCF).
  • The audience will gain an understanding of the benefits of converged I/O and how a Fibre Channel protocol can share an Ethernet network with other Ethernet-based protocols and establish a virtual FCoE link directly between the End-Nodes.
  • The audience will gain an understanding of potential business value and configurations that will be appropriate for gaining maximum value from this Direct End-Node to End-Node including the value of this protocol to the "Cloud" IaaS (Infrastructure as a Service) provider.


VDI STORAGE


Storage Performance for VDI: Tools, Techniques and Architectures

Russ Fellows, Sr. Partner, Evaluator Group

Abstract

Application workloads present unique challenges to both IT architects and system administrators. Solving performance issues often requires recreating complex application environments. Storage performance in particular is tied closely to specific application workloads and environments that are difficult to establish and recreate. Due to complex setup and licensing restrictions, recreating these environments is costly and time consuming. This is particularly true with VDI, which can often require the use of thousands of costly licenses in order to test a workload against the infrastructure. Additionally, new virtualization technologies, coupled with new storage technologies are rapidly evolving. Learn how emerging technologies impact VDI best practices, including how to architect server, network and in particular storage systems to handle VDI performance effectively. This session will cover the unique storage concerns of VDI workloads with best practices and considerations for architecting a high performance VDI infrastructure.
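A simple sizing sketch makes the boot-storm concern concrete; every figure below is a hypothetical planning assumption rather than a measured VDI profile.

    # Hypothetical planning figures, for illustration of the arithmetic only.
    desktops = 1000
    boot_iops_per_desktop = 50      # assumed mostly-read burst while a desktop boots
    steady_iops_per_desktop = 10    # assumed steady-state working load

    peak_iops = desktops * boot_iops_per_desktop
    steady_iops = desktops * steady_iops_per_desktop
    print(f"Boot-storm peak:   {peak_iops:,} IOPS")
    print(f"Steady state:      {steady_iops:,} IOPS")
    print(f"Peak/steady ratio: {peak_iops / steady_iops:.0f}x -- why caching or flash tiers matter")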

Learning Objectives

  • Learn the top concerns and implications for storage including performance, capacity, data protection, boot storm and management
  • Understand requirements for planning including performance and capacity
  • A review of architectural choices and their impact for VDI
  • A discussion of the unique administration considerations


VIRTUALIZATION


VDI and Storage: Tools, Techniques and Architectural Considerations for Performance

Russ Fellows, Sr. Partner, Evaluator Group

Abstract

Application workloads present unique challenges to both IT architects and system administrators. Solving performance issues often requires recreating complex application environments. Storage performance in particular is tied closely to specific application workloads and environments that are difficult to establish and recreate. Due to complex setup and licensing restrictions, recreating these environments is costly and time consuming. This is particularly true with VDI, which can often require the use of thousands of costly licenses in order to test a workload against the infrastructure. Additionally, new virtualization technologies, coupled with new storage technologies are rapidly evolving. Learn how emerging technologies impact VDI best practices, including how to architect server, network and in particular storage systems to handle VDI performance effectively. This session will cover the unique storage concerns of VDI workloads with best practices and considerations for architecting a high performance VDI infrastructure.

Learning Objectives

  • Learn the top concerns and implications for storage including performance, capacity, data protection, boot storm and management
  • Understand requirements for planning including performance and capacity
  • A review of architectural choices and their impact for VDI
  • A discussion of the unique administration considerations


Virtual Storage in a VMware Evaluation Environment

Vince Asbridge, Director of Systems and Software, SANBlaze Technology

Abstract

SANBlaze virtual storage devices fully implement the VMware VAAI extensions, enabling instantaneous snapshots, virtualized de-duplication and accelerated migration, making emulated storage an ideal tool for performance and capacity planning in a virtualized environment. This talk will cover using virtualized storage as a container for guest machines, demonstrating the instantaneous clone capabilities, performance testing, boot storm testing and de-duplication. A fully functional Windows guest will be created in less than 5 seconds using less than 2 MB of storage, demonstrating the scalability capabilities inherent in a virtualized storage testing and capacity planning environment.

Learning Objectives

  • Learn why emulated storage is an ideal tool for performance and capacity planning in a virtualized environment
  • Learn how to use virtualized storage as a container for guest machines demonstrating the instantaneous clone capabilities, performance testing, boot storm testing and de-duplication
  • Gain an understanding of the scalability capabilities inherent in a virtualized storage testing and capacity planning environment.