SPDEcon Abstracts




BIG DATA


Introduction to Analytics and Big Data - Hadoop

Rob Peglar, CTO, Americas, EMC Isilon

Abstract

This tutorial serves as a foundation for the field of analytics and Big Data, with an emphasis on Hadoop. It presents an overview of current data analysis techniques, the emerging science around Big Data, and Hadoop itself. Storage techniques and file system design for the Hadoop Distributed File System (HDFS), along with implementation trade-offs, are discussed in detail. This tutorial is a blend of non-technical and introductory-level technical material.

Learning Objectives

  • Gain a deeper understanding of the field and science of data analytics
  • Understand what sets Big Data apart and why it represents a change in traditional IT thinking
  • Understand introductory-level technical detail on Hadoop and the Hadoop Distributed File System (HDFS)
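
For a small taste of the introductory-level technical material, the sketch below lists an HDFS directory through WebHDFS, HDFS's REST interface. The namenode host, port, and path are hypothetical placeholders, not values from the tutorial.

```python
# Minimal sketch: list an HDFS directory via the WebHDFS REST API.
# Hostname, port, and path are hypothetical placeholders.
import requests

NAMENODE = "http://namenode.example.com:50070"  # default namenode web port
path = "/user/demo"

resp = requests.get(NAMENODE + "/webhdfs/v1" + path,
                    params={"op": "LISTSTATUS"})
resp.raise_for_status()

# Each entry reports the name, type (FILE/DIRECTORY), and length in bytes.
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["type"], entry["length"])
```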

Big Data: What You Don’t Know Will Hurt You

Jim McGann, Vice President, Index Engines

Abstract

Leading studies estimate that stored data could grow by 80 percent each year, leading to exponentially growing storage management costs, increased risk of security breaches, and mounting compliance and legal exposure. Organizations need to take back control of their storage costs, understand what they have, and determine the disposition of their data using enterprise data profiling.

Learning Objectives

  • Understand what data the organization holds and develop a sound policy that protects it from harm and long-term risk
  • Eliminate, on average, 40-70 percent of stored data, much of which is redundant content, personal music files, and data that hasn’t been accessed in over seven years
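
A crude illustration of the kind of profiling the abstract describes: the sketch below (the share path and age threshold are arbitrary examples, not Index Engines code) walks a file tree and tallies data that has not been accessed in over seven years.

```python
# Sketch of naive data profiling: find bytes not accessed in over
# seven years. The root path is an arbitrary example.
import os
import time

ROOT = "/mnt/filer/projects"             # hypothetical share
CUTOFF = time.time() - 7 * 365 * 86400   # roughly seven years ago

stale_bytes = total_bytes = 0
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue                     # skip unreadable files
        total_bytes += st.st_size
        if st.st_atime < CUTOFF:         # last access time
            stale_bytes += st.st_size

print("stale: %.1f%% of %d bytes"
      % (100.0 * stale_bytes / max(total_bytes, 1), total_bytes))
```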

CLOUD STORAGE


Combining SNIA Cloud, Tape and Container Format Technologies for the Long Term Retention of Big Data

Gene Nagle, VP - Technical Services, BridgeStor

Abstract

Generating and collecting very large data sets is becoming a necessity in many domains that also need to keep that data for long periods. Examples include astronomy, atmospheric science, genomics, medical records, photographic archives, video archives, and large-scale e-commerce. While this presents significant opportunities, a key challenge is providing economically scalable storage systems to efficiently store and preserve the data, as well as to enable search, access, and analytics on it in the far future. Both cloud and tape technologies are viable alternatives for storage of big data, and SNIA supports their standardization. The SNIA Cloud Data Management Interface (CDMI) provides a standardized interface to create, retrieve, update, and delete objects in a cloud. The SNIA Linear Tape File System (LTFS) takes advantage of a new generation of tape hardware to provide efficient access to tape using standard, familiar system tools and interfaces. In addition, the SNIA Self-contained Information Retention Format (SIRF) defines a storage container for long-term retention that will enable future applications to interpret stored data regardless of the application that originally produced it. This tutorial will present advantages and challenges in the long-term retention of big data, as well as initial work on how to combine SIRF with LTFS and SIRF with CDMI to address some of those challenges. SIRF with CDMI will also be examined in the European Union integrated research project ENSURE – Enabling kNowledge, Sustainability, Usability and Recovery for Economic value.

Learning Objectives

  • Recognize the challenges and value in the long-term preservation of big data, and the role of new cloud and tape technologies to assist in addressing them.
  • Identify the need, use cases, and proposed architecture of SIRF. Also, review the latest activities in SNIA LTR technical working group to combine SIRF with LTFS and SIRF with CDMI for long term retention and mining of big data.
  • Discuss the usage of SIRF with CDMI in the ENSURE project that draws on actual commercial use cases from health care, clinical trials, and financial services.
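
For a flavor of the CDMI interface mentioned above, here is a minimal sketch that creates and reads back a data object over CDMI's RESTful HTTP binding. The endpoint, container, and credentials are hypothetical placeholders, and error handling is omitted.

```python
# Sketch: create and read back a CDMI data object over HTTP.
# Endpoint, container, and credentials are hypothetical placeholders.
import requests

BASE = "https://cloud.example.com/cdmi"
HEADERS = {
    "X-CDMI-Specification-Version": "1.0.2",
    "Content-Type": "application/cdmi-object",
    "Accept": "application/cdmi-object",
}

# PUT creates the object; the body is a CDMI JSON envelope.
requests.put(BASE + "/archive/record1.txt",
             json={"mimetype": "text/plain",
                   "value": "hello, long-term retention"},
             headers=HEADERS, auth=("user", "secret"))

# GET returns the JSON envelope, including the stored value and metadata.
obj = requests.get(BASE + "/archive/record1.txt",
                   headers=HEADERS, auth=("user", "secret")).json()
print(obj["value"], obj.get("metadata", {}))
```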

Solid State of Affairs: The Benefits and Challenges of SSDs in vSphere Datacenters

John Blumenthal, CEO, CloudPhysics

Abstract

A lot of industry buzz surrounds the value of SSDs. New flash-based products have entered the server and storage market in the past few years. Indeed, flash storage can do wonders for critical virtualized applications. However, most vSphere admins and CIOs are still on the sidelines, not yet sure of the value of adding them to vSphere environments. The key questions being asked by all of us are: how can I evaluate the benefit of SSDs to _my_ datacenter and is the cost justified? Our experience has shown that SSDs are not always a silver bullet and different products do well for different workloads. This motivates new tools for predictive benefit analysis prior to purchase. An experienced vSphere datacenter architect involved in flash storage deployments along with the tech lead behind VMware’s own Swap-to-SSD, Storage DRS and Storage I/O Control features will share their experiences working with SSDs in virtualization systems. They will also demonstrate easy tools and techniques for precise prediction of SSD benefits, choosing the best-fit vendor/solution and precision-targeting SSDs within a given datacenter. SSDs are here to stay: let's use them to super-charge our vSphere datacenters while keeping costs low and clients happy.

Learning Objectives

  • How to use data science to evaluate production workload caching benefits
  • How workload traces can be captured and replayed
  • How new sampling techniques can be used for production workloads

Hybrid Clouds in the Data Center - The End State

Michael Elliott, Enterprise Cloud Evangelist, Dell

Abstract

In this presentation, I will define how to build clouds in a heterogeneous, open, and secure environment to take advantage of the benefits that hybrid cloud provides. I will cover the concepts of the cloud and detail:

  • Data Protection and Archive in the Cloud
  • Components of building a Private Cloud
  • Integration of Private and Public to form Hybrid Clouds

The presentation will include case studies to highlight how Fortune 500 and global companies are utilizing cloud infrastructure to gain agility and efficiency in their datacenters.

Learning Objectives

  • Gain a common understanding of the three cloud constructs: Private, Public, and Hybrid
  • Understand the approach to working in a Hybrid Cloud environment, how it can positively affect both the agility and efficiency of IT, and what it means to the future of your data
  • See real-life examples of how customers are taking advantage of the cloud

Ceph: A Unified Distributed Storage System

Sage Weil, Founder & CTO, Inktank

Abstract

Ceph is a massively scalable, open source, distributed storage system. It is composed of an object store, a block store, and a POSIX-compatible distributed file system. The platform can scale to the exabyte level and beyond; it runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack and CloudStack cloud platforms. This talk will provide an introduction to the Ceph architecture and how it unifies storage for the cloud.

Learning Objectives

  • As a result of participating in this session, attendees will gain a full understanding of the Ceph architecture and the current status and future of the technology.
  • General Ceph use cases for the cloud, such as cloud storage (S3) and Ceph deployments in private and public clouds.
  • How Ceph's network block device integrates with open source cloud platforms such as CloudStack and OpenStack, along with an understanding of the RADOS Gateway, a Swift- and S3-compatible REST API for seamless access to Ceph's object storage.
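
As a concrete illustration of the object store underneath Ceph, here is a minimal sketch using the python-rados bindings. The pool and object names are examples, and it assumes a reachable cluster configured in /etc/ceph/ceph.conf.

```python
# Sketch: write and read a RADOS object with the python-rados bindings.
# Assumes a running cluster and an existing pool named "demo".
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo")            # I/O context for one pool
    ioctx.write_full("greeting", b"hello ceph")   # object name -> bytes
    print(ioctx.read("greeting"))                 # b'hello ceph'
    ioctx.close()
finally:
    cluster.shutdown()
```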

Windows Azure Storage - Speed and Scale in the Cloud

Joe Giardino, Senior Development Lead - Windows Azure Storage, Microsoft

Abstract

In today’s world, increasingly dominated by mobile and cloud computing, application developers require durable, scalable, reliable, and fast storage solutions like Windows Azure Storage. This talk will cover the internal design of the Windows Azure Storage system and how it is engineered to meet these ever-growing demands, with a particular focus on performance, scale, and reliability. In addition, we will cover patterns and best practices for developing solutions on storage that optimize for cost, latency, and throughput. Windows Azure Storage is currently leveraged by clients to build big data and web-scale services such as Bing, Xbox Music, SkyDrive, Halo 4, Hadoop, and Skype.

Learning Objectives

  • Windows Azure Storage Fundamentals
  • Patterns and best practices for cloud storage
  • How to write applications that scale
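
One best practice the talk's themes point to is retrying transient storage faults with exponential backoff. The sketch below is generic, not Azure SDK code; the blob URL is a hypothetical placeholder, and production code would typically rely on the storage SDK's built-in retry policies.

```python
# Sketch of exponential backoff against a storage endpoint, a common
# cloud-storage best practice. The URL is a hypothetical public blob.
import time
import requests

URL = "https://myaccount.blob.core.windows.net/public/report.csv"

def fetch_with_backoff(url, attempts=5):
    delay = 1.0
    for _ in range(attempts):
        resp = requests.get(url)
        # Retry only on transient server-side or throttling errors.
        if resp.status_code not in (500, 503):
            return resp
        time.sleep(delay)
        delay *= 2            # back off exponentially: 1s, 2s, 4s, ...
    raise RuntimeError("giving up after %d attempts" % attempts)

print(fetch_with_backoff(URL).status_code)
```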

Implementing a Private Cloud

Alex McDonald, CTO Office, NetApp

Abstract

This session is a technical dive into implementations of a private storage cloud, with use cases and best-practice examples of how one can fit into your existing IT operations. SNIA’s Cloud Storage Initiative (CSI) has created the Cloud Data Management Interface (CDMI), now an ISO-certified standard, which can assist you in cloud deployments utilizing both traditional file systems and new cloud file system formats. This presentation dives into the details of designing and deploying a private cloud, including how CDMI can assist and how server virtualization affects cloud design. (This is an update of existing CSI Cloud tutorials.)

Learning Objectives

  • Understanding how cloud computing and specifically CDMI interact with traditional file systems (CIFS, NFS, FC)
  • Gain an understanding of how cloud can integrate into existing IT Service Management (ITSM) or ITIL best practices for event, configuration, incident, and problem management.
  • Present a detailed reference architecture for private cloud

Interoperable Cloud Storage with the CDMI Standard

Mark Carlson, Principal Member of Technical Staff, Oracle

Abstract

The Cloud Data Management Interface (CDMI) is an industry standard with ISO ratification. There is now an open source reference implementation available from SNIA as well. Storage vendors and Cloud providers have started announcing their implementations of the CDMI standard, demonstrating the reality of interoperable cloud storage. This talk will help you understand how to keep from getting locked into any given vendor by using the standard. Real world examples will help you understand how to apply this to your own situation.

Learning Objectives

  • Walk away with an understanding of the specific CDMI features, such as a standard cloud interchange format, that enable interoperable cloud storage
  • Gain a deeper understanding of the outstanding issues of cloud storage interoperability and how the standard helps ease this pain.
  • Now that the standard is being implemented, understand what to put in an RFP for cloud storage that prevents vendor lock-in.
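
One interoperability feature worth making concrete: a CDMI client can discover what any conforming cloud supports by reading its capabilities object, which is one way lock-in checks can be automated across vendors. A hedged sketch follows; the endpoint is a hypothetical placeholder.

```python
# Sketch: discover a CDMI cloud's advertised capabilities.
# The endpoint is a hypothetical placeholder.
import requests

resp = requests.get(
    "https://cloud.example.com/cdmi/cdmi_capabilities/",
    headers={"X-CDMI-Specification-Version": "1.0.2",
             "Accept": "application/cdmi-capability"})

caps = resp.json().get("capabilities", {})
# Prints one "capability = value" line per advertised feature.
for name, value in sorted(caps.items()):
    print(name, "=", value)
```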

DATA PROTECTION


Introduction to Data Protection

Gene Nagle, VP - Technical Services, BridgeStor

Abstract

Extending the enterprise backup paradigm with disk-based technologies allows users to significantly shrink or eliminate the backup window. This tutorial focuses on methodologies that can deliver an efficient and cost-effective disk-to-disk-to-tape (D2D2T) solution, including approaches to storage pooling inside modern backup applications, the use of disk and file systems within these pools, and how and when to utilize deduplication and virtual tape libraries (VTLs) within these infrastructures.

Learning Objectives

  • Get a basic understanding of backup and restore technology including tape, disk, snapshots, deduplication, virtual tape, and replication technologies.
  • Compare and contrast backup and restore alternatives to achieve data protection and data recovery.
  • Identify and define backup and restore operations and terms.

Tape Storage for the Uninitiated

David Pease, IBM Distinguished Engineer, IBM Almaden Research Center

Abstract

This talk provides a complete overview of modern tape as a storage medium. It includes a little history, a description of how modern tape works (not as obvious as it looks), a discussion of tape automation, and details on why tape is intrinsically more reliable than disk, where its capacity growth curve is headed and why, what it is well suited for, how LTFS makes it easier to use, and a cost and environmental comparison with other media.

Learning Objectives

  • Understand the fundamentals of tape as a storage medium.
  • Learn what types of tape media and automation are available in the market.
  • Understand some of the reasons why tape is a fundamentally reliable storage platform.
  • Understand how the Linear Tape File System (LTFS) provides standardization and ease-of-use for tape systems.
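
The point of LTFS is that a mounted tape behaves like any other file system, so ordinary tools and scripts work unchanged. A minimal sketch, assuming a tape already mounted at a hypothetical /mnt/ltfs:

```python
# Sketch: ordinary file operations against an LTFS-mounted tape.
# Assumes the tape is already mounted at /mnt/ltfs (path is an example).
import os
import shutil

MOUNT = "/mnt/ltfs"

# Copying to tape is just a file copy; no tape-specific API is needed.
shutil.copy("/tmp/results.tar", os.path.join(MOUNT, "results.tar"))

# Standard directory listing works the same way.
for name in os.listdir(MOUNT):
    print(name, os.path.getsize(os.path.join(MOUNT, name)))
```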

Trends in Data Protection

Jason Iehl, Manager - Systems Engineering, NetApp

Abstract

Many disk technologies, both old and new, are being used to augment tried and true backup and data protection methodologies to deliver better information and application restoration performance. These technologies work in parallel with the existing backup paradigm. This session will discuss many of these technologies in detail. Important considerations of data protection include performance, scale, regulatory compliance, recovery objectives and cost. Technologies include contemporary backup, disk based backups, snapshots, continuous data protection and capacity optimized storage.

Learning Objectives

  • Understand legacy and contemporary storage technologies that provide advanced data protection.
  • Compare and contrast advanced data protection alternatives.
  • Gain insights into emerging data protection technologies.

The Changing Role of Data Protection in a Virtualized World

Thomas Rivera, Sr. Technical Associate, File, Content & Cloud Solutions, Hitachi Data Systems

Abstract

Backup administrators who have grown accustomed to traditional batch-oriented backup using a single local system are seeing their world change very quickly these days. Virtualized data centers with cloud computing and resilient, self-healing local and cloud storage combined with existing in-house systems are demanding completely new strategies for data protection. And even the functions and responsibilities of data protection administrators are changing at a fast pace. This tutorial presents the rapid changes and challenges in data protection brought about by storage, both local and cloud, that combines protection such as "time machines" (snapshots and versioning) with primary and secondary repositories. Following from those changes, the discussion will include methods being used by backup administrators as both temporary and long-term solutions to cope with these changes and to maximize their potential benefits.

Learning Objectives

  • Understand the differences between data protection in traditional environments versus in virtualized environments.
  • Consider multiple examples of data protection challenges in environments where virtualization and cloud-based storage and computing are increasingly used.
  • Come away with some useful best practices for coping with the challenges brought on by new responsibilities and functions in data protection.

FEATURED SPEAKERS


State of the Stack

Randy Bias, Co-founder and CTO, Cloudscaling

Abstract

OpenStack is the fastest growing open source movement in history, but its marketing momentum has largely outrun its technology growth. Why are organizations so eager to embrace OpenStack? Some components – like Swift – are ready for prime time. But others – like Horizon and Quantum – are still evolving.

What needs the most attention: networking, storage, compute, or something else? Where are the reference architectures and real world deployments? How are different product and service companies implementing OpenStack in production today? We’ll go beyond the hype and dig deep on OpenStack, exploring all that is great and all that needs serious work. Attendees will leave with a firsthand account of the State of the Stack, ready to help their organizations embrace OpenStack armed with practical knowledge.


Enterprise Flash and Virtualization: Choosing Your Path

Rich Peterson, Director Software Marketing, SanDisk

Abstract

Among the benefits of enterprise flash, its ability to improve the performance and scalability of virtual computing environments is paramount. A plethora of solutions have appeared in the market, and from the success stories we have learned that applying flash to virtualization is not a "one size fits all" proposition. The challenge for the IT strategist is to understand the advantages flash provides in different types of implementations, and to assess options in terms of the organization's specific requirements. Additionally, rather than seeing different types of flash implementations as mutually exclusive, it's important to recognize complementary opportunities and focus on the fundamentals of IO efficiency in virtualization platforms.

Learning Objectives

  • Discuss the fast-growing system I/O performance and efficiency demands in a virtual environment, and what they mean to server and storage system developers
  • Learn about best storage practices in virtual environments and how to take full advantage of enterprise flash technologies
  • Understand other technologies and recognize complementary opportunities to fully leverage the power and performance enhancements of enterprise flash technologies in virtualization platforms

The New European Data Protection Regulation - Why You Should Care

Gilles Chekroun, Distinguished Systems Engineer, Cisco

Abstract

The European Commission is undertaking a major effort to harmonize the data protection rules across all 27 EU members. The proposal includes some significant changes, such as defining a data breach, adding the right to be forgotten, and appointing a Data Protection Officer for large companies, among many other new elements. Another major change is the shift from a directive to a regulation, which includes significant financial penalties for infractions.

This session explores the new EU data protection legislation and highlights the elements that could have significant impacts on data handling practices.

Learning Objectives

  • General introduction to the new EU data protection legislation
  • Understand the potential impacts the new data protection rules could have on storage

Active Archive Strategies and Media Choices for Long Term Data Retention

Stacy Schwarz-Gardner, Strategic Technical Architect, Spectra Logic

Abstract

Data growth and global data access have forced organizations to rethink how they manage the data life cycle. Data management cloud platforms leveraging active archive technologies and principles are designed for data access, scalability, and long-term data management regardless of the storage medium utilized. Tiered storage is just one element of a data management cloud. Active archive principles enable data life cycle management across tiers and locations while enforcing retention and other compliance attributes. Choosing the appropriate media type for long-term storage of data assets, based on access profile and criticality to the organization, will be key to future-proofing data management strategies.

Learning Objectives

  • Learn how active archive technologies work and how companies are using them to develop data management cloud platforms.
  • Learn the differentiators for using spinning disk vs. tape technologies as short- and long-term storage media.
  • Learn about tape file systems and specialized archive storage systems.

Open Data Center Alliance: Smooth Scaling

David Casper, Executive Director, UBS

Abstract

With the relentless growth of digital data, a unified voice on requirements is needed for rapid adoption of efficient, innovative and adoptable storage solutions. This talk will explore current real-world storage trends and opportunities in large, global enterprises, as seen through the lens of the Open Data Center Alliance, a nearly 400-member group of IT Managers that has come together to help shape the evolution of the data center.

Topics Include:

  • True scale-out storage approaches
  • Globally unified namespaces & abstraction layers
  • Data Usage Patterns, Automation and Policy Engines
  • Programmable Datacenter Future: How Unified Cloud Roles Replace Traditional Compute, Storage and Network Roles

FILE STORAGE


SMB Remote File Protocol (including SMB 3.0)

Jose Barreto, Principal Program Manager, Microsoft

Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors, including most major operating systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk covers improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part covers the latest changes in SMB 3.0, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.

Learning Objectives

  • Understand the basic architecture of the SMB protocol.
  • Enumerate the main capabilities introduced with SMB 2.0.
  • Describe the main capabilities introduced with SMB 3.0.
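
To make the connect-to-a-share step concrete, here is a minimal sketch using the third-party pysmb library, which negotiates the dialect for you. The server address, NetBIOS names, credentials, and share name are hypothetical placeholders.

```python
# Sketch: connect to an SMB share and list its root with pysmb.
# Server, credentials, and share name are hypothetical placeholders.
from smb.SMBConnection import SMBConnection

conn = SMBConnection("user", "secret",
                     my_name="client1", remote_name="FILESERVER")
if conn.connect("192.168.1.10", 139):        # negotiate + session setup
    for f in conn.listPath("public", "/"):   # share name, then path
        print(f.filename, f.file_size)
    conn.close()
```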

Practical Steps to Implementing pNFS and NFSv4.1

Alex McDonald, CTO Office, NetApp

Abstract

Much has been written about pNFS (parallel NFS) and NFSv4.1, the latest NFS protocol, but practical examples of how to implement them are fragmentary and incomplete. This presentation provides a step-by-step guide to implementation, with a focus on file systems. From client and server selection and preparation, the tutorial covers key auxiliary protocols like DNS, LDAP, and Kerberos, and finishes with a demonstration of a working pNFS environment.

Learning Objectives

  • An overview of the practical steps required to implement pNFS and NFSv4.1
  • Detailed information on the selection of software components to ensure a suitable environment
  • Show how these parts are engineered and delivered as a solution
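
One of the practical steps, mounting with NFSv4.1 so the client can negotiate pNFS layouts, can be scripted. A sketch follows, assuming a Linux client with a pNFS-capable kernel; the server name and paths are placeholders.

```python
# Sketch: mount an export with NFSv4.1 (the version that carries pNFS)
# and verify the result. Server and paths are hypothetical placeholders.
import subprocess

subprocess.check_call([
    "mount", "-t", "nfs4",
    "-o", "minorversion=1",          # request NFSv4.1; pNFS layouts follow
    "server.example.com:/export/data", "/mnt/data",
])

# On Linux, /proc/mounts shows the negotiated mount options.
with open("/proc/mounts") as mounts:
    for line in mounts:
        if "/mnt/data" in line:
            print(line.strip())
```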

Building a Successful Storage Product with Samba

Jeremy Allison, Engineer, Google

Abstract

CIFS/SMB/SMB2/SMB3 server software is now a commodity. Your product has to have it, but it certainly isn't where your customers will perceive the value in your product.

Enter Samba. We've been creating a Free Software/Open Source CIFS/SMB/SMB2/SMB3 server for twenty years, with mostly the same engineers still involved. We're funded by many industry heavyweights such as Google and IBM, and we are used in a wide range of storage products.

Learn how to integrate Samba into your product to provide the needed gateway service into your backend storage, how to navigate the rocky waters of Open Source licensing without releasing copyrighted code or trade secrets you want to keep, and where Samba is going on a technical level.

Learning Objectives

  • CIFS/SMB/SMB2/SMB3
  • Open Source, Samba

Massively Scalable File Storage

Philippe Nicolas, Director of Product Strategy, Scality

Abstract

The Internet changed the world and continues to revolutionize how people connect, exchange data, and do business. This radical change is one of the causes of the rapid explosion of data volume, which requires a new approach to data storage design. One common element is that unstructured data rules the IT world. How can the famous Internet services we all use every day support and scale with thousands of new users added daily while continuing to deliver an enterprise-class SLA? What are the technologies behind a cloud storage service supporting hundreds of millions of users? This tutorial covers technologies introduced by famous papers about Google File System and BigTable, Amazon Dynamo, and Apache Hadoop. In addition, parallel, scale-out, distributed, and P2P approaches, represented by Lustre, PVFS, and pNFS along with several proprietary systems, are presented as well. The tutorial also covers some key features essential at large scale, to help attendees understand and differentiate industry vendors' offerings.

Learning Objectives

  • Understand market trends and associated technologies
  • Learn the advantages of recently emerging technologies for very large data environments
  • Be able to compare and select the right solution for a specific need

OPEN SOURCE AND MANAGEMENT


SMB Remote File Protocol (including SMB 3.0)

Jose Barreto, Principal Program Manager, Microsoft

Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors, including most major operating systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk covers improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part covers the latest changes in SMB 3.0, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.

Learning Objectives

  • Understand the basic architecture of the SMB protocol.
  • Enumerate the main capabilities introduced with SMB 2.0.
  • Describe the main capabilities introduced with SMB 3.0.


SECURITY


Implementing Kerberos Authentication in the Large-Scale Production NFS Environment

Gregory Touretsky, Solutions Architect, Intel

Abstract

Intel's design environment is heavily dependent on NFS. It includes hundreds of NAS servers and tens of thousands of mostly Linux clients. Historically, this environment has relied on the AUTH_SYS security mode. While this is a typical setup for most NFSv3 shops, it imposes various limitations on a large enterprise; the 16-groups-per-user cap is one such fundamental limitation. Intel IT is working to provide Global Data Access capabilities and simplify data sharing between multiple design teams and geographies. As part of this program we decided to switch to RPCSEC_GSS (Kerberos) security in our NFS environment. This decision required modifications to multiple components in our distributed environment. How can we distribute Kerberos tickets to multiple compute servers, for both batch and interactive workloads? How can we provide tickets for faceless accounts and cron jobs? How can NFS with Kerberos authentication be accessed via Samba translators? How can we make the Kerberos authentication experience in the Linux environment as seamless as it is in Windows? These changes can't be performed overnight; how can we support a mix of Kerberized and non-Kerberized file systems over a long transition period? The presentation will cover these and other challenges we are dealing with as part of this journey to a more secure global network file system environment.

Learning Objectives

  • Understand which threats can and should be addressed by data encryption and which need other solutions
  • Understand where multiple data encryption technologies are complementary, and where they are redundant
  • Understand how to justify the deployment of data encryption based on the cost of deployment and management, and the cost of failing to do so
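
One of the questions the abstract raises, tickets for faceless accounts and cron jobs, is commonly answered with a keytab-based kinit before the job touches Kerberized NFS. A sketch follows; the principal, keytab path, and mount point are hypothetical placeholders, not Intel's setup.

```python
# Sketch: obtain a Kerberos ticket from a keytab before a cron job
# reads a krb5-secured NFS mount. Principal and paths are placeholders.
import subprocess

KEYTAB = "/etc/krb5.keytab.batch"
PRINCIPAL = "svc_batch@CORP.EXAMPLE.COM"

# -k -t: authenticate non-interactively using the keytab.
subprocess.check_call(["kinit", "-k", "-t", KEYTAB, PRINCIPAL])

# The job can now access the Kerberized NFS mount as usual.
with open("/mnt/secure/jobs/input.dat", "rb") as f:
    print(len(f.read()), "bytes read")
```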

Securing File Data in a Distributed or Mobile World

Chris Winter, Director, Product Management, SafeNet

Abstract

When an organization has a distributed or mobile workforce, or requires executives or key personnel to work from home, securing business-critical data becomes especially problematic. In most cases today, responsibility for the security of file data is left up to the individual end user. Theft, or even the borrowing, of a mobile device leaves critical file data exposed. The rapid growth of BYOD (Bring Your Own Device) brings cost savings and efficiency for organizations, but also a greater risk of security breaches. Today, no control over or reporting on the use of critical data on mobile devices is possible, and this creates regulatory problems.


Consumerization of Trusted Computing

Dr. Michael Willett, Storage Security Strategist, Samsung

Abstract

State, Federal, and international legislation mandate the use of strong security measures to protect confidential and personal information. Businesses and governments react through due diligence by implementing security best practices; in fact, being secure in their management of information provides a competitive advantage and enhances the trust that consumers of products and services place in business and government. The modern consumer also manages confidential and personal data, as well as sensitive applications. Net: the consumer, especially in this highly interconnected world, requires equivalent security best practices. The difference is the broad range of technical expertise in the consumer population (all of us!). The security functionality must be easy to use, transparent, robust, inexpensive, and a natural part of the computing infrastructure.

Enter trusted computing, as defined and standardized by the Trusted Computing Group (TCG). The tenets of the TCG include robust security functions in hardware, transparency, and integration into the computing infrastructure: a perfect match with the consumer requirements. The TCG, an industry consortium with broad industry, government, and international membership, has developed technical specifications for a number of trusted elements, including integrated platform security, network client security and trust, mobile device security, and trusted storage; all key components of the consumer computing experience. For example, the storage specifications define the concept of Self-Encrypting Drives (SEDs). SEDs integrate encryption into the drive hardware electronics, transparently encrypting all data written to the drive with no loss in drive performance. The SED protects against loss or theft, whether in a laptop or a data center drive; and both business professionals and rank-and-file consumers lose a significant number of laptops, according to the FBI. The robust protection afforded the consumer is transparent, inexpensive, and easy to use. Combining the performance, longevity, quietness, and ruggedness of a solid-state drive (SSD) with the SED function equips the consumer with a winning combination, all integrated into the infrastructure.

Learning Objectives

  • Overview of the security challenges facing the consumer
  • Introduction to the tenets of the Trusted Computing Group, especially the integration of security into the computing infrastructure
  • Description of the TCG/SED technology, as a relevant example of trusted computing

Storage Security Risk Assessments and Mitigation

Lee Donnahoo, Storage Architect, Microsoft

Abstract

How to run a storage security risk assessment and develop a mitigation strategy. The talk will also include a discussion of various storage security issues in the industry. Storage networking, Fibre Channel in particular, is vulnerable to a variety of security threats. This talk will address how to analyze, document, and resolve these issues using both industry best practices and common sense.


Server, App, Disk, Switch, or OS - Where Is My Encryption?

Chris Winter, Director Product Management, SafeNet

Abstract

Encryption of critical data is mandated by compliance regulations and is increasingly used to isolate or segregate important sensitive data that is not yet regulated. There are many different technologies available to encrypt business-critical data, and they can be used in different physical and logical locations in an organization’s production environment. Each location has its advantages and disadvantages depending on factors such as currently deployed infrastructure, compliance demands, sensitivity of data, vulnerability to threats, and staffing, among others. To make things even more problematic, the various locations typically fall under the management of different groups within an organization’s operational and IT departments: storage administration, desktop administration, server administration, networking, and application administration. This session will illustrate how to identify the most cost-effective location and understand how that choice meets the needs of the organization while introducing as little operational management overhead as possible.

Learning Objectives

  • Why Security testing in Storage systems needs to be done
  • The impact of not doing security testing on these systems
  • Some of the easiest ways to assess the current state of security on your storage systems
  • Why attackers focus on the data held on storage systems
  • How to remediate once security issues are found

SOLID STATE STORAGE


Putting on the Flash - Using Flash Technology to Accelerate Databases

Michael Ault, Oracle Flash Consulting Manager, IBM

Abstract

Flash storage technology has matured to the point where it is used at the enterprise level for database storage and performance. This presentation shows through actual client use cases how to best utilize flash technology with databases.

Learning Objectives

  • The attendee will learn how to determine if their database will benefit from flash storage.
  • The attendee will learn about the different types of flash and how to utilize them.
  • The attendee will see how actual companies have utilized flash to improve database performance.

Benefits of Flash in Enterprise Storage Systems

David Dale, Director Industry Standards, NetApp

Abstract

This is the latest update to a very popular SNIA tutorial. Targeted primarily at an IT audience, it presents a brief overview of the discontinuity represented by the flash technologies being integrated into Enterprise Storage Systems today, including technologies, benefits, and price/performance. It then goes on to describe how flash fits into typical Enterprise Storage architectures today, with descriptions of specific use cases. Finally, the presentation speculates briefly on what the future will bring, including post-flash and DRAM-replacement non-volatile memory technologies.


The State of Solid State Storage

Jim Handy, Director, Objective Analysis

Abstract

It's been nine years since NAND flash prices dropped below those of DRAM, drawing the attention of the computing community. This presentation looks at the changes SSDs have brought to computing and projects where this important technology is headed. It will answer several important questions:

  • Why is NAND interesting and where does it fit in the memory/storage hierarchy?
  • How many IOPS do different applications really need?
  • How does NAND/SSD trade off against HDDs? DRAM?
  • What is TCO and why does it matter?
  • How should I evaluate SSD specifications? What has SNIA done to improve this?
  • What about endurance? Are SSDs reliable?
  • How will new storage interfaces impact my storage plans?
  • What are the trade-offs in a successful storage system architecture?


Learning Objectives

  • Bring audience up to speed on SSDs
  • Explain storage issues relating to SSDs
  • Show what SNIA is doing to help in solid state storage

Can SSDs Achieve HDD Price, Capacity and Performance Parity?

Radoslav Danilak, Co-founder & CEO, Skyera

Abstract

Coming Soon


You Know SSDs Make MySQL Fast, But Do You Know Why?

Jared Hulbert, Software Development Manager, Micron Technology

Abstract

Everybody knows SSDs make MySQL faster because they are fast. But the story doesn’t end there. We will share what we’ve learned about the mechanics of MySQL performance on SSDs. Our analysis will shed light on some interesting questions, such as: Why do some SSDs perform better with MySQL than others? Why does relative IO performance not always correlate with MySQL performance? Exactly which characteristics of SSDs matter to MySQL? What causes the big gap between what high-end SSDs are capable of and what MySQL can actually drive? How can MySQL evolve to get more benefit out of SSDs?

Learning Objectives

  • Articulate the SSD characteristics that matter to MySQL
  • Describe the reasons for the gap between what high-end SSDs are capable of and what MySQL can actually drive
  • Identify which SSDs perform better with MySQL
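
A rough way to see why relative IO performance does not always track MySQL performance: InnoDB's hot path is small synchronous random reads, so probing that pattern directly on a candidate device says more than a peak-IOPS figure. A sketch, with the test file path as a placeholder; this is an illustration of the idea, not the speakers' methodology.

```python
# Sketch: measure small synchronous random-read latency, the pattern
# that dominates InnoDB page fetches. The test file path is a placeholder.
# For meaningful numbers the file must be much larger than RAM
# (or the page cache dropped between runs).
import os
import random
import time

PATH = "/mnt/ssd/testfile"            # pre-created large file on the SSD
PAGE = 16 * 1024                      # InnoDB's default page size

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(1000):
    offset = random.randrange(0, size - PAGE)
    os.lseek(fd, (offset // PAGE) * PAGE, os.SEEK_SET)  # page-aligned
    t0 = time.time()
    os.read(fd, PAGE)                 # one page-sized synchronous read
    latencies.append(time.time() - t0)
os.close(fd)

latencies.sort()
print("median %.3f ms, 99th %.3f ms"
      % (latencies[500] * 1e3, latencies[990] * 1e3))
```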

Enhanced SSD with I/O Management

Jiho Hwang, Senior Engineer, Samsung

Abstract

Currently, most SSDs work as mere block devices and still face performance stagnation due to the interface bottleneck. The enhanced SSD is designed for interconnection between host and device with multiple APIs enabled, so the host can control the SSD's behavior according to its I/O workloads. This interconnection ability can improve total performance not only in a single-device system but also in a distributed storage system. This presentation will cover the concepts of the enhanced SSD and open a discussion on how to gather and manage the features needed for this concept.


A Close Look at Hybrid Storage Performance

Kirill Malkin, CTO, Starboard Storage

Abstract

Hybrid storage platforms mixing SSD and HDD storage have changed the way we have to think about performance and capacity scaling in storage systems; measuring application performance is no longer simply a function of disk spindles. The presenter will discuss the implications of solid state used as read and write accelerators on performance for various workloads, and the changes seen as the accelerator resources grow. He will use real-world examples as well as laboratory modeling.

Learning Objectives

  • What is the optimum balance of SSD to HDD?
  • What is the best way to enhance performance using SSD and how can you measure the performance to ensure you are meeting the goals of your applications?
  • What effect do hybrid architectures have on VMware performance?

Properly Testing SSDs For Steady State Performance

Doug Rollins, Senior Applications Engineer, Micron Technology

Abstract

As the demand for SSDs increases, so does the need to ensure the best drive is selected for each deployment. Essential to the selection process are the ability to validate manufacturers’ claims about SSD performance and a thorough understanding of how published performance is measured for different drive designs, markets, and usage models. While each SSD type exhibits its own unique behaviors, most drives currently in these markets behave similarly in one respect: as the drive fills, performance decreases non-linearly. In this class we will focus on Consumer and Enterprise class SSDs designed for primary data storage.

Learning Objectives

  • Be able to measure SSD steady state performance using common benchmark tools.
  • Ensure the SSD is in the proper state before beginning a test run.
  • Control the granularity of the data reported.
  • Explain how the usage model for which the drive was designed will affect its performance.
  • Adjust test parameters to get a clearer picture of SSD performance, regardless of the market segment for which it was designed.
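
Steady-state measurement boils down to preconditioning the drive and then measuring until the numbers stop drifting. Below is a loose sketch of that flow driven through the common fio tool; the device path, block sizes, and durations are examples only, and writing to a raw device destroys its data.

```python
# Loose sketch of steady-state measurement with fio: precondition by
# filling the drive, then run timed 4K random-write rounds and watch
# for the results to settle. Device and durations are examples.
# WARNING: writing to a raw device destroys all data on it.
import subprocess

DEV = "/dev/sdx"

# Precondition: sequentially fill the device (often done twice over).
subprocess.check_call(["fio", "--name=precondition",
                       "--filename=" + DEV,
                       "--rw=write", "--bs=128k", "--direct=1"])

# Measure: repeated fixed-length 4K random-write rounds; steady state
# is reached when successive rounds report similar IOPS.
for rnd in range(5):
    subprocess.check_call(["fio", "--name=round%d" % rnd,
                           "--filename=" + DEV, "--rw=randwrite",
                           "--bs=4k", "--ioengine=libaio",
                           "--iodepth=32", "--direct=1",
                           "--time_based", "--runtime=300"])
```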

STORAGE PERFORMANCE


The Plumbing Power: Open Source Technologies and Market Products Together to Create a Scale-Out (layered specialized) Storage Solution, Delivering Unprecedented Performance, Scalability, Data Protection and Lowering Datacenter and Storage Acquisition Costs.

Marcelo Leal, Storage Architect, Locaweb

Abstract

This talk explains how Locaweb reduced storage acquisition and datacenter costs many times over while enhancing utilization, performance, scalability, and data protection. It combines market products with open source technologies in an ingenious way, creating a solution based on specialized layers, focused on understanding the workload and application-specific tuning, and on features such as compression, deduplication, SSDs, and RAIDZ3.

Learning Objectives

  • How to understand a workload and determine its IO pattern and requirements.
  • How to create and use a model to understand workload requirements and compare different storage solutions.
  • How a storage solution based on specialized layers can provide the best cost/benefit in performance, data protection, and availability.
  • How to create availability zones and enhance the overall solution's availability without using HA.
  • How strong and stable the NFSv3 protocol is, and how you can rely on it to create a truly scalable, high-performance, cost-effective solution with different vendors in the same storage cluster.

SAN Protocol Analysis and Performance Management

Robert Finlay, Business Development Manager, JDSU

Abstract

This presentation is for the Storage Manager, Administrator or Architect looking to increase their understanding of storage protocols for performance analysis and issue resolution. We will show examples of how traffic analysis can be used to identify storage-related performance issues from the initiator, switch, and target points of view.

Learning Objectives

  • Gain an understanding of storage protocol analysis as it applies to your plumbing performance
  • Learn how packet analytics can be used for proactive and reactive purposes

Workload Mixology

Lee Johns, VP of Product Management, Starboard Storage

Abstract

Storage workloads are diverse, and, typically, custom system designs are used to optimize the performance to match the application need. With a push for delivering internal clouds, it would be simpler if you could get predictable performance for diverse applications across SAN and NAS using a single pool of storage. This session will discuss: how a single storage system can be optimized to consolidate multiple diverse workloads; what plays well together and what does not; what architectural principles need to be solved for application consolidation on a single system; and how both virtual and physical applications peacefully can coexist on one storage system.

Learning Objectives

  • How a single storage system can be optimized to consolidate multiple diverse workloads
  • What architectural principles need to be solved for application consolidation on a single system
  • How both virtual and physical applications peacefully can coexist on one storage system
  • What plays well together and what does not

Storage Performance Analysis

Lee Donnahoo, Storage Architect, Microsoft Global Foundation Services

Abstract

An overview of the tools and methodologies available to measure and analyze storage performance issues. Topics include speeds and feeds, bottlenecks, how to find issues such as the infamous "FC slow drain", and long-term planning.

STORAGE PLUMBING


How VN2VN Will Help Accelerate Adoption of FCoE

David Fair, ESF- Board of Directors, Unified Networking Marketing Manager, Intel

Abstract

VN2VN is an enhancement to the ANSI T11 specification for Fibre Channel over Ethernet (FCoE) that promises to significantly reduce the cost of implementing an FCoE SAN. VN2VN allows end-to-end FCoE over an L2 network of DCB switches. VN2VN is part of the BB6 draft standard, which is drawing steadily to completion and is now in the letter ballot stage. Anticipating the value of VN2VN, some vendors have already started to release VN2VN-capable products, and customers and bloggers are starting to discuss the impact of VN2VN on their environments. This presentation will begin with a brief overview of what VN2VN is and proceed to elucidate its real capabilities in illustrative usage models, showing how VN2VN can be used in typical customer deployments.

Learning Objectives

  • What capabilities are in the near final version of BB6 that facilitate deployment of end-to-end FCoE in a DCB Ethernet environment?
  • What are the compelling use cases for deployment of VN2VN?
  • What are the best practices to allow a robust secure deployment of VN2VN?

Overview of Data Center Networks

Joseph White, Distinguished Engineer, Juniper Networks

Abstract

With the completion of the majority of the various standards used within the Data Center plus the wider deployment of I/O consolidation and converged networks, a solid comprehension of how these networks will behave and perform is essential. This tutorial covers technology and protocols used to construct and operate Data Center Networks. Particular emphasis will be placed on clear and concise tutorials of the IEEE Data Center Bridging protocols (PFC, DCBX, ETS, QCN, etc), data center specific IETF protocols (TRILL, etc), fabric based switches, LAG, and QoS. QoS topics will address head of line blocking, incast, microburst, sustained congestion, and traffic engineering.


NVM Express: Optimizing for PCIe SSD Performance

David Akerson, Strategic Marketing Engineer, Intel

Abstract

Non-Volatile Memory Express (NVM Express) is a new storage standard specifically designed for PCIe SSDs. NVM Express is an optimized, high-performance, scalable host controller interface with a streamlined register interface and command set designed for Enterprise, Datacenter, and Client systems that use PCIe SSDs. It is architected from the ground up for non-volatile memory (NVM), and it significantly improves both random and sequential performance by reducing latency, enabling high levels of parallelism, and streamlining the command set, while providing support for security, end-to-end data protection, and other Client and Enterprise features users need. This session will provide information on how NVM Express delivers the performance and faster response times that Data Center and Client systems need.

Learning Objectives

  • Develop an understanding of NVM Express and how it delivers greater performance from PCIe SSDs
  • Develop an understanding of the benefits of NVM Express for Data Center and Client systems.
  • Develop an understanding of product and ecosystem availability to support future planning cycles.
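
The parallelism claim can be made concrete with Little's Law: achievable IOPS is roughly outstanding I/Os divided by per-I/O latency, which is why NVM Express's many deep queues matter. A toy calculation follows; the queue counts and latency figure are illustrative assumptions, not measurements or specification values.

```python
# Toy Little's Law calculation: IOPS ~= outstanding I/Os / latency.
# All numbers below are illustrative assumptions, not measurements.

def iops(outstanding_ios, latency_s):
    return outstanding_ios / latency_s

LATENCY = 100e-6  # assume 100 microseconds per I/O at the device

# A single legacy queue of depth 32 vs. many deep queues.
print("1 queue  x depth 32:  %8.0f IOPS" % iops(32, LATENCY))
print("8 queues x depth 256: %8.0f IOPS" % iops(8 * 256, LATENCY))
# Deeper queuing only helps while the device can sustain the parallelism.
```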

PCI Express and Its Interface to Storage Architectures

Ron Emerick, Principal HW Engineer, Oracle

Abstract

PCI Express Gen2 and Gen3, IO Virtualization, FCoE, SSDs, and PCI Express storage devices are here. What are PCIe storage devices, and why do you care? This session describes PCI Express, Single Root IO Virtualization, and their implications for FCoE, SSDs, and PCIe storage devices, along with the impact of all these changes on storage connectivity and storage transfer rates. The potential implications for the storage industry and data center infrastructures will also be discussed.

Learning Objectives

  • Knowledge of PCI Express Architecture, PCI Express Roadmap, System Root Complexes and IO Virtualization.
  • Expected Industry Roll Out of latest IO Technologies and required Root Complex capabilities.
  • Implications and Impacts of FCoE, SSD and PCIe Storage Devices to Storage Connectivity. What does this look like to the Data Center?
  • IO Virtualization connectivity possibilities in the Data Center (via PCI Express).

The Open Daylight Project: An Open Source path to a Unified SDN Controller

Thomas Nadeau, Distinguished Engineer, Juniper Networks

Abstract

Over the past few years a growing number of SDN controllers of various capabilities have emerged from the commercial industry as well as from the academic community. Many of these offerings are available directly as open source, while some are cut-down versions of "enterprise class" editions of commercial controllers. What has resulted is a fragmented industry with few truly differentiating "core" features. These include common north-bound programmable APIs, and south-bound protocol connectors such as OpenFlow, I2RS, Netconf, or PCE-P. To this end, the industry has come together to create and nurture a single "base" controller containing infrastructure that is common to all of these various offerings, one that also offers a single, common north-bound API that applications can program to, as well as common south-bound plug-ins to industry-standard protocols. These north- and south-bound interfaces are created in a pluggable architecture fostering future extension, either in the freely available open source version or as part of "enterprise" packages offered commercially. This approach has the distinct advantage of pushing the technological envelope in areas other than base controller functionality, such as "applications" that consume or utilize basic functions exposed and managed by a controller.

VDI STORAGE


VDI and Storage: Tools, Techniques and Architectural Considerations for Performance

Russ Fellows, Sr. Partner, Evaluator Group

Abstract

Application workloads present unique challenges to both IT architects and system administrators. Solving performance issues often requires recreating complex application environments. Storage performance in particular is tied closely to specific application workloads and environments that are difficult to establish and recreate. Due to complex setup and licensing restrictions, recreating these environments is costly and time consuming. This is particularly true with VDI, which can often require the use of thousands of costly licenses in order to test a workload against the infrastructure. Additionally, new virtualization technologies, coupled with new storage technologies are rapidly evolving. Learn how emerging technologies impact VDI best practices, including how to architect server, network and in particular storage systems to handle VDI performance effectively. This session will cover the unique storage concerns of VDI workloads with best practices and considerations for architecting a high performance VDI infrastructure.

Learning Objectives

  • Learn the top concerns and implications for storage including performance, capacity, data protection, boot storm and management
  • Understand requirements for planning including performance and capacity
  • A review of architectural choices and their impact for VDI
  • A discussion of the unique administration considerations
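
Performance and capacity planning of the sort described above often starts with back-of-the-envelope IOPS math. The sketch below shows the shape of such a boot-storm estimate; every input is an illustrative assumption to be replaced with measurements from your own environment.

```python
# Back-of-the-envelope VDI sizing. All inputs are illustrative
# assumptions; substitute measured values from your own environment.
desktops = 500
steady_iops_per_desktop = 10      # planning assumption for steady state
boot_iops_per_desktop = 50        # boot storms run much hotter
write_fraction = 0.8              # VDI workloads often skew toward writes

steady = desktops * steady_iops_per_desktop
boot = desktops * boot_iops_per_desktop

print("steady state: %d IOPS (%d writes/s)"
      % (steady, steady * write_fraction))
print("boot storm:   %d IOPS" % boot)
# The storage must be sized for the storm (or the storm staggered),
# not just for the steady state.
```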

VIRTUALIZATION


VDI and Storage: Tools, Techniques and Architectural Considerations for Performance

Russ Fellows, Sr. Partner, Evaluator Group

Abstract

Application workloads present unique challenges to both IT architects and system administrators. Solving performance issues often requires recreating complex application environments. Storage performance in particular is tied closely to specific application workloads and environments that are difficult to establish and recreate. Due to complex setup and licensing restrictions, recreating these environments is costly and time consuming. This is particularly true with VDI, which can often require the use of thousands of costly licenses in order to test a workload against the infrastructure. Additionally, new virtualization technologies, coupled with new storage technologies are rapidly evolving. Learn how emerging technologies impact VDI best practices, including how to architect server, network and in particular storage systems to handle VDI performance effectively. This session will cover the unique storage concerns of VDI workloads with best practices and considerations for architecting a high performance VDI infrastructure.

Learning Objectives

  • Learn the top concerns and implications for storage including performance, capacity, data protection, boot storm and management
  • Understand requirements for planning including performance and capacity
  • A review of architectural choices and their impact for VDI
  • A discussion of the unique administration considerations

What's Old is New Again - Storage Tiering

Thomas Rivera, Sr. Technical Associate, File, Content & Cloud Solutions, Hitachi Data Systems

Abstract

The SNIA defines tiered storage as “storage that is physically partitioned into multiple distinct classes based on price, performance or other attributes.” Although physical tiering of storage has been a common practice for decades, new interest in automated tiering has arisen due to increased availability of techniques that automatically promote “hot” data to high performance storage tiers – and demote “stale” data to low-cost tiers.

Learning Objectives

  • Participants will gain an understanding of Tiering fundamentals and benefits.
  • Participants will learn trends and best practices in automated tiering.
  • Participants will be given some resources for additional study of Storage Tiering.
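
The promote/demote mechanics the abstract describes reduce to tracking access heat per extent and moving the extremes between tiers. A toy sketch of that policy loop follows; the thresholds, tier names, and extent identifiers are invented for illustration, not taken from any product.

```python
# Toy automated-tiering policy: promote hot extents to fast storage,
# demote stale ones. Thresholds and tiers are invented for illustration.
from collections import defaultdict

access_count = defaultdict(int)   # extent id -> accesses this interval
tier = {}                         # extent id -> "ssd" or "hdd"

def record_access(extent):
    access_count[extent] += 1

def rebalance(promote_at=100, demote_at=5):
    for extent, hits in access_count.items():
        if hits >= promote_at and tier.get(extent) != "ssd":
            tier[extent] = "ssd"   # hot: promote
        elif hits <= demote_at and tier.get(extent) != "hdd":
            tier[extent] = "hdd"   # stale: demote
        access_count[extent] = 0   # start a new measurement interval

for _ in range(150):
    record_access("extent-42")
record_access("extent-7")
rebalance()
print(tier)   # {'extent-42': 'ssd', 'extent-7': 'hdd'}
```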