Data Storage Innovation Conference Abstracts

Break Out Sessions and Agenda Tracks Include:

 


 

ANALYTICS AND BIG DATA


Can Enterprise Storage Fix Hadoop?

John Webster, Senior Partner, Evaluator Group


Abstract

Survey data shows that at least half of all enterprise data center Hadoop projects stall and that only 20% actually get into production. This presentation looks at the problems with Hadoop that enterprise data center administrators encounter and how the storage environment can be used to fix at least some of them, including uptime, data integrity, long-term data retention, and data governance.

Learning Objectives

  • Understand the issues with current enterprise data center Hadoop implementations
  • Learn what the open source community and vendors are doing to fix the problems
  • Learn how enterprise storage platforms can be used to address the problems

Hadoop-based Open Source eDiscovery

Sujee Maniyam, Co-Founder and Principal, Elephant Scale


Abstract

The task of E-discovery is to preserve, analyze, and review the business documents (such as emails and Word documents) that may have legal ramifications. As such, E-discovery is uniquely positioned to be both a reasonably generic search application, and to provide specific benefits to the legal departments of corporations and to law firms.

Learning Objectives

  • Modern technology for E-discovery
  • Legal search open source engine
  • Best practices architecture for Big Data
  • Starting point for data governance and compliance

Introduction to Analytics & Big Data - Hadoop

Thomas Rivera Sr., Technical Associate, Hitachi Data Systems


Abstract

This tutorial serves as a foundation for the field of analytics and Big Data, with an emphasis on Hadoop. An overview of current data analysis techniques, the emerging science around Big Data, and an overview of Hadoop will be presented. Storage techniques and file system design for the Hadoop Distributed File System (HDFS) and implementation tradeoffs will be discussed in detail. This tutorial is a blend of non-technical and introductory-level technical detail, ideal for the novice.

It will give the attendees enough depth on how Hadoop storage works to make more informed decisions as they consider deploying Hadoop infrastructures.
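
For readers new to HDFS, a minimal sketch of programmatic file access is shown below. It assumes the third-party hdfs Python package (a WebHDFS client) and a reachable NameNode; the URL, user name, and paths are hypothetical placeholders, not part of the tutorial itself.

```python
# Minimal HDFS read/write sketch using the third-party "hdfs" (WebHDFS) client.
# The NameNode URL, user name, and paths below are hypothetical placeholders.
from hdfs import InsecureClient

client = InsecureClient("http://namenode.example.com:50070", user="analyst")

# Write a small file; HDFS splits large files into blocks replicated across DataNodes.
client.write("/data/raw/events.csv", data=b"ts,user,action\n", overwrite=True)

# Read it back as a stream.
with client.read("/data/raw/events.csv") as reader:
    print(reader.read().decode("utf-8"))

# List the directory to confirm the file landed where expected.
print(client.list("/data/raw"))
```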

Learning Objectives

  • Gain a further understanding of the field and science of data analytics
  • Comprehend the essential differences surrounding Big Data and why it represents a change in traditional IT thinking
  • Understand introductory-level background detail around Hadoop and the Hadoop Distributed File System (HDFS)

Transforming Cloud Infrastructure to Support Big Data Storage and Workflows

Jay Migliaccio, Director of Cloud Solutions, Aspera


Abstract

As companies have turned to cloud-based services to store, manage and access big data, it has become clear that the cloud’s promise of virtually unlimited, on-demand increases in storage, computing, and bandwidth is hindered by a series of technical bottlenecks: transfer performance over the WAN, HTTP throughput within remote infrastructures, and size limitations of cloud object stores.

This session will discuss the principles of cloud object stores, using the examples of Amazon S3, Microsoft Azure, Akamai NetStorage and OpenStack Swift, and performance benchmarks of their native HTTP I/O. It will share best practices in orchestrating complex, large-scale big data workflows. It will also examine the requirements and challenges of such IT infrastructure designs (on-premise, in the cloud or hybrid), including integration of necessary high-speed transport technologies to power ultra-high speed data movement, and adoption of appropriate high-performance network-attached storage systems.

The session will also explore how organizations across different industries are using big data in the cloud for ever-greater efficiencies and innovation, including those in the media and entertainment industry and in the field of life sciences.
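
As a concrete point of reference for the HTTP throughput and object-size limits discussed above, the sketch below uses boto3's multipart transfer settings to push a large file to an S3-compatible object store. The endpoint, bucket, and tuning values are illustrative assumptions, not recommendations from the session.

```python
# Illustrative multipart upload to an S3-compatible object store using boto3.
# Endpoint, bucket name, and tuning values are hypothetical placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # any S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Split large objects into parallel parts to work around single-stream HTTP limits.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MiB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MiB parts
    max_concurrency=8,                      # parallel part uploads
)

s3.upload_file("genomics_run_042.tar", "big-data-ingest", "runs/042.tar", Config=config)
```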

Learning Objectives

  • How to overcome the technical bottlenecks associated with cloud-based services
  • How to take advantage of the cloud for storing, managing and accessing big data
  • How to plan and implement complex, large-scale data workflows
 

CLOUD STORAGE


Private Cloud Storage for Different Enterprise Compute Clouds

John Dickinson, Director of Technology, SwiftStack


Abstract

Enterprise users are building clouds with VMware and Citrix Cloud Platform, neither of which has a de facto storage platform. OpenStack is used to build clouds and does have an object store at its core, but often the best answer for any cloud architecture is to use the best combination of technologies for the specific environment. This session will cover the process of selecting the right storage for different private compute clouds, focusing on OpenStack Object Storage and contrasting it with other vendor-specific offerings. Attendees will learn how object storage can be integrated with any compute cloud technology, as well as get more background on software-defined storage and on leveraging open source software and standard hardware to build private cloud storage.

Learning Objectives

  • Integrating object storage into any cloud
  • Learn more about software-defined storage
  • Learn about leveraging open source software
  • How to pick the right storage for private clouds
  • Build a combined cloud

The Cloud Hybrid “Homerun” – Life Beyond The Hype

Greg Schulz, Senior Advisory Analyst, StorageIO


Abstract

Do not be scared of clouds; be prepared. Learn about the options and identify concerns so that they can be addressed. Hybrid clouds provide an approach to bridge the gap, enable a phased transformation or deployment, or simply allow you to have a cloud your way, for your requirements. This conversation looks at why hybrid clouds, as technology, product, service, and management paradigm, are the home run for adapting to and enabling organizational needs. After all, technology should work for an organization rather than the organization working for the technology.

Learning Objectives

  • Why hybrids make sense for many environments
  • Hybrid use of clouds, spanning compute, application and storage I/O
  • How to determine when, where, why and how a hybrid cloud is in your future
  • Addresses common cloud concerns: control, costs, comfort, compliance, confidence

Delivering a Cloud Architecture

Alex McDonald, CTO Office, NetApp


Abstract

The emphasis in the design and implementation of cloud architectures is often on the virtualization and network aspects, and storage is often not considered in the overall design until late (often too late) in the implementation. This session will provide an overview of developing and delivering a cloud architecture, with a focus on getting the storage aspects correctly specified and defined. A commercial implementation of such a system will be presented as a case study on the benefits of treating storage as an important part of the process of delivering a practical cloud architecture.

Learning Objectives

  • Understand the place of storage in cloud architectures
  • Learn about specific storage requirements for cloud
  • Identify the issues in using storage in a cloud architecture

Leveraging the Cloud for Scalable and Simplified Data Storage

Connor Fee, Director of Marketing, Nasuni


Abstract

More and more organizations today are leveraging the public cloud for its infinite scale and economic benefits, but how can you leverage the key attributes of the cloud while still maintaining the control and performance that traditional storage allows? The answer: a cloud gateway. Whether you call it a cloud gateway or cloud-integrated storage, if your organization is looking to reduce storage costs, simplify management, or improve service levels to end users, join us as we discuss this growing trend among enterprises today.

Learning Objectives

  • How and why organizations are utilizing cloud gateways in their environment today
  • Key considerations when looking into cloud storage and gateways
  • How you can leverage a gateway to reduce costs and deliver a higher service level to your end users

Key Criteria When Building an Exascale Data Center

Philippe Nicolas, Director of Product Strategy, Scality


Abstract

With the rapid data growth associated with big data requirements, data centers are hitting a dead end and must consider radically new approaches. In a nutshell, these IT infrastructures must be highly scalable in both performance and capacity, provide a high level of data durability, be geographically deployed with multiple points of presence, offer multiple and flexible access methods, and finally support a comprehensive ecosystem. This session presents some modern approaches and illustrates how distributed storage tackles these challenges with several key technologies.

Learning Objectives

  • Understand new market trends
  • Learn new technology developments
  • Compare different approaches to select the right one for each usage
  • Integrate key criteria for large data centers

Choosing a File Sync and Share Solution

Darryl Pace, Storage Architect, Optimal Computer Solutions, Inc.


Abstract

Today, consumers have many choices for electronic file sharing, file synchronization, and collaboration. The ease of use and availability of these products have made their adoption for personal use nearly ubiquitous. The use of file-sharing products in consumers’ personal lives has naturally bled over into their business lives as well. However, while file synchronization and sharing products can facilitate efficiency and productivity in the workplace, the use of consumer grade and/or unauthorized versions of these products at work may introduce business challenges in the areas of data security, regulatory compliance, data governance, and e-discovery (legal).

The goal of this presentation is to show why and how one company selected a file synchronization and sharing solution, so that other companies can use the information from this example in their own selection of a file sync & share product.

Learning Objectives

  • What is file sync and share?
  • Why is it important for your company?
  • How does it work?
  • What are example selection criteria for a file sync and share solution?

Recover 2 Cloud - A Common Sense Approach to Disaster Recovery

Raj Krishnamurthy, Senior Product Manager, SunGard Availability Services


Abstract

Cloud ushers in a common-sense financial and operational model through increased scale and scope. However, it also significantly increases the risk for IT enterprises. SunGard Availability Services addresses these risks with a clear focus on disaster recovery and business continuity that expands beyond x86 virtualization capabilities.

Learning Objectives

  • Cloud as effective Disaster Recovery platform
  • High Availability vs. Disaster Recovery
  • Categories of protection/replication
  • Tiering protection portfolio
  • Managing complete recovery life cycle

Reaching to the Cloud

Subo Guha, VP of Product Management and Marketing, Unitrends


Abstract

To do more with less in today’s increasingly demanding IT environment, administrators need to reach higher rather than broader. It’s time to reach to the cloud. A growing cloud infrastructure now gives organizations the power to back up, store and secure an ever-expanding volume of data and applications within tightly-constrained IT resources and budgets.

In this session, Subo Guha, Unitrends, will discuss why a hybrid cloud model is the best option for organizations working with a cloud-based environment. Using the hybrid cloud approach, IT administrators get the best of both worlds – the speed, reliability and security of onsite backup and the intelligent replication capabilities of the cloud – ensuring their ability to consistently meet even the most rigorous SLAs, RPOs and RTOs.

Learning Objectives

  • How to most effectively use the cloud to assist in storage, backup and recovery
  • How local backups provide the fastest performance times
  • How cloud replication ensures the complete protection and recovery of data

S3 API Deep Storage Extensions for Hadoop

Stacy Schwarz-Gardner, Strategic Technical Architect, Spectra Logic


Abstract

A data revolution is occurring as more and more organizations discover new ways to extract value from their data. The desire to collect and analyze information for the sake of improving everything from business decisions to overall life experiences has driven data repositories to grow to sizes that were once inconceivable – driving new requirements for long-term mass storage aimed at increasing efficiency, lowering costs and improving access. Deep Simple Storage Service (DS3) is an open RESTful API that extends the Amazon S3 API specification by optimizing data transport and long-term data storage options for large datasets. Existing DS3 clients allow Hadoop clusters to take advantage of low-cost, highly scalable deep storage tiers for long-term data storage purposes. Hadoop clusters can focus on active datasets while being able to readily access data stored within the deep storage environment when necessary. For the first time, Hadoop users can leverage DS3 capabilities to manage growth of massive data analytic projects while keeping costs contained. This session will provide an introduction to DS3, and its integration with Hadoop and different storage media to provide cost-effective, long-term data retention.
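
Because DS3 is described above as an extension of the Amazon S3 API, baseline interaction with a DS3-capable endpoint can be sketched with an ordinary S3 client. The endpoint and bucket below are hypothetical, and the DS3-specific bulk-job extensions, which would normally go through a DS3 SDK, are not shown.

```python
# Baseline S3-style object operations against a hypothetical DS3-compatible endpoint.
# Only standard S3 calls are shown; DS3's bulk PUT/GET job extensions are omitted.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ds3.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="hadoop-cold-archive")

# Park an aged HDFS extract in the deep-storage tier...
with open("clickstream.parquet", "rb") as f:
    s3.put_object(Bucket="hadoop-cold-archive",
                  Key="2013/q4/clickstream.parquet", Body=f)

# ...and pull it back when the analytics cluster needs it again.
obj = s3.get_object(Bucket="hadoop-cold-archive", Key="2013/q4/clickstream.parquet")
data = obj["Body"].read()
```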

Learning Objectives

  • Understand new requirements for long-term mass storage
  • Receive an introduction to DS3, an open RESTful API that extends the Amazon S3 API
  • Learn how Hadoop users can use DS3 to manage massive data analytic projects

OpenStack Cloud Storage

Sam Fineberg, Distinguished Technologist, HP


Abstract

OpenStack is an open source cloud operating system that controls pools of compute, storage, and networking. It is currently being developed by thousands of developers from hundreds of companies across the globe, and is the basis of multiple public and private cloud offerings. In this presentation I will outline the storage aspects of OpenStack including the core projects for block storage (Cinder) and object storage (Swift), as well as the emerging shared file service. It will cover some common configurations and use cases for these technologies, and how they interact with the other parts of OpenStack. The talk will also cover new developments in Cinder and Swift that enable advanced array features, new storage fabrics, new types of drives, and searchable metadata.
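
To make the Swift side of this concrete, here is a minimal sketch using the python-swiftclient library against a hypothetical auth endpoint; the credentials, container, and object names are placeholders.

```python
# Minimal OpenStack Swift object round trip with python-swiftclient.
# Auth URL, credentials, and container/object names are hypothetical placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",  # TempAuth-style endpoint
    user="tenant:dsistorage",
    key="SECRET",
)

conn.put_container("backups")
conn.put_object("backups", "db/nightly-2014-04-22.dump", contents=b"...dump bytes...")

headers, body = conn.get_object("backups", "db/nightly-2014-04-22.dump")
print(headers.get("etag"), len(body))
```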

Learning Objectives

  • Learn what OpenStack is, and what storage support is available in OpenStack
  • Learn about the OpenStack block storage service, Cinder
  • Learn about the OpenStack object storage service, Swift
  • Learn about the emerging OpenStack shared file service, Manila
  • Learn about new developments in OpenStack storage

Combining SNIA Cloud, Tape and Container Format Technologies for Long Term Retention

Sam Fineberg, Distinguished Technologist, HP
Simona Rabinovici-Cohen, Researcher, IBM


Abstract

Generating and collecting very large data sets is becoming a necessity in many domains that also need to keep that data for long periods. Examples include astronomy, genomics, medical records, photographic archives, video archives, and large-scale e-commerce. While this presents significant opportunities, a key challenge is providing economically scalable storage systems to efficiently store and preserve the data, as well as to enable search, access, and analytics on that data in the far future. Both cloud and tape technologies are viable alternatives for storage of big data and SNIA supports their standardization. The SNIA Cloud Data Management Interface (CDMI) provides a standardized interface to create, retrieve, update, and delete objects in a cloud. The SNIA Linear Tape File System (LTFS) takes advantage of a new generation of tape hardware to provide efficient access to tape using standard, familiar system tools and interfaces. In addition, the SNIA Self-contained Information Retention Format (SIRF) defines a storage container for long term retention that will enable future applications to interpret stored data regardless of the application that originally produced it.

We'll present advantages and challenges in long term retention of big data, as well as initial work on how to combine SIRF with LTFS and SIRF with CDMI to address some of those challenges. SIRF for the cloud will also be examined in the European Union integrated research project ForgetIT.
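
Since CDMI is an HTTP/REST interface, a rough idea of what creating and retrieving an object in a cloud looks like can be sketched with plain HTTP. The endpoint, credentials, and retention metadata below are illustrative assumptions rather than a complete CDMI or SIRF workflow.

```python
# Rough sketch of creating and reading a CDMI data object over HTTP with requests.
# Endpoint, credentials, and metadata values are hypothetical placeholders.
import requests

BASE = "https://cdmi.example.com/cdmi"
HEADERS = {"X-CDMI-Specification-Version": "1.0.2"}
AUTH = ("archive_user", "SECRET")

# Create a data object with a retention hint carried as user metadata.
create = requests.put(
    f"{BASE}/archive/records/case-0042.txt",
    auth=AUTH,
    headers={**HEADERS,
             "Content-Type": "application/cdmi-object",
             "Accept": "application/cdmi-object"},
    json={"mimetype": "text/plain",
          "metadata": {"retention_hint_years": "30"},
          "value": "long-term record body"},
)
create.raise_for_status()

# Read it back; the value comes wrapped in the CDMI JSON representation.
read = requests.get(f"{BASE}/archive/records/case-0042.txt",
                    auth=AUTH,
                    headers={**HEADERS, "Accept": "application/cdmi-object"})
print(read.json()["value"])
```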

Learning Objectives

  • Importance of long-term retention
  • Challenges in long term retention
  • How SIRF works with tape and in the cloud

Database-as-a-Service vs. Do-It-Yourself MySQL in the Cloud

Cashton Coleman, CEO, ClearDB


Abstract

When you're looking to move your website, app, or your whole business from your traditional hosting provider, one of the first challenges you're faced with is whether or not to run your own MySQL database and MySQL-powered infrastructure (the DIY approach) vs. using a cloud database service to run it for you. The ability to run 80-100% of your app using readily available services to reduce time, resources, and cost, versus trying to conquer the world by yourself and "roll your own" components, including your database, is certainly very appealing. This session will look at the pros and cons of working with a cloud database-as-a-service.

Learning Objectives

  • The TCO of running your MySQL infrastructure in the cloud is higher than DBaaS
  • Why one server instance is typically never enough in the cloud when things fail
  • Looking at the features of highly scalable, fault-tolerant database services

COVER YOUR ASSETS (CYA)


Virtualization Drives New Approaches to Backup

Nikolay Yamakawa, Analyst, 451 Research


Abstract

Backup redesign was cited as the number two storage project two years in a row in 451 Research Storage studies, signaling changing enterprise needs in backup requirements. Traditional backup software is a dominating force in data protection, but as the rate of data growth accelerates and virtualization streamlines operations, stakeholders, whether enterprises or vendors, have to support the latest hypervisor features to stay competitive. Based on hundreds of interviews with storage professionals, I will report where enterprises are successfully redesigning backup.


Best Practices in the Management of Unstructured Data

Moderator:
Molly Rector, CMO, DataDirect Networks

Panelists:
David Cerf, Executive Vice President of Business and Corporate Development, Crossroads Systems
Mark Fleischhauer, Tape Solutions Manager, HP
Philippe Nicolas, Director of Product Strategy, Scality


Abstract

In order to thrive in the new world of petabyte-level data storage, organizations must look beyond traditional storage solutions. Big Data, cloud, back-up, data preservation, compliance, data center environments and the fast-paced growth of unstructured data are all driving the need for more advanced storage capabilities to effectively retain and access this explosion of information.

This panel session will showcase end user case studies of active archive implementations and strategies. An active archive solution delivers a new level of capability that allows companies to store and easily retrieve data in ways that were not previously possible outside of HPC and broadcast environments. Panel members will openly discuss how active archives enable straight-from-the-desktop access to files stored at any tier for rapid data access. These best practices in storage management can deliver significant efficiency and cost savings.

Learning Objectives

  • Learn how to use active archives for environments with unstructured data.
  • Learn how active archives combine the best advantages of many technologies.
  • Learn how to ensure data is stored on the best media type for that specific data

Protecting Data in the "Big Data" World

Thomas Rivera Sr., Technical Associate, Hitachi Data Systems


Abstract

Data growth is in an explosive state, and these "Big Data" repositories need to be protected. In addition, new regulations are mandating longer data retention, and the job of protecting these ever-growing data repositories is becoming even more daunting. This presentation will outline the challenges, methodologies, and best practices to protect the massive scale "Big Data" repositories.

Learning Objectives

  • Understand the challenges of managing and protecting "Big Data" repositories
  • Understand the technologies available for protecting "Big Data" repositories
  • Understand considerations and best practices for "Big Data" repositories

Data Protection in Transition to the Cloud

David Chapa, Chief Technology Evangelist, EVault


Abstract

Organizations of all types and sizes are moving many, but usually not all, applications and data to public and private clouds, and the hybrid environments thus created are an increasing challenge for those responsible for data protection. There are many new services available in the cloud for backup and disaster recovery that can help, but IT managers want to avoid setting up separate data protection procedures for each of the parts of their hybrid environments. Topics will include:

  • Trends in cloud usage and the impact on data protection
  • Challenges to the manageability of data protection brought on by the Cloud
  • New cloud-based tools and services available for backup and disaster recovery
  • Best practices for managing data protection in today's hybrid environment

Learning Objectives

  • Have a clear understanding of the impact of the Cloud on data protection
  • Gain knowledge of new cloud-based alternative approaches to data protection
  • Be better able to make good choices of data protection methods in and for the cloud

Reducing Backup Windows and Increasing Performance When Data Reaches the Terabyte Range

Mark McKinnon, Information Technology Architect, Grand River Conservation Authority


Abstract

Grand River Conservation Authority (GRCA) is one of the largest watershed management agencies in Canada, managing water and natural resources for nearly 40 municipalities and close to a million residents. When databases reached the terabyte range, the organization experienced difficulty meeting required 24-hour backup windows. GRCA had been through four purchasing cycles, including a disk-to-tape backup and a disk-to-disk-to-tape solution, and still couldn’t meet designated backup windows. After calculating a productivity deficit of over $400,000 if one month’s data were lost, they knew it was time to solve their backup and recovery problems, leading them to deploy a flash-optimized storage solution that allowed GRCA not only to reduce RPO from 1 day to 1 hour, but also to reduce backup windows, eliminate tapes, and store more than six years of file shares.

Learning Objectives

  • How to manage backup windows when your database reaches terabyte levels
  • Recovery best practices and the significance in reducing RPO times
  • How snapshot replication can eliminate the need for tape backups that require constant management
  • The importance of having a tool to continuously monitor sensitive data and provide storage analytics, specifically when working with a small IT staffing complement

Vendor Neutral Archive, the How and Why

Gary Woodruff, Enterprise Infrastructure Architect, Sutter Health


Abstract

The VNA (vendor-neutral archive) is expanding into the medical storage field well beyond the needs of PACS storage infrastructure. Clinical medical storage is a fast-growing need with several long-term problems that involve the depth of the solution, but it also has a horizontal dimension spanning well beyond the needs of an imaging department (radiology and cardiology). This presentation will dig into the other "ologies" that will confront the IT infrastructure, and how a VNA can resolve these issues or provide the glue that enables us to grow our systems across many different applications and data types, while also building a storage system that responds to ILM rules and policies, allowing us to "downsize", reuse, and scale the solution to better fit our needs.

Learning Objectives

  • Understand how the VNA works and the use of metadata
  • Horizontal growth of data types
  • Bridging siloed systems, and migrations
  • Apply metadata to your storage solutions
  • Tiering of data within the VNA

True Data Disaster Recovery

Matthew Kinderwater, Director of IT Services, iCube Development (Calgary) Ltd.


Abstract

Understanding how a data recovery lab actually works, the types of hard drive failures (both in and outside multi-disk environments), and how your data is recovered is essential when disaster strikes. Learn about the common mistakes IT professionals make when it comes to self-recovery. This session will clarify fact vs. fiction, disclose data recovery secrets, and ultimately help you keep your data safe.


New Challenges - New Solutions with LTO-6 Technology and LTFS

Shawn Brune, Sc.D., Business Line Manager, Data Protection and Retention, IBM, the LTO Program


Abstract

The changing dynamics of data availability to end users are creating new challenges for data center and IT managers. End users are demanding that data of any type and any age be available through a standard file system interface. The challenge for data managers is to balance file system availability with the cost of storing the data, up to 90% of which could be unaccessed, or 'cold', data. Studies have found that up to 95% of data not accessed in 90 days will never be accessed again.

Learn about best practices for managing file system access to data while lowering total cost of ownership, maintaining data security with tape drive encryption, utilizing disk and tape together to address objectives, and leveraging LTO-6 technology with the Linear Tape File System (LTFS).

Learning Objectives

  • Best practices for managing tiered data storage
  • Costs for storing data on disk and tape
  • How the open standard LTFS can be used for long-term data retention in a file system

FEATURED SPEAKERS


Enabling Data Infrastructure Return on Innovation – The Other ROI

Greg Schulz, Founder of StorageIO and Analyst, Consultant, Educator, and Author


Abstract

There is no such thing as an information recession; however, there are economic realities that require smart investments and innovation to move, process, protect, preserve, and serve data for longer periods. After all, both people and data are getting larger and living longer. Data infrastructures consist of server, storage, and I/O networking hardware, software, and services spanning physical, virtual, and cloud environments, along with associated techniques, best practices, policies, and people skill sets. This session looks at current and emerging trends, technologies, and tools, as well as challenges, along with what to do to bridge the gap between barriers and opportunities from both an IT customer and a vendor perspective. In addition, we will address popular industry buzzword bingo themes: the known and unknown, doing things the old way vs. new ways, cutting costs vs. removing cost, return on investment vs. return on innovation, and technolutionary, the meritage of technology revolution and evolution.


Architecting with Flash for Web-Scale Enterprise Storage

Andy Warfield, CTO / Co-Founder, Coho Data, and CS Professor, University of British Columbia


Abstract

The cost of flash has dropped dramatically in the past couple of years, and many organizations are excited at how this high-performance storage medium can be leveraged in their environments. In this session, University of British Columbia CS Professor and Founder of Coho Data, Andrew Warfield, will provide a primer on the challenges of architecting with flash in today's storage architectures and concepts that can be borrowed from web-scale IT. He will present test results from research his team has done on how performance can vary significantly depending on where flash is placed in a storage architecture and the amount of CPU and network bandwidth that is available, as well as explain the key considerations IT organizations must keep in mind as they think about implementing different solutions available on the market or building their own solution from commodity hardware.

Learning Objectives

  • Review of current flash price and configuration options
  • Architectural considerations for getting the best performance from PCI-e flash
  • Overview of web-scale storage architectures leveraging flash

Customer Case Study: Multi-Terabyte Database Backup and Restores Over High-Speed Networks

Marty Stogsdill, Architect, Discover Financial Services


Abstract

A customer case study providing perspective on Discover Financial Services' (DFS) OLTP database backup and restore evolution from a “traditional” enterprise tiered storage model to leveraging dedicated low-latency networks. DFS has evolved to an OLTP database backup and restore environment that went from days to hours and hours to minutes for 24x7x365 mission-critical production environments. The increased resiliency has provided DFS with the ability to keep more data online in OLTP and decision support systems, allowing for improved customer experiences and the ability to bring new products to market.

Discover Financial Services (DFS) is a credit card issuer and electronic payment services company. DFS has many household-name brands, including the Discover Card, the Discover (formerly Novus) network, Diners Club International, the Pulse debit network, and Discover Bank, and it issues student, personal, and home loans.

Learning Objectives

  • Requirements gathering process
  • Architecture creation process
  • Deploying a solution that meets or exceeds the evolving expectations of IT executives
  • Continuous improvement process for IT deployments through automation

Simplify Branch Office Recovery and Disaster Avoidance with Riverbed

Rob Whiteley, VP, Product Marketing, Riverbed


Abstract

Riverbed delivers a new approach to branch storage consolidation enabling organizations to replace distributed branch backup processes with a datacenter-centric approach where advanced EMC storage, snapshots, and backup capabilities can be used. Riverbed enables instant recovery of servers and data from the datacenter, simplifying disaster avoidance processes and dramatically reducing disaster recovery times.

Learning Objectives

  • Improve branch recovery point objectives (RPOs) and recovery time objectives (RTOs)
  • Centralize all backup operations – stop deploying backup servers in remote offices
  • Simplify disaster avoidance and eliminate downtime by temporarily moving operations from a branch to other locations

Disrupting the Storage Industry? - How Small Startups Can Change the Traditional, Stuffy Storage Landscape

Adrian Cockcroft, Technology Fellow, Battery Ventures


Abstract

Two trends are disrupting the storage industry. One is the expected march of technology, as commoditization turns high-end features into basic needs and turns exclusive custom hardware into open source software. The other is organizational: as DevOps is used to increase the speed of development, silos are broken down, automation replaces manual operations, and the storage admin is replaced by an API call. Solid state disk has moved from attached storage to the I/O bus, and is now about to appear directly on the memory bus. Highly available replication of that storage has moved out of the storage domain, using open source NoSQL data stores and distributed filesystems to create clouds based on Redundant Arrays of Independent Nodes (RAIN).


The Role of High Performance Storage in Accelerating Applications

Andy Walls, CTO and Chief Architect for Flash Systems, IBM


Abstract

It is well known that storage bandwidth has not been able to keep up with the tremendous increase in CPU speed, memory throughput, and CPU I/O throughput. That has been changing of late, as technologies such as Flash have made dramatic increases in throughput and IOPS and have done so in smaller form factors and with less power dissipated than HDDs. This talk will explore the state of the art of high-performance storage, including direct-attached as well as SAN-attached, and how such storage is a powerful accelerator for many types of applications, such as analytics. The state of the art in endurance and performance with such technology will also be discussed.


Scale-out Object Store for PB/hr Backups and Long Term Digital Archive

Gideon Senderov, Director Advanced Storage, NEC


Abstract

The primary challenge facing enterprise deployments is how to effectively scale a capacity-optimized solution across massive amounts of long-lived data for data protection and disaster recovery. Scale-out object store architectures enable massive modular scalability to support long-term data retention, while maintaining simple shared access and maximum utilization via standard front-end interfaces. Underlying object store architectures enable advanced features such as scalable, efficient inline global data deduplication, dynamic provisioning and automatic data distribution across a multi-generation grid, flexible data resiliency with erasure coding and multi-node data distribution, and distributed background task workloads. This session will review how such an architecture is used within NEC’s HYDRAstor scale-out storage system to enable extreme scale-out throughput and capacity for backup and archive of long-term data, while maintaining a simple management paradigm with a self-healing, distributed, resilient system.


Software Defined Storage @ Microsoft

Siddhartha Roy, Group Program Manager, File Server and Clustering, Cloud and Enterprise, Microsoft Corporation


Abstract

The storage industry is going through strategic tectonic shifts fostering renewed innovation! Hear about Storage @ Microsoft – public cloud, private cloud, and hybrid cloud storage. We will walk through Microsoft’s Software Defined Storage journey – how we got started, what our customers are telling us, where we are now, and how cloud cost and scale inflection points are shaping the journey. We will delve into how Microsoft is channeling lessons learned from hosting some of the largest data centers on the planet toward private cloud storage solutions for service providers and enterprises.


HOT TOPICS


How To Get The Most Out of Flash Deployments

Eric Burgener, Research Director, Storage, IDC


Abstract

There are big differences in the efficiencies and costs associated with flash deployments, depending on the location and architectures selected. This presentation will review why and how enterprises are deploying flash, which application environments are a good fit and why, and then discuss the storage functionality enterprises should consider when looking to deploy flash. Features will be discussed in general and not with respect to specific vendor implementations.


Storage Technology Solution Presentation

Speaker To Be Announced, Presented by CommVault Technologist


Abstract

Abstract Pending


The Future Technology for NAND Flash

Sylvain Dubois, Senior Director, Strategic Marketing & Business Development, Crossbar, Inc.


Abstract

NAND Flash technology has been fueling various data storage systems for the last three decades by advancing the technology and lowering the cost per bit. However, due to down-scaling, the continuing reliability and performance degradation of raw NAND Flash storage is causing major overhead and performance issues at the data storage system level. Complex algorithms and advanced memory controller architectures will not be sufficient to work around the inherent limitations of NAND Flash.

Crossbar’s 3D ReRAM CMOS-compatible technology has superior characteristics. It will provide storage systems with unprecedented density, much faster read and write performance, a no-block-erase architecture, small-page alterability, and higher retention and reliability. Memory architectural features of Crossbar ReRAM-based devices will provide emerging storage systems with significant performance gains, simplification, and reductions in system overhead and cost. In this presentation, we will review the differentiating characteristics and product features of NAND and Crossbar ReRAM technology, and demonstrate data storage system performance calculations utilizing NAND and Crossbar ReRAM.


PERFORMANCE


Benchmarking the New Storage Technologies

Bob Hansen, Managing Director, Kitaro Consulting


Abstract

Given the frantic pace of storage technology innovation today, it is more important than ever to sort out real customer value from marketing hype. “1 million IOPS!” makes a great headline but may have no correlation to the performance that can be achieved under a production workload. This presentation will discuss Storage Performance Council benchmarks and their application to new storage technologies. Subjects will include the value of using complex workloads, solution and component applications, and the SPC benchmark disclosure, as well as benchmark applications for end users, product architects, planners, development, marketing, and sales. Current benchmark results will be used to demonstrate how SPC benchmarks are applied to the latest storage technologies, including flash and hybrid configurations. The presentation will conclude with a discussion of future directions for storage benchmarking.

Learning Objectives

  • Understand the difference between focused and comprehensive benchmarks
  • Develop a basic understanding of the SPC-1 and SPC-2 benchmarks and applications
  • Learn how to relate benchmark results to production applications
  • Learn how to use benchmark results throughout the product life cycle

PROFESSIONAL DEVELOPMENT


Consumerization of IT - What is Right for Your Organization?

Marty Foltyn, President, BitSprings Systems


Abstract

Consumerization is a reality in enterprises today. Is your staff weighing the pros and cons of integrating social media, and how to propose an implementation strategy – if at all? How will consumer practices and issues like BYOD (bring-your-own-device) affect the workplace? As an executive, do you have the knowledge you need to advise, recommend and approve? In this interactive session, we'll discuss how consumerization is affecting your IT operations, the strengths and shortcomings of various approaches, and best practices for moving forward with a strategy that works for your organization. Examine how companies have attempted to implement a strategy and policy in their organizations and better understand their successes, common pitfalls, and roadblocks. Learn how to define a strategy, and gain tips and techniques to get started.

Learning Objectives

  • Better understand how to define a "consumerization of IT" strategy and tactics
  • Learn about the advantages and disadvantages of consumerization

Reaction Management - Trend for All Technology Professionals

David Deming, Chief Technologist, Solution Technology


Abstract

Every experience and choice we make starts in our brain. What we’re aware of, how we think and feel, and how we respond (or react) to life and work situations depend on the combination of our brainwave patterns and our ability to process information.

Understanding how our brains function and learning how to manage our brain patterns gives us the freedom to make responsible versus reactionary choices.

This experiential session will introduce you to “reaction management” techniques, basic technology of the brain, brainwave pattern characteristics and how you can access deep levels of creativity to solve complicated issues and relieve stress.


SECURITY


Interoperable Key Management for Storage

Subhash Sankuratripati, Technical Director, NetApp and Co-Chair, KMIP TC, OASIS


Abstract

A standard for interoperable key management exists, but how do you ensure interoperability? Practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability-testing multiple vendor implementations of KMIP forms the basis of this presentation.

Also covered is an in-depth analysis of the SNIA SSIF KMIP conformance testing program and its importance in delivering on the promise of interoperable key management for storage products.
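
As a rough illustration of what interoperable key management means in practice, the sketch below uses the open source PyKMIP client to create and fetch an AES key from a KMIP-speaking key manager. The configuration section name and server details are assumptions, and vendor servers may require additional setup.

```python
# Rough sketch: create and retrieve an AES key via KMIP using the PyKMIP client.
# The client configuration ("client" section of pykmip.conf) and the server it
# points at are assumptions; any KMIP-conformant key manager should respond similarly.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

with ProxyKmipClient(config="client") as client:
    # Ask the key manager to generate a 256-bit AES key and return its unique ID.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # Retrieve the managed object later (e.g., from a storage array or backup app).
    key = client.get(key_id)
    print(key_id, key.cryptographic_algorithm, key.cryptographic_length)
```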

Learning Objectives

  • In-depth knowledge of the core of the OASIS KMIP
  • Awareness of requirements for practical interoperability
  • Guidance on the importance of conformance testing
  • Details of the SSIF KMIP conformance testing program

Practical Secure Storage: A Vendor Agnostic Overview

Walt Hubis, Storage Standards Architect, Hubis Technical Associates


Abstract

This presentation will explore the fundamental concepts of implementing secure enterprise storage using current technologies. This tutorial will  focus on the implementation of a practical secure storage system, independent of any specific vendor implementation or methodology. The high level requirements that drive the implementation of secure storage for the enterprise, including legal issues, key management, current technologies available to the end user, and fiscal considerations will be explored in detail. In addition, actual implementation examples will be provided that illustrate how these requirements are applied to actual systems implementations.

This presentation has been significantly updated to include current and emerging technologies, and to include changes in international standards (e.g., ISO/IEC) for secure storage.

Learning Objectives

  • Understand the drivers for secure storage
  • Understand the key technical concepts of secure storage
  • Explore cost-effective methods for securing storage in a variety of environments

Best Practices for Cloud Security and Privacy

Eric Hibbard, Senior Director, Data Networking Technology, Hitachi Data Systems


Abstract

As organizations embrace various cloud computing offerings it is important to address security and privacy as part of good governance, risk management and due diligence. Failure to adequately handle these requirements can place the organization at significant risk for not meeting compliance obligations and exposing sensitive data to possible data breaches. Fortunately, ISO/IEC, ITU-T and the Cloud Security Alliance (CSA) have been busy developing standards and guidance in these areas for cloud computing, and these materials can be used as a starting point for what some believe is a make-or-break aspect of cloud computing.

This session provides an introduction to cloud computing security concepts and issues as well as identifying key guidance and emerging standards. Specific CSA materials are identified and discussed to help address common issues. The session concludes by providing a security review of the emerging ISO/IEC and ITU-T standards in the cloud space.

Learning Objectives

  • General introduction to cloud security threats and risks
  • Identify applicable materials to help secure cloud services
  • Understand key cloud security guidance and requirements

Implementing Stored-Data Encryption

Michael Willett, Storage Security Strategist, Samsung


Abstract

Data security is a top priority given the punitive costs of security breaches. Combined with litigation risks and compliance issues, companies face a crowded market of products claiming to protect data at rest. The storage industry has standardized and deployed technologies to secure stored data. This tutorial will highlight drive-level self-encryption technology, which provides a secure foundation, and compare it with other methods, from host-based to controller-based encryption. Self-encryption will also be compared to software-based encryption.

Learning Objectives

  • The mechanics of SEDs, as well as application and database-level encryption
  • The pros and cons of each encryption subsystem
  • The overall design of a layered encryption approach


SOFTWARE DEFINED STORAGE


Software-Defined Storage in Windows Server 2012 R2 and System Center 2012 R2

SW Worth, Senior Standards PM, Microsoft Corporation


Abstract

Microsoft is enabling strategic shifts to reducing storage costs with Windows Server 2012 R2 software-defined storage. The server operating system and hypervisor (Hyper-V) are "cloud-optimized" for private clouds, hybrid clouds, and public clouds from Hosters and Cloud Service Providers (CSPs), using industry standard servers, networking, and JBOD storage. This allows reduction of capital and operational expenses for storage and availability features. This presentation will focus on the use of SMB3 for file-based storage for server applications, and 'Storage Spaces' for cost-effective business critical storage using JBOD. We'll also touch on other storage-related aspects of the operating system, such as iSCSI Target, deduplication, ODX (offloaded data transfer), Trim/Unmap, file system advances (ReFS), and even NFSv4.1. Finally, System Center Virtual Machine Manager manages all aspects of traditional and software-defined storage based on standards such as SMI-S.

Learning Objectives

  • Software-defined storage features of a leading server operating systems
  • File-based storage using SMB3 protocols
  • 'Storage Spaces' erasure coding technology allows JBOD for enterprise storage
  • Standards-based storage management using SMI-S

Deploying Software Defined Storage for the Enterprise with Ceph

Paul Von Stamwitz, Senior Storage Architect, Fujitsu


Abstract

Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability. These services are unified into a single system through Ceph's underlying distributed object store, RADOS. Not only does RADOS abstract the physical storage hardware, its unique software-defined architecture allows it to control the appropriate level of reliability and performance for various storage services. This talk will discuss some approaches that can be used with Ceph to address enterprise storage requirements.
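
For orientation, the sketch below talks to RADOS directly through the librados Python binding, which is the layer the block, object, and file services above are built on; the cluster configuration path and pool name are assumptions.

```python
# Minimal RADOS round trip using the librados Python binding ("rados" module).
# The ceph.conf path and pool name are assumptions for illustration.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

try:
    ioctx = cluster.open_ioctx("rbd")          # any existing pool works here
    ioctx.write_full("hello-object", b"stored and replicated by RADOS")
    print(ioctx.read("hello-object"))
    ioctx.close()
finally:
    cluster.shutdown()
```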

Learning Objectives

  • An overview of Ceph from a SDS perspective
  • Why Ceph is a viable component for Software Defined Data Centers
  • Design considerations and challenges in deploying Ceph for the Enterprise

Software Defined Storage - Storage Management Analytics

Ramani Routray, STSM and Manager, IBM Master Inventor, IBM Almaden Research


Abstract

The recent emergence of cloud technologies has provided an interesting business model for both customers and cloud providers. However, from a management standpoint, an important problem that remains unsolved in modern data centers is workload optimization across storage, network, and compute. As application diversity has grown tremendously, future storage management will require efficient provisioning and cost-effective, secure, easy-to-use interoperability across products that place different demands on the underlying storage system. From a Software-Defined Storage perspective, an integrated approach to extending storage services from the storage systems brings storage performance and cost into lockstep with the scaling of virtual infrastructure. The Software-Defined Data Center will provide automation, agility, flexibility, and efficiency to transform the delivery of IT. This talk highlights an OpenStack-based IaaS with building blocks for these next-generation data centers, evolving toward systems that are software controlled, high performing, highly available, flexible, and scalable. This IaaS framework accommodates the dynamic demands of highly virtualized workloads at low cost on top of efficiently utilized commodity and/or specialized hardware. Advanced storage management analytics such as i) performance-aware storage placement with device models, ii) storage resiliency for disaster protection, and iii) storage tiering are some of the key features highlighted.

Learning Objectives

  • OpenStack based Software Defined Storage
  • Advanced Storage Management Analytics
  • Performance aware storage placement
  • Storage resiliency for disaster protection
  • Storage tiering - ILM

The Meaning and Value of Software Defined Storage

Doug Voigt, Distinguished Technologist, HP and SNIA Technical Council


Abstract

Software defined storage has emerged as an important concept in storage solutions and management. However, the essential characteristics of software defined storage have been subject to interpretation. This session defines the elements that differentiate software defined storage solutions in a way that enables the industry to rally around their core value. A model of software defined storage infrastructure is described in a way that highlights the roles of virtualization and management in software defined storage solutions.

Learning Objectives

  • Learn about the characteristics of various software defined storage solutions
  • Understand a vendor neutral model that melds these characteristics together
  • Learn how those characteristics create flexible storage solution value

Thinking Outside the Box with Software

Yoram Novick, Founder, President and Chief Executive Officer, Maxta, Inc.


Abstract

What if a user could re-architect their storage infrastructure to eliminate storage management and storage arrays altogether? Maxta will present ideas on how a software-defined storage platform that integrates and delivers compute and storage on the same pool of commodity, off-the-shelf hardware is the solution. Storage looks like layered software, running in virtual machines on pools of industry-standard servers. Pooling and sharing all resources – CPU, memory, capacity – delivers resource and operational efficiency, since there is only one abstracted system to manage versus isolated physical systems. The savings and simplification of this new model, in both OPEX and CAPEX, are compelling.

Learning Objectives

  • The next big step in simplifying IT
  • The Software-Defined platform – how it works
  • Cost Savings

Building Multi-Purpose Storage Infrastructure Using a Software-Defined System

Paul Evans, Principal Architect, Daystrom Technology Group


Abstract

Multiple technologies are converging to make the automated deployment of infrastructure faster, easier, and more reliable, but gaps in communication remain between Applications, the Data Sets, and the Infrastructure itself that prevent the formation of an optimal System. The Software-Defined Systems model seeks to bridge those gaps, and new advances in SDSys allow for the visual design and deployment of a variety of advanced storage infrastructure. This paper will discuss the SDSys model, the communications, and some sample deployments that include a Reliable Archive, 4K Content Review, and Scientific Analytics - all using the same infrastructure building blocks.

Learning Objectives

  • Review SDSys Architecture and Components
  • Discuss protocols used for SDSys communications
  • Compare SDSys to Infrastructure (IaaS) and Platform (PaaS) models
  • Detail use cases for SDSys in a variety of production application environments

Bring Back the Flexibility of the Cloud with SDS and Open Source

Michael Letschin, Sr. Product Manager, Nexenta


Abstract

The cloud today is a combination of service providers or corporations and the solution stacks they provide. Open source stacks dominate this space, but deploying them cost-effectively is limited with traditional storage arrays. Software-defined storage is the solution to the excessive TCO that these providers face. Utilizing the latest unified storage systems for block, file, and even object storage, SDS gives providers and corporations the flexibility that is promised in the cloud of today.

Learning Objectives

  • Advantages of SDS
  • File system comparisons
  • Calculating TCO for Open Source and SDS solutions
  • Cloud Stack Comparison
  • Cloud Automation product overview

Software Defined Storage – the New Storage Platform

Anil Vasudeva President & Chief Analyst, IMEX Research


Abstract

The runaway success of virtualization lies in software-driven provisioning of pooled resources to optimally meet workload requirements for an efficient data center. The software-defined data center takes the next step, driving storage services for data protection and storage efficiency techniques such as encryption, compression, snapshots, deduplication, and auto-tiering, so that storage becomes an efficiently integrated and dynamically active system and not merely a passive keeper of data.

Learning Objectives

  • This presentation delineates how Software Defined Storage is evolving
  • The session will provide a clear understanding of how Software Defined Data Centers take the next step
  • This session will appeal to Development Managers, System Integrators and IT Managers

SOLID STATE STORAGE


Shining Light on the DIMM Slot

Adrian Proctor, VP Marketing, Viking Technology


Abstract

As data sets continue to grow, IT managers have begun seeking out new ways for flash to be deployed in the data center in order to take greater advantage of its performance and latency benefits. With traditional interfaces such as SAS, SATA, and PCIe already taking advantage of flash and widely deployed, the focus has shifted to non-traditional interfaces in order to further penetrate current infrastructure. This has led to the emergence of new solutions that leverage the DDR3 interface, utilizing existing DIMM slots in server hardware.



Storage Speed and Human Behavior

Eric Herzog, CMO, Violin Memory


Abstract

People want instant gratification. From news on Twitter to Google searches and McDonald’s cheeseburgers, humans are becoming conditioned to the speed of now. But how important is it for business? When considering your data center, how much of an impact does performance really have? What are you willing to pay for in terms of performance? What kinds of behaviors change when things go from fast to really fast? And is fast ever going to be fast enough? Eric Herzog of Violin Memory will discuss these questions.

Learning Objectives

  • Human response to different latencies
  • Business impact of latency
  • Technologies to reduce latency in compute-systems
  • Flash array impact on latency

SSD Synthetic Enterprise Application Workloads

Eden Kim, CEO, Calypso Testers


Abstract

Presentation of Enterprise class synthetic workloads used in SSD performance analysis and product qualification. Examination of a wide range of workloads typically seen in a variety of common Enterprise applications including, but not limited to, OLTP, SQL server, database, VOD, digital imaging, VDI and more. Survey of advanced tests, analysis of response time distributions and operating demand intensity and performance relative to application response time ceilings.
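
To give a flavor of what a synthetic access pattern looks like, the short sketch below generates a hypothetical 4 KiB, 65/35 read/write random workload of the kind such tests parameterize; it produces the pattern only and is not the test methodology described in this session.

```python
# Generate a hypothetical synthetic access pattern: 4 KiB transfers, random offsets,
# 65% reads / 35% writes over an 8 GiB test region. Illustration only; this is not
# the SSD test methodology described in the session.
import random

BLOCK = 4 * 1024                 # 4 KiB transfer size
REGION = 8 * 1024**3             # 8 GiB active range
READ_MIX = 0.65                  # 65/35 read/write mix

def synthetic_workload(n_ios, seed=1):
    rng = random.Random(seed)
    for _ in range(n_ios):
        offset = rng.randrange(0, REGION // BLOCK) * BLOCK   # block-aligned offset
        op = "read" if rng.random() < READ_MIX else "write"
        yield op, offset, BLOCK

# Peek at the first few I/Os of the pattern.
for io in synthetic_workload(5):
    print(io)
```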

Learning Objectives

  • Identify synthetic workload access patterns
  • Define multiple stream pre conditioning loads
  • Analyze workload performance
  • Evaluate IOPS v Response Times v Demand Intensity

Benefits of Flash in Enterprise Storage

David Dale, Director Industry Standards, NetApp


Abstract

Targeted primarily at an IT audience, this presentation, an update to a popular SNIA Tutorial, provides an overview of the impact of NAND Flash on enterprise storage systems. After describing the architectural impact, the session goes on to describe where Flash fits in today's enterprise storage solutions, with descriptions of specific use cases. Finally, the presentation speculates on what the future will bring.


The New Flash Architecture ... Wait ... There's a New One Already?

Marco Coulter, Vice President, 451 Research


Abstract

When flash (solid state) arrived for storage, there was a lot of talk and VC investment around the all-flash array, or AFA. But flash offers many choices, and not all are popular. Beginning with current deployments, based on hundreds of interviews with storage professionals, this session will show how deployments start slowly at first for specific purposes, such as improving the performance of a specific application. But as the magic cost-vs.-benefit line is crossed, we may see the replacement of disk capacity with solid state. Each of the solid state options, including hybrid arrays, solid state in servers, and all-flash arrays, has different benefits and costs associated with it. Is 2014 the time to begin? Where and how are storage professionals applying this technology to justify the investments?


Paving the Way to the Non-Volatile Memory Frontier

Doug Voigt, Distinguished Technologist, HP


Abstract

Flash technology has fueled an explosion of new storage solutions and is now opening up new non-volatile memory frontiers. The convergence of storage and memory has profound implications for systems and applications that will drive change for years to come. This session describes how new technologies will impact storage and memory solutions and what is being done to pave the way. We will establish the context for, and the significance of, ongoing work by SNIA’s NVM Programming Technical Working Group.

Learning Objectives

  • Learn what new NVM technologies are on the horizon and how they impact systems
  • Learn what SNIA’s NVM Programming TWG is doing to enable these new technologies
  • Learn how applications will evolve to create new solutions using NVM technology

Utilizing Ultra-Low Latency within Enterprise Architectures

Page Tagizad, Senior Product Marketing Manager, SanDisk


Abstract

Gathering real-time information has become more important than ever for enterprises that want to compete. To provide real-time data access, today’s applications are forced to overcome a new bottleneck – storage – leading data center managers to seek better response times. However, the question remains: how can storage be reimagined to deliver even lower latency and higher performance?


Changing the Economics of Flash Storage: Delivering Flash at the Price of Disk

Mike Davis, Director of Marketing, File Storage Solutions, Dell


Abstract

Virtualization and the explosion of I/O-intensive applications for big data analytics and database transactions increase the demand for higher levels of storage performance, while the growth of unstructured data drives the need for capacity-optimized storage. This session will explore how to balance performance, capacity and cost with a scale-out architecture that lets organizations adjust their infrastructure cost-effectively with new technology, such as flash and tiering, to address growing workload and capacity needs rather than taking a rip-and-replace approach. We will discuss how to achieve flash performance at the price of an all-disk solution, and the benefits of tiering across SLC, MLC and HDDs while increasing capacity with hybrid arrays that offer a lower price per GB than all-flash arrays.
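
As a back-of-the-envelope illustration of the economics described above, blending a small flash tier with HDD capacity can land close to disk pricing. The per-GB prices and tier split below are placeholder assumptions for the arithmetic, not vendor figures.

    # Illustrative blended cost of a hybrid (flash + HDD) pool vs. all-flash.
    price_per_gb = {"SLC": 4.00, "MLC": 1.50, "HDD": 0.05}   # assumed $/GB

    # Assume ~10% of capacity on flash absorbs the hot working set.
    tiers = {"SLC": 0.02, "MLC": 0.08, "HDD": 0.90}          # fraction of total capacity

    blended = sum(price_per_gb[t] * frac for t, frac in tiers.items())
    print(f"Blended hybrid cost: ${blended:.3f}/GB")         # ~ $0.245/GB
    print(f"All-MLC flash cost:  ${price_per_gb['MLC']:.2f}/GB")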

Learning Objectives

  • Questions to ask before adding flash to existing data center infrastructure
  • Benefits of flash like lower cost for performance, lower power consumption
  • Benefits of tiering like improving performance for data intensive apps
  • Best practices from customer iland Internet Solutions' use of flash and tiering

STORAGE ARCHITECTURE


Storing 85 Petabytes of Cloud Data without Going Broke

Gleb Budman, CEO, Backblaze


Abstract

One way to store data, especially bulk data, is to outsource the storage to someone else. For 85 Petabytes, that would cost at least a couple of million dollars a month with a service such as Amazon S3. On the other end of the spectrum, you can build your own storage, deploy it to a colocation facility, and then staff the operation and management of everything. In this session we’ll compare these two alternatives by covering the challenges and benefits of rolling your own data storage versus outsourcing the entire effort. The insights presented are based on real-world observations and decisions made in the process of growing a data center from 40 Terabytes to 85 Petabytes over a five-year period.
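
To make the outsourcing figure concrete, the arithmetic behind "a couple of million dollars a month" looks roughly like the sketch below. The per-GB rate is an assumed ballpark for illustration only, not a quoted price or a figure from the presenter.

    # Rough monthly cost of storing 85 PB in a public object store.
    petabytes = 85
    gb = petabytes * 1_000_000            # 85 PB ~= 85,000,000 GB (decimal units)
    rate_per_gb_month = 0.03              # assumed blended $/GB-month

    monthly_cost = gb * rate_per_gb_month
    print(f"~${monthly_cost:,.0f} per month")   # ~ $2,550,000 per month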

Learning Objectives

  • Identify different types of data (transactional, bulk, et al.) and how each merits different treatment
  • Understand the requirements in storing and managing 85 Petabytes of bulk storage
  • Outline the financial implications of choosing a given storage model
  • Compare different strategies of storing 85 Petabytes of data

Extending the Benefits of HDD: Breaking Down Walls All Storage Vendors Face

Keith Hageman, Storage Technology Evangelist, X-IO Technologies


Abstract

This presentation describes how to extend the benefits of HDD and break down the walls that every storage vendor struggles with: disk replacements, RAID rebuilds, and performance drop-off at greater than 40% used capacity.

Learning Objectives

  • Adaptive Disk Queuing Models
  • Properly Striping Data Across HDDs
  • Eliminating Disk and RAID groups
  • Creating an Environment where multiple HDDs work together
  • Self-Healing HDD Techniques

Project Fermi - A Highly Available NAS Gateway Built from Open Source Software

Dan Pollack, Senior Operations Architect, Aol Inc.


Abstract

The presentation describes the components and implementation of a highly available NAS gateway built from open source software. Using industry-standard hardware along with an open source OS and HA clustering tools, AOL has built a replacement for high-cost commercial NAS systems. The performance and availability characteristics approach the capabilities of high-cost commercial systems at a fraction of the cost.

Learning Objectives

  • Using open source for storage
  • Combining existing components to create robust systems
  • Tuning and lessons learned when building systems from scratch

What is Old is New Again: Storage Tiering

Gideon Senderov, Director, Advanced Storage Products, NEC


Abstract

Although physical tiering of storage has been a common practice for decades, new interest in automated tiering has arisen due to increased availability of techniques that automatically promote “hot” data to high performance storage tiers – and demote “stale” data or compliance data to low-cost tiers.
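
A minimal sketch of the promote/demote logic behind automated tiering is shown below. The thresholds, tier names and data structures are illustrative assumptions, not any vendor's implementation.

    def retier(extents, hot_threshold=1000, cold_threshold=10):
        """One illustrative tiering pass: promote busy extents to the fast
        tier, demote idle ones to the capacity tier. Each extent is a dict
        with "id", "io_count" and "tier" keys."""
        for ext in extents:
            if ext["io_count"] >= hot_threshold:
                ext["tier"] = "flash"        # promote "hot" data
            elif ext["io_count"] <= cold_threshold:
                ext["tier"] = "nearline"     # demote "stale" data
            ext["io_count"] = 0              # reset counters for the next interval
        return extents

    extents = [{"id": 1, "io_count": 5000, "tier": "sas"},
               {"id": 2, "io_count": 2, "tier": "sas"}]
    print(retier(extents))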

Learning Objectives

  • Understand tiering fundamentals and benefits
  • Understand trends in automated Tiering
  • Understand Tiering Best Practices


Rightsizing Tiered Storage Systems

Octavian Paul Rotaru, IT Project Manager and SAN/Storage/Backup Line Leader, Amdocs


Abstract

A multi-tiered storage system with automated data movement provides the best solution for managing the data explosion IT is experiencing. While tiered storage strategies can cut enterprise data storage costs and address storage capacity issues, rightsizing the storage tiers is a difficult exercise in many environments.

The purpose of this lecture is to go over the commonly used tier-sizing algorithms and methods (usually based on IO skew calculation), explain their shortcomings in different workload contexts (cyclical workloads specific to the telecom industry, high-performance workloads, etc.), and propose a new storage tiering estimation method that attempts to solve these issues and provide more accurate estimates.

An unpredictable rate of storage growth and fluctuations in data rates often lead to performance issues. Automated storage tiering software can solve this problem and optimize storage allocation for performance, provided the tiers are sized correctly.

The lecture will also discuss the impact on the tiering mix of factors that are usually overlooked: data movement speed and tiering overheads, storage-based replication, snapshots and clones, IO size, sequential vs. random IO, etc.
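
For readers unfamiliar with IO-skew-based sizing, below is a minimal sketch of the conventional estimate the lecture critiques. All numbers and names are illustrative; the point is that the estimate ignores exactly the factors listed above.

    def flash_tier_fraction(io_counts, target_io_coverage=0.90):
        """Classic IO-skew estimate: the smallest fraction of extents that
        captures `target_io_coverage` of all I/O. It ignores movement speed,
        replication, IO size and sequentiality."""
        counts = sorted(io_counts, reverse=True)
        total, running = sum(counts), 0
        for n, c in enumerate(counts, start=1):
            running += c
            if running >= target_io_coverage * total:
                return n / len(counts)
        return 1.0

    # A skewed workload: 10 hot extents receive most of the I/O.
    io_counts = [10_000] * 10 + [100] * 90
    print(f"Flash tier ~ {flash_tier_fraction(io_counts):.0%} of capacity")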


The Root Cause of Unstructured Data Problem Is Not What You Think

Bruce Thompson, CEO, Action Information Systems


Abstract

AIS, through its Expedite product, will show that the root cause of the unstructured data problem lies in its very definition. What business users think of as unstructured data is not a “pile of files” but information assets, an asset being a set of files, metadata, logs, emails, people, and rules that collectively constitute an entity meaningful to the business. Users work with contracts, quotes, invoices, engineering reports, etc., not strings of bytes.

This new perspective forces a fundamental shift for storage from essentially being ignorant of what it is storing to knowing what the data is, how it is used, who needs to use it, etc. By layering information asset management over the existing storage infrastructure, this precious, new-found knowledge dramatically changes the way storage functions are funded, implemented, configured, triggered, integrated, and valued. Currently impossible functions turn out to be relatively straightforward with this new view of unstructured data.
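
As a purely hypothetical illustration of treating an information asset as more than a pile of files (this is not the Expedite product's data model), the entity described above could be modeled roughly as:

    from dataclasses import dataclass, field

    @dataclass
    class InformationAsset:
        """Illustrative model: the files plus the context that makes them
        meaningful to the business."""
        name: str
        files: list = field(default_factory=list)      # paths or object IDs
        metadata: dict = field(default_factory=dict)   # business attributes
        people: list = field(default_factory=list)     # owners, reviewers
        rules: list = field(default_factory=list)      # retention, access policies
        activity_log: list = field(default_factory=list)

    contract = InformationAsset(name="Contract #1234",
                                files=["contract.docx", "signature.pdf"],
                                people=["legal@example.com"],
                                rules=["retain-7-years"])
    print(contract.name, "->", len(contract.files), "files")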

Learning Objectives

  • The root cause of unstructured data problems lies in its very definition
  • Users work with sets of information assets, not strings of bytes
  • Why this forces storage to know what it is storing
  • The new vision changes the game for storage functions

Who Says Data Center Storage Has to Be Inefficient?

Larry Chisvin, VP, Strategic Initiatives, PLX Technology


Abstract

Data center storage is evolving in ways no one could have imagined just a few years back. The storage subsystems are faster and larger, the interconnections are more sophisticated and flexible, and expectations of cloud capability are reaching near-mythic levels. Trouble is, this has been accomplished with a largely brute-force approach – exacting a penalty in terms of cost, space, power distribution, and cooling. Fortunately, new developments in data center architectures are paving the way for far more efficient storage.

This session will put the spotlight on several important industry trends that allow the data center to provide high performance and advanced storage at substantially lower costs and power than what had been the status quo. The first is shared I/O, which enables the processing subsystems to share the storage devices, thus eliminating duplication. The second is convergence, which allows the processing, communication and storage subsystems to be disaggregated, yet interact more efficiently than they do today. These two trends can be effectively implemented with a high-speed fabric built from PCI Express switching, which can eliminate most of the intermediate bridging devices and additional switch fabrics that contribute to the unnecessary cost and power.

Learning Objectives

  • How data centers can be architected to enable more efficient storage
  • How shared IO can enhance a data center's overall cost and scalability
  • How PCI Express will play a critical role in data center storage

A New Strategy for Data Management in a Time of Hyper Change

David Langley, Director of System Engineering, CommVault


Abstract

Major changes in data management strategy and implementation are needed in order to keep pace with exploding data volumes and to optimize storage economics. The same old way of managing data will not allow organizations to leverage the innovation, diversity and intelligence in the next generation storage infrastructure and the data itself. Find out what the next generation data management strategy and implementation can be to significantly lower costs, improve availability and extract greater value even in the face of exponential data growth.

STORAGE MANAGEMENT AND PERFORMANCE


Automated Methodology for Storage Consolidation & Optimization for Large Infrastructures

Alok Jain, Founder and CEO, Interscape Technologies Inc. and
Ram Ayyakad, COO and Biz Dev, Interscape Technologies Inc.


Abstract

The presentation will provide a strategy and an automated methodology for data center storage optimization and consolidation from a performance, capacity and cost perspective, for large multi-vendor, multi-tier infrastructures. The methodology shows how to leverage existing OEM tools in a highly efficient and automated manner to extract data, summarize and aggregate it, and then model a desired target storage configuration. This multi-step automated process has already been implemented successfully in a commercially available tool, and the strategy has been used multiple times in environments of up to 30 Petabytes.
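
A toy sketch of the summarize-and-aggregate step is shown below. The field names, headroom factor and consolidation rule are illustrative assumptions, not the commercial tool's logic.

    def consolidation_model(arrays, target_tb_per_array=500, headroom=0.8):
        """Toy consolidation estimate: sum used capacity across discovered
        arrays and size a denser target footprint with utilization headroom."""
        used_tb = sum(a["used_tb"] for a in arrays)
        usable_per_target = target_tb_per_array * headroom
        targets_needed = int(-(-used_tb // usable_per_target))   # ceiling division
        return {"used_tb": used_tb, "target_arrays": targets_needed}

    discovered = [{"name": "arrayA", "used_tb": 320},
                  {"name": "arrayB", "used_tb": 150},
                  {"name": "arrayC", "used_tb": 610}]
    print(consolidation_model(discovered))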

Learning Objectives

  • Performing discovery of current storage infrastructures
  • How large storage infrastructures can be consolidated into denser footprints
  • How to create consolidation models for multi-vendor, multi-tier environments
  • How to leverage the methodology for ongoing Storage Analytics - Performance & CP


SMI-S and Storage in Your Data Center

Chris Lionetti, Reference Architect, NetApp


Abstract

SMI-S is the standards-based way to expose and modify storage directly to clients; discover and control RAID groups and primordial disks; and configure thin provisioning, initiator groups, mappings and file shares. Best of all, these activities are cross-vendor and work end to end, from the host through the switching infrastructure to the controllers and down to the storage devices.
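
Because SMI-S is built on CIM/WBEM, a generic WBEM client library can talk to a conforming provider. Below is a minimal sketch using the open-source pywbem package; the endpoint, credentials and namespace are placeholder assumptions and vary by vendor.

    import pywbem

    # Endpoint, credentials and namespace are placeholders; real values are
    # vendor-specific (many providers use an interop or vendor namespace).
    conn = pywbem.WBEMConnection("https://smi-provider.example.com:5989",
                                 creds=("admin", "password"),
                                 default_namespace="root/cimv2")

    # Enumerate the storage volumes the provider exposes.
    for vol in conn.EnumerateInstances("CIM_StorageVolume"):
        print(vol["ElementName"], vol["NumberOfBlocks"], vol["BlockSize"])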


Turning a High-Wire Juggling Act into a Walk in the Park

Lavan Jeeva, IT Operations Manager, KIPP Foundation


Abstract

Best Practices for Simple, Effective Storage Management. When your responsibilities as a hands-on IT professional span multiple layers of data center infrastructure and critical applications, the last thing you need is to be constantly tied up in the often precarious act of manual storage performance, capacity and data protection management. Storage is undoubtedly the most critical layer of infrastructure for today’s data center workloads, such as VDI environments, and with the right storage solution, sizing, deployment, management, scaling and disaster recovery should be easily handled.

Learning Objectives

  • Storage sizing & selecting the right storage infrastructure for your datacenter
  • Deploying storage to support VDI
  • Efficient storage management and data protection

Applied Storage Technologies for Performance Optimized Big Analytics

Hubbert Smith, Office of CTO, Alliance Manager, LSI


Abstract

Big Analytics is quickly evolving away from batch toward real-time to address high-value, time-sensitive analytics problems. This session reviews the intersection of big data and applied storage technologies and systems.

Learning Objectives

  • Understand Big Data storage workload
  • Understand the storage workload to ingest data
  • Understand the storage workload to query and extract value from the data
  • Review benchmark data, and the underlying "why"

Performance and Innovation of Storage Advances through SCSI Express

Marty Czekalski, President, SCSI Trade Association
Greg McSorley, Vice President, STA


Abstract

The SCSI Trade Association (STA) is spearheading the SCSI Express initiative. SCSI Express represents the natural evolution of enterprise storage technology building upon decades of customer and industry experience. SCSI Express is based on two of the highest volume and widely deployed, interoperable technologies in the world – SCSI and PCI Express. These two technologies enable unprecedented performance gains while maintaining the enterprise storage experience. STA will present an in-depth overview of SCSI Express, including what it is, potential markets, where it is being developed, why it is important to the enterprise computing platform, how it is implemented, and the current status and timeline.

Learning Objectives

  • Provide attendees with a definition of SCSI Express and a timeline
  • Attendees will be given a deeper dive into SCSI Express hardware and software
  • Provide a look into SCSI Express target applications

STORAGE PLUMBING


Next Generation Storage Networking for Next Generation Data Centers

Dennis Martin, President, Demartek


Abstract

With 10GigE gaining popularity in data centers and storage technologies such as 16Gb Fibre Channel beginning to appear, it's time to rethink your storage and network infrastructures. Learn about futures for Ethernet such as 40GbE and 100GbE, 32Gb Fibre Channel, 12Gb SAS and other storage networking technologies. We will touch on some technologies such as USB 3.1 and Thunderbolt 2 that may find their way into datacenters later in 2014. We will also discuss cabling and connectors and which cables NOT to buy for your next datacenter build out.

Learning Objectives

  • What is the future of Fibre Channel?
  • What I/O bandwidth capabilities are available with the new crop of servers?
  • Share some performance data from the Demartek lab for various storage interfaces

Gen 6 Fibre Channel is Coming: What You Need to Know

Craig Carlson, Technologist, Office of the CTO, QLogic Corporation


Abstract

Beyond doubling throughput to 32Gb, how will Gen 6 Fibre Channel meet future data center requirements for hyper-scale virtualization, solid-state storage technologies and new architectures? The FCIA Speedmap, the technology roadmap that accompanies each new Fibre Channel specification, pinpoints highly attractive market propositions balanced with sound engineering feasibility. Vendors will craft their Gen 6 Fibre Channel solutions confidently using the FCIA Speedmap and deliver a bevy of new security, scalability, reliability and economic benefits. This session provides unique insights into what storage industry users need to know in preparation for Gen 6 Fibre Channel solutions.

Learning Objectives

  • Technological advances of Gen 6 Fibre Channel beyond 32Gb throughput
  • Why Fibre Channel is, and will continue to be, the #1 storage networking technology
  • How longevity has honed end-to-end technology advancements for Fibre Channel

Forming Storage Grids Using iSCSI

Felix Xavier, Founder and CTO, CloudByte


Abstract

So far, storage nodes have talked to each other only for DR or backup/archival purposes. In the case of scale-out NAS, some proprietary protocols have been used along with a dedicated or distributed metadata server. The advent of cloud, however, brings a new requirement: storage nodes need to talk to each other and bring hot data near the application, across datacenters. That communication should be standards-based in order to form a global cloud. This topic covers how storage nodes can communicate with each other over the standard iSCSI protocol to form a storage grid and serve the same set of data to application instances across multiple datacenters.
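
As a flavor of what standards-based node-to-node attachment looks like, the sketch below drives the standard Open-iSCSI tooling from Python to discover and log in to a peer node's target. The portal address and IQN are placeholders, and this is an illustration rather than the presenter's design.

    import subprocess

    PORTAL = "192.0.2.10:3260"                  # placeholder peer storage node
    TARGET = "iqn.2014-04.example.grid:shared"  # placeholder target IQN

    def run(cmd):
        """Run a command and return its output (raises on failure)."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Discover the targets exported by the peer node, then log in to one of them.
    print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))
    print(run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"]))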

Learning Objectives

  • Learning how to get storage nodes to communicate on iSCSI

Use Cases for iSCSI and FCoE: Where Each Makes Sense

Jeff Asher, Principal Architect, NetApp


Abstract

For many years, Fibre Channel was the protocol of choice for Storage Area Networks (SANs), but iSCSI and, more recently, Fibre Channel over Ethernet (FCoE) have challenged Fibre Channel’s dominance. Datacenter Ethernet (DCE) is a series of enhancements to the common Ethernet implementation that provide the performance and resiliency required for storage networks. However, all of the improvements to Ethernet that allow FCoE to function provide those same benefits to iSCSI. Since the hardware for DCE using FCoE and iSCSI is often identical, it comes down to a matter of choosing which protocol to run on the network. This tutorial will delve into these topics and help answer when each protocol may be appropriate to a particular set of requirements. Users will learn:

  • Market perceptions of FCoE and iSCSI
  • The impact of Data Center Bridging (DCB)
  • Topology requirements and limitations
  • Performance requirements and bottlenecks
  • Resource utilization
  • Skills and support - it’s not always a pure technical decision

Learning Objectives

  • Market perceptions and use cases of FCoE and iSCSI
  • Topology requirements and limitations
  • Performance requirements and bottlenecks

SAS: The Fabric for Storage Solutions

Marty Czekalski, President, SCSI Trade Association
Greg McSorley, Vice President, STA


Abstract

SAS is the backbone of nearly every enterprise storage deployment. It is rapidly evolving, adding new features, enhancing capabilities and offering “no compromise” system performance. Not only does SAS excel as a device-level interface; its versatility, reliability and scalability have made it the connectivity standard of choice for creating new storage architectures.

This presentation covers the advantages of using SAS as a storage device interface, and how its capabilities as a connectivity solution are changing the way data centers are being deployed. 12Gb/s SAS transfer rates, bandwidth aggregation, SAS fabrics (including switches), active connections, and multi-function connectors (connectors that support SAS as well as PCIe-attached storage devices) allow data center architects to create sustainable storage solutions that scale well into the future.

Learning Objectives

  • Understand the basic capabilities of SAS, including its compatibility with SATA
  • Hear the latest updates on the market adoption of 12Gb/s SAS
  • See examples of SAS as a potent connectivity solution

STORAGE TECHNOLOGIES


Emerging Storage and Memory Technologies

Thomas Coughlin, President, Coughlin Associates
Edward Grochowski, Computer Storage Consultant


Abstract

While Flash and DRAM devices based on silicon, as well as magnetic hard disk drives, will continue to be the dominant storage and memory technologies in 2014, this trend is expected to be impacted through 2016 and beyond by new and emerging device structures. These advanced technologies are based on new mechanisms in combination with existing silicon cells to create high-density, lower-cost products with the additional property of non-volatility. These structures include MRAM, RRAM, FRAM, PRAM and others manufactured with new techniques and equipment. The promise of terabyte devices appearing in the near future to replace existing memory and storage products rests on continued improvement in processing techniques that drive a competitive price per GB for large server units as well as PCs and consumer products.

Learning Objectives

  • Technology advancing to identify new and emerging memory and storage devices
  • Potential candidates MRAM, RRAM, Advanced Flash
  • Market impact begins in 2015/2016
  • New storage mechanisms are involved
  • Shift in production equipment priorities

Storage Systems Can Now Get ENERGY STAR Labels and Why You Should Care

Dennis Martin, President, Demartek


Abstract

We all know about ENERGY STAR labels on refrigerators and other household appliances. In an effort to drive energy efficiency in data centers, the EPA announced its ENERGY STAR Data Center Storage program in December 2013 that allows storage systems to get ENERGY STAR labels. This program uses the taxonomies and test methods described in the SNIA Emerald Power Efficiency Measurement specification, which is part of the SNIA Green Storage Initiative. In this session, Dennis Martin will discuss the similarities and differences in power supplies used in computers you build yourself and in data center storage equipment, 80PLUS ratings, and why it is more efficient to run your storage systems at 230v or 240v rather than 115v or 120v. Dennis will share his experiences running the EPA ENERGY STAR Data Center Storage tests for storage systems and why vendors want to get approved.
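
A small worked example of why input voltage and power supply efficiency matter is shown below. The efficiency figures are illustrative of typical published curves, not measurements from the Demartek lab.

    # Wall power drawn for a 500 W load at illustrative efficiency points.
    load_watts = 500
    scenarios = {"115 V input, ~87% efficient": 0.87,
                 "230 V input, ~92% efficient": 0.92}

    for label, eff in scenarios.items():
        wall = load_watts / eff
        print(f"{label}: {wall:.0f} W at the wall, {wall - load_watts:.0f} W lost as heat")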

Learning Objectives

  • Learn about power supply efficiencies
  • Learn about 80PLUS power supply ratings
  • Learn about running datacenter equipment at 230v vs. 115v
  • Learn about the SNIA Emerald Power Efficiency Measurement
  • Learn about the EPA ENERGY STAR Data Center Storage program

The Internet of Things is a Huge Opportunity for Object Storage

Tom Leyden, PMM WOS, DataDirect Networks


Abstract

The storage industry is going through a big paradigm shift caused by drastic changes in how we generate and consume data. This shift is also referred to as the Internet of Things, a concept first predicted over a decade ago but happening now more than ever. As a result, we also have to drastically change how we store and access data: the market needs simple, massive, online storage pools that can be accessed from anywhere, at any time. Object Storage meets these needs and is currently a hot topic as it creates opportunities for new revenue streams.


Building Open Source Storage Products with Ceph

Neil Levine, VP Product, Inktank


Abstract

Ceph is an open-source, massively scalable, software-defined storage system which provides object, block and file system storage in a single platform. As well as providing a snapshot of the current Ceph project - its roadmap, community and ecosystem - Neil will explore both the challenges of bringing open source storage technology to the enterprise and the options for using Ceph as a foundation for product innovation.


The Curious Case of Database Deduplication

Gurmeet Goindi, Principal Product Manager, Oracle Corporation


Abstract

In recent years, deduplication technologies have become increasingly popular in the modern datacenter, whether in purpose-built backup appliances that use deduplication to reduce the footprint of backups or in all-flash arrays that use deduplication to get better storage efficiency out of flash. Not all deduplication technologies are created equal, but they all promise a significant reduction in the footprint of the data actually stored on media. The amount of data reduction depends not only on the deduplication technology but also on the type of data being deduplicated. While the majority of storage appliances advertise huge savings for file system data, deduplication ratios for data stored in relational databases are significantly lower. This session analyzes the reasons behind low deduplication ratios for relational databases and compares and contrasts various deduplication techniques in the context of relational databases.
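
A toy fixed-block deduplicator (a simplification of real appliance designs, not any product's algorithm) makes the sensitivity to data layout easy to see: identical blocks dedupe perfectly, while a small amount of per-block uniqueness collapses the ratio.

    import hashlib

    def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
        """Toy fixed-block deduplication: logical chunks / unique chunks."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        unique = {hashlib.sha256(c).hexdigest() for c in chunks}
        return len(chunks) / len(unique)

    block = b"x" * 4096
    print(dedup_ratio(block * 100))     # 100 identical blocks -> ratio 100.0

    # Give every block a small unique header, as database blocks typically
    # carry, and the ratio collapses to ~1.
    blocks = b"".join(i.to_bytes(8, "little") + block[8:] for i in range(100))
    print(dedup_ratio(blocks))          # -> 1.0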

Learning Objectives

  • Acquire a deeper technical knowledge of the deduplication technologies available
  • Understand the implications of using these technologies on relational databases
  • Develop a comparison framework to assess the suitability of various deduplication techniques

Practical Steps to Implementing pNFS and NFSv4.1

Alex McDonald, Industry Evangelist, NetApp


Abstract

Much has been written about pNFS (parallel NFS) and NFSv4.1, the latest NFS protocol. But practical examples of how to implement NFSv4.1 and pNFS are fragmentary and incomplete. This presentation provides a step-by-step guide to implementation, with a focus on file systems. From client and server selection and preparation, the tutorial will cover key auxiliary protocols like DNS, LDAP and Kerberos.
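
As a taste of the client-side steps, the sketch below mounts an NFSv4.1 export on a Linux client, which negotiates pNFS layouts automatically when the server offers them. The hostnames and paths are placeholders, and this is only one of the steps the tutorial walks through.

    import subprocess

    EXPORT = "nfs-server.example.com:/export/data"   # placeholder server export
    MOUNTPOINT = "/mnt/pnfs"                         # placeholder mount point

    # Mount with NFS version 4.1 (requires root privileges).
    subprocess.run(["mount", "-t", "nfs", "-o", "vers=4.1", EXPORT, MOUNTPOINT],
                   check=True)

    # Confirm the negotiated version and mount options.
    print(subprocess.run(["nfsstat", "-m"], capture_output=True, text=True).stdout)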

Learning Objectives

  • An overview of the practical steps required to implement pNFS and NFSv4.1
  • Show how these parts are engineered and delivered as a solution

SMB Remote File Protocol (Including SMB 3.0)

John Reed, Product Manager, NetApp


Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors, including most major operating systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and its basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk covers improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part covers the latest changes in SMB 3.0, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.
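
The basic connect/negotiate/enumerate flow described above can be exercised from a client library. The sketch below uses the open-source pysmb package (which speaks the older SMB1/SMB2 dialects, not the SMB 3.0 features covered later in the talk); the server address, share name and credentials are placeholders.

    from smb.SMBConnection import SMBConnection

    # Server address, share and credentials are placeholders.
    conn = SMBConnection("user", "password", "my-client", "fileserver",
                         use_ntlm_v2=True, is_direct_tcp=True)
    if conn.connect("192.0.2.20", 445):          # negotiate a dialect and authenticate
        for share in conn.listShares():          # enumerate shares on the server
            print(share.name)
        for f in conn.listPath("public", "/"):   # list the root of a share named "public"
            print(f.filename, f.file_size)
        conn.close()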

Learning Objectives

  • Understand the basic architecture of the SMB protocol
  • Enumerate the main capabilities introduced with SMB 2.0
  • Describe the main capabilities introduced with SMB 3.0

Massively Scalable File Storage

Philippe Nicolas, Director of Product Strategy, Scality


Abstract

The Internet changed the world and continues to revolutionize how people connect, exchange data and do business. This radical change is one of the causes of the rapid explosion in data volume that requires a new data storage approach and design. One common element is that unstructured data rules the IT world. How can the famous Internet services we all use every day support and scale with thousands of new users and hundreds of TB added daily, and continue to deliver an enterprise-class SLA? What are the technologies behind a cloud storage service that supports hundreds of millions of users? This tutorial covers technologies introduced by famous papers about the Google File System and BigTable, Amazon Dynamo and Apache Hadoop. In addition, parallel, scale-out, distributed and P2P approaches, both open source and proprietary, are illustrated. The tutorial also covers some key features essential at large scale to help understand and differentiate industry vendors and open source offerings.

Learning Objectives

  • Understand technology directions for large scale storage deployments
  • Be able to compare technologies
  • Learn from big internet companies about their storage choices and approaches

Seagate Kinetic Open Storage Platform

Ali Fenn, Senior Director of Advanced Storage, Seagate
James Hughes, Principal Technologist, Seagate


Abstract

As the creation of unstructured data continues to double every two years, the traditional paradigm of a hardware-centric, file-based storage infrastructure becomes increasingly inefficient and costly to manage and maintain for web-scale data centers. What if we could restructure the storage stack from the bottom up and deliver up to a 50% TCO benefit? What if object storage applications could bypass layers of storage hardware and software, connect directly to a drive, and speak to that drive in the application’s native language? What if information was an IP address away? It’s here. Learn how the Seagate Kinetic Open Storage Platform increases storage performance and density while significantly reducing the costs to deploy and manage a web-scale storage infrastructure.

VIRTUALIZATION


What’s Your Shape? 5 Steps to Understanding Your Virtual Workload

Irfan Ahmad, CTO and Co-Founder, CloudPhysics


Abstract

Is your VM rightsized?
Is your VM on the right datastore?
Is your VM I/O bound?

These are the kinds of questions that come up frequently when you’re doing capacity management, solving performance problems, and making procurement decisions. The root of the answer to all of them is the shape of your workload. Irfan Ahmad, the tech lead of VMware’s own Swap-to-SSD, Storage DRS and Storage I/O Control features, will teach you the five steps to discovering the shape of your workload and applying that knowledge to capacity and performance decisions.
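
As a minimal flavor of one such step, the sketch below classifies a block trace as mostly sequential or mostly random. The trace format, I/O size and threshold are illustrative assumptions, not the presenter's tooling.

    def sequential_fraction(lbas, io_blocks=8):
        """Fraction of I/Os whose start LBA immediately follows the previous
        I/O -- a crude sequential-vs-random classifier for a block trace."""
        hits = sum(1 for prev, cur in zip(lbas, lbas[1:]) if cur == prev + io_blocks)
        return hits / max(len(lbas) - 1, 1)

    trace = [0, 8, 16, 24, 4096, 512, 520, 9000]    # toy LBA trace, 4 KiB I/Os
    frac = sequential_fraction(trace)
    print(f"{frac:.0%} sequential -> "
          f"{'mostly sequential' if frac > 0.5 else 'mostly random'}")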

Learning Objectives

  • Is the disk workload sequential or random? How much parallelism is there?
  • What is the bottleneck resource?
  • Is this a noisy neighbor or a victim?
  • How do we figure out the shape of a workload? What are the tools and techniques we can use?
  • How do we map workloads to the right mix of storage, CPU, memory and network?

Virtualizing Storage to Accelerate Performance of Tier-1 Apps

Augie Gonzalez, Director of Product Marketing, DataCore
Dustin Fennel, Vice President and CIO, Mission Community Hospital

Abstract

Demanding business applications like databases, ERP and mail systems create bottlenecks in any storage architecture due to their rapid activity and intensive I/O and transactional requirements. To offset this, many companies buy high-end storage systems while leaving terabytes of storage unused – a large and costly mistake. Augie Gonzalez, Director of Product Marketing at DataCore, the leader in software-defined storage, will discuss the benefits of storage virtualization for tier-one applications – improved application response times and higher availability – and the steps companies must take when virtualizing them. In a joint presentation, Augie and Dustin Fennel, Vice President and CIO of Mission Community Hospital, will discuss how the hospital achieved its performance and uptime objectives, and how DataCore helped it build a high-performance, tier-one storage solution at a fraction of the cost of traditional storage strategies. The discussion will cover the excellent results the hospital has seen from its PACS system after implementing DataCore; how a software-based storage virtualization architecture made high availability both easy and affordable; and how DataCore enables the healthcare provider to be storage agnostic, to have different tiers of storage from different vendors, and to manage everything from a single interface.


How to Achieve Agility and Redundancy in the Hybrid Cloud

Bryan Bond, Senior System Admin, eMeter, A Siemens Business
Pat O'Day, CTO, Bluelock


Abstract

In this session, Bryan Bond of eMeter talks about his real-world use case on how he’s using a VMware-based cloud to host and protect some of his most important systems and applications. He will describe how a hybrid cloud approach has enabled his organization to achieve dramatic speed and agility as well as cost-efficient and highly reliable disaster recovery protection.

Learning Objectives

  • Hear a real-world use case and learn how hybrid cloud can enable global business
  • Find out how and why cloud-based disaster recovery is disrupting current disaster recovery practices
  • Understand the difference between backup-as-a-service and recovery-as-a-service

Flash Hypervisor: The Savior to Storage I/O Bottlenecks?

Bala Narasimhan, Director of Products, PernixData


Abstract

The combination of hypervisors and server flash is an important but inconvenient marriage. Server flash has profound technology and programming implications on hypervisors. Conversely, various hypervisor functions make it challenging for server flash to be adopted in virtualized environments.

This talk will present specific hypervisor areas that are challenged by the physics of server flash, and possible solutions. We will discuss the motivation and use cases around a Flash Hypervisor to virtualize server flash and make it compatible with clustered hypervisor features, such as VM mobility, high availability and distributed VM scheduling. Finally, results will be presented from an example flash hypervisor (PernixData FVP), including its impact on data center storage design through the decoupling of storage performance from capacity.

Learning Objectives

  • The role and potential of Flash in the virtualized data centers
  • Best practices for using Flash to accelerate storage performance