Data Protection and Management

The Abstracts

Introduction to Data Protection: Backup to Tape, Disk and Beyond
Jason Iehl

Extending the enterprise backup paradigm with disk-based technologies allows users to significantly shrink or eliminate the backup window. This tutorial focuses on methodologies that can deliver an efficient and cost-effective disk-to-disk-to-tape (D2D2T) solution, including the various technologies that can help address ever-increasing data growth and the problem of protecting it.

Learning Objectives

  • Get a basic grounding in backup and restore technology including tape, disk, snapshots, deduplication, virtual tape, and replication technologies.
  • Compare and contrast backup and restore alternatives to achieve data protection and data recovery.
  • Identify and define backup and restore operations and terms.
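The D2D2T approach above boils down to a staging policy: recent backups land on disk for fast restores, then migrate to tape once they age past a retention window. The following is a minimal illustrative sketch, not any vendor's implementation; the 14-day disk window and tier names are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Backup:
    name: str
    created: datetime
    tier: str = "disk"  # in D2D2T, new backups land on the disk tier first

def migrate_to_tape(backups, retention_on_disk=timedelta(days=14), now=None):
    """Move backups older than the disk retention window to tape.

    Keeping recent backups on disk gives fast restores; aged backups
    move to cheaper tape. The window length is an illustrative choice.
    """
    now = now or datetime.now()
    moved = []
    for b in backups:
        if b.tier == "disk" and now - b.created > retention_on_disk:
            b.tier = "tape"
            moved.append(b.name)
    return moved
```

Real D2D2T products layer scheduling, cataloging, and verification on top of this basic age-out rule.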

Trends in Data Protection and Restoration Technologies
Michael Fishman

Modern backup and archival technologies augment established backup and data protection methodologies to deliver improved protection levels and accelerate information and application backup and restore performance. These technologies work in concert with the existing backup paradigm. This session will discuss many of them in detail. Important considerations of data protection include performance, scale, regulatory compliance, recovery objectives, and cost. Technologies covered include contemporary backup, disk-based backup, snapshots and mirroring, continuous data protection, and compression and deduplication applied to both storage and network traffic reduction.

Learning Objectives

  • Understand legacy and contemporary storage and networking technologies that provide advanced data protection
  • Compare and contrast advanced data protection alternatives focusing on trade-offs
  • Gain insight into how various technologies can improve the performance of a data protection environment

Bringing Light to the "Digital Dark Age": Preserving Information for the Long Term
Roger Cummings

Many organizations face the serious challenge of economically preserving, and retaining access to, a wide variety of digital content for dozens of years. Long-term digital information is vulnerable to issues that do not exist in a short-term or paper world, such as media and format obsolescence, bit rot, and loss of metadata. Ironically, as the world becomes digital, we may be entering a "Digital Dark Age" in which business, public, and personal assets are in ever greater danger of being lost.

The SNIA Long Term Retention (LTR) Technical Working Group works with key stakeholders in the preservation field to develop the Self-contained Information Retention Format (SIRF), which enables applications to interpret stored data independent of the application that originally created it. SIRF is a logical container format for the storage subsystem, appropriate for the long-term storage of digital information. It consists of preservation objects and a catalog containing metadata relating to the entire contents of the container as well as to the individual preservation objects and their relationships. SIRF makes it easier and more efficient to provide many of the processes that address threats to digital content at a lower level of the system stack, performed close to the data using more robust, efficient, and automatic methods. Easier, more efficient preservation processes in turn lead to more scalable and less costly preservation of digital content.

SIRF will be examined in a new European Union integrated research project called ENSURE (Enabling kNowledge, Sustainability, Usability and Recovery for Economic Value). ENSURE creates a preservation infrastructure for commercial digital information built upon cloud storage and virtualization enablement technologies. It explores issues such as evaluating cost and value for different digital preservation solutions, automation of preservation processes, content-aware long-term data protection, new and changing regulations, and obtaining a scalable, affordable solution by leveraging cloud technology.

The presentation will cover use cases, requirements, and the proposed architecture for SIRF, as well as its potential usage in ENSURE storage services.

Learning Objectives

  • Recognize the challenges in the long-term preservation of digital information, and understand why best practices and optimal solutions must address different levels of available time, money and effort.
  • Identify the need, use cases, requirements, and proposed architecture of SIRF. Also, review the latest activities in SNIA LTR technical working group and the development of SIRF.
  • Discuss the usage of SIRF in the ENSURE cloud infrastructure that draws on actual commercial use cases from health care, clinical trials, and financial services.
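The SIRF structure the abstract describes, a container holding preservation objects plus a catalog of metadata about the whole container and about each object and its relationships, can be sketched as a simple data model. The field and method names below are illustrative assumptions only; the actual format is defined by the SNIA LTR Technical Working Group's SIRF specification.

```python
from dataclasses import dataclass, field

@dataclass
class PreservationObject:
    """One self-describing object inside a SIRF-like container."""
    object_id: str
    payload: bytes
    # Per-object metadata, e.g. format and provenance (illustrative fields)
    metadata: dict = field(default_factory=dict)

@dataclass
class SIRFContainer:
    """Container = preservation objects + a catalog describing them."""
    catalog: dict = field(default_factory=dict)   # container-level metadata
    objects: dict = field(default_factory=dict)   # object_id -> PreservationObject

    def add(self, obj: PreservationObject, related_to=None):
        """Store an object and record its relationships in the catalog."""
        self.objects[obj.object_id] = obj
        self.catalog.setdefault("relationships", {})[obj.object_id] = related_to or []
```

Because the catalog travels with the data, a future application can interpret the objects and their relationships without the software that originally created them, which is the core SIRF idea.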

Advanced Deduplication Concepts
Thomas Rivera, Gene Nagle

Since arriving on the scene 10 years ago, data deduplication has seen widespread adoption throughout the storage and data protection communities. This tutorial assumes a basic understanding of deduplication and covers topics that attendees will find helpful in understanding today's expanded use of this technology. Topics will include trends in vendor deduplication design and practical use cases, e.g., primary storage, data protection, and replication.

Learning Objectives

  • Have a clear understanding of current deduplication design trends.
  • Have the ability to discern between various deduplication design approaches and strengths.
  • Recognize new potential use cases for deduplication in various storage environments.
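Whatever the design approach, the core mechanism is the same: split data into chunks, hash each chunk, and store only chunks whose hash has not been seen before. The sketch below uses fixed-size chunking for simplicity; real products typically use variable-size, content-defined chunking, and this is an illustration rather than any vendor's design.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, chosen only for illustration

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep only unique ones, return the recipe.

    The recipe is the ordered list of chunk digests needed to rebuild
    the original data from the shared chunk store.
    """
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store the chunk only if it is new
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its recipe and the chunk store."""
    return b"".join(store[d] for d in recipe)
```

The space saving comes from the gap between the number of chunks referenced and the number of unique chunks actually stored.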

Primary Data Optimization: What it Should Be
Moderator: David Vellante. Panelists: Larry Freeman, Steve Kenniston, Jered Floyd, Craig Nunes

Cloud computing, big data, mobility, and the social media wave are just some of the drivers pushing storage growth to overwhelming rates. As information volumes explode, storage budgets remain flat, and we have seen an increased need to implement primary data optimization technology. Primary data optimization was discussed thoroughly in 2011, but adoption remains mixed, as practitioners need to better understand how to exploit this technology, the risks of implementation, true costs, and bottom-line business benefits. While the concept seems simple (data optimization reduces overall costs by locating and eliminating duplicate data), it has yet to fully play out in the market.

Panelists will discuss the state of storage optimization, how technologies such as deduplication and compression differ, how they can work together to reduce storage costs and footprint, and their impacts on performance.

Learning Objectives

  • Attendees will leave the session with a clear understanding of the primary data optimization technologies available today.
  • Attendees will learn how different technologies can work in unison to drive immediate and long term cost savings.
  • Attendees will also learn how to avoid potential pitfalls of implementing primary storage optimization.
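The "work together" point the panel raises is largely an ordering question: deduplicate first so that each distinct chunk is stored once, then compress only the unique chunks. A hedged sketch of that pipeline, using fixed-size chunking and zlib purely for illustration (not any product's actual design):

```python
import hashlib
import zlib

def optimize(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Dedup first, then compress: each unique chunk is compressed once."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(chunk)  # compress only new chunks
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Decompress and reassemble the original data from its recipe."""
    return b"".join(zlib.decompress(store[d]) for d in recipe)
```

Running compression after deduplication avoids wasting CPU compressing duplicate chunks, and keeps the chunk hashes computed over the raw bytes so duplicates are still detected.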