Data Storage Innovation Conference 2015 Abstracts

Break Out Sessions and Agenda Tracks Include:

Note: This agenda is a work in progress. Check back for updates on additional sessions as well as the agenda schedule.

BIG DATA

Solving Big Data Problems: Storage to the Rescue?

John Webster, Senior Partner, Evaluator Group

Abstract

This presentation reviews some major trends impacting storage for Big Data environments. These include:

  1. The conflict between traditional computing methods as practiced by enterprise IT versus web-scale computing as practiced by the likes of Google, Facebook, and Twitter.
  2. Dominance by open source software - primarily the Apache Software Foundation.
  3. The transformation from MapReduce-based, batch-oriented analytics to more aggressive, real-time computing platforms.

The presentation then looks at some problems with currently used implementations of Big Data technology that are worth solving from a storage perspective, and suggests solutions.

 

Learning Objectives

  • How does web-scale computing compare to enterprise data center computing?
  • How are the early Big Data analytics technologies progressing from batch to real time?
  • What impact does this progression from batch to real time have on Big Data storage?
  • What are the resulting issues that could be addressed within the Big Data storage environment?



Scaling Splunk Log Analytics without the Storage Headaches

Veda Shankar, Technical Marketing Manager, Red Hat

Abstract

Log analytics is a critical technique used in enterprises today. Businesses now have the opportunity to derive valuable insight into key processes that can help influence operational decision making, or even protect an organization against cyber-attack. In general terms, analytics benefits from larger datasets, but managing the data can be problematic and costly. This session focuses on using software-defined storage from Red Hat with Splunk, a market leader in log analytics, enabling analytics to scale without micro-managing the environment. Splunk manages event data based on age, with older events ending up on "cold" storage. Cold storage is still searchable, so while it remains online the data horizon it represents can be used by business analytics. However, the challenge has been managing the "iceberg" scenario that occurs across Splunk servers as data ages.
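As a concrete illustration of the aging behavior described above: a Splunk index's bucket locations and retention are controlled in indexes.conf, so cold buckets can be directed to a scale-out mount while hot/warm buckets stay on fast local disk. A minimal sketch (the index name, mount point, and size/retention values are hypothetical):

```ini
# indexes.conf - hypothetical index whose cold buckets live on a scale-out filesystem mount
[app_logs]
homePath   = $SPLUNK_DB/app_logs/db                 ; hot/warm buckets on fast local disk
coldPath   = /mnt/scaleout/splunk/app_logs/colddb   ; cold buckets on the scale-out mount
thawedPath = $SPLUNK_DB/app_logs/thaweddb
maxWarmDBCount = 300                                ; warm buckets kept before rolling to cold
frozenTimePeriodInSecs = 31536000                   ; freeze (delete or archive) events older than ~1 year
```

Because cold buckets remain searchable, widening the cold tier this way extends the online data horizon without growing the indexers' local disks.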

Learning Objectives

  • Understand software-defined storage from Red Hat.
  • Understand the data life cycle of indexed log data in Splunk.
  • Explore a hybrid storage model using Red Hat Storage for Splunk analytics data.
  • Review Splunk benchmark kit results using the above solution.



Why Scale-Out Big Data Apps Need A New Scale-Out Storage

Rob Whiteley, Vice President, Marketing, Hedvig

Abstract

Whether a startup or an established large enterprise, your organization is no doubt dealing with unprecedented volumes of data. Access across different locations, users, sources, structures and audiences is vital to realizing value from your data. Agile companies that want to drive better business decisions using big data insights must have scalable, reliable and accessible storage for their data assets.

A new storage platform is needed to manage, manipulate and analyze trillions of pieces of structured and unstructured data from many sources and data stores. The problem? Most companies are repeating the mistakes of the 90s, creating islands of big data storage for various flavors of Hadoop, NoSQL and other big data apps. Rather than siloed islands, a new, consolidated storage environment is needed to store big data without big costs. Software-defined storage enables organizations to virtualize big data apps, virtualize the underlying storage hardware, and provide a consistent, unified provisioning process for all storage assets.

Learning Objectives

  • A reference architecture for combining siloed big data apps and integrating legacy storage with modern approaches.
  • How a new approach to storage can unify multiple flavors of Hadoop and NoSQL to improve IT and storage operations.
  • Why you need tunable storage capabilities like replication, compression, and deduplication to vastly improve the economics of storing big data.
  • Lessons learned from Cassandra that can be applied to big data storage.
  • Real-world use cases of agile companies managing big data cost effectively with software-defined storage.



SNIA Tutorial:
Protecting Data in the Big Data World

Thomas Rivera, Senior Technical Associate, HDS

Abstract

Data growth is in an explosive state, and these "Big Data" repositories need to be protected. In addition, new regulations are mandating longer data retention, and the job of protecting these ever-growing data repositories is becoming even more daunting. This presentation will outline the challenges, methodologies, and best practices to protect the massive scale "Big Data" repositories.

Learning Objectives

  • Understand the challenges of managing and protecting "Big Data" repositories.
  • Understand the technologies available for protecting "Big Data" repositories.
  • Understand considerations and best practices for "Big Data" repositories.


CLOUD

Optimizing the Enterprise, Lessons Learned from Deploying Private Storage Cloud

David Christel, Manager, Unisys

Abstract

A large government agency was overwhelmed by numerous siloed storage environments dispersed throughout multiple datacenters. The management team embarked on a mission to optimize and streamline their storage assets. This government agency's pioneering storage-as-a-service business model and adoption of storage virtualization now provide flexibility to meet its mission demands at significantly reduced cost. Results from this groundbreaking program include:

  • 30% Cost savings
  • Up to 99% decrease in acquisition time
  • Shift from cap-ex to op-ex

During this case-study presentation, we will discuss the team's primary challenges; significant collaboration efforts that led to success; and key lessons learned during this agency's complete enterprise deployment of private storage cloud and subsequent adoption of a new paradigm for consuming storage resources.

 

Learning Objectives

  • Exposure to multi-site enterprise Private Storage Cloud deployment
  • Explore critical challenges facing Private Storage Cloud implementations
  • Learn key lessons from a large-scale private storage cloud deployment



How Does the Cloud Fit into Active Archiving?

David Cerf, Executive President Strategy and Business Development, Crossroads Systems

Abstract

The exponential growth of unstructured data and limited storage budgets are driving the need for more cost effective storage architectures like active archiving. As data ages or its performance profile changes, it makes sense to move this data from primary storage to economy tiers of storage such as lower cost disk and tape. This can be done by utilizing the vast resources of the cloud to create a hybrid storage solution. Active archive software enables existing file systems to seamlessly expand over flash, disk and tape, giving users transparent access to all of the organization's data regardless of where it is stored. Innovative cloud storage solutions can now be the target for this data, or act as a secondary site for replicated copies saved for disaster recovery purposes.

Learning Objectives

  • Learn how utilizing cloud storage for active archive allows you to maintain online access to data while avoiding the expenses and complexity associated with a do-it-yourself option.
  • Learn how to create hybrid active archives using object storage/cloud for primary storage and low cost media such as tape or disk for secondary storage.
  • Learn the benefits of incorporating an active archive file gateway solution with object storage or cloud to separate applications from their archived data, allowing for simpler and more efficient migrations to take place controlled by the gateway.
  • Note: This session is being proposed as a panel made up of members of the Active Archive Alliance.



Things to Consider When Planning for File Services in a Hybrid Cloud Environment

Bernhard Behn, Principal Technical Marketing Engineer, Avere Systems

Abstract

The traditional datacenter's borders are rapidly disappearing. The ability to run your applications in the cloud is quickly becoming a reality, but choosing between an on-premises datacenter or a private/public cloud environment is becoming a tougher decision to make. All of the plumbing required to make this happen is incredibly complex, with heavy cooperation between networking, storage and security teams. The biggest challenge to application performance in a hybrid environment is the network, more specifically, latency and throughput. Solutions that can minimize the impact of high-latency and limited-throughput will be of great value as you move towards hybrid compute and storage environments. Learn about the challenges you can expect as you start to build data services outside of your datacenter.

Learning Objectives

  • Identify the challenges that a hybrid application and/or storage environment brings
  • Discover the right infrastructure plumbing: networking, storage and security
  • How high-latency and limited-throughput will affect application performance
  • Striking the tricky balance between compute and storage, both local on-premises and in the cloud



Rethink Cloud Strategies for Cost Effective Enterprise Storage Management

Lazarus Vekiarides, CTO and Co-founder, ClearSky Data

Abstract

IT has always looked at the public cloud as a model approach to reduce the exorbitant costs and complexity of enterprise storage. However, affordable storage management initiatives, such as solutions based on the public cloud, are often hampered by security concerns, surprise pricing, networking and performance and latency issues. In particular, latency, which is caused by poor network connectivity and geographic distances, among other contributors, can be a huge impediment to leveraging remote clouds for efficient and affordable enterprise storage. In this session, Laz Vekiarides will explain why many of the idealized characteristics of the public storage cloud are not feasible today, and will offer insight into the challenges that are demanding a new approach to enterprise storage management.

Learning Objectives

  • Rethinking cloud strategies in the context of connectivity issues
  • The causes and effects of latency across networks
  • Limitations of gateways in the data center
  • Considerations when developing a fully effective storage management strategy
  • Considerations for using remote clouds for a more affordable storage infrastructure



Whoever Owns the Data Owns the Customer

Luke Behnke, Vice President Products, Bitcasa

Abstract

Cloud storage has fundamentally altered the mobile industry. Cloud services are now being integrated at every level of the stack from application to OS integration, to the chipset, diminishing the need for local storage. While the benefits of mobility, data accessibility, disaster recovery, and endless storage have been heavily touted by industry luminaries, there is one key revenue-driving idea that has largely slipped under the radar. Whoever owns the data will inevitably own the customer.

As consumers' digital lives (their videos, images, text and voice messages) shift to the cloud, access to data will inevitably become one of the biggest factors in customer acquisition and loyalty. In this session, Bitcasa CEO Brian Taptich will argue that cloud services are changing the competitive landscape of the mobile industry. In the emerging battle for data ownership, app providers, OS vendors, device manufacturers, chip manufacturers and network operators will all face off against less traditional rivals. The winner will be the company that seamlessly integrates the most accessible and secure cloud services to help customers manage their data indefinitely.

Learning Objectives

  • How cloud services are changing the competitive landscape of the mobile industry.
  • Who will win the battle for data ownership - device manufacturers and app providers or less traditional rivals?
  • Why is owning user data so important for customer acquisition and loyalty?



Hybrid Cloud Storage with StorSimple

Mike Emard, Senior Program Manager StorSimple, Microsoft

Abstract

Mike will provide an overview of how StorSimple’s Hybrid Cloud Storage solution works and explain how it can lower storage TCO by up to 60%. Open discussion on how the solution works will be encouraged. We’ll go into as much technical depth as you want.



SNIA Tutorial:
Hybrid Clouds: Bridging Private and Public Cloud Infrastructures

Alex McDonald, NetApp CTO Office, NetApp

Abstract

Every IT consumer is using cloud in one form or another, and just as storage buyers are reluctant to select single vendor for their on-premises IT, they will choose to work with multiple public cloud providers. But this desirable "many vendor" cloud strategy introduces new problems of compatibility and integration. To provide a seamless view of these discrete storage clouds, Software Defined Storage (SDS) can be used to build a bridge between them. This presentation explores how SDS, with its ability to deploy on different hardware and supporting rich automation capabilities, can extend its reach into cloud deployments to support a hybrid data fabric that spans on-premises and public clouds.

Learning Objectives

  • Gain an understanding of how SDS can make hybrid multivendor clouds a reality
  • Articulate SDS as a mechanism for building and automating storage
  • Gain an overview of possible cloud storage enterprise architectures



Dedicated Cloud Infrastructure

Ashar Baig, President, Principal Analyst and Advisor, Analyst Connection

Abstract

When it comes to cloud choices, organizations today can choose between traditional clouds that offer Virtual Machines (VMs), which are extremely easy to use but abstract disk, memory and CPU, introducing a sizable performance penalty, and Bare Metal Clouds, which allow you to custom design dedicated hardware for your applications and deliver roughly 4x greater performance than traditional VM clouds.

Security concerns, the potential of breaking regulatory compliance in a multi-tenant environment, and the lower performance associated with VMs were the main reasons organizations were reluctant to move their data to the multi-tenant public cloud. These concerns gave birth to two distinct dedicated server offerings from cloud providers: dedicated virtual servers and dedicated bare metal servers. Dedicated virtual servers possess two attributes that bare metal servers do not: a hypervisor and multi-tenancy. The hypervisor is used to virtualize the resources of physical machines, creating multiple virtual machines on each physical server for a multi-tenant environment. Dedicated virtual servers provide complete isolation and protection of an organization's data in a multi-tenant environment, from both outside intrusions and other customers sharing the same cloud provider infrastructure (the noisy neighbor effect).

Bare metal cloud services are essentially physical servers that can be deployed on demand and billed hourly. They can offer significant improvements over virtualized Infrastructure as a Service (IaaS) in performance, consistency and cost efficiency for many applications, combining the advantages of traditional dedicated servers within your firewall without the OpEx associated with in-house servers. The hypervisor layer is not needed in a bare metal server since resources are not being shared; the server's entire processing power is therefore available to the application, resulting in better performance than a comparable virtualized server. Applications and workloads that require direct access to physical hardware, such as databases and calculation-intensive applications, benefit from the performance of bare metal clouds. Furthermore, workloads that should not be virtualized are strong candidates for bare metal clouds.

In today's turbulent economic environment, where IT budgets are constantly under the microscope, businesses are eager to graduate to more attractive products that can give them fundamental advantages in IT, resulting in increased competitiveness in their markets. Dedicated virtual servers and bare metal servers present fiscally attractive value propositions for enterprise computing environments that are hard to ignore.

Learning Objectives

  • What are the ideal use cases for utilizing dedicated public cloud infrastructure?
  • What are the key issues faced by enterprises who utilize shared multi-tenant cloud infrastructure?
  • What are the performance and availability considerations associated with utilizing shared multi-tenant cloud infrastructure?
  • Which applications and which workloads can benefit from BMCs and why.
  • Security and regulatory compliance considerations that are enticing organizations to choose BMCs.


COLD STORAGE

Yes, Virginia, There is USB Connectivity for an LTO Tape Drive!

Mauricio del Prado, Senior Engineer, IBM

Abstract

LTO Tape Drives have historically been available with only SCSI-based (parallel SCSI in the past, SAS today) and Fibre Channel (for libraries) connectivity. This presentation will demonstrate a new LTO Tape Drive enclosure with USB (and SAS) connectivity. Using LTFS SDE, one can connect this USB Tape Drive to a Windows, Mac, or Linux machine, much as you would a USB stick. Performance measurements will be shared, use cases will be discussed and the value proposition for this product will be analyzed.

Learning Objectives

  • Understand how USB connectivity for an LTO Tape Drive is achieved
  • Understand the use cases for a USB LTO Tape Drive
  • Understand the performance characteristics for a USB Tape Drive
  • Understand the value proposition for a USB LTO Tape Drive



Tape Storage for Cold Data Archive and Future Technologies

Osamu Shimizu, Research Engineer - Recording Media Research Laboratories, FUJIFILM Corporation
Hitoshi Noguchi, General Manager, FUJIFILM Corporation

Abstract

Tape storage is widely used not only for backup applications but also for archival applications to store growing cold data because of its inexpensive cost, long-term stability, and other advantages. Tape storage has maintained its development in capacity growth and shown a future roadmap while other storage technologies (e.g. HDD, Optical Discs) have struggled to do so. In this presentation, we will first go through a quick overview of the current state of tape storage with some technical background and share details on future prospects for increasing the capacity. In addition, the reliability and long-term stability of tape storage media are discussed.

Learning Objectives

  • Latest tape storage technologies
  • Comparison with other storage technologies
  • How to realize the incredible capacity growth
  • Reliability and long-term archival stability of tape storage
  • Future prospects for tape storage



Archival Disc Technology

Yasumori Hino, Manager, Panasonic Corporation
Jun Nakano, Deputy General Manager, Sony

Abstract

Optical disc characteristics and features, notably high reliability and wide temperature/humidity tolerance, make them excellent for long term data storage. Long term backward read and write compatibility has been achieved over a 33-year history ranging from CD through Blu-ray formats.

New developments in optical storage hold great promise for better meeting evolving data center requirements for low cost, highly reliable long term storage of cold data. While Archival Disc already delivers 3 times larger capacity with higher transfer rates to meet market requirements, even higher recording densities are being made possible by cutting-edge technology innovations currently under development.

Join Panasonic and Sony to learn about the jointly developed next generation of optical disc, the Archival Disc, including standardization efforts and highlights of the technology roadmap.


DISASTER RECOVERY

DR in the Cloud

Ashar Baig, President, Principal Analyst and Advisor, Analyst Connection

Abstract

In case of a disaster, Business Continuity (BC) depends on how fast an organization can return to normal IT operations. Typical hardware procurement times range from 4 to 12 weeks, so most organizations will not have new hardware immediately available to restore the data they backed up off site. In this scenario, cloud-based Virtual Disaster Recovery (VDR) is a lifesaver.

The ability to restore physical servers to virtual servers (P2V) while preserving the granularity of restoring individual files can satisfy timely BC requirements of any organization. VDR is the game changer technology in DR and should be a key ingredient of every organizational data protection strategy.

Learning Objectives

  • Understand the nuts and bolts of cloud-powered DR and the end user cloud-powered DR requirements
  • How to construct your DR plan to capture cloud-powered VDR?
  • How much you should expect to pay for cloud-based DR
  • End user VDR requirements that should be in every RFP
  • Overview of virtual workspaces. Who offers them and how to best utilize them for Business Continuity?



SNIA Tutorial:
Data Protection in Transition to the Cloud

David Chapa, CTE, Seagate

Abstract

Organizations of all types and sizes are moving many, but usually not all, applications and data to public and private clouds, and the hybrid environments thus created are an increasing challenge for those responsible for data protection. There are many new services available in the cloud for backup and disaster recovery that can help, but IT managers want to avoid setting up separate data protection procedures for each of the parts of their hybrid environments.

Learning Objectives

  • Have a clear understanding of how current trends in data protection and in cloud-based computing and storage are impacting each other.
  • Gain increased knowledge regarding new cloud-based alternative approaches for data protection.
  • Have the ability to make good decisions when changes in data protection are required by new hybrid in-house plus cloud environments.



SNIA Tutorial:
Trends in Data Protection

Gideon Senderov, Director Advanced Storage Products, NEC

Abstract

Many disk technologies, both old and new, are being used to augment tried and true backup and data protection methodologies to deliver better information and application restoration performance. These technologies work in parallel with the existing backup paradigm. This session will discuss many of these technologies in detail. Important considerations of data protection include performance, scale, regulatory compliance, recovery objectives and cost. Technologies include contemporary backup, disk based backups, snapshots, continuous data protection and capacity optimized storage. Detail of how these technologies interoperate will be provided as well as best practices recommendations for deployment in today's heterogeneous data centers.

Learning Objectives

  • Understand legacy and contemporary storage technologies that provide advanced data protection.
  • Compare and contrast advanced data protection alternatives.
  • Gain insights into emerging Data Protection technologies.



SNIA Tutorial:
Advanced Data Reduction Concepts

Tom Sas, Product Manager, Hewlett-Packard

Abstract

Since arriving over a decade ago, data deduplication has seen widespread adoption throughout the storage and data protection communities. This tutorial assumes a basic understanding of deduplication and covers topics that attendees will find helpful in understanding today's deduplication solutions.
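To ground the basic idea the tutorial builds on, here is a minimal fixed-size-chunk deduplication sketch in Python. It is an illustration of the general technique only, not any vendor's implementation; chunk size and the in-memory store are simplifying assumptions.

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, storing each unique chunk once,
    keyed by its SHA-256 digest. Returns (chunk_store, recipe), where the
    recipe is the ordered list of digests needed to rebuild the data."""
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy of each chunk
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original bytes from the recipe of digests."""
    return b"".join(store[d] for d in recipe)

# Highly repetitive input: four 4 KiB chunks, but only two distinct ones
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedup_store(data)
print(len(recipe), len(store))  # 4 chunks referenced, only 2 stored
assert rehydrate(store, recipe) == data
```

Production systems refine this with variable-size (content-defined) chunking, on-disk indexes, and reference counting, but the hash-and-lookup core is the same.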

Learning Objectives

  • Have a clear understanding of current deduplication design trends
  • Have the ability to discern between various deduplication design approaches and strengths
  • Recognize new potential use cases for deduplication and other data reduction technologies in various storage environments



SNIA Tutorial:
Introduction to Data Protection: Backup to Tape, Disk and Beyond

Thomas Rivera, Senior Technical Associate, HDS

Abstract

Extending the enterprise backup paradigm with disk-based technologies allows users to significantly shrink or eliminate the backup time window. This tutorial focuses on various methodologies that can deliver efficient and cost effective solutions. These include approaches to storage pooling inside modern backup applications, using disk and file systems within these pools, and how and when to utilize Continuous Data Protection, deduplication and virtual tape libraries (VTL) within these infrastructures.

Learning Objectives

  • Get a basic grounding in backup and restore technology including tape, disk, snapshots, deduplication, virtual tape, and replication technologies
  • Compare and contrast backup and restore alternatives to achieve data protection and data recovery
  • Identify and define backup and restore operations and terms



Optical Storage - The Future of Long Term Data Preservation

Doug Ferguson, Senior Solution Consultant, Hitachi Data Systems Federal Corporation

Abstract

Information is indispensable for governments, businesses and consumers alike. Since the proliferation of the Internet, data, both structured and unstructured, has grown faster and become more important than ever.

Unfortunately, organizations of all sizes grapple with how to collect, manage and store the explosion of digital information. Government mandates and regulatory requirements dictate how organizations archive or preserve data they generate, from five to 100 years, or forever.

This presentation will focus on how organizations implement data preservation strategies that ensure the ability to collect, store, protect, manage and utilize data effectively. The technological backbone of this strategy is optical storage, the new tier of digital preservation. Optical storage is a proven medium that is cost-effective, reliable and scalable, offering an unprecedented opportunity to store data indefinitely.

Learning Objectives

  • Educate attendees on the differences between preservation and archiving, the differentiators, marketplace drivers and the importance of enterprise vs. consumer-grade optical solutions
  • How optical storage is more than a technology, but an integral part of the data storage ecosystem
  • Evolve the attendees’ understanding and thinking about data, how preservation must be planned for from the initial creation of data through its lifespan.
  • Introduce optical preservation as the newest tier in data management
  • Discuss the viability of optical solutions long-term in the marketplace, where the technology trends are moving and why organizations must consider optical versus other media types


DISTRIBUTED STORAGE

Is RAM the Future of Enterprise Storage? Using Distributed Fault Tolerant Memory in Virtualized Data Centers

Woon Jung, Senior Software Engineer, PernixData

Abstract

Using server-based memory to accelerate storage in virtualized data centers offers a compelling value proposition. However, using a volatile medium as a read and write storage acceleration tier requires a new approach. This talk will take the audience on a deep technical journey around the creation of Distributed Fault Tolerant Memory (DFTM), discussing:

  • Technical challenges using server memory for acceleration at scale
  • Distributed systems aspects that are specific to server memory
  • Other innovations required to make DFTM a tectonic shift in storage design, and the road ahead

Join enterprise storage expert and PernixData senior software engineer Woon Jung as he discusses how with the right flash virtualization technology, Random Access Memory (RAM) can be turned into an enterprise class medium for storage acceleration.

 

Learning Objectives

  • The creation of Distributed Fault Tolerant Memory (DFTM), including the technical challenges involved and the innovations required to make DFTM a tectonic shift in storage
  • How to turn RAM into an enterprise class medium for storage acceleration



SNIA Tutorial:
Using Leading-edge Building Blocks to Deploy Scale-out Data Infrastructure

Craig Dunwoody, CTO, GraphStream Incorporated

Abstract

Every datacenter includes a set of software and hardware infrastructure building blocks assembled to provide data storage, processing, and networking resources to a set of application workloads. New types of workloads, and new Commercial Off-The-Shelf infrastructure building blocks, are being developed at an increasing rate.

These building blocks include a new generation of infrastructure software that can pool and provision hardware resources dynamically, via automation driven by policy and analytics, across a constantly changing and heterogeneous workload mix, at datacenter scale. This enables radical improvements in efficiency and effectiveness of hardware resource usage.

Using technical (not marketing) language, and without naming specific products, this presentation covers key storage-related architectural choices and practical considerations for deploying scale-out data infrastructure using the most advanced COTS building blocks.

Learning Objectives

  • Storage-infrastructure evolution: incrementally adding capacity, performance, successive hardware generations
  • Options for bringing processing closer to data
  • Options for integrating multiple storage services (e.g., object, file, block)



SNIA Tutorial:
Massively Scalable File Storage

Philippe Nicolas, Senior Director, Industry Strategy, Scality

Abstract

The Internet changed the world and continues to revolutionize how people connect, exchange data and do business. This radical change is one of the causes of the rapid explosion in data volume, which has required a new approach to data storage design. One common element is that unstructured data rules the IT world. How can the famous Internet services we all use every day support and scale with thousands of new users and hundreds of TB added daily, while continuing to deliver an enterprise-class SLA? What technologies allow a cloud storage service to support hundreds of millions of users? This tutorial covers technologies introduced by famous papers on the Google File System and BigTable, Amazon Dynamo, and Apache Hadoop. In addition, parallel, scale-out, distributed and P2P approaches, both open source and proprietary, are illustrated. The tutorial also covers key features essential at large scale, to help understand and differentiate industry vendors and open source offerings.
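One widely used technique from the Amazon Dynamo paper mentioned above is consistent hashing, which lets a storage service add or remove servers while remapping only a fraction of the keys. A minimal sketch in Python (node names and the virtual-node count are illustrative, not from any particular product):

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring with virtual nodes for smoother balance."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []
        for node in nodes:
            # Each physical node appears at many points on the ring
            for v in range(vnodes):
                h = int(hashlib.md5(f"{node}:{v}".encode()).hexdigest(), 16)
                self.ring.append((h, node))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    def node_for(self, key: str) -> str:
        """Return the node owning this key: the first ring point at or
        after the key's hash, wrapping around at the end of the ring."""
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        i = bisect.bisect(self.keys, h) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["server-a", "server-b", "server-c"])
print(ring.node_for("photos/cat.jpg"))  # deterministic placement for this key
```

Because only the ring points belonging to a departed server change owners, a node failure remaps roughly 1/N of the keys rather than reshuffling everything, which is what makes schemes like this attractive at web scale.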

Learning Objectives

  • Understand technology directions for large scale storage deployments
  • Be able to compare technologies
  • Learn from big internet companies about their storage choices and approaches
  • Identify market solutions and align them with various use cases



Next Generation Data Centers: Hyperconverged Architectures Impact On Storage

Mark OConnell, Distinguished Engineer, EMC

Abstract

A modern data center typically contains a number of specialized storage systems which provide centralized storage for a large collection of data center applications. These specialized systems were designed and implemented as a solution to the problems of scalable storage, 24x7 data access, centralized data protection, centralized disaster protection strategies, and more. While these issues remain in the data center environment, new applications, new workload profiles, and the changing economics of computing have introduced new demands on the storage system which drive towards new architectures, and ultimately towards a hyperconverged architecture. After reviewing what a hyperconverged architecture is and the building blocks in use in such architectures, there will be some predictions for the future of such architectures.

Learning Objectives

  • What is a hyperconverged architecture
  • How hyperconverged architectures differ from traditional architectures
  • What technologies are being used to build hyperconverged architectures
  • What workloads are appropriate for hyperconverged architectures

Back to Top


Untangled: Improve Efficiency with Modern Cable Choices

Dennis Martin, President, Demartek

Abstract

As storage systems become denser and data rates increase, designers and end-users are faced with a dizzying number of choices when it comes to selecting the best data cables to use between controllers and storage shelves. In this session, Dennis Martin, Demartek President, explains the difference between passive and active data cables and the benefits available in terms of airflow, distance and power consumption. In addition, active optical cables (AOC) and connectors such as CX4, QSFP+ and others will be explained. We will discuss 40Gb Ethernet, 100Gb InfiniBand, 12Gb SAS, 16Gb Fibre Channel and futures for these and other types of interfaces used for storage systems. We will also discuss future connectors such as SFP28 and QSFP28.

Learning Objectives

  • Learn when to use active data cables
  • Learn about active optical cables
  • Learn how to clean up the back-of-the-rack clutter

Back to Top

ETC

Open Source is Changing Entire Industries

Nithya Ruff, Director - Open Source Strategy, SanDisk

Abstract

Open source is transforming everything, especially how we collaborate across companies and communities to innovate and create. It provides a model of collaborative creation that can be used to create anything including art, music, better education, better hardware and better cities. Open source will be used to transform industries and the way we live more than any other technology.

SanDisk plays in all aspects of the data journey, from generation of data at the edge in smart phones and devices to the transportation of that data to the cloud or an enterprise datacenter. Innovation is driven by delivering the solutions needed to build high-performance, small-footprint, highly efficient, next generation data centers. As a vertically integrated company, from fabs to applications, SanDisk is helping drive what is needed in storage to handle the new cloud- and service-oriented datacenter.

Learning Objectives

  • Educate the audience members about SanDisk's efforts in the Open Source industry
  • Educate the audience on the vast opportunity that Open Source brings to companies
  • Help the audience better understand Open Source and the transformation within the industry

Back to Top

FILE SYSTEMS

SNIA Tutorial:
SMB Remote File Protocol (Including SMB 3.x)

Jose Barreto, Principal Program Manager, Microsoft

Abstract

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors including most major Operating Systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk will cover improvements in version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part of the talk covers the latest changes in the SMB 3.0 version, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per session (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.
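As a toy model of the dialect-negotiation step described above (this is not a wire-level SMB implementation, and the server's dialect table is illustrative), the client offers the dialect revisions it supports and the server selects the highest one it also supports:

```python
# Dialect revision codes from the SMB2/3 protocol family.
SERVER_DIALECTS = {0x0202: "SMB 2.0.2", 0x0210: "SMB 2.1", 0x0300: "SMB 3.0"}

def negotiate(client_offered, server_supported=SERVER_DIALECTS):
    """Pick the highest dialect revision both sides support."""
    common = [d for d in client_offered if d in server_supported]
    if not common:
        raise ValueError("no common SMB dialect")
    return max(common)

chosen = negotiate([0x0202, 0x0210, 0x0300])  # 0x0300: SMB 3.0 wins
```

This "highest common revision" rule is what lets an SMB 3.0 client fall back gracefully when talking to an older server.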

Learning Objectives

  • Understand the basic architecture of the SMB protocol
  • Enumerate the main capabilities introduced with SMB 2.0
  • Describe the main capabilities introduced with SMB 3.0

Back to Top

GREEN STORAGE

Storage Systems Can Now Get ENERGY STAR Labels and Why You Should Care

Dennis Martin, President, Demartek

Abstract

We all know about ENERGY STAR labels on refrigerators and other household appliances. In an effort to drive energy efficiency in data centers, the EPA has announced its ENERGY STAR Data Center Storage program, through which storage systems can now get ENERGY STAR labels. This program uses the taxonomies and test methods described in the SNIA Emerald Power Efficiency Measurement specification, which is part of the SNIA Green Storage Initiative. In this session, Dennis Martin, President of Demartek, the first SNIA Emerald Recognized Tester company, will discuss the similarities and differences in power supplies used in computers you build yourself and in data center storage equipment, 80PLUS ratings, and why it is more efficient to run your storage systems at 230v or 240v rather than 115v or 120v. Dennis will share his experiences running the EPA ENERGY STAR Data Center Storage tests for storage systems and why vendors want to get approved.
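One reason 230v operation is more efficient is simple Ohm's-law arithmetic: at the same power draw, doubling the voltage halves the current, and resistive losses fall with the square of the current. A sketch with hypothetical numbers:

```python
def line_loss_watts(power_w, voltage_v, wire_resistance_ohm):
    current_a = power_w / voltage_v              # I = P / V
    return current_a ** 2 * wire_resistance_ohm  # P_loss = I^2 * R

# Hypothetical 1 kW storage shelf behind 0.1 ohm of distribution wiring:
loss_115 = line_loss_watts(1000, 115, 0.1)  # higher current, more loss
loss_230 = line_loss_watts(1000, 230, 0.1)  # half the current, 1/4 the loss
```

Power supplies also tend to hit better points on their efficiency curves at higher input voltage, which compounds the wiring-loss advantage.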

Learning Objectives

  • Learn about power supply efficiencies
  • Learn about 80PLUS power supply ratings
  • Learn about running datacenter equipment at 230v vs. 115v
  • Learn about the SNIA Emerald Power Efficiency Measurement
  • Learn about the EPA ENERGY STAR Data Center Storage program

Back to Top

HARD DRIVES

The Perennial Hard Disk Drive: The Storage Industry Workhorse

Edward Grochowski, Consultant
Thomas Coughlin, President, Coughlin Associates

Abstract

Magnetic hard disk drives continue to function as the principal technology for data storage in the computer industry, and have now attained new importance based on cloud storage architectures. The advantages of HDDs arise from increased areal density (the density of stored bits on the disk media) as well as a lower price per Gbyte, and the two are related. HDD products have received numerous technological advances which have increased data density 500 million times since the technology's inception sixty years ago. Today, areal density is approaching 1 Terabit per square inch in new drives, and advanced technologies are now required to maintain this growth and competitiveness. This study will address such advances as HAMR, BPMR, shingled writing and 2D recording, and assess progress in their implementation. An HDD of the future that could encompass these new technologies will be proposed, along with its impact on the storage industry; competitive products using both Flash memories and STT-RAM will also be presented.
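The 500-million-fold density increase over sixty years quoted above implies a striking compound annual growth rate, which is a one-liner to check:

```python
# Growth factor G over y years implies a compound annual rate of G**(1/y) - 1.
growth_factor, years = 500e6, 60
cagr = growth_factor ** (1 / years) - 1  # roughly 0.40, i.e. ~40% per year
```

Sustaining anything close to that rate is exactly why the technologies named above (HAMR, BPMR, shingled and 2D recording) are needed.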

Learning Objectives

  • HDD Technology
  • HDD Future Design
  • HDD Competitive Technology

Back to Top


Leveraging Ubiquitous Sensors to Predict Failures and Discover Data Center Flaws

Teague Algie, Software Developer, Cleversafe

Abstract

Environmental sensors are increasingly included in devices such as CPUs, hard drives, SSDs, and other components. With so many sensors available in each server there is a plethora of unmined data that can, among other things, be used to predict faults, discover anomalies, find data center design flaws, and isolate air flow obstructions. In this presentation, we present what we learned from collecting and analyzing months of temperature sensor readings collected from hundreds of devices and thousands of drives operating within our data center. In the end, we used statistical analysis to determine what insights can be gleaned from this data as far as predicting impending failures or determining data center hotspots. Finally, we consider the cost-benefit trade offs of deploying "environmentally aware" components.

Learning Objectives

  • Which applications of aggregated sensor data work and which don't
  • Whether data from individual devices can stand-in for rack-level sensors
  • The effectiveness of device sensors in locating airflow issues and data center hotspots
  • How to eliminate the noise of variations in per-device workloads
  • How to leverage this data to push hardware to its limits without causing excessive failures
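The ideas above can be sketched with a crude stand-in for the statistical analysis the talk describes (the drive names, readings and threshold are all illustrative): a simple z-score over fleet-average temperatures surfaces drives that run hot, e.g. behind an airflow obstruction.

```python
from statistics import mean, stdev

def hotspot_drives(readings, z_threshold=2.0):
    """Flag drives whose average temperature sits well above the fleet mean."""
    avgs = {drive: mean(temps) for drive, temps in readings.items()}
    mu, sigma = mean(avgs.values()), stdev(avgs.values())
    return [d for d, a in avgs.items() if sigma and (a - mu) / sigma > z_threshold]

fleet = {
    "drive-01": [38, 38], "drive-02": [37, 37], "drive-03": [39, 39],
    "drive-04": [38, 38], "drive-05": [37, 37], "drive-06": [39, 39],
    "drive-07": [53, 53],  # runs hot: a candidate airflow obstruction
}
flagged = hotspot_drives(fleet)  # ['drive-07']
```

Real deployments would also control for per-device workload, which is one of the noise sources the session's objectives call out.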

Back to Top


Health Monitoring and Predictive Analytics to Lower the TCO in a Datacenter

Christian Madsen, Engineering Manager, Seagate Technology
Andrei Khurshudov, Senior Director, Seagate Technology

Abstract

The most important components of a datacenter are the storage devices, as they store both the system's data and the customer's data. Data loss is a concern even in the case of highly-redundant distributed storage systems such as GFS or Hadoop, as multiple failures of storage devices ultimately increase the risk of losing the user's data - a risk that can never be completely eliminated, only reduced to a very low probability.
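The residual-risk point can be made concrete with a toy calculation: assuming independent failures and no re-replication (a deliberately pessimistic simplification of systems like GFS or Hadoop, with illustrative rates), the chance of losing every copy of a block is small but never zero.

```python
def prob_all_replicas_lost(failure_prob, replicas):
    """Chance that every replica fails in the same window, assuming
    independent failures and no re-replication (pessimistic)."""
    return failure_prob ** replicas

p_loss = prob_all_replicas_lost(0.02, 3)  # 2% per-drive risk, 3 copies
```

Even at 8-in-a-million per block, a datacenter with billions of blocks expects losses, which is what motivates the predictive monitoring the talk describes.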

What if you could predict when something bad was about to happen to the storage devices in your data-center? What if you could find the root cause and prevent such an event from happening again or, at least, reduce its probability dramatically? And what if we could help you do it without being an expert in every aspect of storage device and storage system reliability?

In this presentation, we demonstrate how we rely on the newest software tools and advanced machine learning and analytics techniques to achieve such goals. We demonstrate how to monitor and manage tens of thousands of drives and storage systems in a data center - and do it with little impact on its operation. We offer a new way to visualize and study the datacenter storage environment and offer new predictive capabilities to help any datacenter manager reduce the time and cost spent on managing, maintaining, and troubleshooting storage.

Learning Objectives

  • How to efficiently manage thousands of drives
  • What is the state of predictive algorithms?
  • Data-center monitoring use cases

Back to Top


RAIDShield: Characterizing, Monitoring, and Proactively Protecting Against Disk Failures

Ao Ma, Principal Engineer, EMC

Abstract

Modern storage systems orchestrate a group of disks to achieve their performance and reliability goals. Even though such systems are designed to withstand the failure of individual disks, failure of multiple disks poses a unique set of challenges. We empirically investigate disk failure data from a large number of production systems, specifically focusing on the impact of disk failures on RAID storage systems. Our data covers about one million SATA disks from 6 disk models for periods up to 5 years. We show how observed disk failures weaken the protection provided by RAID. The count of reallocated sectors correlates strongly with impending failures.

With these findings we designed RAIDSHIELD, which consists of two components. First, we have built and evaluated an active defense mechanism that monitors the health of each disk and replaces those that are predicted to fail imminently. This proactive protection has been incorporated into our product and is observed to eliminate 88% of triple disk errors, which are 80% of all RAID failures. Second, we have designed and simulated a method of using the joint failure probability to quantify and predict how likely a RAID group is to face multiple simultaneous disk failures, which can identify disks that collectively represent a risk of failure even when no individual disk is flagged in isolation. We find in simulation that RAID-level analysis can effectively identify most vulnerable RAID-6 systems, improving the coverage to 98% of triple errors.
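The joint-failure idea above can be illustrated with a hedged sketch (this is not the RAIDSHIELD implementation, and the per-disk probabilities are made up for the example): given independent per-disk failure probabilities, the chance that several disks in a group fail within the same window is a Poisson-binomial tail.

```python
def prob_at_least_k_failures(per_disk_probs, k):
    """Poisson-binomial tail: probability that at least k disks fail in a
    window, given independent per-disk failure probabilities."""
    dist = [1.0]  # dist[j] = probability that exactly j disks have failed
    for p in per_disk_probs:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1 - p)  # this disk survives
            nxt[j + 1] += q * p    # this disk fails
        dist = nxt
    return sum(dist[k:])

# Hypothetical 8-disk RAID-6 group: two disks with elevated
# reallocated-sector counts, six healthy ones.
probs = [0.20, 0.15] + [0.01] * 6
triple_risk = prob_at_least_k_failures(probs, 3)  # RAID-6 loses data at 3
```

The point of group-level analysis is visible here: each disk individually may sit below a replacement threshold while the group's combined triple-failure risk is already worth acting on.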

Learning Objectives

  • Empirical investigation of hard disk failures in production systems
  • Proactive protection of single hard disks
  • Proactive protection of RAID storage systems
  • Deployment results of the proactive protection in production systems

Back to Top

HOT TOPICS

Bringing HyperScale Computing to the Enterprise: The Need for Enterprises to Overhaul Their IT Systems

Chirag Jog, VP of Engineering, MSys and Clogeny

Abstract

The nature of applications and data for the enterprise is changing. Enterprises have to:

  • Support web-scale IT by delivering the capabilities of large cloud service providers within an enterprise IT setting
  • Manage streams of large amounts of data coming in real time from customers, partners, suppliers, supply chains, web access, security, applications, and the universe of things
  • Deploy real-time transactional and analytical systems to make near-real-time recommendations
  • Store massive amounts of structured and unstructured data, archive data and, more importantly, make the data readily accessible for applications, analytics or compliance
  • At the same time, implement cost-effective solutions

The biggest names in “Web-Scale IT” and Web 2.0 such as Amazon, Facebook and Google have already achieved new storage efficiencies by designing standard hardware that is highly optimized for their very specific software workloads while reducing equipment costs. They have built new data storage systems with high-density “peta-scale” capacities, which their applications leverage.

New paradigms in software are being introduced to take full advantage of these next generation hardware solutions or hyperscale computing solutions like micro-services, container based deployments and cluster management tools like Apache Mesos - that re-imagine multiple hosts as a single virtual host.

Applications built using the new paradigms in software will exploit these new capabilities and extend the reach and range of new enterprise applications while reducing the cost of existing applications.

Learning Objectives

  • The nature of the next generation Enterprises applications, their data requirements
  • Potential increase in IT value coming from applications exploiting low-latency, hyperscale and cloud
  • Understand the various building blocks of hyperscale computing, including storage solutions like burst buffers, object storage, erasure coding, distributed file systems and databases, and software solutions like Containers, Apache Mesos and micro-services that are driving the new wave of “Web-Scale IT”

Back to Top

IDC ANALYSTS BRIEFINGS

The Flash-Based Array Market

Eric Burgener, Research Director - Storage, IDC

Abstract

The market for flash-based arrays, both All Flash Arrays (AFAs) and Hybrid Flash Arrays (HFAs), is growing rapidly. In this session, we’ll take a look at the drivers of flash-based array adoption in the enterprise, review deployment models, explain required features, and discuss the evolving competitive battleground for these systems in 2015 and beyond. Attendees should walk away with a better understanding of how to select the flash-based array that best meets their requirements.

Back to Top


Adoption and Trends in Object Based Storage

Amita Potnis, Research Manager, IDC

Abstract

The 3rd platform has transformed and accelerated the pace of business. Data growth due to the four pillars of the 3rd platform - social, mobile, Big Data, and cloud - is putting unprecedented pressures on storage infrastructure. The adoption of object-based storage will continue to rise in at-scale or hyperscale deployments as well as the enterprise. This presentation will discuss trends and use cases for object-based storage. Amita Potnis is a Research Manager within IDC's Storage practice and is responsible for producing impactful and timely research, specifically in the File and Object Based Solutions and Enterprise Disk Storage Systems trackers and Storage CIS.

Learning Objectives

  • Infrastructure for the 3rd Platform
  • Four Essential Components
  • Adoption of Object-based Storage
  • Object-based Storage use cases and trends

Back to Top


Data Protection 2015-2025: Fundamental Strategic Transformation

Phil Goodwin, Research Director, Storage Systems and Software, IDC

Abstract

Computer storage architecture is entering its third major epoch, driven by the needs of virtual computing. These changes require a fundamental transformation of data protection methods and strategies, but in ways that will make IT administrators' lives easier with better service level delivery. This session examines how IDC believes data protection will evolve over the next 10 years, identifies key technologies, and describes how IT organizations can plan appropriately.

Back to Top


Relating Information Governance to Storage Trends

Sean Pike, Program Director for Governance, Risk, and Compliance (GRC) and eDiscovery, IDC

Abstract

Current legal and regulatory climates have produced a dizzying array of standards and compliance objectives that dictate the way organizations store, access, retrieve, and destroy information. As a result, organizations are rethinking IT architecture and the alignment of IT resources to produce a unified organizational approach to data management. This session will highlight storage solutions capable of supporting the core corporate information governance mission and describe how certain technologies may actually drive change within the organization and promote healthy IT transformation.

Learning Objectives

  • The effect of long term storage choices on governance initiatives
  • How cloud infrastructure affects compliance
  • How legal discovery and investigation factor in to architecture

Back to Top

KEY NOTE AND FEATURED SPEAKERS

Where to Now? Considerations for Storage in the New Data Center

Camberley Bates, Managing Director and Sr. Analyst, The Evaluator Group

Abstract

In the last year we have accelerated, converged and deconstructed software from hardware, to name some of the changes being implemented in the data center. Where are we headed, and what should be considered given the new storage technologies? Where do flash, hyperconverged systems and big data fit, to name just a few? We will take a look at these changes and at important planning strategies to keep you ahead of the curve for 2015-16.

Back to Top


More with Less: Hardware and VM Orchestration with More Uptime, Less Futzing; More Performance, Less Hardware

Richard Kiene, Principal Engineer, Faithlife

Abstract

Faithlife was an early adopter of OpenStack in 2013, operating 150+ nodes, but an outage in November 2014 triggered a set of decisions that have led to improved uptime and performance with less hardware and lower management costs. Struggles with OpenStack had us looking for alternatives before the outage. One of those alternatives was Joyent’s SmartDataCenter, the open source, Mozilla-licensed free software that Joyent uses to run their public cloud. We’d put SmartDataCenter through some tests in the lab, but the outage forced us to ask an interesting question: would it be faster to recover the existing stack, or rebuild it on SmartDataCenter?

This session will detail what we learned from that experience, from building and operating our own private cloud across multiple data centers, and what we’d have done differently if we knew then what we know now.

Back to Top


Software-Defined Storage at Microsoft

Jose Barreto, Principal Program Manager, Microsoft

Abstract

The storage industry is going through strategic tectonic shifts. We will walk through Microsoft’s Software Defined Storage journey - how we got started, what our customers are telling us, where we are now, and how cloud cost and scale inflection points are shaping the future. We will explore how Microsoft is channeling learnings from hosting some of the largest data centers on the planet towards private cloud storage solutions for service providers and enterprises. We will also cover storage scenarios enabled by the next version of Windows Server around low cost storage on standard hardware, Storage Replica for disaster recovery, Storage QoS, cloud scale resiliency and availability enhancements.

Back to Top


Privacy vs Data Protection

Eric Hibbard, CTO Security and Privacy, Hitachi Data Systems

Abstract

After reviewing the diverging data protection legislation in the EU member states, the European Commission (EC) decided that this situation would impede the free flow of data within the EU zone. The EC response was to undertake an effort to "harmonize" the data protection regulations, and it started the process by proposing a new data protection framework. This proposal includes some significant changes, like defining a data breach to include data destruction, adding the right to be forgotten, adopting the U.S. practice of breach notifications, and many other new elements. Another major change is a shift from a directive to a regulation, which means the protections are the same for all 27 countries and include significant financial penalties for infractions. This tutorial explores the new EU data protection legislation and highlights the elements that could have significant impacts on data handling practices.

Learning Objectives

  • This tutorial will highlight the major changes to the previous data protection directive
  • Participants will understand the differences between a directive and a regulation
  • Participants will learn the nature of the Reforms as well as the specific proposed changes

Back to Top


New Approaches to Challenges Facing Enterprise ICT

Gideon Senderov, Director Advanced Storage Products, NEC

Abstract

Today’s approaches to managing and exploiting information and communication technology (ICT) are forcing new solutions that deal with the strategic issues of volume, distance and time. Join NEC to explore new approaches for managing more devices and greater distances, and for virtualized management of consolidated and cloud environments.

Approaches discussed will encompass software-defined infrastructure with policy automation and device abstraction through virtualization techniques. We will also discuss value creation through big data analysis, covering not only currently available data but also not-yet-surfaced data that can and will be collected from real-world sources such as phones, satellites, personal devices, medical equipment and point-of-sale data collection.

Join NEC to gain an understanding of how we are taking the journey from today to the future state of software defined storage, big data, and automation to allow you to enter new markets sooner and easier and increase the probability of success in existing and new ventures.

Back to Top


Case Study: How the Housing Authority of the Cherokee Nation Modernized Backup

Tonia Williams, CIO & IT Director, Housing Authority of the Cherokee Nation

Abstract

The Housing Authority of the Cherokee Nation (HACN) is dedicated to helping all Cherokee citizens with housing needs, overseeing $28.5 million in federal programs that aid Cherokee citizens with low-income rental housing, senior housing, rental assistance, college housing and more. HACN's non-federal New Home Construction program helps build affordable homes for Cherokee families. In 2012, when the Housing Authority became a separate entity from the Cherokee Nation, it was faced with the challenge of rebuilding its IT infrastructure from the ground up.

In this informative end-user case study, CIO and IT Director Tonia Williams will discuss how modernizing its backup strategy helped HACN develop an IT infrastructure that would support the current and future needs of the organization, while being mindful of the limited staff and resources available to the team.

As a result of attending this session, attendees will learn about the challenges an organization like HACN faced in attempting to rebuild its IT infrastructure from the ground up. Attendees will also learn why HACN identified a modernized data protection strategy as the starting point and backbone of the rebuild effort, and why organizations faced with a similar challenge should consider doing the same. Lastly, and perhaps most importantly, attendees will learn about the time and cost savings HACN realized as a result of its modernized backup strategy.

Learning Objectives

  • Challenges of rebuilding the organization's IT infrastructure
  • Why the organization identified backup and recovery as the project's starting point
  • Time and cost savings HACN realized as a result of its modernized backup strategy

Back to Top


The Storage Revolution

Andrea Nelson, Director of Marketing, Storage Group, Intel

Abstract

There is a storage revolution happening! From the first quantum leap in performance with solid state technology and efficiencies in capacity optimization, to more intelligent storage through virtualization and software-defined storage, the storage industry is drastically changing and optimizing new products and technologies to consume. How has Intel been helping? Through processor innovations, non-volatile memory (NVM) technologies, and network and fabric advances, Intel is able to accelerate the move to software-defined storage. Learn more about how the industry is moving to new ways to move and store data, and what Intel is doing with new innovations to support it.

Back to Top


Data End of Life in the Cloud: Why, When and How?

Fredrik Forslund, Director - Cloud and Data Center Erasure Solutions, Blancco

Abstract

Securing client data from day one to end-of-life is crucial for cloud service providers, data centers, system integrators and other stakeholders that are storing or migrating customer data. After a brief regulatory overview, this session discusses how to overcome challenges with identifying and measuring security in new virtual environments and across external and internal cloud providers throughout the data lifecycle. The session will also cover when and how secure data erasure can be carried out for diligent data housekeeping in active storage environments and at end-of-life, and present a case study of a cloud provider using automated data erasure to provide enhanced security for its clients.

Learning Objectives

  • What leading industry experts say about securing data in the cloud at its end point
  • The latest regulatory requirements for data erasure around the world
  • When and how data can be securely erased to meet compliance

Back to Top


Peering Into the e-Discovery Legal Looking Glass

Eric Hibbard, CTO Security and Privacy, Hitachi Data Systems, with Representatives of the American Bar Association

Abstract

Led by Eric Hibbard, this discussion is sponsored by the American Bar Association’s Electronic Discovery and Digital Evidence (EDDE) Committee of the Section on Science and Technology Law and consists of legal and technology experts as well as a member of the judiciary. The goal is to continue the cross-pollination between the legal community and the data storage industry by exploring a range of topics that include electronic discovery, current and emerging forms of digital evidence, and legal implications of emerging technology (IoT, cloud computing, software defined solutions, etc.). DSI attendees will have an opportunity to participate in the discussion by offering comments and/or questions.

Back to Top

NETWORKING

Next Generation Storage Networking for Next Generation Data Centers

Dennis Martin, President, Demartek

Abstract

With 10GigE gaining popularity in data centers and storage technologies such as 16Gb Fibre Channel making solid gains, it's time to rethink your storage and network infrastructures. Learn about futures for Ethernet such as 40GbE, 100GbE and the new 25Gb Ethernet. Get an update on 32Gb Fibre Channel, 12Gb SAS and other storage networking technologies. We will touch on some technologies such as NVMe, USB 3.1 and Thunderbolt 2 that may find their way into datacenters later in 2015. We will also discuss cabling and connectors and which cables NOT to buy for your next datacenter build out.

Learning Objectives

  • What is the future of Fibre Channel and Ethernet Storage?
  • What I/O bandwidth capabilities are available with the new crop of servers?
  • Share some performance data from the Demartek lab

Back to Top


Next Generation Low Latency Storage Area Networks

Rupin Mohan, Sr. Manager Strategy and Planning, Hewlett-Packard
Craig Carlson, Sr. Technologist - CTO Office, QLogic Corporation

Abstract

In this session, we will present the current state (FC, FCoE and iSCSI) and future state (iSER, RDMA, NVMe, and more) of next generation low latency Storage Area Networks (SANs), and discuss what the future of SAN protocols will look like for block, file and object storage.

Learning Objectives

  • Low latency SANs
  • Storage Protocols

Back to Top


SNIA Tutorial:
The Continued Evolution of Fibre Channel

Mark Jones, President, Fibre Channel Industry Association

Abstract

Fibre Channel has been the ubiquitous connection of choice for connecting storage within the datacenter for over fifteen years. The start of the sixth generation is being celebrated this year by introducing a staggering leap in performance and new features. We will discuss why Fibre Channel holds the enduring popularity it has, take an in-depth look at the new Gen 6 features, and consider what the future holds. We will also discuss how Fibre Channel fits in with key datacenter initiatives such as virtualization and the pervasive adoption of SSDs.

Learning Objectives

  • Attendees will learn the new features of Gen 6 Fibre Channel
  • Provide scenarios of how Fibre Channel is deployed in the datacenter
  • Discuss how the right network architecture must be implemented to prevent IO from becoming a bottleneck when deploying Flash for tier-1 storage

Back to Top


SNIA Tutorial:
PCIe Shared IO

Jeff Dodson, Hardware Architect, Avago Technologies

Abstract

PCI Express (PCIe) is the fundamental connection between a CPU and its peripheral devices. This session looks at how multi-function PCIe endpoints can be shared, how PCIe can be hardened against errors and surprise changes, and how PCIe fits into high availability designs.

Learning Objectives

  • How to share multi function PCIe endpoint
  • PCIe hardening against errors and surprise changes
  • High availability and PCIe

Back to Top


SNIA Tutorial:
NFS, pNFS and FedFS: A Modern Datacenter Network Protocol

Alex McDonald, NetApp CTO Office, NetApp

Abstract

The NFSv4 protocol undergoes a repeated lifecycle of definition and implementation. This presentation will be based on years of experience implementing server-side NFS solutions up to NFSv4.1.

Learning Objectives

  • The impact of NFS on the virtualized data center
  • What to expect from NFSv4's major feature enhancements, including security, namespace, sessions and layouts

Back to Top


SNIA Tutorial:
SAS: The Fabric for Storage Solutions

Marty Czekalski, President, SCSI Trade Association
Greg McSorley, Technical Business Development Manager, Amphenol, STA Vice President

Abstract

SAS is the backbone of nearly every enterprise storage deployment. It is rapidly evolving, adding new features such as Zoned Block Commands and other enhanced capabilities.

Learning Objectives

  • Understand the basics of SAS architecture and deployment, including its compatibility with SATA, that makes SAS the best device level interface for storage devices.
  • Hear the latest updates on the market adoption of 12Gb/s SAS and why it is significant. See high performance use case examples in a real-world environment such as distributed databases.
  • See examples of how SAS is a potent connectivity solution, especially when coupled with SAS switching solutions.

Back to Top

OBJECT STORAGE

SNIA Tutorial:
Object Storage - Key Role in Future of Big Data

Anil Vasudeva, President and Chief Analyst, IMEX Research

Abstract

In the era of Big Data, managing disparate storage solutions for structured data (databases, log files, text, value-based data) and unstructured data (audio, documents, emails, images, video) has become challenging for IT organizations. A key to managing big data is Object Storage's rich metadata system, which allows you to store unstructured, semi-structured, and structured data on any combination of standard x86 server hardware, scales to petabytes and billions of files at the lowest ongoing TCO for a storage system, and yet is instantly and easily queryable, with full search functionality, to meet your online business demands.

Object Storage allows users to easily store, search and retrieve data across the Internet. These are object-based storage's strengths: automating and streamlining data storage in cloud environments while storing unstructured, semi-structured, and structured data on the same storage system.
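The metadata-driven lookup described above can be sketched in a few lines; the store layout, field names, and query helper below are purely illustrative and not any vendor's API:

```python
# Toy sketch of why rich metadata makes an object store queryable: each object
# carries a free-form metadata dict, and searches filter on metadata fields
# rather than on a filesystem path. All names here are hypothetical examples.
from typing import Any

store: dict[str, dict[str, Any]] = {}  # object_id -> {"data": ..., "meta": ...}

def put(object_id: str, data: bytes, **metadata: Any) -> None:
    """Store an object together with arbitrary key/value metadata."""
    store[object_id] = {"data": data, "meta": metadata}

def search(**criteria: Any) -> list[str]:
    """Return ids of objects whose metadata matches every criterion."""
    return [oid for oid, obj in store.items()
            if all(obj["meta"].get(k) == v for k, v in criteria.items())]

put("img-001", b"...", content_type="image/png", project="apollo")
put("log-001", b"...", content_type="text/plain", project="apollo")
assert search(project="apollo", content_type="image/png") == ["img-001"]
```

A real object store indexes this metadata at scale, but the query model is the same: filter on attributes, not paths.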

Learning Objectives

  • Learn how Object storage has started to play a fundamental role in adding metadata for handling predictive analytics in massive amounts of structured and unstructured big data
  • Learn how the growth of unstructured data causes issues that are being addressed by Object Storage in a score of online business, media and scientific applications

Back to Top


Overcoming the Challenges of Storage Scale and Cost

Rob McCammon, Director of Product Management, Cleversafe

Abstract

In the next few years, an organization's unstructured data will continue to grow at an astronomical rate, about 10x every five years, placing increased strain on IT teams and organizations. On average, organizations storing a petabyte of unstructured data today can expect 10 petabytes in five years or less. And organizations with 10 petabytes today can expect 100. Organizations need a next generation storage platform, which incorporates new technical approaches including object storage and erasure coding, to meet these requirements. Through real world examples, this session will explore current data storage challenges, the benefits of object storage and how to deploy an object storage system.
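As a rough illustration of why erasure coding matters at this scale, the sketch below compares the raw capacity consumed by 3x replication against a hypothetical 10+6 erasure-coded layout; the specific widths are examples, not the vendor's actual configuration:

```python
# Capacity-overhead comparison: simple replication vs. a k+m erasure code.
# A k+m code splits data into k slices plus m parity slices and survives the
# loss of any m slices, at a raw cost of (k+m)/k per usable byte.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte with simple replication."""
    return float(copies)

def erasure_overhead(data_slices: int, parity_slices: int) -> float:
    """Raw bytes stored per usable byte with a k+m erasure code."""
    return (data_slices + parity_slices) / data_slices

if __name__ == "__main__":
    usable_pb = 10                         # petabytes of user data
    rep = replication_overhead(3)          # 3.0x raw capacity
    ec = erasure_overhead(10, 6)           # 1.6x raw capacity
    print(f"3x replication: {usable_pb * rep:.1f} PB raw")
    print(f"10+6 erasure:   {usable_pb * ec:.1f} PB raw")
    print(f"raw capacity saved: {1 - ec / rep:.0%}")
```

At 10 PB usable, the erasure-coded layout needs 16 PB raw versus 30 PB for triple replication, while still tolerating the loss of any six slices.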

Learning Objectives

  • How you and your organization can scale your current IT systems to manage explosive data growth
  • How new technical approaches including object storage and erasure coding are revolutionizing petabyte to exabyte scale storage
  • How current organizations are leveraging these approaches to save more than 80 percent of storage costs
  • What to consider when choosing a petabyte to exabyte scale storage solution
  • How to implement and deploy an object storage system

Back to Top

PERFORMANCE

Lies, Damn Lies, and Performance Metrics

Barry Cooks, Vice President of Engineering, Virtual Instruments

Abstract

IT pros spend an inordinate amount of time (often under duress) trying to understand the metrics coming from their systems; unfortunately, many of these metrics reflect utilization numbers or coarse-grained averages at best and don't truly reflect the performance being observed by the end users. For too long, storage teams have focused on methods such as brief snapshots of server activity that only demonstrate the average operations and none of the peaks and valleys in between that have the potential to severely damage the ultimate outcome. This presentation will cover how performance management has evolved over time, highlighting the common areas where storage administrators are still faltering and presenting truly valuable methods that not only capture legitimate performance metrics, but also teach the IT teams how to put those vast amounts of data to use.

Learning Objectives

  • What's wrong with the most common storage performance metrics and how these strategies can negatively impact a business
  • How storage systems can be optimized with newer methods that present more accurate views of performance beyond simply utilization
  • How storage administrators can extract the data from their data centers that will actually benefit their company's operations and end user offerings

Back to Top


The Buzz About Flash Storage and Why Performance Testing is Critical: How to Implement SNIA Flash Storage Testing Methodologies for Real-World Environments

Peter Murray, Technical Evangelist, Load DynamiX

Abstract

Measuring the performance of flash storage arrays involves more than just measuring speeds and feeds using common I/O tools that were designed for measuring single disks. Many of today's flash-based storage arrays implement sophisticated compression, deduplication and pattern reduction processing to minimize the amount of data written to flash memory in order to reduce storage capacity requirements and extend the life of flash memory. Such technologies can have a significant impact on performance.

Because more powerful and accurate testing is needed to effectively measure the performance and capacity of flash-based storage, SNIA has created a technical working group to establish a testing methodology for solid state arrays. Introducing complex data patterns that effectively stress data reduction technologies is an important part of the technical working group's work. Measuring performance without preconditioning the arrays, without data reduction enabled, or by using tools that offer a limited set of data patterns falsely overstates modern flash array performance.
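A minimal sketch of the data-pattern effect: here zlib stands in for an array's inline compression engine, and shows that all-zero blocks reduce almost completely while random blocks barely shrink, which is why zero-fill test tools overstate the performance of arrays with data reduction enabled. The block size and library are illustrative choices, not part of the SNIA methodology:

```python
# Why test-data patterns matter when benchmarking data-reducing flash arrays:
# compare how much a compressible block shrinks versus an incompressible one.
import os
import zlib

BLOCK = 64 * 1024  # 64 KiB test block (arbitrary example size)

def compression_ratio(data: bytes) -> float:
    """Original size divided by compressed size (higher = more reducible)."""
    return len(data) / len(zlib.compress(data))

zero_block = bytes(BLOCK)          # highly compressible (all zeros)
random_block = os.urandom(BLOCK)   # effectively incompressible

assert compression_ratio(zero_block) > 100   # collapses to a handful of bytes
assert compression_ratio(random_block) < 1.1 # stays close to original size
```

An array writing the zero blocks touches a tiny fraction of the flash that the random blocks require, so the two workloads produce very different IOPS and endurance numbers.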

Learning Objectives

  • How to implement a more robust testing methodology into a real-world flash storage environment using workload modeling and performance profiling
  • What pattern-based performance measurement involves, including methods for stressing compression, deduplication and pattern reduction processing in flash storage arrays based on real-world application workload representations
  • How an effective performance measurement and validation solution, run at scale, can deliver an accurate view of the performance of flash-based storage arrays to help vendors scale systems and customers purchase more effectively.

Back to Top


SNIA Tutorial:
Utilizing VDBench to Perform IDC AFA Testing

Michael Ault, Oracle FlashSystem Consulting Manager, IBM

Abstract

IDC has released a document on testing all-flash arrays (AFAs) to provide a common framework for judging AFAs from various manufacturers. This paper provides procedures, scripts, and examples to perform the IDC test framework using the free tool VDBench on AFAs, providing a common set of results for comparing multiple AFAs' suitability for cloud or other network-based storage.
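For readers unfamiliar with VDBench, its parameter files follow a storage-definition / workload-definition / run-definition pattern. The fragment below is a generic 70/30 random 4k sketch with example values, not one of the IDC-prescribed workloads:

```text
* Hypothetical vdbench parameter file (device path and values are examples).
* sd = storage definition, wd = workload definition, rd = run definition.
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=600,interval=5,threads=32
```

Running the same parameter file against each candidate AFA is what yields the directly comparable result set the framework is after.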

Learning Objectives

  • Understand the requirements of IDC testing
  • Provide guidelines and scripts for use with VDBench for IDC tests
  • Demonstrate a framework for evaluating multiple AFAs using IDC guidelines

Back to Top


vSphere Performance

Johnathan Paul, Senior R&D Expert, Siemens Medical Solutions

Abstract

This is an in-depth tutorial on vSphere 5.x performance analysis which includes a detailed description of the Core Four, the challenges of doing end-to-end performance analysis, and many use cases and Best Practices for deploying and understanding virtualization.

Learning Objectives

  • Virtualization Basics
  • Performance Analysis
  • Best Practices

Back to Top


Flash Cache in the Data Center-The Future is Now

Wayne Lam, Chairman/CEO, Cirrus Data Solutions

Abstract

Adding cache to enhance storage performance is not new, but typically it is a months-long project from analyzing performance issues to deploying the solution, with lots of downtime required for deployment. Here is a really cool way to deploy Flash-based cache at the SAN level that requires no changes and zero downtime: true Plug-and-Cache!

Learning Objectives

  • A quick tutorial on Fibre Channel port spoofing: the bad and the good
  • The Transparent Datapath Intercept using FC Double-Spoofing
  • The zero-downtime deployment of any FC SAN appliance: like Flash Cache

Back to Top

SECURITY

Securing Your Data for the Journey to the Clouds

Liwei Ren, Senior Software Architect, Trend Micro

Abstract

In the era of cloud computing, data security is one of the chief concerns in adopting cloud applications. In this talk, we will investigate a few general data security issues caused by cloud platforms: (a) data security and privacy for data residing in the cloud when using cloud SaaS or cloud apps; (b) data leaks to personal cloud apps directly from enterprise networks; (c) data leaks to personal cloud apps indirectly via BYOD devices.

Multiple technologies exist for solving these data security issues: CASB, Cloud Encryption Gateway, Cloud DLP, and even traditional DLP. These products and services are ad hoc in nature. In the long term, general cloud security technologies such as FHE (fully homomorphic encryption) or MPC (multi-party computation) should be implemented once they become practical.

Learning Objectives

  • Major cloud security problems.
  • Practical technologies for cloud data security problems.
  • How a few technologies work together to provide a total solution for cloud data security.
  • Cloud data security technologies in the future.

Back to Top


SNIA Tutorial:
Implementing Stored-Data Encryption

Michael Willett, Storage Security Strategist, Samsung

Abstract

Data security is top of mind for most businesses trying to respond to the constant barrage of news highlighting data theft, security breaches, and the resulting punitive costs. Combined with litigation risks, compliance issues and pending legislation, companies face a myriad of technologies and products that all claim to protect data-at-rest on storage devices. What is the right approach to encrypting stored data? The Trusted Computing Group, with the active participation of the drive industry, has standardized on the technology for self-encrypting drives (SED): the encryption is implemented directly in the drive hardware and electronics. Mature SED products are now available from all the major drive companies, both HDD (rotating media) and SSD (solid state), for both laptops and the data center. SEDs provide a low-cost, transparent, performance-optimized solution for stored-data encryption. SEDs do not protect data in transit, upstream of the storage system. For overall data protection, a layered encryption approach is advised. Sensitive data (e.g., as identified by specific regulations: HIPAA, PCI DSS) may require encryption outside and upstream from storage, such as in selected applications or associated with database manipulations.

Learning Objectives

  • The mechanics of SEDs, as well as application and database-level encryption
  • The pros and cons of each encryption subsystem
  • The Overall Design of a Layered Encryption Approach

Back to Top


Hackers, Attack Anatomy & Security Trends

Ted Harrington, Executive Partner, Independent Security Evaluators

Abstract

Attacks against enterprises and their technology vendors are facilitated by the current rapid adoption of embedded systems, cloud solutions, and web based platforms. These attacks often undermine the very monetization, scalability and user experience goals for which these systems were designed and deployed. As malicious hackers advance their techniques at a staggering pace, often rendering current defense tactics obsolete, so too must security practitioners obsess over deploying progressive techniques. Presented by the elite organization of white hat hackers most widely known for being first to break the iPhone, this session will analyze the anatomies of real world attacks against high profile systems, ranging from the well known Target breach, to Texas Instruments RFID, to Apple products, and more. It will extract lessons from these attack anatomies to provide a framework to account for these modern attackers, articulate industry context, and supply attendees with key takeaways, including immediately actionable guidance.

Learning Objectives

  • Discipline Division: Security Separated from Functionality
  • Perspective Matters: White Box vs. Black Box
  • Defense Priorities: Secure Assets, not Just Perimeters
  • Timing Security: Build It In, Not Bolt It On
  • Procedural Duration: Security as an Ongoing Process

Back to Top


Interoperable Key Management for Storage

Tony Cox, Director Business Development, Strategy & Alliances, Cryptsoft

Abstract

The OASIS Key Management Interoperability Protocol (KMIP) has been around for more than five years now and has been broadly adopted by industry. Practical experience from leading interoperability events and plugfests and working with the SNIA Storage Security Industry Forum on implementing the KMIP conformance testing program form the basis of this presentation.

Also covered is an in-depth analysis of how various storage vendors take advantage of KMIP to deliver on the interoperable key management promise for storage products, including practical examples and recommendations to achieve higher levels of market awareness.

Learning Objectives

  • In-depth knowledge of the core of the OASIS KMIP
  • Awareness of requirements for practical interoperability
  • Awareness of the various SNIA SSIF conformance testing options

Back to Top


Encrypted Storage: Self-Encryption versus Software Solutions

Michael Willett, Storage Security Strategist, Samsung

Abstract

The Trusted Computing Group has defined and standardized Self-Encrypting Storage/Drives (SED) and the whole drive industry, including HDD and SSD, has implemented those standards. SEDs are steadily replacing non-SEDs in customer storage-requisition cycles. Such customers are weighing the relative merits of hardware-based self-encryption versus software-based solutions. Practical experience and the pro/con of making the transition to SEDs will be shared in this session.

Learning Objectives

  • Review compliance requirements for stored-data encryption
  • Understand the concept of self-encryption
  • Compare hardware versus software based encryption
  • Examine practical experience of implementing stored data encryption

Back to Top


A Customer’s Point of View on Self-Encrypting Disk Drives and Software-Based Disk Encryption

Douglas Spindler, IT Instructor, City College San Francisco

Abstract

Last year I attended my first SNIA conference. In one of the sessions I learned about a technology I had never heard of before: Self-Encrypting Disks. Having worked for years in the data center of a large medical center, where we have petabytes of disk storage and use disk encryption, I left the conference puzzled as to why none of the vendors we purchase products from had ever mentioned SEDs. Since last year’s SNIA conference I have asked over a hundred disk vendors and IT professionals what they know about SEDs and software-based disk encryption. In this session I will share the results of my findings.

Learning Objectives

  • A consumer’s understanding of the technology of Self Encrypting Disks
  • A consumer’s comparison of software based disk encryption with Self Encrypting Disks
  • SED packaging - Consumers don't even know what they are buying
  • Disk industry, Please do a better job of informing consumers about your products

Back to Top


SNIA Tutorial:
Storage Security Best Practices

Eric Hibbard, CTO Security and Privacy, Hitachi Data Systems

Abstract

Many organizations face the challenge of implementing protection and data security measures to meet a wide range of requirements, including statutory and regulatory compliance. Often the security associated with storage systems and infrastructure has been missed because of limited familiarity with the storage security technologies and/or a limited understanding of the inherent risks to storage ecosystems. The net result of this situation is that digital assets are needlessly placed at risk of compromise due to data breaches, intentional corruption, being held hostage, or other malicious events.

Both SNIA and ISO/IEC are combating this situation by providing materials that can be used to address storage security issues. In the case of ISO/IEC, the materials are contained in a new International Standard that seeks to provide detailed technical guidance on the protection (security) of information where it is stored and to the security of the information being transferred across the communication links; it includes the security of devices and media, the security of management activities related to the devices and media, the security of applications and services, and security relevant to end-users.

This session introduces the major storage security issues, outlines the guidance, and introduces the new draft standard.

Learning Objectives

  • General introduction to storage security issues
  • Identifies key elements of the storage security guidance
  • Provides an overview of the ISO/IEC 27040 standard

Back to Top

SOFTWARE DEFINED STORAGE

Software Defined Storage at the Speed of Flash

Carlos Carrero, Technical Product Manager, Symantec
Rajagopal Vaideeswaran, Principal Software Engineer, Symantec

Abstract

The quest for the modern data center is to increase performance and flexibility while reducing dependency on Tier 1 storage to optimize cost. However, addressing performance and cost-benefit requirements together is a bigger technology challenge. This session will demonstrate how Software Defined Storage can outperform expensive and proprietary all-flash array solutions with 2U of commodity hardware. During the session, we will illustrate how Software Defined Storage can unlock the capabilities of internal storage to provide a high performance, always on application infrastructure and management solution. A specific OLTP configuration that achieved over 1 million transactions per minute with lower latencies than high cost arrays will be covered in detail. With this being software defined storage in a truly converged infrastructure, other workload optimizations utilizing the same commodity hardware will also be discussed.

Learning Objectives

  • Understand how Software Defined Storage can unlock in-server flash technologies
  • How to run highly available applications without shared storage need
  • Create a high performance infrastructure using commodity HW at a fraction of the cost

Back to Top


Using Software-Defined Storage to Provide a Complete BC, DR and Data Protection

Ibrahim Rahmani, Director of Product Marketing, DataCore

Abstract

Surveys have consistently shown that companies are unprepared for events that will cause outages in their infrastructure. Whether it is a virus outbreak, a data center outage or a regional disaster, companies need to put systems and procedures in place to deal with these emergencies. Software-defined Storage has shown great promise in creating a multi-layered solution that will help companies ensure their data is available and accessible.

Learning Objectives

  • Understand the different events that will affect your data
  • Learn about the options to protect your data from outages
  • Put it together into a complete solution

Back to Top


Deploying VDI with Software-Defined Storage and Hyper-convergence

Michael Letschin, Director of Product Management Solutions, Nexenta

Abstract

Discuss how VDI deployments have typically been built with traditional storage arrays and isolated compute solutions, and the various alternative solutions available today. The deployment of hyper-converged solutions combines compute and storage, resulting in a hardware vendor lock-in. Software-Defined Storage combined with compute nodes provides hardware agnostic VDI architectures. Join us to explore options resulting in fully integrated hyper-convergence solutions allowing VDI to be defined by the business and not by a hardware vendor.

Learning Objectives

  • How to deploy VDI in a hyper-converged implementation
  • How to expand storage for VDI outside of your existing infrastructure and into the data center
  • Definition of hyper-convergence

Back to Top


Take a Technical Deep Dive Into Hyper-convergence and Learn How It Can Simplify IT and Support Various Virtual Workloads

Kiran Sreenivasamurthy, Director of Product Management, Maxta

Abstract

Traditional hyper-converged solutions are sold as appliances while traditional software-defined storage has separated compute from storage. Maxta offers a new paradigm with a scale-out storage solution that is both hyper-converged and software-defined, offering customers a highly-scalable infrastructure that is also very economical to deploy on commodity hardware. Maxta MaxDeploy gives customers the flexibility to choose any x86 server, any storage hardware, and any hypervisor for a scale-out, hyper-converged solution. This maximizes cost savings by reducing IT management effort and reducing or eliminating the need for traditional SAN or NAS.

Learning Objectives

  • Learn how a software-defined model lowers costs compared to a traditional hyper-converged solution.
  • Review VDI benchmark results showing the Maxta solution outperforms two other popular hyper-converged solutions running on similar hardware.
  • Understand how Maxta MaxDeploy supports any hypervisor on any combination of storage devices.
  • Discuss pros and cons of a pure software-defined model vs. an appliance.

Back to Top

SOLID STATE STORAGE

Accelerating Real-Time, Big Data Applications

Bob Hansen, VP Systems Architecture, Apeiron Data Systems

Abstract

The scale out, large server count cluster applications that have long dominated the mega data centers are going mainstream. Driven by open source software and using white box hardware, “big, fast data” applications such as NoSQL and Hadoop are rejecting traditional HDD based storage solutions in favor of very high performance DRAM and flash based data stores. This presentation discusses how the current storage architecture for real time big, fast data applications will be forced to evolve over time. The talk will conclude with a technology overview of a new, highly scalable, flash based storage architecture that delivers double the performance of in-server SSDs while providing all of the benefits of external, virtualized, shared storage.

Learning Objectives

  • Understand storage requirements for scale-out, big, fast data applications
  • Understand how in-box DRAM and flash address these requirements today
  • Understand how these storage requirements and solutions will change over time
  • Be introduced to a new, world class, flash based, external storage architecture

Back to Top


The Future of Flash in the Data Center: A Crash Course on Flash-Based Architectures and Next-Gen Design

Pending Speaker Confirmation

Abstract

To make Flash pervasive across the data center, businesses have to start thinking beyond implementing traditional storage devices and media ad hoc. Infrastructure architects need to recognize that deploying Flash supports organizational goals beyond just accelerating applications in the data center, by minimizing the total cost of central storage and forcing an evaluation of performance in the data center. HGST will show how looking at an entire data center in terms of both performance and capacity, and subsequently optimizing Flash and associated storage devices to operate as a performance-optimized architecture, will better support business growth goals moving forward and truly make real-time decision making around operational data a reality.

Learning Objectives

  • Show how Flash begins a process of optimizing data center infrastructure as a whole to be geared for performance
  • Demonstrate how data centers will need to be optimized for both performance and capacity across systems, and not just for single applications
  • Explain how performance-optimized architecture makes real-time decision making about operational data a reality

Back to Top


Creating Higher Performance Solid State Storage with NVMe

J Metz, R&D Engineer, Cisco

Abstract

Non-Volatile Memory Express (NVMe) is a new standard aimed at getting more performance out of solid state storage (flash memory). It uses the PCIe bus, which provides much higher throughput than disk interfaces such as SAS or SATA. It also has a strong ecosystem offering extensive support and a sound development base. Designers can achieve high throughput in distributed systems by transporting NVMe over Ethernet fabrics, including RDMA-oriented RoCE and iWARP as well as traditional storage fabrics like FCoE.

Learning Objectives

  • Understand what is NVMe
  • Understand the uses and applications of NVMe
  • Understand the future of NVMe development

Back to Top


The SNIA NVM Programming TWG and the NVM Revolution

Walt Hubis, Owner, Hubis Technical Associates

Abstract

This presentation provides an introduction to the SNIA Non-Volatile Memory (NVM) Technical Working Group and the current activities leading to software architectures and methodologies for new NVM technologies. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.

Learning Objectives

  • Understand the impact of NVM on data storage.
  • Gain familiarity with the scope and value of the SNIA NVM Programming Model in software development.
  • Explore current software methodologies for dealing with NVM.
  • Learn where current NVM technologies are headed and how software will deal with these changes.

Back to Top


SNIA Tutorial:
Solid-State Deployments - Recommendations for POC's

Russ Fellows, Sr. Partner, Evaluator Group

Abstract

Solid-state storage systems are now being deployed for tier-1 applications, particularly in virtual application environments.

A pre-selection proof of concept or on-site vendor comparison can be a valuable tool as part of the selection process. However, it is critical to all parties involved that the POC is setup and run correctly and efficiently.

This session will review our recommendations from recently running a POC, along with reviews from several POC validations for solid-state storage in virtual application environments. We talk specifically about the tools and technologies required to perform a meaningful and valid POC efficiently.

Learning Objectives

  • POC Process - How to avoid alienating your CIO or your vendors
  • POC Technologies - Choosing the tools, technologies and tests to utilize
  • POC Evaluation - Communicating valid and meaningful conclusions from your POC

Back to Top


Considerations to Accurately Measure Solid State Storage Systems

Leah Schoeb, Storage Solutions and Performance Manager, Intel

Abstract

Solid state storage arrays have been a way to keep up with performance demands of today's critical applications coupled with capacity optimization technologies like deduplication and compression to greatly increase the efficient use of storage. Advances in deduplication and compression algorithms have created significant space and cost savings. Data reduction technologies have become a mandatory part of the modern storage infrastructure and therefore are an essential part of new solid state storage array designs. This means that when measuring performance with these new arrays with data reduction technologies built-in it is required to utilize tools and methodologies to accurately model, at the very least, the deduplication and compression characteristics of the data sets, access patterns, metadata characteristics, and data streams used in these measurements.
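One way to model a dataset's deduplication characteristics for such a measurement is to control how often blocks repeat and then verify the achieved ratio by hashing. The block size and target ratio below are arbitrary examples, not values from the S4 specification:

```python
# Sketch of building a benchmark data stream with a known dedup ratio: only
# 1 in every UNIQUE_EVERY 4 KiB blocks is unique, and the achieved ratio is
# checked by counting distinct block hashes.
import hashlib
import os

BLOCK = 4096
UNIQUE_EVERY = 4      # target ~4:1 dedup ratio (illustrative)
TOTAL_BLOCKS = 400

# Pool of unique random blocks, each reused UNIQUE_EVERY times in the stream.
unique_pool = [os.urandom(BLOCK) for _ in range(TOTAL_BLOCKS // UNIQUE_EVERY)]
stream = [unique_pool[i // UNIQUE_EVERY] for i in range(TOTAL_BLOCKS)]

distinct = {hashlib.sha256(b).digest() for b in stream}
dedup_ratio = len(stream) / len(distinct)
print(f"dedup ratio: {dedup_ratio:.1f}:1")  # prints "dedup ratio: 4.0:1"
```

A workload generator built this way stresses the array's dedup engine realistically, instead of feeding it either all-unique or all-identical data.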

The Solid State Storage System (S4) TWG was created to address the unique performance behavior of solid state storage systems. The TWG is creating a specification that provides guidance for accurately measuring the performance of enterprise solid state systems, as opposed to devices. The specification is vendor agnostic and will support all major solid state storage system technologies.

Learning Objectives

  • Address the specific needs for solid state storage systems
  • Understand the importance of data content and data streams
  • How system-wide data management affects performance

Back to Top


Application Acceleration Using Flash Memory

Saeed Raja, Director of Product Marketing and Management, SanDisk

Abstract

NoSQL databases, which provide a mechanism for storing and retrieving data modeled in forms other than the tabular relations of relational databases, have become increasingly popular amongst a wide variety of users and applications. MongoDB is a cross-platform document-oriented NoSQL database and has been adopted as backend software by a number of major websites and services, including Craigslist, eBay, Foursquare, SourceForge, Viacom, and the New York Times, among others. The performance of MongoDB improves significantly when flash storage is used to replace traditional hard disk drives (HDDs). Using software-enabled flash further improves this performance and also leads to server consolidation and lower operating costs due to the lower power consumption of SSD storage.

Learning Objectives

  • How using software-enabled flash will help improve the performance of NoSQL databases

Back to Top


3D RRAM: Towards Next Generation High Capacity Low Latency Storage Systems

Sylvain Dubois, Vice President of Strategic Marketing and Business Development, Crossbar, Inc

Abstract

Widely touted as the most promising alternative to traditional non-volatile Flash memory technologies, Resistive RAM will provide data storage systems with unprecedented performance improvements.

Resistive RAM's CMOS compatibility, manufacturing simplicity, crosspoint array architecture, 3D stackability and scalability to the latest technology nodes are the characteristics required to pave the way for terabyte storage-on-a-chip to become a reality. As a non-block-oriented memory architecture with small-page alterability, high retention and reliability, RRAM is disrupting the legacy memory and storage layering in data centers. The true transformation to next generation high capacity, low latency storage systems requires a revolutionary new approach to solid-state storage devices and their interconnected processors.

Solid-state storage devices that are able to leverage superior characteristics of 3D RRAM technology will provide significant performance acceleration, simplification and reduction in system overhead and cost. This transformation will enable hundreds of terabytes in a small form factor to be accessed at high speed, high throughput and high IOPS, while consuming less power at lower cost.

Learning Objectives

  • Learn about 3D RRAM technology characteristics and product features
  • Learn about data storage system benefits

Back to Top


How Fast is Fast? Block IO Performance on a RAM Disk

Eden Kim, CEO, Calypso Testers

Abstract

In-memory applications are becoming increasingly desirable due to the possibilities of increased speed and lower latencies. NVDIMM Type N SSDs (in-memory block IO with backup to non-volatile NAND flash) and memory-mapped load and store can be expected to perform close to, if not in excess of, block IO storage on a RAM disk. In this talk, performance is compared between a Linux RAM disk drive and traditional PCIe x8 SSDs and 12Gb/s SAS SSDs for SNIA benchmark tests as well as a database OLTP workload.
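As a back-of-envelope illustration of RAM-speed block IO (not the talk's actual methodology), the sketch below streams fixed-size blocks into an in-memory buffer and reports throughput; a real measurement would target a kernel RAM disk such as /dev/ram0 with O_DIRECT-capable tooling:

```python
# Rough RAM-backed block-write throughput probe: io.BytesIO stands in for a
# RAM disk, and we time writing 4 KiB blocks until ~100 MiB has been written.
import io
import os
import time

BLOCK = 4096
BLOCKS = 25_000  # ~100 MiB total (arbitrary example size)

buf = io.BytesIO()
payload = os.urandom(BLOCK)  # one random block, rewritten repeatedly

start = time.perf_counter()
for _ in range(BLOCKS):
    buf.write(payload)
elapsed = time.perf_counter() - start

mib = BLOCKS * BLOCK / (1024 * 1024)
print(f"wrote {mib:.0f} MiB in {elapsed:.3f}s ({mib / elapsed:.0f} MiB/s)")
```

Numbers from a sketch like this only bound what memory can do; a proper comparison against PCIe and SAS SSDs needs identical queue depths, block sizes, and preconditioning on every target.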

Learning Objectives

  • How to measure RAM disk performance
  • What factors affect RAM disk block IO performance
  • How does performance scale
  • Comparison of RAM disk performance to PCIe & SAS

Back to Top

STORAGE MANAGEMENT

Storage Strategy Development Part I: Understanding Needs and Developing Requirements

Randy Kerns, Senior Strategist, Evaluator Group

Abstract

This two part session will explain best practices in developing a storage strategy to manage and store information.

Part 1: Understanding Needs and Developing Requirements

The first part will focus on understanding the needs of an individual environment. Examining the current environment and how information is used and retained is the first step, followed by investigating changes that will be needed over the next three to five years. Equally important is understanding the business dynamics, which include the organization and controls for information processing, outside influences and demands, and financial realities. From this basis of knowledge, the method for developing a set of current requirements is outlined.

Back to Top


Storage Strategy Development Part 2: Evaluating, Planning and Selling the Strategy 

Randy Kerns, Senior Strategist, Evaluator Group

Abstract

This two part session will explain best practices in developing a storage strategy to manage and store information.

Part 2: Evaluating, Planning and Selling the Strategy. The second part of the session will build a structure for developing a strategy. Based on the requirements, possible solutions are considered as part of the strategy work, including the evaluation of technologies and products, and methods to perform those evaluations will be discussed. With that completed, the steps of planning the implementation and selling the strategy as a set of projects will be outlined, with recommendations on approaches that have been successful.

Back to Top


When Data Pools Become Data Lakes: Managing and Storing Massive Quantities of Data

Lance Broell, Product Marketing Manager, DataDirect Networks

Abstract

Keeping pace with the demands that unstructured data growth places on today's storage infrastructures, and doing it in the face of diminishing budgets, has become a massive challenge for IT organizations everywhere. Data pools have become data lakes and, as a result, there has been increased interest in and adoption of object storage platforms. New interfaces are offering more choice for customers looking to seamlessly integrate cloud storage applications. This approach can help organizations with large archives, media repositories and web data stores easily implement a private or hybrid cloud, or transition to or from the Amazon S3 cloud, without the time and overhead associated with modifying existing cloud applications.

Learning Objectives

  • Learn how object storage enables the implementation of private or hybrid cloud strategy to manage massive data sets cost efficiently and securely, while seamlessly connecting to third-party software applications and technology partners that use the S3 API.
  • Learn how object storage can help you manage and protect large-scale data with reduced complexity.
  • Learn how to implement a broader range of software and hardware solutions to improve overall ROI and business agility.

Back to Top


DevOps and Storage Management – Improving Storage Delivery through Continuous Improvement and Automation

Derek Stadnicki, Partner, DiscreteIO

Abstract

A lot has been said recently in operational circles about DevOps. How can we make our operations teams more agile? How can we do more with less? How can we deliver faster to our customers? This talk puts a DevOps spin on storage management, looking at ways DevOps principles can be applied within a storage operations team: adopting a strong customer focus, encouraging collaboration, implementing continuous improvement, and introducing automation.

Learning Objectives

  • Define DevOps
  • How to start defining storage services
  • Define continuous improvement and how to implement it
  • What to automate and how to start
  • How to get more time to do the tasks that really matter

Back to Top


SNIA Tutorial:
Containers: Future of Virtualization & SDDC

Anil Vasudeva, President and Chief Analyst, IMEX Research

Abstract

By packaging applications into portable, isolated containers, Docker is breathing fresh life into a lightweight approach to virtualization architecture. A container-based approach, in which applications run in isolation without relying on a separate operating system per application, could save huge amounts of hardware resources. In effect, containers will act as a new generation of OS-level hypervisor, opening the door to an additional level of virtualization to maximize consolidation. They even have the potential to provide a transformative catalyst for the adoption of open-source software in Software Defined Data Centers (SDDC). Part of Docker's appeal has been in helping developers quickly get their applications from concept to production. This architecture is starting to see explosive growth, with adoption promised by several leading virtualization vendors and cloud service providers. But containers still face a few barriers and technical limitations, such as the inability to provide a virtual instance of Windows on a Linux server, before they achieve expanded universal adoption.
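As a minimal illustration of the packaging model described above, an application and its dependencies bundled into an image without a separate guest OS, a hypothetical Dockerfile might look like this (the base image, file names, and commands are illustrative assumptions, not part of the session):

```dockerfile
# Base image supplies only user-space libraries; the host kernel is shared,
# which is why there is no full guest operating system per application.
FROM python:3-slim

# Copy the application and its declared dependencies into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The container runs a single isolated process, not a booted OS.
CMD ["python", "app.py"]
```

Sharing the host kernel is also the source of the limitation noted above: a Linux host cannot run a Windows container instance this way.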

Learning Objectives

  • What are containers? How do they function? Where do they fit in the SDDC Stack?
  • What are their pros and cons versus existing data center virtualization architectures? What impact will they have on the existing VM-based architecture in the SDDC stack?

Back to Top