2016 Tutorials - USENIX FAST Abstracts

The Abstracts

SMB Remote File Protocol (including SMB 3.x)
Tom Talpey

The SMB protocol evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors, including most major operating systems and NAS solutions. The SMB 3.0 protocol had its first commercial implementations from Microsoft, NetApp, and EMC by the end of 2012, and many other implementations exist or are in progress. The SMB3 protocol is currently at version 3.1.1 and continues to advance. This SNIA tutorial begins by describing the history and basic architecture of the SMB protocol and its operations. The second part of the tutorial covers the various versions of the SMB protocol, detailing the improvements made over time. The final part covers the latest changes in SMB3 and the resources available to industry in support of its development.

Learning Objectives

  • Understand the basic architecture of the SMB protocol family
  • Enumerate the main capabilities introduced with SMB 2.0/2.1
  • Describe the main capabilities introduced with SMB 3.0 and beyond

Fog Computing and its Ecosystem
Ramin Elahi

Fog computing extends the cloud computing paradigm by bringing computation and services to the edge of the network. Fog provides data, compute, storage, and application services to end users. Its distinguishing characteristics are its proximity to end users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even on end devices such as set-top boxes or access points. Fog can thus alleviate issues the IoT (Internet of Things) is expected to produce by reducing service latency and improving QoS, resulting in a superior user experience. Fog computing supports emerging Internet of Everything (IoE) applications that demand real-time, predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution, the Fog paradigm is well positioned for real-time big data and real-time analytics. Fog supports densely distributed data collection points, adding a fourth axis to the often-mentioned Big Data dimensions (volume, variety, and velocity).

Converged Storage Technology
Liang Ming

First, we will introduce the current status and pain points of Huawei's distributed storage technology; then the next generation of key-value converged storage solutions will be presented. We will then discuss the concept of key-value storage and show what we have done to promote the key-value standard.

Next, we will show how we build our block, file, and object services on the same key-value pool. Finally, the future of storage technology for VMs and containers will be discussed. This talk is aimed at storage engineers, and we hope to discuss converged storage technology with our storage peers.

Learning Objectives

  • Convergence, Consolidation, and Virtualization of Infrastructure, Storage Devices, and Servers
  • Deployment: typical use cases and the deployment and operational considerations they raise

Utilizing VDBench to Perform IDC AFA Testing
Michael Ault

IDC has released a document on testing all-flash arrays (AFAs) to provide a common framework for judging AFAs from various manufacturers. This paper provides procedures, scripts, and examples for performing the IDC tests with the free tool VDBench, producing a common set of results for comparing the suitability of multiple AFAs for cloud or other network-based storage.
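As an illustrative sketch (not the IDC framework itself), a small Python helper can generate a VDBench parameter file for one workload point; the device path, block size, read percentage, and run length below are placeholder assumptions, not values prescribed by IDC:

```python
def vdbench_parmfile(lun, xfersize="4k", rdpct=70, seekpct=100,
                     elapsed=300, interval=5):
    """Build a minimal VDBench parameter file as a string.

    Uses standard VDBench storage-definition (sd=), workload-definition
    (wd=), and run-definition (rd=) keywords; the specific values are
    illustrative placeholders, not IDC-mandated settings.
    """
    return "\n".join([
        # storage definition: raw device under test, opened with direct I/O
        f"sd=sd1,lun={lun},openflags=o_direct",
        # workload definition: block size, read %, 100% random seeks
        f"wd=wd1,sd=sd1,xfersize={xfersize},rdpct={rdpct},seekpct={seekpct}",
        # run definition: uncapped I/O rate, runtime and reporting interval
        f"rd=run1,wd=wd1,iorate=max,elapsed={elapsed},interval={interval}",
    ]) + "\n"

if __name__ == "__main__":
    print(vdbench_parmfile("/dev/sdb"))
```

The resulting file would be passed to VDBench with its `-f` option; sweeping parameters such as `rdpct` and `xfersize` across runs is one way to build up the comparison matrix such a framework calls for.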

Learning Objectives

  • Understand the requirements of IDC testing
  • Provide guidelines and scripts for use with VDBench for IDC tests
  • Demonstrate a framework for evaluating multiple AFAs using IDC guidelines

Practical Online Cache Analysis and Optimization
Carl Waldspurger
Irfan Ahmad


The benefits of storage caches are notoriously difficult to model and control, varying widely by workload, and exhibiting complex, nonlinear behaviors. However, recent advances make it possible to analyze and optimize high-performance storage caches using lightweight, continuously-updated miss ratio curves (MRCs). Previously relegated to offline modeling, MRCs can now be computed so inexpensively that they are practical for dynamic, online cache management, even in the most demanding environments.

After reviewing the history and evolution of MRC algorithms, we will examine new opportunities afforded by recent techniques. MRCs capture valuable information about locality that can be leveraged to guide efficient cache sizing, allocation, and partitioning, in order to support diverse goals such as improving performance, isolation, and quality of service. We will also describe how multiple MRCs can be used to track different alternatives at various timescales, enabling online tuning of cache parameters and policies.
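As a rough sketch of the underlying idea (not the optimized algorithms the tutorial covers), an LRU miss ratio curve can be derived from reuse distances with Mattson's classic stack algorithm. The O(N·M) version below is only illustrative; the tutorial's point is that modern techniques make MRC construction cheap enough for continuous online use:

```python
from collections import Counter

def lru_mrc(trace):
    """Compute an LRU miss ratio curve via Mattson's stack algorithm.

    Returns curve[c] = miss ratio with a cache of c blocks, for
    c = 0 .. number of unique blocks. An access with reuse distance d
    hits in any LRU cache larger than d, so the curve falls as the
    histogram of reuse distances is swept from small to large.
    """
    stack = []             # LRU stack: most recently used at the end
    dist_hist = Counter()  # histogram of reuse distances
    for block in trace:
        if block in stack:
            # reuse distance = # distinct blocks touched since last access
            d = len(stack) - 1 - stack.index(block)
            dist_hist[d] += 1
            stack.remove(block)
        # first-time accesses are cold misses at every cache size
        stack.append(block)
    total = len(trace)
    curve, misses = [], total
    for c in range(len(stack) + 1):
        curve.append(misses / total)
        misses -= dist_hist.get(c, 0)
    return curve
```

For the toy trace `a b a b`, the curve is `[1.0, 1.0, 0.5]`: a one-block cache never hits, while a two-block cache misses only on the two cold accesses. Production-grade approaches keep such curves fresh by sampling a small fraction of references rather than tracking every access.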

Learning Objectives

  • Storage cache modeling and analysis.
  • Efficient cache sizing, allocation, and partitioning.
  • Online tuning of commercial storage cache parameters and policies.

Object Drives: A New Architectural Partitioning
Mark Carlson

A number of scale out storage solutions, as part of open source and other projects, are architected to scale out by incrementally adding and removing storage nodes. Example projects include:

  • Hadoop’s HDFS
  • CEPH
  • Swift (OpenStack object storage)

The typical storage node architecture consists of inexpensive enclosures with IP networking, CPU, memory, and direct-attached storage (DAS). While inexpensive to deploy, these solutions become harder to manage over time, and the power and space requirements of data centers are difficult to meet with this type of solution. Object drives further partition these object systems, allowing storage to scale up and down in single-drive increments.

This talk will discuss the current state and future prospects for object drives. Use cases and requirements will be examined and best practices will be described.

Learning Objectives

  • What are object drives?
  • What value do they provide?
  • Where are they best deployed?

Privacy vs Data Protection: The Impact of EU Data Protection Legislation
Thomas Rivera

After reviewing the diverging data protection legislation in the EU member states, the European Commission (EC) decided that this situation would impede the free flow of data within the EU zone. The EC response was to undertake an effort to "harmonize" the data protection regulations, and it started the process by proposing a new data protection framework. This proposal includes some significant changes, such as defining a data breach to include data destruction, adding the right to be forgotten, adopting the U.S. practice of breach notifications, and many other new elements. Another major change is a shift from a directive to a regulation, which means the protections are the same for all 27 countries and include significant financial penalties for infractions. This tutorial explores the new EU data protection legislation and highlights the elements that could have significant impacts on data handling practices.

Learning Objectives

  • Highlight the major changes to the previous data protection directive.
  • Review the differences between “Directives” and “Regulations” as they pertain to EU legislation.
  • Learn the nature of the reforms, as well as the specific proposed changes, in both the directives and the regulations.