2019 SDC EMEA Abstracts

Main Stage Abstracts

Event Underwriter Presentation 

Persistent Memory - the New Hybrid Species

Dr. Thomas Willhalm


Hybrid animals uniquely combine the characteristics of two different species. Similarly, persistent memory combines the characteristics of storage and memory in a single memory tier. Intel Optane DC Persistent Memory is becoming broadly available later this year. In this presentation, we will explore the implications this new type of memory has for software, where memory can be manipulated with file operations and the contents of files can be accessed directly with load and store instructions.
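A minimal sketch of the two "personalities" the abstract describes, using Python's mmap. On real persistent memory the backing file would live on a DAX-mounted file system (e.g. ext4 with -o dax) so that stores reach the media directly; here an ordinary temporary file stands in:

```python
import mmap
import os
import tempfile

# On real persistent memory the file would live on a DAX-mounted file
# system; a temporary file stands in here.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\0" * 4096, 0)          # size the file like a small PM region

# "Storage" personality: manipulate the region with ordinary file operations.
os.pwrite(fd, b"written via write()", 0)

# "Memory" personality: map the same bytes and touch them with loads/stores.
with mmap.mmap(fd, 4096) as pm:
    loaded = bytes(pm[:19])             # a load, no read() call
    pm[0:7] = b"mapped!"                # a store, no write() call

stored = os.pread(fd, 19, 0)            # the store is visible to file I/O
os.close(fd)
os.unlink(path)
print(loaded, stored)
```

The same bytes are reachable both ways, which is exactly the hybrid behaviour the talk explores.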

Controller Trends: Architecture options from UFS to Enterprise SSD

Neil Werdmuller


Controllers for managing flash memory range from small in-memory controllers, through eMMC/UFS controllers for embedded memories and consumer SSD controllers, to enterprise SSD controllers. High-end controllers are now able to run Linux to support open-source protocols and to run Computational Storage and machine learning workloads. These controllers must not only manage the NAND but also, power-efficiently, add increasing levels of diverse compute capability. Arm partners develop storage controllers for all of these markets, and the various architectures and trade-offs are discussed.

Learning outcomes

1. Explore the diverse architecture options for SSD controllers

2. Examine the impact of design choices on SSD controller performance and cost

3. Understand what is needed to run Linux on the drive for Computational Storage

Premier Partner Presentation

Preparing your storage for handling even more capacity - again

Yosef Shatsky 


While capacity planning and management is an old problem, there are several new challenges to be addressed that require significant enhancements to the storage controller. One example is the increasing amount of storage in relation to the available amount of RAM. This session will cover several challenges and solutions that will help keep your controller prepared for tomorrow’s capacity.


Track 1 Abstracts


Multidimensional Testing – A Quality Assurance approach for storage networking

Yohay Lasri


This presentation will cover QA methodologies for testing a Storage solution at the network level.

The term "Multidimensional QA" derives from the diversity of Storage solutions, which is a product of numerous factors, be they software or hardware configurations. The role of the QA department is to select the subset of these factors that best covers the range of relevant Storage solutions. Choosing the right tools is yet another challenge. We will share Visuality Systems' QA discipline as well as tool reviews and, if time allows, some figures.

Learning outcomes

1. Factors that define Storage configurations.

2. Storage classification from QA point of view.

3. How to choose relevant configurations in order to develop the best Test Plan.


Improved Access to NAS, Windows, Mac and the Cloud from Linux - Review of Recent Progress in SMB3

Steven French


Over the past year, exciting improvements to the Linux kernel's SMB3 support have made it one of the most active file systems on Linux. Performance of large-file access has improved dramatically, especially with the integration of RDMA ("SMB Direct") support, improved direct I/O support, and many metadata and compounding optimizations that help access to large directories, especially in the cloud (e.g. Azure Files). Support for directory leases, along with many other caching improvements, has also helped performance. The ability to handle new workloads, more efficient compounding of complex operations, changes that make it easier to recover from failures, improvements to DFS/Global Namespace support, improved metadata handling, many security enhancements, and changes to the default protocol dialects have made this a great year for Linux SMB3/SMB3.11 support. In addition, the POSIX extensions to the SMB3.11 protocol have matured greatly with testing over the past year, especially from Linux and Samba, and are enabling additional workloads over SMB3.11. This presentation will demonstrate and describe some of the new features and progress over the past year in accessing modern servers and the cloud (Azure) via SMB3/SMB3.11 from Linux clients, as well as how to configure this optimally.

Learning outcomes

1. What new features are available in Linux now to access the Cloud, Samba, Macs and NAS filers via SMB3 and how do I use them? And what types of workloads is SMB3 now much better at in Linux?

2. How should you configure your Linux client to optimally access various target servers?

3. What is the outlook for new features to expect over the next few kernel releases (e.g. 4.21 and 4.22)?
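As a configuration sketch (server, share, and account names are hypothetical; requires cifs-utils and a recent kernel), a Linux client can force the SMB3.1.1 dialect and opt into features the talk covers:

```shell
# Force the SMB3.1.1 dialect; 'rdma' enables SMB Direct where the NIC
# supports it, 'mfsymlinks' improves interoperability for symlinks.
sudo mount -t cifs //server/share /mnt/share \
     -o vers=3.1.1,rdma,username=user,mfsymlinks

# Azure Files example (storage account name/key are placeholders);
# 'serverino' uses server-generated inode numbers.
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/azure \
     -o vers=3.1.1,username=mystorageacct,password='<storage-key>',serverino
```

The full set of tunables is documented in the mount.cifs(8) man page.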

NVMe/TCP is here for all of your hyperscale storage needs

Muli Ben-Yehuda


In November 2018, NVMe.org ratified the NVMe/TCP standard in record time. TCP/IP is the most widely used network protocol of them all, well known and widely implemented in every data center. NVMe/TCP brings the power of NVMe over Fabrics to TCP/IP networks by mapping NVMe commands and data movement onto TCP. NVMe/TCP provides performance and latency comparable with RDMA without requiring any network changes. However, because it runs over a lossy IP network, any NVMe/TCP implementation must deal with network issues such as packet loss and retransmission. If you attend this talk, you can expect to dive in and come out on the other side (of the stream connection) with a pretty good understanding of NVMe/TCP. Furthermore, you will learn how Lightbits' NVMe/TCP implementation elevates the awesomeness that is NVMe/TCP to a whole new level.

Learning outcomes

1. Understand NVMe/TCP

2. Understand the difference between NVMe-oF (over RDMA) and NVMe-oF (over TCP/IP)

3. Understand hyperscale storage needs
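To make the "no network changes" point concrete, here is a hedged sketch of attaching to an NVMe/TCP target with nvme-cli (addresses and NQNs are hypothetical; requires the nvme-tcp kernel module and a reachable target):

```shell
# Load the TCP transport for the NVMe host stack.
sudo modprobe nvme-tcp

# Discover subsystems exported by a target at 192.168.0.10, port 4420.
sudo nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to one subsystem; its namespaces then appear as ordinary
# /dev/nvmeXnY block devices.
sudo nvme connect -t tcp -a 192.168.0.10 -s 4420 \
     -n nqn.2018-11.example.com:subsystem1
sudo nvme list
```

Everything rides on a plain TCP connection, so no lossless-Ethernet or RDMA configuration is needed on the switches.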

Deep-dive into ZUFS: Developing a PM-based file-system in user-space

Shachar Sharon


Persistent-memory (PM) based file systems are key to bringing the low-latency value of PM hardware to applications and businesses. Developing an in-kernel file system is expensive, while the traditional Linux FUSE bridge results in high-latency access. ZUFS (Zero-copy User-space File System) is an open-source framework designed from the ground up to enable PM-based file systems in user space while still maintaining low-latency, byte-addressable access. Its architecture has evolved over the past two years into mature, production-ready code, optimized for modern multi-core CPUs.

Learning outcomes

1. Describe ZUFS internals, and how it achieves zero-copy and low latency.

2. Guidelines for developing a PM-based file system in user space using ZUFS.

3. Understand the performance tradeoffs between kernel and user space implementations.

LINSTOR: Open Source Storage Volume Management for big clusters

Philipp Reisner


LINSTOR is storage orchestration software for the practical management of large storage clusters, for example cloud providers needing block volumes for IaaS services. Its purpose is to provide block storage volumes on Linux hosts on demand.
LINSTOR is pure control-path software. It configures existing software components, for example the Linux components MD RAID, LVM, ZFS, DRBD, and the NVMe-oF target and initiator, to construct volumes as requested. It can also operate with storage targets that are not pure Linux environments; for example, LINSTOR fully supports NVMe-oF clusters that are Swordfish-API compliant, functionality that Intel uses. LINSTOR lets the user define policies for storage allocation. An example might be: two synchronous copies in the local data center in different racks but the same fire compartment, plus one asynchronous replica in a different data center.
LINSTOR comes with a CSI driver for Kubernetes and drivers for OpenStack Cinder, OpenNebula, and XenServer. LINSTOR is an open-source project on GitHub.

Learning outcomes

1. Learn about the data-path components in the Linux storage subsystem: LVM, MD Raid, ZFS, DRBD ...

2. Learn that LINSTOR is a control-path software solution that orchestrates data-path storage components

3. Learn about Software Defined Storage built from Open Source components
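A hedged sketch of the on-demand workflow (node, pool, and volume-group names are hypothetical; assumes a running linstor-controller with registered satellite nodes):

```shell
# Register backing storage (an LVM volume group) on three nodes.
linstor storage-pool create lvm node-a drbdpool vg_ssd
linstor storage-pool create lvm node-b drbdpool vg_ssd
linstor storage-pool create lvm node-c drbdpool vg_ssd

# Define a volume and let LINSTOR place two synchronous DRBD replicas
# automatically according to its placement policy.
linstor resource-definition create vol1
linstor volume-definition create vol1 20G
linstor resource create vol1 --auto-place 2
```

LINSTOR only issues the configuration; LVM and DRBD on the chosen nodes carry the actual data path.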

Integrating Storage Systems into Active Directory with winbind

Volker Lendecke


Most environments use Active Directory as their primary authentication and authorization source; users and groups are stored there, and any storage system must authenticate and authorize users in some way. Samba's winbind provides a solution to seamlessly integrate with Active Directory using the same mechanisms a native Windows client uses. It provides an API to authenticate users and retrieve authorization information such as group memberships of authenticated users. It can also integrate with any kind of mapping scheme between Windows and Unix principals, and from there integrate Windows users into the Unix user database.

This talk will give an overview of the API that storage vendors and integrators can use to access winbind's services. This API is licensed LGPL and not GPL, so it does not put licensing restrictions on the storage software using it.

Learning outcomes

1. Active Directory Authentication Mechanisms

2. Windows/Unix ID-mapping

3. Practical API description for accessing Active Directory
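As a flavour of that API, here is a sketch against Samba's LGPL libwbclient (user name and password are placeholders; building requires the libwbclient development headers, and running requires a winbindd joined to a domain):

```c
/* Sketch: authenticate a user and list group memberships via winbindd.
 * Link with -lwbclient. Account/password below are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <wbclient.h>

int main(void)
{
    wbcErr err;
    uint32_t i, num_groups = 0;
    gid_t *groups = NULL;

    /* Authenticate against Active Directory through winbindd. */
    err = wbcAuthenticateUser("EXAMPLE\\alice", "secret");
    if (!WBC_ERROR_IS_OK(err)) {
        fprintf(stderr, "auth failed: %s\n", wbcErrorString(err));
        return 1;
    }

    /* Retrieve the Unix-mapped group memberships of that user. */
    err = wbcGetGroups("EXAMPLE\\alice", &num_groups, &groups);
    if (WBC_ERROR_IS_OK(err)) {
        for (i = 0; i < num_groups; i++)
            printf("member of gid %u\n", (unsigned)groups[i]);
        wbcFreeMemory(groups);
    }
    return 0;
}
```

Because libwbclient is LGPL, a proprietary storage stack can link against it without GPL obligations, as the abstract notes.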


Azure Files: Making architectural changes in an always available service

David Goebel


For over three years, Microsoft Azure has provided a completely managed SMB3 file server in the cloud. Leveraging the Continuous Availability features of SMB3, the customer experience is an always-available, reliable file share. As we push to add the most-demanded new features, the complexity and caution required to do this transparently and safely present fundamentally new kinds of challenges due to the scale of Azure's public cloud.
As background, the talk will begin with the architecture of Azure Files, which is based on Azure tables and blobs under the hood, not a conventional file system -- let alone NTFS.  Specific attention will be paid to those aspects that provide the seamless availability and reliability in spite of the constant din of hardware underneath it suffering failures and needing replacement.
An overview of recently added features will serve as a segue into the engineering challenges of making significant changes and additions to data schemas, and to the code that manipulates them, without disturbing access to many petabytes of data or breaking the semantics that applications depend on.

Learning outcomes

1. Learn how an SMB3 server can be built on top of something other than a conventional file system.

2. See how imperfect hardware can be used to provide (near) perfect availability.

3. Understand how methodical plodding patience, mixed with some engineering sleight of hand, can achieve impressive architectural changes with zero downtime.





Track 2 Abstracts

High performance POSIX: Deep learning for object storage

Effi Ofer


The advent of big datasets and high-speed GPUs is fueling the growth of analytics, machine learning, and deep learning techniques. In this talk we explore how to run high-throughput analytics on data that resides in inexpensive object storage, without the need to pre-load or stage the data. We will show how to leverage s3fs, an open-source FUSE-based file system gateway for object storage, to run deep learning analytics, and demonstrate how this can be done effectively while providing sufficient throughput to high-performance analytics engines and dedicated accelerators such as GPUs, FPGAs, and Tensor Processing Units.

Learning outcomes

1. Enabling transparent access to object storage for POSIX workloads without application changes

2. Achieving high throughput while lowering storage costs using inexpensive cloud based object storage

3. Keeping high-speed GPUs busy in machine learning workloads using object storage
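A hedged configuration sketch of the approach (bucket name, endpoint, and credentials are placeholders; requires s3fs-fuse installed and valid S3 credentials):

```shell
# Store credentials where s3fs expects them.
echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs

# Mount the bucket; training jobs then read it like a local directory,
# with no pre-loading or staging step.
s3fs training-data /mnt/data \
     -o url=https://s3.example.com \
     -o passwd_file=~/.passwd-s3fs \
     -o use_path_request_style

# A deep learning data loader can now iterate /mnt/data as POSIX files.
ls /mnt/data
```

The application sees an ordinary directory tree, which is what makes the approach transparent to unmodified POSIX workloads.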

Computational SSDs

Rakesh Cheerla


Two key trends are driving the need for computational SSDs. The first is the data-heavy nature of modern workloads: the best machine learning algorithms perform poorly in the absence of large amounts of high-quality data, and analytics, whether on video, logs, or databases, likewise require massive amounts of data. The second is the diversity of compute workloads in the modern datacenter, such as machine learning, big data analytics, and streaming video, none of which are a great fit for the CPU instruction set. Scaling today's infrastructure cost-effectively requires smart, adaptive, inline storage processing. This talk highlights a computational SSD built on an adaptive and intelligent platform. We review the Xilinx SDAccel platform and customer products across a broad range of applications, such as data analytics and video processing, that are driving innovation in the data-centric era.

Learning outcomes

1. Why do computational SSDs make business sense?

2. Discuss architectural alternatives, and how to further open-source innovation for this category of products.

3. Discuss SNIA Computational Storage WG activities (along with others e.g. Eideticom, NGD, Scaleflux, Samsung etc.)

Distributed Block Storage with RDMA

Mikhail Malygin


The presentation covers the architecture and implementation details of distributed block storage based on an RDMA interconnect and Reed-Solomon codes. It will cover the challenges of distributed data protection on high-capacity drives and a solution aimed at providing distributed, performant, fault-tolerant, and capacity-efficient data protection. The described solution is built as a set of Linux device-mapper drivers and utilises RDMA for inter-node communication. The speaker will highlight aspects of in-kernel RDMA programming, as well as insights on developing device-mapper modules and dealing with the block-layer framework of the Linux kernel.

Learning outcomes

1. Methods and techniques for building distributed block storage with high-capacity HDDs

2. Specifics of in-kernel programming with RDMA and the block layer

Gen-Z and Storage Class Memory (SCM) integration and path forward

Parmeshwr Prasad Vishwakarma


A Gen-Z-enabled system will have a Memory Management Unit (MMU); the processor MMU is attached to both a DDR protocol engine and a Gen-Z protocol engine. The Gen-Z protocol can then retrieve data from DRAM, SCM, DRAM + SCM, or DRAM + Flash devices. A media controller can back a variety of Gen-Z memory modules supporting any type and mix of media, including DRAM, SCM, and Flash. Media is abstracted through the Component Media Structure, which describes the memory component's attributes and capabilities and controls its operation. This abstraction breaks the processor-memory interlock, which provides numerous technical, economic, and customer benefits. Gen-Z uses OpClasses to connect memory modules; the corresponding OpClasses are used to generate request and response packets. A memory component can support one or more interfaces/links to provide connectivity, and depending on the supported OpClasses, these links can connect directly to a requester or to a switch. Media controller logic executes a request packet and generates a response. If the media controller supports a responder ZMMU, it will transparently access the logic and tables to validate request packet access, locate responder-specific resources, and so on.

Learning outcomes

1. SCM and Gen-Z integration

2. Innovation in Gen-Z space

3. Industry adoption of Gen-Z


Or Lapid


With the release of the world's first QLC SSD, the Micron 5210 ION, Micron's team has been developing ways to further maximise its value across different workloads and enterprise applications. The Micron 5210 SSD is optimised for read-intensive workloads, which are widespread in business intelligence and decision support systems (BI/DSS) on Microsoft SQL Server; that is where the value of QLC SSDs really shines.
In this presentation, I'll show how combining the 5210 ION QLC SSD with NVDIMM-N delivers more system-level performance than traditional configurations.

Learning outcomes

1. Introduction to QLC NAND Flash usage in Enterprise SSDs.

2. NVDIMM-N impact on performance.

3. Micron case study of MSSQL with QLC and NVDIMMs.

NVMe over Fabrics - From the Enterprise Storage View

Dr. Anat Eyal


NVMe over Fabrics (NVMe-oF) extends the benefits of low latency and high efficiency of the NVMe technology across fabric networks, to support the sharing and use of NVMe storage at a large scale.
As a relatively new standard, NVMe-oF still needs to evolve and reach the level of maturity required by enterprise storage applications. In addition, a wide adoption by the storage ecosystem is required in order to create a truly transformational effect within the storage industry.
In this session, we provide an overview of the NVMe-oF protocol, its performance advantages, and storage consolidation potential. We shall also discuss distributed storage array implementation aspects and requirements.

Learning outcomes

1. NVMe over Fabrics (NVMe-oF) overview and benefits

2. Enterprise use cases

3. Evolution of standards, ecosystem development, and enterprise requirements