Rethinking Benchmarks for Non-Volatile Memory Storage Systems

Webinar

Author(s)/Presenter(s):

Professor Ethan L. Miller

Library Content Type

Presentation

Abstract

As storage systems based on non-volatile memory (NVM) increasingly supplant disk-based systems for performance-critical data, users need to understand how these systems will perform on their data. Traditional storage benchmarks were aimed at individual storage devices capable of hundreds of I/Os per second (IOPS), with only the largest systems capable of hundreds of thousands of IOPS. Today, NVM-based systems can exceed a million IOPS, but their performance often depends on the content of the data as well as the access patterns.

This talk will describe the challenges that the transition to NVM poses for benchmarks, and propose potential solutions. These challenges include generating "realistic" data in terms of both compressibility and deduplication, issues created by the need to generate realistic I/O request distributions at rates of millions of IOPS, and techniques for pre-populating a storage device with data in a way that matches real-world usage. Making benchmarks repeatable is also a concern, since most existing approaches rely on independent executions that may be merged in different ways, which can affect performance.
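
As a rough illustration of the data-content challenge described above, the sketch below shows one hypothetical way a benchmark might synthesize write buffers with approximate targets for deduplication and compressibility. The block size, ratios, and function names are assumptions chosen for this example, not the techniques proposed in the talk.

    # Illustrative sketch only: generate benchmark write blocks with
    # roughly controllable deduplication and compressibility.
    # All parameters and names here are hypothetical.

    import os
    import random
    import zlib

    BLOCK_SIZE = 4096              # assumed I/O size
    DEDUP_RATIO = 0.3              # fraction of blocks that repeat earlier blocks
    COMPRESSIBLE_FRACTION = 0.5    # fraction of each unique block that is zeros

    _unique_blocks = []

    def make_block(rng: random.Random) -> bytes:
        """Return one block; some blocks duplicate earlier ones, and unique
        blocks mix incompressible random bytes with a compressible zero run."""
        if _unique_blocks and rng.random() < DEDUP_RATIO:
            return rng.choice(_unique_blocks)        # duplicate an earlier block
        random_part = os.urandom(int(BLOCK_SIZE * (1 - COMPRESSIBLE_FRACTION)))
        zero_part = bytes(BLOCK_SIZE - len(random_part))
        block = random_part + zero_part
        _unique_blocks.append(block)
        return block

    if __name__ == "__main__":
        rng = random.Random(42)                      # fixed seed for repeatability
        blocks = [make_block(rng) for _ in range(1000)]
        data = b"".join(blocks)
        ratio = len(data) / len(zlib.compress(data))
        print(f"unique blocks: {len(set(blocks))}/{len(blocks)}, "
              f"zlib compression ratio: {ratio:.2f}x")

Using a fixed random seed, as in the sketch, is one simple way to make the generated data stream itself repeatable across runs, although it does not by itself address how concurrent executions are merged.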