Caching on PMEM: an Iterative Approach

Publish Date: 
Tuesday, September 22, 2020

With PMEM boasting much higher density and DRAM-like performance, applying it to in-memory caching such as memcached seems like an obvious thing to try. Nonetheless, new technology raises questions. Would it work for our use cases, in our environment? How much effort does it take to find out? How do we capture the most value with a reasonable investment of resources? How can we continue to find a path forward as we make discoveries? At Twitter, we took an iterative approach to exploring cache on PMEM. With significant early help from Intel, we started with simple tests in memory mode in a lab environment, then moved on to app_direct mode with modifications to Pelikan, a modular open-source cache backend developed by Twitter. With positive results from the lab runs, we moved the evaluation to platforms that more closely represent Twitter's production environment, and uncovered interesting differences. With a better understanding of how Twitter's cache workload behaves on the new hardware, combined with our insight into Twitter's cache workload in general, we are proposing a new cache storage design called Segcache that, among other things, offers flexibility with storage media and is designed with PMEM in mind in particular. As a result, it achieves superior performance and effectiveness whether running on DRAM or PMEM. The whole exploration was made easier by the modular architecture of Pelikan, to which we added a benchmark framework that supports evaluating storage modules in isolation and greatly facilitated our exploration and development.

Learning Objectives

Demonstrate the feasibility of using PMEM for caching while meeting production requirements.
Provide a case study on how software companies can approach and adopt new technology like PMEM iteratively.
Provide observations and suggestions on how to promote a more integral hardware/software design cycle.