Revamping Block-Level I/O Caching for Emerging Tiered Storage

Abstract

Caching optimizes storage systems by combining small, fast cache tiers with larger, slower capacity tiers to deliver high performance at reduced costs. NAND-based flash SSDs are commonly used in cache tiers due to their density, affordability, and low power consumption.

Recent storage technology advancements have expanded block-level caching potential. While HDDs struggle with capacity and performance demands, high-density quad-level-cell (QLC) SSDs are becoming prevalent despite reduced endurance (1,000-3,000 program/erase (P/E) cycles versus 100,000 for single-level-cell (SLC) SSDs). Ultra-fast storage-class memory (SCM) devices and CXL-SSDs, e.g., Intel Optane, Kioxia XL-Flash, and Samsung CMM-H, introduce a new storage tier with near-DRAM performance and exceptional write endurance, ideal for reducing I/O pressure on high-density flash SSDs.

However, simple drop-in replacements fail to realize the full performance potential due to substantial software management overhead. A critical challenge is the negative impact on QLC device endurance, exacerbated by growing indirection unit (IU) sizes. As SSD capacity grows, vendors increase the IU size from 4 KB to 64 KB to keep mapping-table costs manageable. Because of cache evictions and small cache-line sizes, block-level caching systems generate small, random writes that are misaligned with these larger IUs, causing significant write amplification and degrading QLC SSD endurance.
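The endurance cost of this misalignment can be illustrated with a simple first-order model (our own sketch, not taken from the paper): if each small random write forces the device to rewrite at least one full IU, the write amplification factor is the ratio of media bytes written to host bytes written.

```python
import math

def write_amplification(write_size_kb: float, iu_size_kb: float) -> float:
    """First-order write-amplification estimate for random writes.

    Assumes each random host write lands in distinct IUs and forces
    the SSD to rewrite every IU it touches in full (a simplification;
    real devices may buffer and coalesce writes internally).
    """
    ius_touched = math.ceil(write_size_kb / iu_size_kb)
    media_bytes = ius_touched * iu_size_kb
    return media_bytes / write_size_kb

# A 4 KB cache-line eviction on a drive with 64 KB IUs rewrites 64 KB
# of media per 4 KB of host data: 16x write amplification.
print(write_amplification(4, 64))   # 16.0
# The same write on a legacy 4 KB-IU drive is perfectly aligned.
print(write_amplification(4, 4))    # 1.0
```

Under this model, moving from 4 KB to 64 KB IUs turns every 4 KB random cache write into a 16x endurance penalty, which is why IU-aware write shaping matters for QLC cache backends.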

To address these challenges, we propose EMSCache, a fast and efficient in-kernel, block-level I/O caching system for emerging tiered storage architectures using ultra-fast SCM and CXL-SSD devices as cache tiers and NAND flash SSDs as capacity tiers. EMSCache reduces software overhead to maximize ultra-fast cache tier performance benefits while extending SSD-based capacity tier lifespan.