Abstract
The value NVMe client SSD segment emerged with the advent of Host Memory Buffer (HMB) technology in NVMe. Here, host memory is used as a replacement for on-chip DRAM, which has helped customers take the plunge into the world of SSDs at a lower cost. The current generation of HMB SSDs predominantly uses this host buffer to park the Logical-to-Physical (L2P) mapping during device runtime. The general perception is that performance will be on par with DRAM-based devices, since access to the HMB over PCIe is fast enough to meet customer needs.
In this paper, we intend to challenge this paradigm about HMB-SSD performance by means of an in-depth analysis of the factors and parameters that can make a difference. The study probes three major areas: device state (fresh or sustained); device configuration (variation in the number of queues, queue depth, and the HMB size allocated by the host); and workload impact. The workloads include industry-standard methods for sequential and random write/read operations, as well as granular variations that probe what works best and worst for an HMB-based NVMe SSD compared to its DRAM counterpart. This paper additionally examines the HMB SSD as a standalone entity, to derive the fine tuning a host can leverage in terms of device configuration and so make optimal use of an HMB-based SSD depending on its use cases.