AI is driving the need for more data movement between devices, boxes, and racks. This data transport requires higher bandwidth, which puts strain on interconnects as well as host and device designs. While you could tackle these problems on your own, come instead and ask questions of our expert panel from the SFF Technical Work Group to see if they can address your concerns. This will be an open Q&A where we want to hear your concerns so we can address them through industry-aligned solutions.
Building Content Delivery Networks, or CDNs, requires distributed, localized collections of compute, memory, and storage. CDNs are built out of groups of servers at a variety of locations, with varying tiers and types of caches. As a result, modern CDN caches present a huge range of possible configurations.
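As a rough illustration only, the Python sketch below models per-site cache tiering under purely hypothetical assumptions: the site names, media types, capacities, and eviction policies are invented for the example and do not describe any particular CDN operator's deployment.

```python
# Minimal sketch: describing variable per-site cache-tier configurations.
# All values are illustrative assumptions, not drawn from a real CDN.
from dataclasses import dataclass
from typing import List


@dataclass
class CacheTier:
    name: str          # e.g. "hot", "warm", "cold"
    media: str         # e.g. "DRAM", "TLC SSD", "QLC SSD"
    capacity_gib: int  # usable cache capacity for this tier
    eviction: str      # eviction policy used by this tier


@dataclass
class EdgeSite:
    location: str
    tiers: List[CacheTier]


# Two hypothetical sites showing how tier counts and sizes can differ by location.
sites = [
    EdgeSite("metro-pop", [
        CacheTier("hot", "DRAM", 256, "LRU"),
        CacheTier("warm", "TLC SSD", 8_192, "FIFO"),
    ]),
    EdgeSite("regional-dc", [
        CacheTier("hot", "DRAM", 1_024, "LRU"),
        CacheTier("warm", "TLC SSD", 32_768, "FIFO"),
        CacheTier("cold", "QLC SSD", 262_144, "FIFO"),
    ]),
]

for site in sites:
    total = sum(t.capacity_gib for t in site.tiers)
    print(f"{site.location}: {len(site.tiers)} tiers, {total} GiB of cache")
```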
With Flexible Data Placement (FDP) from NVM Express® (NVMe) now finalized and gaining ecosystem momentum, it is clear that implementation choices are becoming a real differentiator among FDP drive configurations. Some customers are leveraging years of Multi-Streams deployment to migrate onto large Reclaim Unit (RU) sizes within a single Reclaim Group (RG). Others may be eager to mirror their experience with Open Channel SSDs by requesting small RGs and RU sizes. Yet another customer base might be approaching FDP from a history of working with high zone counts, which translates into large Reclaim Unit Handle (RUH) counts.
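To make the contrast concrete, here is a minimal Python sketch of how those three deployment histories might translate into different FDP geometry requests. The parameter values are entirely hypothetical assumptions for illustration; actual Reclaim Group, Reclaim Unit, and Reclaim Unit Handle geometries are vendor- and product-specific.

```python
# Sketch contrasting hypothetical FDP configuration requests by customer profile.
# Numbers are assumptions, not taken from the NVMe FDP spec or any shipping drive.
from dataclasses import dataclass

MIB = 1024 ** 2
GIB = 1024 ** 3


@dataclass
class FdpConfig:
    profile: str        # deployment history the request reflects
    rg_count: int       # number of Reclaim Groups (RGs) requested
    ru_size_bytes: int  # size of each Reclaim Unit (RU)
    ruh_count: int      # number of Reclaim Unit Handles (RUHs) for the host


configs = [
    # Multi-Streams heritage: large RUs within a single RG, modest handle count.
    FdpConfig("multi-streams", rg_count=1, ru_size_bytes=6 * GIB, ruh_count=8),
    # Open Channel heritage: many small RGs and small RUs for fine-grained placement.
    FdpConfig("open-channel", rg_count=64, ru_size_bytes=96 * MIB, ruh_count=8),
    # Zoned (high zone count) heritage: a large RUH count mirrors many open zones.
    FdpConfig("zoned", rg_count=1, ru_size_bytes=1 * GIB, ruh_count=128),
]

for c in configs:
    print(f"{c.profile:>13}: {c.rg_count} RG(s), "
          f"{c.ru_size_bytes // MIB} MiB RUs, {c.ruh_count} RUHs")
```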