SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Building Content Delivery Networks, or CDNs, requires distributed, localized collections of compute, memory, and storage. CDNs are built from groups of servers at many locations, with multiple tiers and types of caches. Modern CDN caches present a huge array of possible configurations.
Magnition sponsored a university research project to study the configurations and algorithms used in CDNs to optimize the response speed and cost of front-end caches. By testing configurations and algorithms through simulation, over 100,000 variants were examined and compared using production traffic traces provided by real CDN operators. These simulations tested what would have been years' worth of traffic in roughly a week.
This session discusses the process of simulating specific CDN configurations. We demonstrate how modular components and algorithms are sampled, share some of the results we found, and talk about some of the anomalies we stumbled across along the way. We also demonstrate how to use real-world CDN traces to build realistic scenarios and show graphical representations of the results.
You won't want to miss this session.
Content Delivery Networks are intricate and difficult to tune accurately.
Simulations allow a vast experimental space to be examined in reasonable time.
There is substantial room to improve CDN performance and cost over baseline production setups.
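To make the methodology concrete, here is a minimal sketch of the kind of trace-driven simulation described above, assuming a simple LRU front-end cache and a trace of (object, size) requests. The names and the tiny trace are illustrative only, not Magnition's tooling or data.

    from collections import OrderedDict

    def simulate_lru(trace, capacity):
        """Replay a request trace against an LRU cache and report the hit ratio."""
        cache = OrderedDict()  # object_id -> size, kept in recency order
        used = 0
        hits = 0
        for obj, size in trace:
            if obj in cache:
                hits += 1
                cache.move_to_end(obj)  # mark as most recently used
                continue
            # Miss: evict least-recently-used objects until the new one fits.
            while used + size > capacity and cache:
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
            if size <= capacity:
                cache[obj] = size
                used += size
        return hits / len(trace) if trace else 0.0

    # Sweep cache capacity to explore one axis of the configuration space.
    trace = [("a", 1), ("b", 2), ("a", 1), ("c", 2), ("a", 1)]
    for cap in (2, 4, 8):
        print(cap, simulate_lru(trace, cap))

Sweeping capacity is just one axis; varying eviction algorithms, tier structures, and other parameters in the same loop is how a study reaches 100,000-plus variants.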
Simplyblock builds a distributed block storage system that is fully containerized, runs on most virtualization platforms and clouds, and is currently optimized for AWS environments. Simplyblock is built to fully utilize local NVMe IOPS and provide low storage access latency (less than 200 microseconds over the network). Virtual volumes are highly available and protected by erasure coding. Simplyblock clusters are self-balancing and highly scalable. For this purpose, we use a distributed data placement algorithm. In this session, we discuss the common challenges of a distributed architecture for reliable, self-balancing storage scale-out; the specific challenges of low latency and high IOPS density, NVMe/TCP, and NVMe-oF multipathing; and present the design of a new algorithm to overcome them.
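As a rough illustration of deterministic, self-balancing placement (not simplyblock's actual algorithm), the sketch below uses rendezvous hashing to pick distinct nodes for the data and parity chunks of an erasure-coded stripe. All names and the 4+2 coding scheme are assumptions for the example.

    import hashlib

    def node_score(node, chunk_key):
        # Rendezvous (highest-random-weight) hashing: each node gets a
        # deterministic pseudo-random score per chunk; highest scores win.
        h = hashlib.sha256(f"{node}:{chunk_key}".encode()).digest()
        return int.from_bytes(h[:8], "big")

    def place_stripe(volume, lba, nodes, k=4, m=2):
        """Pick k+m distinct nodes for the data and parity chunks of one stripe."""
        key = f"{volume}:{lba}"
        ranked = sorted(nodes, key=lambda n: node_score(n, key), reverse=True)
        chosen = ranked[:k + m]
        return {"data": chosen[:k], "parity": chosen[k:]}

    nodes = [f"node{i}" for i in range(8)]
    print(place_stripe("vol1", 4096, nodes))

Because every node recomputes the same ranking, no central metadata lookup is needed, and adding or removing a node only moves the stripes whose top-ranked node set actually changed.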
AI is driving the need for more data movement between devices, boxes, and racks. This data transport requires higher bandwidths, which puts strain on interconnects as well as host and device designs. While you could tackle these problems on your own, come and ask questions of our expert panel from the SFF Technical Work Group instead. This will be an open Q&A where we want to hear your concerns so we can address them through industry-aligned solutions.
By 2040, data centers could account for up to ~14% of all carbon emissions worldwide. Flash SSDs contribute to data center carbon emissions due to their limited endurance. The common practice of over-provisioning Flash SSDs to control write amplification, improve lifetime, and optimize the total cost of ownership of data centers is inefficient and adds to the carbon footprint. Given the increased focus on data center climate impact and net-zero carbon goals, improving Flash SSD lifetime and utilization is key to reducing carbon emissions. Targeted data placement on SSDs is known to reduce write amplification and improve lifetime. The latest data placement technology, NVMe Flexible Data Placement (FDP), shows great promise in reducing carbon emissions with minimal engineering effort. Giving applications control over where their data is placed with NVMe FDP results in more informed data placement strategies with minimal modifications to the application stack. Consequently, this reduces SSD write amplification and enables better utilization. In this talk, we explore NVMe FDP's role in data center sustainability by showcasing its ability to reduce embodied (endurance) and operational (power) carbon emissions for systems at scale. We use several example workloads and CacheLib, a system deployed at scale, to illustrate the reduction in carbon emissions with NVMe FDP.
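To see why placement matters for emissions, consider the write amplification factor (WAF): the ratio of total NAND writes to host writes. The numbers below are invented for illustration, not measured CacheLib results.

    def waf(host_writes, gc_relocations):
        """Write amplification factor: total NAND writes per host write."""
        return (host_writes + gc_relocations) / host_writes

    # Made-up numbers: mixing short- and long-lived data forces garbage
    # collection to relocate still-live pages, so 100 host writes might
    # cost 150 extra NAND writes (WAF 2.5). Segregating data by lifetime,
    # as FDP enables, drives relocations toward zero and WAF toward 1.0.
    print(waf(100, 150))  # mixed placement    -> 2.5
    print(waf(100, 5))    # lifetime-separated -> 1.05

A lower WAF means fewer background NAND writes (operational carbon) and slower wear-out, so drives need less over-provisioning and are replaced less often (embodied carbon).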
With Flexible Data Placement (FDP) from NVM Express® (NVMe) finalized and gaining ecosystem momentum, it has become clear that implementation choices are becoming a real differentiator among FDP drive configurations. Some customers leverage years of Multi-Streams deployment and migrate onto large reclaim unit (RU) sizes within a single reclaim group (RG). Some may be eager to mirror their experience with Open Channel SSDs by requesting small RGs and RU sizes. Yet another customer base might approach FDP from a history of working with high zone counts, which translates into large reclaim unit handle (RUH) counts. This presentation provides customers with high-level guidance and background for engaging with SSD vendors. By understanding the drive impacts of such FDP configuration choices, customers and vendors can arrive at the best system solution for varying use cases.
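As a sketch of how these three styles differ, the snippet below models a drive's FDP geometry and the number of reclaim units a host can have open concurrently (one RU per RUH per RG, per the FDP model). The capacities and counts are invented for illustration, not vendor configurations.

    from dataclasses import dataclass

    @dataclass
    class FdpConfig:
        """Hypothetical FDP geometry a drive might report (illustrative only)."""
        capacity_gb: int
        reclaim_groups: int   # RGs: independent physical domains
        ru_size_gb: int       # reclaim unit size within each RG
        ruh_count: int        # reclaim unit handles the host writes through

        def rus_per_group(self):
            # Assumes capacity divides evenly across groups.
            return self.capacity_gb // (self.reclaim_groups * self.ru_size_gb)

        def open_rus(self):
            # Each RUH references one RU per RG, bounding how many RUs
            # can be simultaneously open for host writes.
            return self.ruh_count * self.reclaim_groups

    # The three customer styles from the abstract, with made-up numbers:
    multistream_like  = FdpConfig(3840, reclaim_groups=1,  ru_size_gb=48, ruh_count=8)
    open_channel_like = FdpConfig(3840, reclaim_groups=32, ru_size_gb=1,  ruh_count=8)
    zns_like          = FdpConfig(3840, reclaim_groups=1,  ru_size_gb=1,  ruh_count=64)
    for cfg in (multistream_like, open_channel_like, zns_like):
        print(cfg, cfg.rus_per_group(), cfg.open_rus())

Small RUs and many RGs expose more parallelism but multiply the open-RU state a drive must track, while a single large-RU RG keeps the drive simple but behaves much like streams; that tension is the trade space this presentation covers.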