Erasure Coding is increasingly adopted in on-premises environments to enable data reduction for critical applications while maintaining high durability and fault tolerance. Unlike traditional replication, Erasure Coding at scale introduces unique challenges, particularly around performance, consistency, and garbage management. An asynchronous, MapReduce-like approach to encoding allows data to be encoded eventually while maximizing storage savings. This talk will explore the architectural considerations, the trade-offs between resiliency and efficiency, and the practical aspects of using flexible stripe sizes and handling both immutable and mutable data. We will also delve into techniques for managing garbage efficiently, ensuring space reclamation without impacting the user experience.
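The storage-savings argument can be made concrete with a small sketch. The helper below is illustrative only (its name and the 4+2 layout are assumptions, not the scheme from the talk): a k+m erasure-coded stripe stores k data strips plus m parity strips, so its raw-to-usable ratio is (k+m)/k, versus 3x for 3-way replication with the same two-failure tolerance.

```python
def storage_overhead(data_strips, parity_strips):
    """Raw-to-usable capacity ratio for a k+m erasure-coded stripe."""
    return (data_strips + parity_strips) / data_strips

# 4+2 erasure coding tolerates two failures at 1.5x raw capacity,
# whereas 3-way replication (1 original + 2 copies) costs 3x.
ec = storage_overhead(4, 2)   # -> 1.5
rep = storage_overhead(1, 2)  # -> 3.0
print(ec, rep)
```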
Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) solutions require efficient methods to store, retrieve, and analyze massive datasets. A key enabler is the vector database (VectorDB), which converts raw content—such as text, images, or logs—into high-dimensional embeddings for rapid similarity searches. These workloads involve frequent embedding generation, indexing, and retrieval, placing heavy demands on storage systems to handle large I/O and maintain performance under concurrency.
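The retrieval step described above can be sketched in a few lines. This is a minimal brute-force illustration, not any particular VectorDB's implementation; the `top_k` helper and the toy three-dimensional embeddings are assumptions for the example (production systems use approximate-nearest-neighbor indexes over much higher-dimensional vectors).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Brute-force nearest-neighbor search over (doc_id, embedding) pairs."""
    ranked = sorted(index, key=lambda e: cosine_similarity(query, e[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = [("doc-a", [1.0, 0.0, 0.0]),
         ("doc-b", [0.9, 0.1, 0.0]),
         ("doc-c", [0.0, 0.0, 1.0])]
print(top_k([1.0, 0.05, 0.0], index))  # -> ['doc-a', 'doc-b']
```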
This paper presents an optimized approach for deploying a VectorDB on an enterprise storage array equipped with deduplication, effectively reducing redundancy and operational costs. By focusing on how vector embeddings are physically stored, we show how deduplication can minimize disk usage without sacrificing query speed. Furthermore, our solution integrates seamlessly with GenAI-RAG pipelines, providing scalable indexing, fault tolerance, and robust consistency.
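The capacity effect of deduplication on repeated embeddings can be illustrated with a toy content-addressed store. This sketch is a simplification under stated assumptions (the `DedupStore` class and whole-block fingerprinting are hypothetical, not the array's actual dedup engine): identical blocks hash to the same fingerprint, so only one physical copy is kept.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blocks share one physical copy."""
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data
        self.logical = 0   # bytes written by clients

    def write(self, data: bytes):
        fp = hashlib.sha256(data).hexdigest()
        self.logical += len(data)
        self.blocks.setdefault(fp, data)  # store only the first copy
        return fp

    @property
    def physical(self):
        return sum(len(d) for d in self.blocks.values())

store = DedupStore()
for block in [b"embedding-A", b"embedding-B", b"embedding-A"]:
    store.write(block)
# Three logical writes, but only two unique blocks consume capacity.
print(store.logical, store.physical)  # -> 33 22
```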
To illustrate real-world impact, we examine a scenario of real-time customer feedback analysis, where organizations leverage VectorDB to draw insights, classify sentiment, and deliver rapid responses—benefiting from the storage array’s advanced data reduction. Our findings reveal that deduplication substantially lowers capacity overhead for repeated or overlapping embeddings, enabling faster model training and inference.
Overall, this work demonstrates how an enterprise-grade, deduplication-enabled storage layer can optimize VectorDB performance for GenAI-RAG workloads, empowering large-scale, real-time analytics with improved efficiency and cost-effectiveness. The resulting infrastructure is robust and high-performing, addressing the growing needs of AI-driven applications while providing a reliable foundation for future innovations.
KIOXIA is introducing an application-orchestrated solution that offloads data redundancy, storage services, and computation to SSDs. The proposed offload scales out: performance grows with the number of SSDs. It is highly flexible and can easily be adopted by existing redundancy applications, saving CPU cores while reducing DRAM bandwidth consumption and TCO.
Caching optimizes storage systems by combining small, fast cache tiers with larger, slower capacity tiers to deliver high performance at reduced costs. NAND-based flash SSDs are commonly used in cache tiers due to their density, affordability, and low power consumption.
Recent storage technology advancements have expanded block-level caching potential. While HDDs struggle with capacity and performance demands, high-density QLC SSDs are becoming prevalent despite reduced endurance (1,000-3,000 P/E cycles versus 100,000 for SLC SSDs). Ultra-fast SCM devices and CXL-SSDs, e.g., Intel Optane, Kioxia XL-Flash, and Samsung CMM-H, introduce a new storage tier with near-DRAM performance and exceptional write endurance, ideal for reducing I/O pressure on high-density flash SSDs.
However, simple drop-in replacements fail to realize full performance potential due to substantial software management overhead. A critical challenge involves negative impacts on QLC device endurance, exacerbated by increasing indirection unit (IU) sizes. As SSD capacity grows, vendors increase IU size from 4 KB to 64 KB to manage mapping table costs. Block-level caching systems generate small, random writes misaligned with larger IU sizes due to cache evictions and small cache line sizes, causing significant write amplification and degrading QLC SSD endurance.
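The endurance impact of IU misalignment follows directly from read-modify-write at IU granularity. The small calculation below is an illustrative model, not EMSCache's accounting: it assumes every host write that touches an IU causes the whole IU to be rewritten, which is the worst case for small random writes.

```python
import math

def write_amplification(write_size_kb, iu_size_kb):
    """Device-side write amplification when each host write is
    read-modify-written at indirection-unit (IU) granularity."""
    ius_touched = math.ceil(write_size_kb / iu_size_kb)
    return (ius_touched * iu_size_kb) / write_size_kb

# A 4 KB random write on a 64 KB-IU QLC SSD rewrites a full IU: 16x.
print(write_amplification(4, 64))  # -> 16.0
# An IU-aligned 4 KB write on a 4 KB-IU device incurs no extra writes.
print(write_amplification(4, 4))   # -> 1.0
```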
To address these challenges, we propose EMSCache, a fast and efficient in-kernel, block-level I/O caching system for emerging tiered storage architectures using ultra-fast SCM and CXL-SSD devices as cache tiers and NAND flash SSDs as capacity tiers. EMSCache reduces software overhead to maximize ultra-fast cache tier performance benefits while extending SSD-based capacity tier lifespan.
As organizations expand their on-premise object storage deployments across multiple, geographically distributed sites, they increasingly require a unified interface to access and manage their global unstructured data seamlessly, regardless of where it resides. Establishing a global namespace across several AWS S3-compatible object stores presents distinct challenges, including managing the global state associated with the namespace, ensuring interoperability with any S3-compatible platform, and supporting all existing object storage features. This session will delve into the architectural considerations and design decisions involved in building Object Store Federation to address these complexities.