SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
We present RAINBLOCK, a public blockchain that achieves high transaction throughput without modifying the proof-of-work consensus. The chief insight behind RAINBLOCK is that while consensus controls the rate at which new blocks are added to the blockchain, the number of transactions in each block is limited by I/O bottlenecks. Public blockchains like Ethereum keep the number of transactions in each block low so that all participating servers (miners) have enough time to process a block before the next block is created. By removing the I/O bottlenecks in transaction processing, RAINBLOCK allows miners to process more transactions in the same amount of time. RAINBLOCK makes two novel contributions: the RAINBLOCK architecture that removes I/O from the critical path of processing transactions (txs), and the distributed, multi-versioned DSM-TREE data structure that stores the system state efficiently. We evaluate RAINBLOCK using workloads based on public Ethereum traces (including smart contracts). We show that a single RAINBLOCK miner processes 27.4K txs per second (27× higher than a single Ethereum miner). In a geo-distributed setting with four regions spread across three continents, RAINBLOCK miners process 20K txs per second.
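The abstract's key claim is that miners can process transactions without local I/O if the state they read is delivered with cryptographic proofs against a Merkle-tree root. As a rough illustration only (not RAINBLOCK's actual DSM-TREE API, whose interfaces are not given in this abstract), the sketch below shows how a miner could verify a supplied value against a known state root instead of reading it from disk; all names here are hypothetical.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash used for all tree nodes in this toy example."""
    return hashlib.sha256(data).digest()

def verify_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """Check a Merkle proof: `proof` is a list of (sibling_hash, sibling_is_left)
    pairs from the leaf up to the root. If the recomputed root matches the
    trusted root, the leaf value can be used without any local state lookup."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Toy two-leaf tree: the miner holds only `root`; a helper ships the
# value `a` plus its proof, and the miner verifies in memory.
a, b = b"balance:alice=10", b"balance:bob=5"
root = h(h(a) + h(b))
proof_for_a = [(h(b), False)]  # sibling b hashes in on the right
```

In this framing, the I/O cost of fetching and proving state is moved off the miner's critical path, which is the architectural point the abstract makes.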
This is an update on the activities in the OCP Storage Project.
Enterprises are rushing to adopt AI inference solutions with RAG to solve business problems, but enthusiasm for the technology's potential is outpacing infrastructure readiness. It quickly becomes prohibitively expensive, or even impossible, to use more complex models and bigger RAG data sets due to the cost of memory. Using open-source software components and high-performance NVMe SSDs, we explore two different but related approaches for solving these challenges and unlocking new levels of scale: offloading model weights to storage using DeepSpeed, and offloading RAG data to storage using DiskANN. By combining these, we can achieve (a) running more complex models on GPUs than was previously possible, and (b) greater cost efficiency when using large amounts of RAG data. We'll talk through the approach, share benchmarking results, and show a demo of how the solution works in an example use case.
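For the first half of this approach, DeepSpeed's ZeRO stage-3 offload can push parameters out to NVMe instead of holding them in GPU or host memory. A minimal configuration sketch is shown below; the `nvme_path` value is a placeholder, the exact tuning knobs depend on the DeepSpeed version deployed, and this fragment does not cover the DiskANN half of the talk.

```python
# Sketch of a DeepSpeed ZeRO-3 config that offloads model parameters
# to an NVMe SSD; "/local_nvme" is a hypothetical mount point.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 3,                    # partition params, grads, and optimizer state
        "offload_param": {
            "device": "nvme",          # spill parameters to SSD instead of RAM/VRAM
            "nvme_path": "/local_nvme",
            "pin_memory": True,        # pinned staging buffers for faster transfers
        },
    },
}
```

With a configuration along these lines, the GPU memory footprint is bounded by the working set of layers in flight rather than the full model size, which is what makes larger models feasible on the same hardware.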
Drawing from recent surveys of the end user members of the HPC-AI Leadership Organization (HALO), Addison Snell of Intersect360 Research will present the trends, needs, and "satisfaction gaps" for buyers of HPC and AI technologies. The talk will focus primarily on the Storage and Networking modules of the survey, with some highlights from others (e.g. processors, facilities, cloud) as appropriate. Addison will also provide overall market context of the total AI or accelerated computing market at a data center level, showing the growth of hyperscale AI, AI-focused clouds, and national sovereign AI data centers, relative to the HPC-AI and enterprise segments, which are experiencing diminishing influence in a booming market.
Chiplets have become a near-overnight success with today’s rapid-fire data center conversion to AI. But today’s integration of HBM DRAM with multiple SoC chiplets is only the very beginning of a larger trend in which multiple incompatible technologies will adopt heterogeneous integration to connect new memory technologies with advanced logic chips, providing both significant energy savings and vastly improved performance at a reduced price point. In this presentation analysts Tom Coughlin and Jim Handy will explain how memory technologies like MRAM, ReRAM, FRAM, and even PCM will eventually displace the DRAM HBM stacks used with xPUs, on-chip NOR flash and SRAM, and even NAND flash in many applications. They will explain how DRAM’s refresh mechanism and NAND and NOR flash’s energy-hogging writes will give way to much cooler memories that are easier to integrate within the processor’s package, how processor die sizes will dramatically shrink through the use of new memory technologies to replace on-chip NOR and SRAM, and how the UCIe interface will allow these memories to compete to bring down overall costs. They will also show how the approach will not only reduce the purchase price per teraflop but also improve the energy cost per teraflop.