Chiplets have become a near-overnight success amid the data center's rapid conversion to AI. But today's integration of HBM DRAM with multiple SoC chiplets is only the beginning of a larger trend in which otherwise incompatible process technologies are combined through heterogeneous integration, connecting new memory technologies with advanced logic chips to deliver significant energy savings and vastly improved performance at a lower price point.
Compute, memory, storage, and connectivity demands are forcing the industry to adapt as it meets the expanding needs of cloud, edge, enterprise, 5G, and high-performance computing. UCIe (Universal Chiplet Interconnect Express) is an open industry standard founded by leaders in semiconductors, packaging, IP, foundries, and cloud services to address customer requests for more customizable package-level integration.
For the past three decades, PCI-SIG® has delivered a succession of industry-leading PCI Express® (PCIe®) specifications that remain ahead of the increasing demand for a high-bandwidth, low-latency interconnect for compute-intensive systems in diverse market segments, including data centers, Artificial Intelligence and Machine Learning (AI/ML), high-performance computing (HPC) and storage applications. In early 2022, PCI-SIG released the PCIe 6.0 specification to members, doubling the data rate of the PCIe 5.0 specification to 64 GT/s (up to 256 GB/s for a x16 configuration).
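The headline bandwidth figure is simple arithmetic on the per-lane rate. As a rough sketch (raw signaling rate only, ignoring FLIT, FEC/CRC, and other protocol overhead; the function below is ours for illustration):

    # Back-of-the-envelope PCIe raw bandwidth, ignoring FLIT/CRC/FEC and
    # protocol overhead; these are raw signaling-rate figures only.

    def pcie_raw_bandwidth_gbs(gt_per_s: float, lanes: int, bidirectional: bool = True) -> float:
        """Raw link bandwidth in GB/s.

        gt_per_s      -- per-lane transfer rate in GT/s (64 for PCIe 6.0)
        lanes         -- link width (e.g. 16 for a x16 configuration)
        bidirectional -- count both directions, as the headline numbers do
        """
        bytes_per_lane = gt_per_s / 8          # ~1 bit per transfer -> GB/s per lane
        per_direction = bytes_per_lane * lanes
        return per_direction * (2 if bidirectional else 1)

    if __name__ == "__main__":
        print(pcie_raw_bandwidth_gbs(64, 16))         # 256.0 GB/s, both directions
        print(pcie_raw_bandwidth_gbs(64, 16, False))  # 128.0 GB/s, one direction
        print(pcie_raw_bandwidth_gbs(32, 16))         # 128.0 GB/s for PCIe 5.0 x16

The same formula reproduces the PCIe 5.0 x16 figure, which is what "doubling the data rate" amounts to at the link level.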
Modern AI systems usually require diverse data processing and feature engineering at tremendous scale and employ large, complex deep learning models that require expensive accelerators or GPUs. The typical result is a design that runs data processing and AI on two separate platforms, which creates severe data movement issues and makes efficient AI solutions difficult.
AI/ML is not new, but innovations in ML model development have made it possible to process data at unprecedented speeds. Data scientists have used standard POSIX file systems for years, but as the scale and need for performance have grown, many face new storage challenges. Samsung has been working with customers on new ways of approaching storage issues with object storage designed for use with AI/ML. Hear how software and hardware are evolving to allow unprecedented performance and scale of storage for Machine Learning.
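To make the contrast concrete, here is a minimal sketch of the two access models, using the generic boto3 S3 client as a stand-in for an object store rather than any specific vendor interface; the bucket, key, and path names are placeholders:

    # POSIX file read vs. an S3-style object GET. Bucket, key, and path
    # names are placeholders; boto3's generic S3 client stands in for any
    # object store, not a particular vendor's API.
    import boto3

    def read_posix(path: str) -> bytes:
        # Classic POSIX path: open a file on a mounted filesystem and read it.
        with open(path, "rb") as f:
            return f.read()

    def read_object(bucket: str, key: str) -> bytes:
        # Object path: address data by bucket/key over HTTP with no mount
        # point, letting capacity and throughput scale out independently
        # of any single client's filesystem.
        s3 = boto3.client("s3")
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    if __name__ == "__main__":
        sample = read_object("training-data", "shard-00042.tfrecord")
        print(len(sample), "bytes fetched from object storage")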
We present RAINBLOCK, a public blockchain that achieves high transaction throughput without modifying the proof-of-work consensus. The chief insight behind RAINBLOCK is that while consensus controls the rate at which new blocks are added to the blockchain, the number of transactions in each block is limited by I/O bottlenecks. Public blockchains like Ethereum keep the number of transactions in each block low so that all participating servers (miners) have enough time to process a block before the next block is created.
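The arithmetic behind that insight is straightforward: with the block interval fixed by consensus, throughput is simply transactions per block divided by seconds per block. A small illustration (the numbers below are illustrative, not measurements from the RAINBLOCK work):

    # Why the per-block transaction count caps throughput: proof-of-work
    # fixes the block interval, so throughput is transactions-per-block
    # divided by seconds-per-block. Numbers are illustrative only.

    def throughput_tps(txs_per_block: int, block_interval_s: float) -> float:
        return txs_per_block / block_interval_s

    if __name__ == "__main__":
        # Keeping blocks small so every miner can process them in time...
        print(throughput_tps(200, 13.0))     # ~15 tx/s
        # ...versus packing more transactions per block once I/O is no
        # longer the bottleneck, at the same consensus-imposed block rate.
        print(throughput_tps(20_000, 13.0))  # ~1,500 tx/s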
The next generation of automobiles is adopting PCIe for in-vehicle data communications, and the JEDEC Automotive SSD provides a high-performance, high-reliability solution for this shared, centralized storage. Features such as SR-IOV highlight the requirements of these computers on wheels, where multiple SoCs handle vehicle control, sensors, communications, entertainment, and artificial intelligence.
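As a rough illustration of what SR-IOV provides in such a design, the standard Linux sysfs interface can split one physical PCIe device into multiple virtual functions, each assignable to a different SoC or virtual machine; the device address below is a placeholder and the sketch assumes a driver with SR-IOV support and root privileges:

    # Minimal sketch of enabling SR-IOV virtual functions on Linux via the
    # standard sysfs knobs. The PCIe address is a placeholder.
    from pathlib import Path

    PF = Path("/sys/bus/pci/devices/0000:3b:00.0")   # placeholder physical function

    def enable_vfs(requested: int) -> None:
        total = int((PF / "sriov_totalvfs").read_text())
        vfs = min(requested, total)
        # Writing to sriov_numvfs asks the driver to create that many
        # virtual functions, each of which can be assigned to a separate
        # SoC or VM sharing the same centralized storage device.
        (PF / "sriov_numvfs").write_text(str(vfs))

    if __name__ == "__main__":
        enable_vfs(4)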