Networked Storage

One of the challenges for computational storage is placing compute that is both flexible and powerful close enough to the storage to make offload worthwhile. FPGAs show promise but are hard to program and relatively inflexible. Traditional CPU complexes have a large footprint and lack the parallel processing capabilities that AI/ML workloads demand. Data Processing Units (DPUs) tightly coupled with GPUs are the answer: a DPU integrates a CPU with hardware accelerators for I/O and storage on a single chip.
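To make the data-movement argument concrete, here is a minimal sketch contrasting a host-side filter, which pulls every record across the interconnect, with the same filter pushed down to a near-storage processor. The NearStorageDevice class and its push_filter() method are hypothetical stand-ins for the kind of interface a DPU-based computational storage device might expose, not a SNIA or vendor API.

```python
# Minimal sketch (hypothetical API) of computational storage offload:
# filter data where it lives instead of moving it all to the host.

from dataclasses import dataclass


@dataclass
class Record:
    key: int
    value: bytes


class NearStorageDevice:
    """Hypothetical device with an embedded processor next to the data."""

    def __init__(self, records):
        self._records = records          # data resident on the device

    def read_all(self):
        # Host-side path: every record crosses the interconnect.
        return list(self._records)

    def push_filter(self, predicate):
        # Offload path: the predicate runs on the device's embedded CPU
        # or accelerator, and only matching records are returned.
        return [r for r in self._records if predicate(r)]


if __name__ == "__main__":
    device = NearStorageDevice([Record(i, bytes(64)) for i in range(100_000)])

    # Without offload: move all 100,000 records to the host, then filter.
    hot_keys_host = [r for r in device.read_all() if r.key % 100 == 0]

    # With offload: move only the ~1,000 matching records.
    hot_keys_device = device.push_filter(lambda r: r.key % 100 == 0)

    assert len(hot_keys_host) == len(hot_keys_device)
```

The point of the sketch is the data-movement asymmetry: the offload path returns roughly 1% of the records, which is the kind of reduction that justifies putting programmable compute on the storage side of the interconnect.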

Updates from SNIA Forum, Initiative, and Committee Chairs

  • Data Protection & Privacy Committee
  • Networking Storage Forum
  • Cloud Storage Technologies Initiative
  • Compute, Memory, and Storage Initiative
  • Green Storage Initiative
  • Storage Management Initiative
  • Regional Updates

Updates from SNIA Technical Work Group Chairs

  • Blockchain Storage TWG
  • Cloud Storage TWG
  • Computational Storage TWG
  • Green Storage TWG
  • I/O Traces, Tools and Analysis (IOTTA) TWG
  • Linear Tape File System (LTFS) TWG
  • Object Drive TWG
  • Scalable Storage Management (SSM) TWG
  • Security TWG
  • SFF TA TWG
  • Smart Data Accelerator Interface (SDXI) TWG
  • Solid State Storage TWG
  • Zoned Storage TWG
