SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Many businesses like ours are challenged to provision infrastructure on demand at the speed of software while the datacenter footprint is shrinking. Pure Storage is getting out of the datacenter business and adopting an OpEx cost model with strict legal and data-compliance guidelines for many of our business application pipelines. We chose a programmable infrastructure and connected-cloud architecture with open APIs: an integrated model with Kubernetes on bare-metal hosts that can burst to the cloud, with a native storage layer providing flexible infrastructure and predictable performance for core and edge applications. The storage layer in the infrastructure stack provides data management capabilities using Stork, which delivers built-in high availability, data protection, data security, and compliance with multi-cloud mobility. Automating the entire stack with Kubernetes manifests let us provision and modify the entire infrastructure stack quickly and on demand.
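The manifest-driven provisioning described above can be sketched by generating a Kubernetes PersistentVolumeClaim in code. This is a minimal illustration only: the StorageClass name `stork-sc` and the claim name are hypothetical placeholders, and the real names depend on how the cluster's storage layer is configured.

```python
import json

# Hypothetical StorageClass name; the actual name depends on the
# cluster's Stork/storage-layer configuration.
STORAGE_CLASS = "stork-sc"

def make_pvc(name: str, size_gi: int) -> dict:
    """Build a PersistentVolumeClaim manifest as a plain dict,
    ready to serialize and apply with kubectl or a client library."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": STORAGE_CLASS,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# Emit the manifest as JSON (a strict subset of YAML, so kubectl accepts it).
manifest = make_pvc("app-data", 100)
print(json.dumps(manifest, indent=2))
```

Because the claim is plain data, the same template can be stamped out per application pipeline, which is what makes on-demand provisioning of the stack repeatable.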
Presenting paravirtualized devices to virtual machines has historically required specialized drivers in the guest operating system; the most popular examples are virtio-blk and virtio-scsi. These devices can be constructed either by the host operating system (KVM, for example, can emulate virtio-blk and virtio-scsi devices for use by the guest) or by a separate user-space process (the vhost-user protocol can connect to such targets, typically provided by SPDK). However, only Linux currently ships with virtio-blk and virtio-scsi drivers built in. The BIOS typically has no such drivers, making it impossible to boot from these devices, and operating systems like Windows need drivers installed separately. In this talk, we'll cover vfio-user, a new, standardized protocol that lets a virtual machine communicate with another process to emulate any PCI device. This protocol will be supported by QEMU in an upcoming release. We'll then cover how SPDK uses this new protocol to present paravirtualized NVMe devices to guests, allowing the guest BIOS and guest OS to use these disks with their existing NVMe drivers, without modification. We'll close with benchmarks demonstrating the extremely low overhead of this virtualization.
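A sketch of the flow described above, assuming a running SPDK `nvmf_tgt` process: the script below only assembles the `rpc.py` command sequence that creates a vfio-user NVMe subsystem backed by a RAM disk. The socket directory, NQN, and bdev name are illustrative assumptions, not values the talk specifies.

```python
# Illustrative wiring of an SPDK vfio-user NVMe target for a QEMU guest.
# Assumes SPDK's nvmf_tgt is already running; paths and names are examples.
SOCK_DIR = "/var/run/vfio-user"
NQN = "nqn.2019-07.io.spdk:cnode0"

def spdk_vfio_user_setup() -> list:
    """Return the rpc.py command sequence that exposes a 64 MiB RAM-disk
    bdev as an NVMe controller over a vfio-user socket."""
    return [
        "scripts/rpc.py nvmf_create_transport -t VFIOUSER",
        "scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0",
        f"scripts/rpc.py nvmf_create_subsystem {NQN} -a",
        f"scripts/rpc.py nvmf_subsystem_add_ns {NQN} Malloc0",
        f"scripts/rpc.py nvmf_subsystem_add_listener {NQN}"
        f" -t VFIOUSER -a {SOCK_DIR} -s 0",
    ]

for cmd in spdk_vfio_user_setup():
    print(cmd)

# The guest then attaches the emulated controller over the socket, roughly:
#   qemu-system-x86_64 ... -device vfio-user-pci,socket=/var/run/vfio-user/cntrl
# and its stock NVMe driver enumerates the device like any physical NVMe disk.
```

The key point the example illustrates is that the device presented over the socket is a standard NVMe controller, so no paravirtual driver is needed in the guest.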
Data centers are under pressure. Competing needs for asset homogenization, specialized workloads, and predictable performance lead to deployment tradeoffs that drive down data center utilization and drive up costs. Businesses need to keep innovating, but budgets rarely keep pace. A "one cloud fits all" approach is expensive or simply does not work, yet the flexibility of the cloud remains essential as workloads repatriate to containers on bare metal. These factors are driving organizations toward composable infrastructure to deliver more value and flexibility. In this session, we describe an existing solution whose architecture and approach reflect the challenges of disaggregation. The presentation covers the design decisions made while examining these problems and the resulting architecture that delivers on the promise of composable infrastructure.
A traditional storage node consists of compute, networking, and storage elements. The entire node is a single failure domain, and both data and metadata are maintained in storage. The emergence of CXL allows us to rethink this architecture: in the future, storage (behind CXL I/O) and metadata memory (behind CXL memory) can be disaggregated locally or across a group of storage nodes to improve data availability. Further, memory persistence can be achieved at a granular level using CXL memory devices, and future extensions to CXL with fabric-like attributes have the potential to further extend the data replication capabilities of the storage platform. In this talk, we will discuss the emerging platform architecture options for the storage node and how they can change the face of traditional storage node organization.