SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Anthony Constantine is a Distinguished Member of Technical Staff responsible for Storage Standards at Micron. He is very active in SNIA as co-chair of the SFF TWG, an author of several EDSFF specifications, and an author of or contributor to several other SFF TWG specifications. He is also a past member of the SNIA Technical Council. In addition, Anthony contributes to PCI-SIG, JEDEC, NVMe, and the Open Compute Project (OCP). Anthony has over 25 years of experience in the technology industry, with expertise in memory, storage, physical interfaces, low-power technologies, and form factors. He earned a BS in Electrical Engineering from UC Davis.
High-performance computing applications, web-scale storage systems, and modern enterprises increasingly need a data architecture that unifies data at the edge, in data centers, and in clouds. Organizations with massive-scale data requirements need the performance of a parallel file system coupled with a standards-based solution that is easy to deploy on machines with diverse security and build environments.
Standards-Based Parallel Global File System - No Proprietary Clients
The Linux community, with contributions from Hammerspace, has developed an embedded parallel file system client as part of the NFS protocol. With NFS 4.2, standard Linux clients can now read and write directly to the storage and scale out performance linearly for both IOPS and throughput, saturating the limits of both the storage and network infrastructure. Proprietary software is no longer needed to create a high-performance parallel file system: NFS is an open standard and is included in Linux distributions. NFS 4.2 is a commercially driven follow-on to pNFS concepts.
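As a rough illustration of this scale-out model, the sketch below (Python) has several standard client processes write independent files under an assumed NFSv4.2 mount. The mount point /mnt/nfs42, the mount options in the comment, and the file sizes are illustrative assumptions, not part of any vendor's tooling.

# Minimal sketch: exercising an assumed NFSv4.2 mount from standard Linux clients.
# A pNFS-capable export might be mounted with something like:
#   mount -t nfs -o vers=4.2 server:/export /mnt/nfs42   (example only)
import os
from multiprocessing import Pool

MOUNT_POINT = "/mnt/nfs42"   # hypothetical NFSv4.2 mount point
CHUNK = b"x" * (1 << 20)     # 1 MiB per write

def write_worker(worker_id: int) -> int:
    """Each worker writes its own file; with pNFS layouts the data path goes
    directly to the storage, so independent workers can scale out in parallel."""
    path = os.path.join(MOUNT_POINT, f"worker_{worker_id}.dat")
    written = 0
    with open(path, "wb") as f:
        for _ in range(64):          # 64 MiB per worker
            written += f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())
    return written

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        totals = pool.map(write_worker, range(8))
    print(f"wrote {sum(totals) / (1 << 20):.0f} MiB across {len(totals)} workers")

In this sketch each process simply uses ordinary POSIX file I/O; nothing proprietary sits between the application and the mount, which is the point of the standards-based client.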
Today’s data architectures span multiple types of storage systems at the edge, in the data center, and in the cloud. With the rise of data orchestration systems that place data on the appropriate storage, in the optimal geographic location, NFS 4.2 is a must-have technology for delivering high-performance workflows on distributed data sets.
Automated Data Orchestration - Across Any Storage System
Hammerspace developed and contributed the Flexible Files technology, which makes it possible to provide uninterrupted access to data for applications and users while orchestrating data movement, even on live files, across incompatible storage tiers from different vendors and multiple geographic locations.
Flexible Files, along with mirroring, built-in real-time performance telemetry, and attribute delegation (to name a few), are put to work in a global data environment to non-disruptively recall layouts, which enables live data access and data integrity to be maintained even as files are moved or copied. This has enormous ramifications for enterprises, as it can eliminate the downtime traditionally associated with data migrations and technology upgrades. Enterprises can combine this capability with software, such as a metadata engine, that virtualizes data across heterogeneous storage types and automates the movement and placement of data according to IT-defined business objectives.
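To make the non-disruptive claim concrete from the client's point of view, here is a minimal sketch, assuming a hypothetical file on an NFSv4.2 mount: the application keeps reading and checksumming a live file while the orchestration layer moves or copies it between tiers, and the checksum should stay stable throughout. The path, polling interval, and loop count are assumptions for illustration only.

# Illustrative only: reading a live file while it is being orchestrated elsewhere.
import hashlib
import time

LIVE_FILE = "/mnt/nfs42/dataset/model_weights.bin"  # hypothetical path

def checksum(path: str) -> str:
    """Stream the file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

baseline = checksum(LIVE_FILE)
for i in range(30):                  # poll for roughly five minutes
    time.sleep(10)
    current = checksum(LIVE_FILE)
    assert current == baseline, "checksum changed while data was being moved"
    print(f"pass {i + 1}: file still readable, checksum unchanged")

The client never has to unmount, pause, or switch paths; layout recall happens underneath the standard NFS client.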
Building a Global Data and Storage Architecture
Hammerspace brings NFSv4.2 (in addition to SMB and NFSv3) connectivity to its parallel global file system, building a standards-based, high-performance file system that spans existing, otherwise incompatible storage systems from any vendor as well as decentralized locations. In this way it can intelligently and efficiently automate the orchestration of data to the applications, compute clusters, or users that need it, enabling global access for analysis, distributed workloads, or AI-driven insights.
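As a small sketch of what a single namespace served over multiple protocols can look like from the client side, the snippet below compares the same file through two hypothetical mount points, one over NFS and one over SMB. Both paths are assumptions for illustration; this is not vendor tooling, just ordinary stat() calls against two views of one namespace.

# Compare one file as seen through an NFS mount and an SMB mount (hypothetical paths).
import os

NFS_VIEW = "/mnt/global-nfs/projects/run42/results.parquet"   # assumed NFS mount
SMB_VIEW = "/mnt/global-smb/projects/run42/results.parquet"   # assumed SMB mount

nfs_stat = os.stat(NFS_VIEW)
smb_stat = os.stat(SMB_VIEW)

same_size = nfs_stat.st_size == smb_stat.st_size
same_mtime = int(nfs_stat.st_mtime) == int(smb_stat.st_mtime)
print(f"size match: {same_size}, mtime match: {same_mtime}")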
As one of the inventors of NVMe at Fusion-io, David has long been a thought leader in the SSD space. Large steps forward were made in data processing when NVMe was embedded in the server, bringing large quantities of high-performance data into direct contact with processing. However, data-driven workload requirements have since changed dramatically: 1) larger quantities of data are being created, analyzed, and processed; 2) demands for performance are not just growing but accelerating; and 3) most importantly, data needs to be more usable. Workflows, AI and ML engines, and applications need data generated at the edge, in data centers, and in the cloud to be aggregated and shared over a network or a fabric. This has become critical in today’s decentralized workflows, which are rapidly on the rise with the massive adoption of cloud computing, the increase in edge locations, and the large number of data-driven employees working in remote locations. David will discuss why NVMe over Fabrics was destined to have minimal adoption and will put forth the assertion that we need to embed NFS in the device.
The rapid evolution of GPU computing in the enterprise has led to unprecedented demand for robust and scalable data platforms. This session will explore the critical role that standardized frameworks and protocols play in optimizing data creation, processing, collaboration, and storage within these advanced computing environments. Attendees will gain insights into how adopting standards can enhance interoperability, move data to compute, facilitate efficient data exchange, and ensure seamless integration across diverse systems and applications. By leveraging case studies and real-world implementations, the session will demonstrate how standards-based approaches can drive significant performance improvements and operational efficiencies in HPC and AI data architectures.