Storage Trends in AI: Your Questions Answered

SNIA STA Community

Apr 22, 2025


As AI workloads grow in complexity and scale, choosing the right storage infrastructure has never been more critical. In the “Storage Trends in AI 2025” webinar, experts from SNIA, the SNIA SCSI Trade Association community, and ServeTheHome discussed how established technologies like SAS continue to play a vital role in supporting AI’s evolving demands—particularly in environments where reliability, scalability, and cost-efficiency matter most. The session also covered emerging interfaces like NVMe and CXL, offering a full-spectrum view of what’s next in AI storage. Below is a recap of the audience Q&A, with insights into how SAS and other technologies are rising to meet the AI moment.

Q: Could you explain how tail latency is tied to data type?

A: Tail latency is more closely tied to service level agreements (SLAs). In AI platforms, tail latency isn’t typically a key concern when it comes to model calculations.
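To make the SLA connection concrete, here is a minimal sketch (not from the webinar; the SLA threshold and the simulated latencies are hypothetical) of how a 99th-percentile "tail" latency is computed and checked against an SLA bound:

```python
import math
import random

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# Simulated request latencies: most requests are fast, ~2% are slow outliers
random.seed(0)
latencies = [random.gauss(5, 1) for _ in range(980)] + \
            [random.uniform(50, 100) for _ in range(20)]

p50 = percentile(latencies, 50)   # median: what a "typical" request sees
p99 = percentile(latencies, 99)   # tail: what the slowest requests see

sla_p99_ms = 20  # hypothetical SLA bound on 99th-percentile latency
print(f"p50={p50:.1f} ms  p99={p99:.1f} ms  SLA met: {p99 <= sla_p99_ms}")
```

The point of the answer above is that the threshold (`sla_p99_ms` here) comes from the SLA, not from the type of data being stored: the median can look healthy while the tail violates the agreement.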

Q: For smaller customers, what are best practices for mixing Normal Workloads (like regular VM Infra) with AI workloads? Do they need to separate those workloads on different storage solutions?

A: There are no "small customers," only "small workloads." You right-size the bandwidth and server memory/storage for the workload. The trade-off is time: if workloads must negotiate for resources and utilization, you either dedicate the resources you have available or accept the extended time needed for completion.

Q: Storage technology is moving toward NVMe, with less focus on spinning drives. How is SAS being used for AI workloads?

A: SAS still plays a critical role in the ingest phase of AI workloads, where large data volumes make it a key enabler.

Q: What are power usage budgets looking like for these workloads?

A: That's an excellent question but unfortunately it is vendor-specific because of the number of variables involved.

Q: Are AI servers using all the on-chip PCIe lanes? For example, a 5th Gen Xeon provides 80 PCIe 5.0 lanes per CPU. I would guess the servers have 4 or 8 CPUs, so 8 x 80 lanes = 640 PCIe lanes per AI server.

A: The short answer is no. No system uses 100% of its lanes for one type of workload. Usually there is a dedicated allocation for specific workloads that is a subset of the total available bandwidth. Bandwidth efficiency is another challenge being tackled by both vendors and industry organizations.
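The arithmetic in the question, and the idea of a dedicated per-workload subset, can be sketched as follows (the lane counts come from the question; the allocation fractions are illustrative assumptions, not vendor figures):

```python
# Hypothetical PCIe lane budget for an AI server; fractions are illustrative.
LANES_PER_CPU = 80  # e.g., 80 PCIe 5.0 lanes per 5th Gen Xeon, per the question
NUM_CPUS = 8

total_lanes = LANES_PER_CPU * NUM_CPUS  # the question's 8 x 80 = 640 lanes

# A subset of the total is dedicated to each workload type, never 100% to one
allocation = {
    "gpu_accelerators": 0.50,
    "nvme_storage":     0.25,
    "networking":       0.15,
    "reserved":         0.10,
}

budget = {name: int(total_lanes * frac) for name, frac in allocation.items()}
print(f"total lanes: {total_lanes}, per-workload budget: {budget}")
```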

Q: Is your example of GPU / CPU memory transfers a use for CXL? 

A: It certainly can be.

Q: Are any of the major storage OEMs using 24G SAS on the back-end?

A: Yes, both Dell and HPE offer AI-targeted servers with SAS and NVMe storage options, among others.

Q: Question for Patrick: There is a group of industry veterans who think smaller, hyper-focused, hyper-specialized AI models will add more value for enterprises than large, general-purpose, ever-hallucinating ChatGPT-style models. What are your thoughts on this? If this scenario pans out, wouldn't "AI servers" be overbuilt? Wouldn't enterprise arrays, along with advancements in computational storage, lean toward all-flash or flash+disk storage setups?

A: As AI gets better, it becomes more useful, and overall demand goes up. For example, is training and operating humanoid robots valuable for a manufacturing enterprise? If so, we do not have the compute to do that at scale yet. Likewise, functions like finance and procurement should eventually all be automated. If we need many specialized smaller models, they still need to be trained and customized, with inference run on an ongoing basis. Re-training on new data or techniques will continue. New application spaces will open up. Also, retrieval-augmented generation (RAG) can be incorporated into open-source solutions, which cuts down on hallucinations.

Overall, this is an area where, when you consider the scope of what needs to be done to achieve the automation goals, there is nowhere near enough compute for either training or inference. Even once something is automated, the next question is how a company gains a competitive advantage by doing something better, which will require more work. If we look back in 10 years, it is unlikely everything will be solved, but I imagine trying to explain to my son in 15 years how everyone had to drive themselves "back in my day." There are folks who deny this is going to happen, but I have been driven by Waymo or Tesla FSD for 30 minutes or more every day I have been home for the last six months. The inhibitor to adoption is regulatory at this point.

I think disk is still going to be dominant for lower-cost storage when the main metric is $/PB. Flash is needed to feed the big AI systems, so it is really a question of whether the performance of flash overcomes the price delta. Also, larger-capacity SSDs can save cost not just in $/TB of media but also in connectivity costs. Computational storage removes the need to move data, so there is a decent chance we will see it not just in the persistent layer but also in the memory layer.
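The $/PB trade-off described above, including the connectivity savings from larger SSDs, can be sketched with back-of-the-envelope arithmetic (all prices and capacities below are hypothetical placeholders, not market data):

```python
def cost_per_pb(media_usd_per_tb, connect_usd_per_drive, drive_capacity_tb):
    """Cost of 1 PB: media cost plus per-drive connectivity (slot, HBA port, cabling)."""
    drives_needed = 1000 / drive_capacity_tb  # 1 PB = 1000 TB
    return media_usd_per_tb * 1000 + connect_usd_per_drive * drives_needed

# Hypothetical figures: cheap 20 TB disks vs. pricier but much larger 61 TB SSDs
hdd = cost_per_pb(media_usd_per_tb=15, connect_usd_per_drive=30, drive_capacity_tb=20)
ssd = cost_per_pb(media_usd_per_tb=60, connect_usd_per_drive=30, drive_capacity_tb=61)

print(f"HDD: ${hdd:,.0f}/PB  SSD: ${ssd:,.0f}/PB")
# Disk wins on raw $/PB with these numbers; large SSDs narrow the gap by
# needing far fewer drives (and thus fewer slots/ports) per petabyte.
```

Whether flash closes the remaining gap is then a question of how much its performance is worth for the workload, which is exactly the trade-off described above.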

Q: Where do you see FC in the AI infrastructure?

A: Fibre Channel isn’t particularly relevant to this discussion. It has a role in networking, but it's not specific to AI infrastructure.

Q: Is the interface going to change from 24G SAS to 24G+ SAS? Are 29-pin connectors still SAS?

A: Both 24G SAS and 24G+ SAS operate on the SAS-4 physical layer, with no changes in the core interface. However, 24G+ SAS introduces new connector options. The existing 24G SAS connectors are fully compatible, and a new internal connector (SFF-TA-1016) has been introduced for 24G+ SAS, which is also backward compatible with 24G SAS.

 

Olivia Rhye

Product Manager, SNIA


SNIA Fosters Industry Knowledge of Collaborative Standards Engagements

SNIA CMS Community

Nov 26, 2024

November 2024 was a memorable month to engage with audiences at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24) and Technology Live!, providing the latest on collaborative standards development and discussing high performance computing, artificial intelligence, and the future of storage. At SC24, seven industry consortiums participated in an Open Standards Pavilion to discuss their joint activities in memory and interconnect standards, storage standards, networking fabric standards, and management and orchestration. Technology leaders from DMTF, Fibre Channel Industry Association, OpenFabrics Alliance, SNIA, Ultra Accelerator Link Consortium, Ultra Ethernet Consortium, and Universal Chiplet Interconnect Express™ Consortium shared how these standards bodies are collaborating to foster innovation as technology trends accelerate. CXL® Consortium, NVM Express®, and PCI-SIG® joined these groups in a lively panel discussion, moderated by Richelle Ahlvers, Vice Chair of the SNIA Board of Directors, on their cooperation in standards development.

With the acceleration of AI and HPC technologies, industry standards bodies are essential to guarantee interoperability and facilitate faster deployment. Collaboration among these groups fosters the development and deployment of advanced HPC solutions, driving innovation and efficiency.

During SC24, SNIA engaged with analysts, partners, member companies, manufacturers, and end users to provide updates on its latest technical activities. The SNIA Compute, Memory, and Storage Initiative discussed computational storage work from SNIA and NVM Express, along with new opportunities for audiences interested in CXL to program CXL memory modules in the SNIA Innovation Lab. SNIA Swordfish® discussed its collaboration with DMTF Redfish, OFA Sunfish, NVMe, and CXL on a unified approach to open storage management. The SNIA SFF Technology Affiliate presented its technical releases in SSD E1, E3, U.2, and M.2 form factor standards. The SNIA STA Forum showcased 24G SAS products, highlighting the technology's cutting-edge capabilities, the benefits of SAS for high-performance computing environments, and SAS's critical role in delivering reliability, scalability, and performance for modern data-driven applications. Check out our post-event video on LinkedIn!

At Technology Live! in London, the SNIA STA Forum shared insights with editors, analysts, and influencers on the future of storage. STA Chair Cameron T. Brett attended the event to represent SNIA, fostering an informed and balanced conversation within the industry. Later in the day, he delivered a comprehensive update on the latest advancements in SAS technology, including 24G+ SAS developments, recent tech enhancements, and the updated SAS Roadmap. His presentation also highlighted new market data and explored innovative applications such as SAS in space. A lively discussion around AI and its transformative impact on storage further demonstrated SAS's ability to meet the demands of emerging technologies. These conversations reinforced SAS's vital role in shaping next-generation data infrastructures. Watch the YouTube video of Cameron Brett's SAS Presentation.

Looking forward to even more engagements in 2025! If you have not already, subscribe to SNIA Matters for the latest on ongoing SNIA activities and events!


Computing in Space: Pushing Boundaries with Off-the-Shelf Tech

STA Forum

Oct 21, 2024


Can commercial off-the-shelf technology survive in space? This question is at the heart of Hewlett Packard Enterprise's Spaceborne Computer-2 Project. Dr. Mark Fernandez (Principal Investigator for the Spaceborne Computer Project, Hewlett Packard Enterprise) and Cameron T. Brett (Chair, SNIA STA Forum) discuss this project with SNIA host Eric Wright; you can watch the full video here, listen to the podcast, or read on to learn more.

By utilizing enterprise SAS and NVMe SSDs, they are revolutionizing edge computing in space on the International Space Station (ISS). This breakthrough is accelerating experiment processing, like DNA analysis, from months to minutes, and significantly improving astronaut health monitoring and safety protocols. 

The Role of SAS in Space

Serial Attached SCSI (SAS) technology, known for its high performance and reliability, has been a cornerstone of this mission. SAS has been evolving for over 30 years, offering enhancements in speed, functionality, and durability. For this project, the combination of Value SAS (single-port, cost-effective, low-power SAS drives) and Enterprise SAS SSDs (high-performance, dual-port drives) provided the perfect balance of reliability, power efficiency, and speed.

On the ISS, resilience is critical. The drives must withstand high radiation levels and cosmic events while continuing to operate with minimal or no disruption. SAS drives, especially Enterprise-class SSDs, bring proven durability with built-in redundancy and error-correction capabilities. SAS technology offers sophisticated monitoring tools that allow the Spaceborne Computer-2 to track the health of drives daily, flagging potential issues before they become catastrophic. This self-healing functionality ensures that storage continues without failure, even in the harshest environments.

Resilience and Sustainability in Space

Beyond performance, SAS storage also plays a key role in sustainability. By extending the lifecycle of hardware through software enhancements and redundant configurations, the project reduces the need for frequent hardware replacements—a crucial advantage in space, where replacing components is far more challenging than on Earth. This philosophy of extending hardware life through SAS technology mirrors trends in earthbound enterprise applications, where companies seek to lower costs and emissions by maximizing the use of existing infrastructure.

Real-World Applications: DNA Analysis and Astronaut Safety

One of the project's key successes has been reducing the time for DNA sequencing onboard the ISS from months to minutes, allowing astronauts' health to be monitored daily. Another achievement was the development of AI software that analyzes astronauts' glove conditions post-spacewalk, reducing the time needed for analysis from five days to just 45 seconds. These advancements are made possible by the combined power of SAS SSDs, which store massive data sets, and NVMe drives, which accelerate processing.

Looking Ahead: The Future of SAS in Space

As space exploration advances, the Spaceborne Computer-2 Project continues to evolve, and SAS technology is expected to continue playing a prominent role. With Spaceborne Computer-3 on the horizon, this next iteration promises twice the storage capacity and GPU capabilities, while using half the electrical power. The new generation of SAS SSDs, tailored specifically for the extreme conditions of space, will continue to push the boundaries of what commercial technology can achieve beyond Earth's atmosphere.


Unveiling the Power of 24G SAS: Enabling Storage Scalability in OCP Platforms

STA Forum

Oct 2, 2024


By Cameron T. Brett & Pankaj Kalra

In the fast-paced world of data centers, innovation is key to staying ahead of the curve. The Open Compute Project (OCP) has been at the forefront of driving innovation in data center hardware, and its latest embrace of 24G SAS technology is a testament to this commitment. Join us as we delve into the exciting world of 24G SAS and its transformative impact on OCP data centers.

OCP's Embrace of 24G SAS

The OCP datacenter SAS-SATA device specification, a collaborative effort involving industry giants like Meta, HPE, and Microsoft, was first published in 2023. This specification laid the groundwork for the integration of 24G SAS technology into OCP data centers, marking a significant milestone in storage innovation.

The Rise of SAS in Hyperscale Environments

While SAS has long been associated with traditional enterprise storage, its adoption in hyperscale environments is less widely known. However, SAS's scalability, reliability, and manageability have made it the storage interface of choice for hyperscale and enterprise data centers, powering some of the largest and most dynamic infrastructures in the world.

The Grand Canyon Storage Platform

At the Open Compute Project Global Summit in October 2022, the Grand Canyon storage platform made its debut, showcasing the capabilities of 24G SAS technology. This cutting-edge system, built around a 24G SAS storage architecture, offers high storage capacity with either SAS or SATA hard disk drives (72 slots), designed to meet the ever-growing demands of modern data centers.

Exploring the Scalability of SAS

One of the key advantages of SAS is its unparalleled scalability, capable of supporting thousands of devices seamlessly. This scalability, combined with SAS's reliability and manageability features, makes it the ideal choice for data centers looking to expand their storage infrastructure while maintaining operational efficiency.

Delving Deeper into SAS Technology

For those eager to learn more about SAS technology, the SNIA STA Forum provides a wealth of resources and information. Additionally, the SAS Playlist on the SNIAVideo YouTube channel offers insights into the capabilities and applications of this innovative storage interface.

Conclusion

As OCP continues to drive innovation in data center hardware, the embrace of 24G SAS technology represents a significant step forward in meeting the evolving needs of modern data centers. By harnessing the power of SAS, OCP data centers are poised to achieve new levels of scalability, reliability, and performance, ensuring they remain at the forefront of the digital revolution. Follow us on X and LinkedIn to stay updated on the latest developments in 24G SAS technology.

