Ethernet in the Age of AI Q&A

Raguraman Sundaram

Jan 11, 2025


AI is having a transformative impact on networking. It's a topic that the SNIA Data, Storage & Networking Community covered in our live webinar, "Ethernet in the Age of AI: Adapting to New Networking Challenges." The presentation explored various use cases of AI, the nature of traffic for different workloads, the network impact of these workloads, and how Ethernet is evolving to meet these demands. The webinar audience was highly engaged and asked many interesting questions. Here are the answers to them all.

Q. What is the biggest challenge when designing and operating an AI scale-out fabric?

A. The biggest challenge in designing and operating an AI scale-out fabric is achieving low latency and high bandwidth at scale. AI workloads, like training large neural networks, demand rapid, synchronized data transfers between thousands of GPUs or accelerators. This requires specialized interconnects, such as RDMA, InfiniBand, or NVLink, and optimized topologies like fat-tree or dragonfly to minimize communication delays and bottlenecks. Balancing scalability with performance is critical; as the system grows, maintaining consistent throughput and minimizing congestion becomes increasingly complex. Additionally, ensuring fault tolerance, power efficiency, and compatibility with rapidly evolving AI workloads adds to the operational challenges. Unlike standard data center networks, AI fabrics handle intensive east-west traffic patterns that require purpose-built infrastructure. Effective software integration for scheduling and load balancing is equally essential. The need to align performance, cost, and reliability makes designing and managing an AI scale-out fabric a multifaceted and demanding task.

Q. What are the most common misconceptions about AI scale-out fabrics?

A. The most common misconception about AI scale-out fabrics is that they are the same as standard data center networks. In reality, AI fabrics are purpose-built for high-bandwidth, low-latency, east-west communication between GPUs, essential for workloads like large language model (LLM) training and inference. Many believe increasing bandwidth alone solves performance issues, but factors like latency, congestion control, and topology optimization (e.g., fat-tree, dragonfly) are equally critical. Another myth is that scaling out is straightforward: adding GPUs without addressing communication overhead or load balancing often leads to bottlenecks. Similarly, people assume all AI workloads can use a single fabric, overlooking differences in training and inference needs. AI fabrics also aren't plug-and-play; they require extensive tuning of hardware and software for optimal performance.

Q. How do you see the future of AI scale-out fabrics evolving over the next few years?

A. AI scale-out fabrics are going to use more and more Ethernet. Ethernet-based fabrics, enhanced with technologies like RoCE (RDMA over Converged Ethernet), will continue to evolve to deliver the low latency and high bandwidth required for large-scale AI applications, particularly in training and inference of LLMs. Emerging standards like 800GbE and beyond will provide the throughput needed for dense, GPU-intensive workloads. Advanced congestion management techniques, such as DCQCN, multipathing, and packet trimming, will improve performance in Ethernet-based fabrics by reducing packet loss and latency. Ethernet's cost-effectiveness, ubiquity, and compatibility with hybrid environments will make it a key enabler for AI scale-out fabrics in both cloud and on-premises deployments. The convergence of CXL over Ethernet may eventually enable memory pooling and shared memory access across components within scale-up systems, supporting the increasing memory demands of LLMs. The need for Ethernet in scale-up systems is going to be on the rise as well.

Q. What are the best practices for staying updated with the latest trends and developments? Can you recommend any additional resources or readings for further learning?

A. There are several papers and research articles on the internet; some of them are listed in the webinar slide deck. Following the Ultra Ethernet Consortium and SNIA are the best ways to learn about networking-related updates.

Q. Is NVLink a standard?

A. No, NVLink is not an open standard. It is a proprietary interconnect technology developed by NVIDIA. It is specifically designed to enable high-speed, low-latency communication between NVIDIA GPUs and, in some cases, between GPUs and CPUs in NVIDIA systems.

Q. What's the difference between collectives and multicast?

A. It is tempting to think that collectives and multicast are similar, for example, collectives like broadcast. But they are in principle different and address different requirements. Collectives are high-level operations for distributed computing, while multicast is a low-level network mechanism for efficient data transmission.

Q. What's the supporting lib/tool/kernel module for enabling Node1 GPU1 -> GPU fabric -> Node2 GPU2? It seems to require some host-level knowledge, not TOR-level.

A. Yes, topology discovery and the optimal path for routing GPU messages from the source depend on the host software and are not TOR dependent. GPU applications end up using MPI APIs for communication between the nodes in the cluster. These MPI APIs are made aware of the GPU topologies by the respective extension libraries provided by the GPU vendor. For instance, NVIDIA's NCCL and AMD's RCCL libraries provide an option to specify a static GPU topology for the system through an XML file (via NCCL_TOPO_FILE or RCCL_TOPO_FILE) that can be loaded when initializing the stack. The GPU-aware MPI library extensions from NVIDIA/AMD then leverage this topology information to send messages to the appropriate GPU. An example NCCL topology is here: https://github.com/nebius/nccl-topology/blob/main/nccl-topo-h100-v1.xml. There are utilities such as nvidia-smi/rocm-smi that are used in the initial discovery. Automatic topology detection and calculation of optimal paths for MPI can also be provided by the GPU vendor's CCL library; for instance, NCCL provides such functionality by reading /sys on the host and building the PCI topology of GPUs and NICs. A minimal sketch of loading a static topology file appears at the end of this post.

The SNIA Data, Storage & Networking Community provides vendor-neutral education on a wide range of topics. Follow us on LinkedIn and @SNIA for upcoming webinars, articles, and content.
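As promised in the topology answer above, here is a minimal sketch of pointing NCCL at a static topology file before initializing a distributed job. It assumes a PyTorch + NCCL environment launched with torchrun; the file path is hypothetical, and NCCL falls back to autodetection (via /sys) if the variable is unset.

```python
# Minimal sketch: point NCCL at a static topology file before initializing
# the distributed stack. Assumes a PyTorch + NCCL job launched with torchrun
# (which sets RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR/PORT).
# The topology file path below is hypothetical.
import os

import torch
import torch.distributed as dist

# NCCL reads this environment variable at initialization time; RCCL offers an
# equivalent knob on AMD platforms, as noted in the answer above.
os.environ.setdefault("NCCL_TOPO_FILE", "/etc/nccl/nccl-topo-h100-v1.xml")

def main() -> None:
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # The NCCL backend consults NCCL_TOPO_FILE (or autodetects via /sys)
    # when it builds its view of GPUs, NICs, and PCIe switches.
    dist.init_process_group(backend="nccl")

    # A trivial collective to confirm the ranks can communicate.
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    if dist.get_rank() == 0:
        print(f"all_reduce result: {t.item()} (world size {dist.get_world_size()})")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Such a script would typically be launched with something like `torchrun --nproc_per_node=8 topo_check.py` on each node.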


SNIA Fosters Industry Knowledge of Collaborative Standards Engagements

SNIA CMS Community

Nov 26, 2024

November 2024 was a memorable month to engage with audiences at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24) and Technology Live! to provide the latest on collaborative standards development and discuss high performance computing, artificial intelligence, and the future of storage.

At SC24, seven industry consortiums participated in an Open Standards Pavilion to discuss their joint activities in memory and interconnect standards, storage standards, networking fabric standards, and management and orchestration. Technology leaders from DMTF, Fibre Channel Industry Association, OpenFabrics Alliance, SNIA, Ultra Accelerator Link Consortium, Ultra Ethernet Consortium, and Universal Chiplet Interconnect Express™ Consortium shared how these standards bodies are collaborating to foster innovation as technology trends accelerate. CXL® Consortium, NVM Express®, and PCI-SIG® joined these groups in a lively panel discussion, moderated by Richelle Ahlvers, Vice Chair of the SNIA Board of Directors, on their cooperation in standards development.

With the acceleration of AI and HPC technologies, industry standards bodies are essential to guarantee interoperability and facilitate faster deployment. Collaboration among industry standards groups fosters the development and deployment of advanced HPC solutions, driving innovation, collaboration, and efficiency. Joint activities from these industry associations include memory and interconnect standards, storage standards, networking fabric standards, and management and orchestration.

During SC24, SNIA engaged with analysts, partners, member companies, manufacturers, and end users to provide updates on its latest technical activities. The SNIA Compute, Memory, and Storage Initiative discussed computational storage work from SNIA and NVM Express and new opportunities for audiences interested in CXL to program CXL memory modules in the SNIA Innovation Lab. SNIA Swordfish® discussed its collaboration with DMTF Redfish, OFA Sunfish, NVMe, and CXL on a unified approach to open storage management. The SNIA SFF Technology Affiliate presented its technical releases in SSD E1, E3, U.2, and M.2 form factor standards. The SNIA STA Forum showcased 24G SAS products highlighting the technology's cutting-edge capabilities, discussing the benefits of SAS for high-performance computing environments and SAS's critical role in delivering reliability, scalability, and performance for modern data-driven applications. Check out our post-event video on LinkedIn!

At Technology Live! in London, the SNIA STA Forum shared insights with editors, analysts, and influencers on the future of storage (Photo Credit: A3 Communications). STA Chair Cameron T. Brett attended the event to represent SNIA, fostering an informed and balanced conversation within the industry. Later in the day, he delivered a comprehensive update on the latest advancements in SAS technology, including 24G+ SAS developments, recent tech enhancements, and the updated SAS Roadmap. His presentation also highlighted new market data and explored innovative applications such as SAS in space. A lively discussion around AI and its transformative impact on storage further demonstrated SAS's ability to meet the demands of emerging technologies. These conversations reinforced SAS's vital role in shaping next-generation data infrastructures. Watch the YouTube video of Cameron Brett's SAS Presentation.
Looking forward to even more engagements in 2025! If you have not already, subscribe to SNIA Matters for the latest on ongoing SNIA activities and events!


Storage for AI Q&A
Our recent SNIA Data, Networking & Storage Forum (DNSF) webinar, "AI Storage: The Critical Role of Storage in Optimizing AI Training Workloads," was an insightful look at how AI workloads interact with storage at every stage of the AI data pipeline, with a focus on data loading and checkpointing. Attendees gave this session a 5-star rating and asked a lot of wonderful questions. Our presenter, Ugur Kaynar, has answered them here. We'd love to hear your questions or feedback in the comments field.

Q. Great content on file and object storage. Are there any use cases for block storage in AI infrastructure requirements?

A. Today, by default, AI frameworks cannot directly access block storage and need a file system to interact with block storage during training. Block storage provides raw storage capacity, but it lacks the structure needed to manage files and directories. Like most AI frameworks, PyTorch depends on a file system to manage and access data stored on block storage.

Q. Do high-speed networks make some significant enhancements to I/O and the checkpointing process?

A. High-speed networks enable faster data transfer rates, and the I/O bandwidth can be better utilized, which can significantly reduce the time required to save checkpoints. This minimizes downtime and helps maintain system performance. However, it is important to keep in mind that the performance of checkpointing depends on both the storage network and the storage system. It's essential to maintain a balance between the two for optimal results. If the network is fast but the storage system is slow, or vice versa, the slower component will create a bottleneck. This imbalance can lead to inefficiencies and longer checkpointing times. When both the network and storage systems are balanced, data can flow smoothly between them. This maximizes throughput, ensuring that data is written to storage as quickly as it is transferred over the network.

Q. What is the rule of thumb or range of storage throughput per GPU?

A. Please see the answer below.

Q. What are the typical IO performance requirements for AI training in terms of IOs per second and bytes per second?

A. The storage throughput per GPU can vary based on the specific workload and the performance requirements of the AI model being trained. Models processing text data typically need throughput ranging from a few MB/s to hundreds of MB/s per GPU. In contrast, more demanding models that handle image or video data require higher throughput, often around a few GB/s, due to the larger sample sizes. According to the latest MLPerf Storage benchmark results (MLPerf Storage | MLCommons v1.1 results), the 3D-Unet medical image segmentation model requires approximately 2.8 GB/s of storage throughput to maintain 90% utilization of H100 GPUs. Storing the checkpoint data for large models requires significant storage throughput, in the range of several GB/s per GPU.

Q. Do you see in this workflow a higher demand for throughput from the storage layer? Or, with random operations, more demand for IOPS? How do the devices have to change to accommodate AI?

A. Typically, random IO operations place higher demands on IOPS than on throughput.

Q. How frequently are checkpoints created and then immediately read from? Is there scope for a checkpoint cache for such immediate reads?

A. In AI training, checkpoints are usually generated at set intervals, which can differ based on the specific needs of the training process. For large-scale models, checkpoints might be saved every few minutes or after a certain number of iterations/steps to minimize data loss. Immediate reads from these checkpoints often happen when resuming training after a failure or during model evaluation for validation checks. Implementing a checkpoint cache can be highly advantageous given the frequency of these operations. By storing the most recent checkpoint data, such a cache can facilitate quicker recovery, reduce wait times, and enhance overall training efficiency.

Q. How does storage see the serialized checkpointing write? Is it a single thread/job to an individual storage device?

A. The serialized checkpoint data is stored as large sequential blocks by a single writer.

Q. Do you see something like CXL memory helping with checkpointing?

A. Eventually. CXL provides two primary use cases: extending and sharing system memory, and integrating accelerators into the CPU's coherent link. From an AI checkpointing perspective, CXL could act as an additional memory tier for storing checkpoints.

Q. Great presentation, thanks. Can you touch on how often these large models are checkpointed? Is it daily? Hourly? Also, what type of storage is preferred for checkpointing (SSD, HDD, or a mix of both), and are the checkpoints saved indefinitely? I'm trying to understand if checkpointing is a storage pain point with regard to having enough storage capacity on hand.

A. For checkpointing, the preferred storage type is high-speed flash storage (NVMe SSD) due to its high performance during training. The frequency of checkpointing for large AI models can vary based on several factors, including the model size, training duration, and the specific requirements of the training process. Therefore, it is difficult to generalize. For example, Meta has reported that they perform checkpointing at 30-minute intervals for recommendation models (nsdi22-paper-eisenman.pdf). However, a common guideline is to keep the time spent on checkpointing to less than 5% of your training time. This ensures that the checkpointing process does not significantly impact the overall training efficiency while still providing sufficient recovery points in case of interruptions. If the model runs multiple epochs, checkpoints are typically saved after each epoch, which is one complete pass through the training dataset. This practice ensures that you have recovery points at the end of each epoch. Another common approach is to checkpoint at regular intervals (a certain number of iterations or steps), such as every 500 iterations; for example: Storage recommendations for AI workloads on Azure infrastructure (IaaS) – Cloud Adoption Framework | Microsoft Learn.

Q. Why is serialization needed in checkpointing?

A. Serialization ensures that the model's state, including its parameters and optimizer states, is captured in a consistent manner. By converting the model's state into a structured format, serialization allows for efficient storage and retrieval.
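To make that concrete, here is a minimal sketch of checkpoint serialization in a PyTorch training loop. The file path and interval are hypothetical, and the surrounding training loop is only indicated in comments.

```python
# Minimal sketch: serialize model and optimizer state into a single checkpoint
# file at a fixed step interval, and restore it on resume. Assumes a PyTorch
# training loop; the path and interval are hypothetical.
import os

import torch

CKPT_PATH = "/mnt/fast-nvme/checkpoints/ckpt.pt"  # hypothetical fast-flash target
CKPT_EVERY = 500                                  # e.g., every 500 steps

def save_checkpoint(step, model, optimizer):
    # torch.save serializes the state dicts (parameters, optimizer moments)
    # into one structured object, written as a large sequential stream.
    torch.save(
        {"step": step,
         "model": model.state_dict(),
         "optimizer": optimizer.state_dict()},
        CKPT_PATH,
    )

def load_checkpoint(model, optimizer):
    if not os.path.exists(CKPT_PATH):
        return 0  # nothing to resume from
    ckpt = torch.load(CKPT_PATH, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["step"]

# Inside the training loop (sketch):
#   if step % CKPT_EVERY == 0:
#       save_checkpoint(step, model, optimizer)
```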
Q. What is the difference between tensor cores and Matrix Multiplication Accelerator (MMA) Engines?

A. Tensor Cores are highly specialized for AI and deep learning tasks, providing significant performance boosts for these specific workloads, while MMA Engines are more general-purpose and used across a broader range of applications.

Q. Since the checkpoints are so large, do a lot of AI environments utilize tape to keep multiple copies of checkpoints?

A. During training, checkpoints are typically written to redundant fast storage solutions like all-flash arrays. This ensures that the expensive GPUs are not left idle, waiting for data to be written or read. The checkpoint data is replicated to ensure durability and prevent data loss. Tape storage, on the other hand, is more suitable for archival purposes. Tape can be used to store checkpoints long term due to its cost-effectiveness, durability, and scalability. While it's not ideal for the high-speed demands of active training, it excels in preserving data for future reference or compliance reasons.

Q. Do you think S3/object will adopt something like RDMA for faster access to read/write data directly to GPU memory?

A. Currently, there is no RDMA support in S3. However, the increasing usage of object storage indicates that object storage solutions will adopt optimization approaches similar to those of file systems, like RDMA, for faster access to read/write data.

Q. Are checkpoints stored after training, or are they deleted automatically?

A. Checkpoints are typically stored after training to allow for model recovery, fine-tuning, or further analysis. They are not deleted automatically unless explicitly configured to do so. This storage ensures that you can resume training from a specific point if needed, which is especially useful in long or complex training processes.

Keep up with all that is going on at the SNIA Data, Networking & Storage Forum. Follow us on LinkedIn and X @SNIA.


Pratik Gupta

Apr 25, 2024

Moving well beyond "fix it when it breaks," AIOps introduces intelligence into the fabric of IT thinking and processes. The impact of AIOps and the shift in IT practices were the focus of a recent SNIA Cloud Storage Technologies Initiative (CSTI) webinar, "AIOps: Reactive to Proactive – Revolutionizing the IT Mindset." If you missed the live session, it's available on-demand, together with the presentation slides, at the SNIA Educational Library. The audience asked several intriguing questions. Here are answers to them all.

Q. How do you align your AIOps objectives with your company's overall AI usage policy when it is still fairly restrictive in terms of AI use and acceptance?

A. There are a lot of misconceptions about company policies and also about what constitutes AI and the actual risk. There are several steps you can take:
  • Understand the policy and intent
  • Focus on low-risk, high-value use cases; for example, data used in IT management (e.g., metrics or the number of incidents or events) is often low risk and high value.
  • Start with a well-controlled and small environment and show value
  • Be transparent and demonstrate transparency. Even put a human in the loop for a while.
  • Maintain data governance – responsible data handling.
  • Use industry’s best practices.
Q. What are the best AIOps tools in the market?

A. There are many tools that claim to be AIOps tools. But as the webinar shows, there is no single good tool and there will never be one best tool. It depends on what problem you are trying to solve:
  • Step 1: Identify the areas of the software development life cycle (SDLC) that you are focused on
  • Step 2: Identify the problem areas
  • Step 3: Identify the tools that can help catch the problems earlier and solve them
Q. What kind of coding and tool experience is needed for AIOps?

A. Different parts of the lifecycle require different levels of experience with coding or tools. Many don't need any coding experience. However, a number of them require a thorough understanding of processes and best practices in software development or IT management to use them effectively.

Q. How can a DevOps engineer upskill to AIOps?

A. It is very easy for a DevOps engineer to upskill to use AIOps tools. A lot of these capabilities are available as open source. It is best to start experimenting with open-source tools and see their value. Second, focus on a smaller section of the problem (looking at the lifecycle) and then identify the tools that solve that problem. Free tiers, open-source tools, and even manual scripts help you upskill without buying these tools. A lot of online course sites like Udemy are now offering AIOps classes as well.

Q. What are examples of existing AI cloud cost optimization tools?

A. There are two types of cloud cost optimization tools:
  • ITOps tools – automate actions to optimize cost
  • FinOps tools – analyze and recommend actions to optimize cost.
The analysis tools are good at identifying issues but fall short of actually providing value unless you manually take action. The tools that automate provide value immediately but need greater buy-in from the organization to allow a tool to take action. Some available optimization tools include Turbonomic from IBM, along with offerings from Flexera, Apptio, and Densify, as well as AWS Cost Explorer and Azure Cost Management + Billing; some are built into the cloud providers.

Q. Can you please explain runbooks further?

A. Runbooks are a sequence of actions, often coded as scripts, that are used to automate the action or remediation in response to a problem or incident. These are pre-defined procedures. Usually, they are built out of a set of manual actions an operator takes and then codifies in the form of a procedure and then code.
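As an illustration, here is a minimal runbook sketch codified as a script. It assumes a hypothetical service with an HTTP health endpoint and a systemd restart as the pre-approved remediation; the service name, URL, and commands are placeholders, not a specific product's API.

```python
# Minimal runbook sketch: codify the manual steps an operator would take when
# a service health check fails. The service name, health URL, and restart
# command are hypothetical placeholders.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"                   # hypothetical health endpoint
RESTART_CMD = ["systemctl", "restart", "example-api.service"]  # hypothetical unit

def is_healthy(timeout_s: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout_s) as resp:
            return resp.status == 200
    except Exception:
        return False

def run() -> None:
    # Step 1: confirm the reported problem before acting.
    if is_healthy():
        print("Service healthy; no action taken.")
        return
    # Step 2: perform the pre-approved remediation.
    subprocess.run(RESTART_CMD, check=False)
    # Step 3: verify after a short grace period, and escalate if still down.
    time.sleep(10)
    if is_healthy():
        print("Remediation succeeded; incident can be auto-closed.")
    else:
        print("Remediation failed; escalate to on-call.")

if __name__ == "__main__":
    run()
```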


Q&A for Accelerating Gen AI Dataflow Bottlenecks

Erik Smith

Mar 25, 2024

Generative AI is front page news everywhere you look. With advancements happening so quickly, it is hard to keep up. The SNIA Networking Storage Forum recently convened a panel of experts from a wide range of backgrounds to talk about Gen AI in general and specifically discuss how dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. If you missed this session, "Accelerating Generative AI: Options for Conquering the Dataflow Bottlenecks," it's available on-demand at the SNIA Educational Library. We promised to provide answers to our audience questions, and here they are.

Q: If ResNet-50 is a dinosaur from 2015, which model would you recommend using instead for benchmarking?

A: Setting aside the unfair aspersions being cast on the venerable ResNet-50, which is still used for inferencing benchmarks 😊, we suggest checking out the MLCommons website. In the benchmarks section you'll see multiple use cases on training and inference. There are multiple benchmarks available that can provide more information about the ability of your infrastructure to effectively handle your intended workload.

Q: Even if/when we use optics to connect clusters, there is a roughly 5ns/meter delay for the fiber between clusters. Doesn't that physical distance limit almost mandate alternate ways of programming optimization to 'stitch' the interplay between data and compute?

A: With regards to the use of optics versus copper to connect clusters, signals propagate through fiber and copper at about the same speed, so moving to an all-optical cabling infrastructure for latency reduction reasons is probably not the best use of capital. Also, even if there were a slight difference in the signal propagation speed through a particular optical or copper-based medium, 5ns/m is small compared to switch and NIC packet processing latencies (e.g., 200-800 ns per hop) until you get to full metro distances. In addition, software latencies are 2-6 us on top of the physical latencies for the most optimized systems. For AI fabrics, data/messages are pipelined, so the raw latency does not have much effect. Interestingly, the time for data to travel between nodes is only one of the limiting factors when it comes to AI performance limitations, and it's not the biggest limitation either. Along these lines, there's a phenomenal talk by Stephen Jones (NVIDIA), "How GPU computing works," that explains how latency between GPU and memory impacts the overall system efficiency much more than anything else. That said, the various collective communication libraries (NCCL, RCCL, etc.) and in-network compute (e.g., SHARP) can have a big impact on the overall system efficiency by helping to avoid network contention.

Q: Does this mean that GPUs are more efficient to use than CPUs and DPUs?

A: GPUs, CPUs, AI accelerators, and DPUs all provide different functions and have different tradeoffs. While a CPU is good at executing arbitrary streams of instructions through applications/programs, embarrassingly parallelizable workloads (e.g., matrix multiplications, which are common in deep learning) can be much more efficient when performed by GPUs or AI accelerators due to the GPUs' and accelerators' ability to execute linear algebra operations in parallel. Similarly, I wouldn't use a GPU or AI accelerator as a general-purpose data mover; I'd use a CPU or an IPU/DPU for that.

Q: With regards to vector engines, are there DPUs or switches (IB or Ethernet) that contain vector engines?

A: There are commercially available vector engine accelerators, but currently there are no IPUs/DPUs or switches that provide this functionality natively.

Q: One of the major bottlenecks in modern AI is GPU-to-GPU connectivity. For example, NVIDIA uses a proprietary GPU-GPU interconnect. At DGX-2 the focus was on 16 GPUs within a single box with NVSwitch, but then with A100 NVIDIA pulled this back to 8 GPUs, and then expanded on that to a SuperPOD and a second level of switching to get to 256 GPUs. How do NVLink, or other proprietary GPU-to-GPU interconnects, address bottlenecks? And why has the industry focused on an 8 GPU deployment vs a 16 GPU deployment, given that LLMs are not training on 10s of thousands of GPUs?

A: GPU-GPU interconnects all address bottlenecks in the same way that other high-speed fabrics do. GPU-GPU interconnects have direct connections featuring large bandwidth, optimized interconnect (point-to-point or parallel paths), and lightweight protocols. These interconnects have so far been proprietary and not interoperable across GPU vendors. The number of GPUs in a server chassis is dependent on many practical factors; e.g., 8 Gaudis per server leveraging standard RoCE ports provides a good balance to support training and inference.

Q: How do you see the future of blending of memory and storage being enabled for generative AI workloads and the direction of "unified" memory between accelerators, GPUs, DPUs and CPUs?

A: If by unified memory you mean centralized memory that can be treated like a resource pool and be consumed by GPUs in place of HBM or by CPUs/DPUs in place of DRAM, then we do not believe we will see unified memory in the foreseeable future. The primary reason is latency. To have a unified memory would require centralization. Even if you were to constrain the distance (i.e., between the end-devices and the centralized memory) to be a single rack, the latency increase caused by the extra circuitry and physical length of the transport media (at 5ns per meter) could be detrimental to performance. However, the big problem with resource sharing is contention. Whether it be congestion in the network or contention at the centralized resource access point (interface), sharing resources requires special handling that will be challenging in the general case. For example, with 10 "compute" nodes attempting to access a pool of memory on a CXL Type 3 device, many of the nodes will end up waiting an unacceptably long period of time for a response. If by unified memory you mean creating a new "capacity" tier of memory that is more performant than SSD and less performant than DRAM, then CXL Type 3 devices appear to be the way the industry will address that use case, but it may be a while before we see mass adoption.

Q: Do you see hardware design becoming more specialized for the AI/ML phases (training, inference, etc.)? In today's enterprise deployments you can have the same hardware performing several tasks in parallel.

A: Yes. Not only have specialized HW offerings (e.g., accelerators) already been introduced (such as consumer laptops combining CPUs with inference engines), but specialized configurations that have been optimized for specific use cases (e.g., inferencing) are expected to be introduced as well. The reason is related to the diverse requirements for each use case. For more information, see the OCP Global Summit 23 presentation "Meta's evolution of network AI" (specifically starting at time stamp 4:30). They describe how different use cases stress the infrastructure in different ways. That said, there is value in accelerators and hardware being able to address any of the work types for AI so that a given cluster can run whichever mix of jobs is required at a given time.

Q: Google leaders like Amin Vahdat have been casting doubts on the possibility of significant acceleration far from the CPU. Can you elaborate further on positioning data-centric compute in the face of that challenge?

A: This is a multi-billion-dollar question! There isn't an obvious answer today. You could imagine building a data processing pipeline with data transform accelerators 'far' from where the training and inferencing CPUs/accelerators are located. You could build a full "accelerator only" training pipeline if you consider a GPU to be an accelerator, not a CPU. The better way to think about this problem is to consider that there is no single answer for how to build ML infrastructure. There is also no single definition of CPU vs. accelerator that matters in constructing useful AI infrastructure solutions. The distinction comes down to the role of the device within the infrastructure. With emerging 'chiplet' and similar approaches we will see the lines and distinctions blur further. What is significant in what Vahdat and others have been discussing: fabric/network/memory construction plus protocols to improve bandwidth, limit congestion, and reduce tail latency when connecting the data to computational elements (CPU, GPU, AI accelerators, hybrids) will see significant evolution and development over the next few years.
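As a companion to the point above about pipelined collectives and communication libraries, here is a minimal sketch that times a large all_reduce to estimate effective collective bandwidth, illustrating that these transfers are throughput-bound rather than dominated by per-hop latency. It assumes a PyTorch + NCCL job launched with torchrun; the payload size and iteration count are illustrative.

```python
# Minimal sketch: time a large all_reduce to estimate effective collective
# bandwidth. Assumes a PyTorch + NCCL job launched with torchrun; sizes are
# illustrative. The tensor's values are irrelevant; only transfer time matters.
import os
import time

import torch
import torch.distributed as dist

def main() -> None:
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    n_elems = 256 * 1024 * 1024  # 256M fp16 elements, roughly a 512 MB payload
    buf = torch.ones(n_elems, dtype=torch.float16, device="cuda")

    # Warm up so NCCL sets up its rings/trees before timing.
    for _ in range(3):
        dist.all_reduce(buf)
    torch.cuda.synchronize()

    iters = 10
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(buf)
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters

    if dist.get_rank() == 0:
        gb = buf.numel() * buf.element_size() / 1e9
        print(f"all_reduce of {gb:.2f} GB took {elapsed * 1e3:.1f} ms "
              f"(~{gb / elapsed:.1f} GB/s algorithmic bandwidth)")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```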


Hidden Costs of AI Q&A

Erik Smith

Mar 14, 2024

At our recent SNIA Networking Storage Forum webinar, "Addressing the Hidden Costs of AI," our expert team explored the impacts of AI, including sustainability and areas where there are potentially hidden technical and infrastructure costs. If you missed the live event, you can watch it on-demand in the SNIA Educational Library. Questions from the audience ranged from training Large Language Models to fundamental infrastructure changes from AI and more. Here are answers to the audience's questions from our presenters.

Q: Do you have an idea of where the best tradeoff is for high IO speed cost and GPU working cost? Is it always best to spend the maximum and get the highest IO speed possible?

A: It depends on what you are trying to do. If you are training a Large Language Model (LLM), then you'll have a large collection of GPUs communicating with one another regularly (e.g., all-reduce) and doing so at throughput rates that are up to 900GB/s per GPU! For this kind of use case, it makes sense to use the fastest network option available. Any money saved by using a cheaper/slightly less performant transport will be more than offset by the cost of GPUs that are idle while waiting for data. If you are more interested in fine-tuning an existing model or using Retrieval Augmented Generation (RAG), then you won't need quite as much network bandwidth and can choose a more economical connectivity option. It's worth noting that a group of companies have come together to work on the next generation of networking that will be well suited for use in HPC and AI environments. This group, the Ultra Ethernet Consortium (UEC), has agreed to collaborate on an open standard and has wide industry backing. This should allow even large clusters (1000+ nodes) to utilize a common fabric for all the network needs of a cluster.

Q: We (all industries) are trying to use AI for everything. Is that cost effective? Does it cost fractions of a penny to answer a user question, or is there a high cost that is being hidden or eaten by someone now because the industry is so new?

A: It does not make sense to try and use AI/ML to solve every problem. AI/ML should only be used when a more traditional, algorithmic technique cannot easily be used to solve a problem (and there are plenty of these). Generative AI aside, one example where AI has historically provided an enormous benefit for IT practitioners is multivariate anomaly detection. These models can learn what normal is for a given set of telemetry streams and then alert the user when something unexpected happens. A traditional approach (e.g., writing source code for an anomaly detector) would be cost and time prohibitive and probably not be anywhere nearly as good at detecting anomalies. (A minimal sketch of this idea appears after the resource list below.)

Q: Can you discuss typical data access patterns for model training or tuning (sequential/random, block sizes, repeated access, etc.)?

A: There is no simple answer, as the access patterns can vary from one type of training to the next. Assuming you'd like a better answer than that, I would suggest starting to look into two resources:
  1. Meta’s OCP Presentation: “Meta’s evolution of network for AI” includes a ton of great information about AI’s impact on the network.
  2. Blocks and Files article: “MLCommons publishes storage benchmark for AI” includes a table that provides an overview of benchmark results for one set of tests.
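As promised above, here is a minimal sketch of multivariate anomaly detection over telemetry streams. It uses scikit-learn's IsolationForest as a stand-in for whatever model an AIOps product actually employs; the telemetry values are synthetic and the contamination setting is illustrative.

```python
# Minimal sketch: learn "normal" from multivariate telemetry (e.g., IOPS,
# latency, queue depth) and flag unexpected points. IsolationForest is a
# stand-in model; the telemetry here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" telemetry: columns = [IOPS, latency_ms, queue_depth].
normal = np.column_stack([
    rng.normal(50_000, 5_000, 10_000),
    rng.normal(1.0, 0.2, 10_000),
    rng.normal(8, 2, 10_000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New samples: one typical point and one congestion-like outlier.
new_points = np.array([
    [52_000, 1.1, 9],      # looks normal
    [12_000, 15.0, 120],   # low IOPS, high latency, deep queues
])
labels = model.predict(new_points)  # +1 = normal, -1 = anomaly
for point, label in zip(new_points, labels):
    status = "anomaly" if label == -1 else "normal"
    print(f"{point} -> {status}")
```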
Q: Will this video be available after the talk? I would like to forward it to my co-workers. Great info.

A: Yes. You can access the video and a PDF of the presentation slides here.

Q: Does this mean we're moving to a fewer-updates or write-once (or write-infrequently), read-mostly storage model? I'm excluding dynamic data from end-user inference requests.

A: For the active training and fine-tuning phase of an AI model, the data patterns are very read heavy. There is quite a lot of work done before a training or fine-tuning job begins that is much more balanced between read and write. This is called the "data preparation" phase of an AI pipeline. Data prep takes existing data from a variety of sources (in-house data lake, dataset from a public repo, or a database) and performs data manipulation tasks to accomplish data labeling and formatting at a minimum. So, tuning for just read may not be optimal.

Q: Fibre Channel seems to have a lot of the characteristics required for the fabric. Could an NVMe over Fibre Channel fabric be utilized to handle the data ingestion for the AI component on dedicated adapters for storage (disaggregated storage)?

A: Fibre Channel is not a great fit for AI use cases for a few reasons:
  • With AI, data is typically accessed as either Files or Objects, not Blocks, and FC is primarily used to access block storage.
  • If you wanted to use FC in place of IB (for GPU to GPU traffic) you’d need something like an FC-RDMA to make FC suitable.
  • All of that said, FC currently maxes out at 128GFC and there are two reasons why this matters:
    1. AI optimized storage starts at 200Gbps and based on some end user feedback, 400Gbps is already not fast enough.
    2. GPU-to-GPU traffic requires up to 900GB/s (7200Gbps) of throughput per GPU; that's about 56 128GFC interfaces per GPU.
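For reference, the arithmetic behind that last figure, using the nominal 128GFC line rate (the usable FC data rate is somewhat lower, which would push the count a bit higher):

```python
# Quick arithmetic behind the "about 56 interfaces per GPU" figure above,
# using the nominal 128GFC rate for simplicity.
gpu_bytes_per_s = 900e9                 # 900 GB/s of GPU-to-GPU traffic
gpu_bits_per_s = gpu_bytes_per_s * 8    # 7200 Gb/s
fc_bits_per_s = 128e9                   # nominal 128GFC
print(gpu_bits_per_s / fc_bits_per_s)   # ≈ 56.25 interfaces per GPU
```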
Q: Do you see something like GPUDirect Storage from NVIDIA becoming the standard? So does this mean NVMe will win (over FC or TCP)? Will other AI chip providers have to adopt their own GPUDirect-like protocol?

A: It's too early to say whether GPUDirect Storage will become a de facto standard or if alternate approaches (e.g., pNFS) will be able to satisfy the needs of most environments. The answer is likely to be "both."

Q: You've mentioned demand for higher throughput for training, and lower latency for inference. Is there a demand for low-cost, high-capacity, archive-tier storage?

A: Not specifically for AI. Depending on what you are doing, training and inference can be latency or throughput sensitive (sometimes both). Training an LLM (which most users will never actually attempt to do) requires massive throughput from storage for reads and writes, literally the faster the better when loading data into the GPUs or when the GPUs are saving checkpoints. An inference workload wouldn't require the same throughput as training would, but to the extent that it needs to access storage, it would certainly benefit from low latency. If you are trying to optimize AI storage for anything but performance (e.g., cost), you are probably going to be disappointed with the overall performance of the system.

Q: What are the presenters' views about the industry trend on where to run workloads or train a model? Is it in cloud datacenters like AWS or GCP, or on-prem?

A: It truly depends on what you are doing. If you want to experiment with AI (e.g., an AI version of a "Hello World" program), or even something a bit more involved, there are lots of options that allow you to use the cloud economically. Check out this collection of colab notebooks for an example and give it a try for yourself. Once you get beyond simple projects, you'll find that using cloud-based services will become prohibitively expensive and you'll quickly want to start running your training jobs on-prem. The downside to this is the need to manage the infrastructure elements yourself, and it assumes that you can even get the right GPUs, although there are reports that supply issues are easing in this space. The bottom line is that whether to run on-prem or in the cloud still comes down to one question: can you realistically get the same ease of use and freedom from HW maintenance from your own infrastructure as you could from a CSP? Sometimes the answer is yes.

Q: Does an AI accelerator in a PC (recently advertised for new CPUs) have any impact/benefit on using large public AI models?

A: AI accelerators in PCs will be a boon for all of us as they will enable inference at the edge. They will also allow exploration and experimentation on your local system for building your own AI work. You will, however, want to focus on small or mini models at this time. Without large amounts of dedicated GPU memory to help speed things up, only the small models will run well on your local PC. That being said, we will continue to see improvements in this area, and PCs are a great starting point for AI projects.

Q: Fundamentally, is AI radically changing what is required from storage? Or is it simply accelerating some of the existing trends of reducing power, higher-density SSDs, and pushing faster on the trends in computational storage, new NVMe transport modes (such as RDMA), and ever more file system optimizations?

A: From the point of view of a typical enterprise storage deployment (e.g., block storage being accessed over an FC SAN), AI storage is completely different. Storage is accessed as either files or objects, not as blocks, and the performance requirements already exceed the maximum speeds that FC can deliver today (i.e., 128GFC). This means most AI storage is using either Ethernet or IB as a transport. Raw performance seems to be the primary driver in this space right now, rather than reducing power consumption or increasing density. You can expect protocols such as GPUDirect and pNFS to become increasingly important to meet performance targets.

Q: What are the innovations in HDDs relative to AI workloads? This was mentioned in the SSD + HDD slide.

A: The point of the SSD + HDD slide was to point out that the introduction of SSDs:
  1. dramatically improved overall storage system efficiency, leading to a major performance boost. This performance boost increased the amount of data that a single storage port could transmit onto a SAN, which had a dramatic impact on the need to monitor for congestion and congestion spreading.
  2. didn’t completely displace the need for HDDs, just as GPUs won’t replace the need for CPUs. They provide different functions and excel at different types of jobs.
Q: What is the difference between (1) peak inference, (2) mainstream inference, (3) baseline inference, and (4) endpoint inference, specifically from a cost perspective?

A: This question was answered live during the webinar (see timestamp 44:27); the following is a summary of the responses. Endpoint inference is inference that happens on client devices (e.g., laptops, smartphones), using much smaller models that have been optimized for the very constrained power envelope of these devices. Peak inference can be thought of as something like ChatGPT or Bing's AI chatbot, where you need large, specialized infrastructure (e.g., GPUs, specialized AI hardware accelerators). Mainstream and baseline inference are somewhere in between, where you're using much smaller or specialized models. For example, you could have a Mistral 7-billion-parameter model which you have fine-tuned for your enterprise use case of document summarization or to find insights in a sales pipeline; these use cases can employ much smaller models and hence the requirements can vary. In terms of cost, deploying these models for edge inference would be low compared to peak inference like ChatGPT, which would be much higher. In terms of infrastructure requirements, some of the baseline and mainstream inference models can be served by using a CPU alone, a CPU plus a GPU, a CPU plus a few GPUs, or a CPU plus a few AI accelerators. CPUs available today do have built-in AI accelerators, which can provide a cost-optimized solution for baseline and mainstream inference; this will be the typical scenario in many enterprise environments.

Q: You said utilization of network and hardware is changing significantly, but compared to what? Traditional enterprise workloads or HPC workloads?

A: AI workloads will drive network utilization unlike anything the enterprise has ever experienced before. Each GPU (of which there are currently up to 8 in a server) can currently generate 900GB/s (7200 Gbps) of GPU-to-GPU traffic. To be fair, this GPU-to-GPU traffic can and should be isolated to a dedicated "AI fabric" that has been specifically designed for this use. Along these lines, new types of network topologies are being used; Rob mentioned one of them during his portion of the presentation (i.e., the rail topology). Those end users already familiar with HPC will find that many of the same constraints and scalability issues that need to be dealt with in HPC environments also impact AI infrastructure.

Q: What are the key networking considerations for AI deployed at the edge (i.e., stores, branch offices)?

A: AI at the edge is a talk all on its own. Much like we see large differences between training, fine-tuning, and inference in the data center, inference at the edge has many flavors and performance requirements that differ from use case to use case. Some examples are a centralized set of servers ingesting the camera feeds for a large retail store, aggregating them, and making inferences, as compared to a single camera watching an intersection and using an on-chip AI accelerator to make streaming inferences. All forms of devices, from medical test equipment to your car or your phone, are edge devices with wildly different capabilities.

Olivia Rhye

Product Manager, SNIA

Find a similar article by tags

Leave a Reply

Comments

Name

Email Adress

Website

Save my name, email, and website in this browser for the next time I comment.

Hidden Costs of AI Q&A

Erik Smith

Mar 14, 2024

title of post
At our recent SNIA Networking Storage Forum webinar, “Addressing the Hidden Costs of AI,” our expert team explored the impacts of AI, including sustainability and areas where there are potentially hidden technical and infrastructure costs. If you missed the live event, you can watch it on-demand in the SNIA Educational Library. Questions from the audience ranged from training Large Language Models to fundamental infrastructure changes from AI and more. Here are answers to the audience’s questions from our presenters. Q: Do you have an idea of where the best tradeoff is for high IO speed cost and GPU working cost? Is it always best to spend maximum and get highest IO speed possible? A: It depends on what you are trying to do If you are training a Large Language Model (LLM) then you’ll have a large collection of GPUs communicating with one another regularly (e.g., All-reduce) and doing so at throughput rates that are up to 900GB/s per GPU! For this kind of use case, it makes sense to use the fastest network option available. Any money saved by using a cheaper/slightly less performant transport will be more than offset by the cost of GPUs that are idle while waiting for data. If you are more interested in Fine Tuning an existing model or using Retrieval Augmented Generation (RAG) then you won’t need quite as much network bandwidth and can choose a more economical connectivity option. It’s worth noting that a group of companies have come together to work on the next generation of networking that will be well suited for use in HPC and AI environments. This group, the Ultra Ethernet Consortium (UEC), has agreed to collaborate on an open standard and has wide industry backing. This should allow even large clusters (1000+ nodes) to utilize a common fabric for all the network needs of a cluster. Q: We (all industries) are trying to use AI for everything.  Is that cost effective?  Does it cost fractions of a penny to answer a user question, or is there a high cost that is being hidden or eaten by someone now because the industry is so new? A: It does not make sense to try and use AI/ML to solve every problem. AI/ML should only be used when a more traditional, algorithmic, technique cannot easily be used to solve a problem (and there are plenty of these). Generative AI aside, one example where AI has historically provided an enormous benefit for IT practitioners is Multivariate Anomaly Detection. These models can learn what normal is for a given set of telemetry streams and then alert the user when something unexpected happens. A traditional approach (e.g., writing source code for an anomaly detector) would be cost and time prohibitive and probably not be anywhere nearly as good at detecting anomalies. Q: Can you discuss typical data access patterns for model training or tuning? (sequential/random, block sizes, repeated access, etc)? A: There is no simple answer as the access patterns can vary from one type of training to the next. Assuming you’d like a better answer than that, I would suggest starting to look into two resources:
  1. Meta’s OCP Presentation: “Meta’s evolution of network for AI” includes a ton of great information about AI’s impact on the network.
  2. Blocks and Files article: “MLCommons publishes storage benchmark for AI” includes a table that provides an overview of benchmark results for one set of tests.
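To make the multivariate anomaly detection idea mentioned a couple of answers above more concrete, here is a minimal, hypothetical Python sketch (not from the webinar, and not tied to any particular product): it learns a baseline from a window of “normal” telemetry samples and flags new samples whose Mahalanobis distance from that baseline is unusually large. The telemetry fields and the threshold are illustrative assumptions only.

import numpy as np

# Hypothetical telemetry: rows are samples, columns are metrics
# (e.g., port throughput in Gbps, latency in microseconds, CPU %).
rng = np.random.default_rng(0)
normal_window = rng.normal(loc=[400.0, 12.0, 35.0],
                           scale=[20.0, 1.5, 5.0],
                           size=(500, 3))

# Learn what "normal" looks like: mean vector and covariance matrix.
mean = normal_window.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_window, rowvar=False))

def mahalanobis(sample):
    # Distance of one telemetry sample from the learned baseline.
    diff = sample - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Flag anything far outside the baseline (threshold is an assumption).
THRESHOLD = 4.0
for sample in np.array([[405.0, 12.3, 34.0],     # looks normal
                        [180.0, 45.0, 90.0]]):   # congested / anomalous
    status = "ANOMALY" if mahalanobis(sample) > THRESHOLD else "ok"
    print(sample, status)

A production AIOps tool would learn this baseline continuously and across far more metrics, but the principle is the same: model “normal,” then alert on statistically surprising deviations.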
Q: Will this video be available after the talk? I would like to forward it to my co-workers. Great info.
A: Yes. You can access the video and a PDF of the presentation slides here.

Q: Does this mean we’re moving to fewer updates, or to a write-once (or write-infrequently), read-mostly storage model? I’m excluding dynamic data from end-user inference requests.
A: For the active training and fine-tuning phases of an AI model, the data patterns are very read-heavy. However, there is quite a lot of work done before a training or fine-tuning job begins that is much more balanced between reads and writes. This is called the “data preparation” phase of an AI pipeline. Data prep takes existing data from a variety of sources (an in-house data lake, a dataset from a public repo, or a database) and performs data manipulation tasks to accomplish, at a minimum, data labeling and formatting. So, tuning for reads alone may not be optimal.

Q: Fibre Channel seems to have a lot of the characteristics required for the fabric. Could a Fibre Channel fabric running NVMe be utilized to handle the data ingestion for the AI component on dedicated adapters for storage (disaggregated storage)?
A: Fibre Channel is not a great fit for AI use cases for a few reasons:
  • With AI, data is typically accessed as either Files or Objects, not Blocks, and FC is primarily used to access block storage.
  • If you wanted to use FC in place of IB (for GPU-to-GPU traffic), you’d need something like FC-RDMA to make FC suitable.
  • All of that said, FC currently maxes out at 128GFC and there are two reasons why this matters:
    1. AI-optimized storage starts at 200 Gbps and, based on some end-user feedback, 400 Gbps is already not fast enough.
    2. GPU-to-GPU traffic requires up to 900 GB/s (7,200 Gbps) of throughput per GPU; that’s about 56 128GFC interfaces per GPU (see the quick arithmetic check below).
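To make the arithmetic behind those two figures explicit, here is a quick back-of-the-envelope check in Python (a sketch using the nominal 128 Gbps link rate quoted above; the exact usable rate of a 128GFC link is slightly lower, but the conclusion does not change):

gpu_bw_gbytes = 900                 # GPU-to-GPU bandwidth per GPU, in GB/s
gpu_bw_gbits = gpu_bw_gbytes * 8    # 900 GB/s * 8 bits/byte = 7200 Gbps

gfc_link_gbits = 128                # nominal 128GFC link rate, in Gbps
links_per_gpu = gpu_bw_gbits / gfc_link_gbits

print(gpu_bw_gbits)     # 7200
print(links_per_gpu)    # 56.25 -> roughly 56 x 128GFC interfaces per GPU

An eight-GPU server at that rate would be moving on the order of 57.6 Tbps of GPU-to-GPU traffic in aggregate, which is why this traffic is isolated onto a dedicated AI fabric rather than carried on the general data center network.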
Q: Do you see something like GPUDirect Storage from NVIDIA becoming the standard? So does this mean NVMe will win (over FC or TCP)? Will other AI chip providers have to adopt their own GPUDirect-like protocol?
A: It’s too early to say whether GPUDirect Storage will become a de facto standard or whether alternate approaches (e.g., pNFS) will be able to satisfy the needs of most environments. The answer is likely to be “both”.

Q: You’ve mentioned demand for higher throughput for training, and lower latency for inference. Is there a demand for low-cost, high-capacity, archive-tier storage?
A: Not specifically for AI. Depending on what you are doing, training and inference can be latency or throughput sensitive (sometimes both). Training an LLM (which most users will never actually attempt to do) requires massive throughput from storage for reads and writes; literally, the faster the better when loading data into the GPUs or when the GPUs are saving checkpoints. An inference workload wouldn’t require the same throughput as training would, but to the extent that it needs to access storage, it would certainly benefit from low latency. If you are trying to optimize AI storage for anything but performance (e.g., cost), you are probably going to be disappointed with the overall performance of the system.

Q: What are the presenters’ views on the industry trend for where to run workloads or train models? Is it in cloud data centers like AWS or GCP, or on-prem?
A: It truly depends on what you are doing. If you want to experiment with AI (e.g., an AI version of a “Hello World” program), or even something a bit more involved, there are lots of options that allow you to use the cloud economically. Check out this collection of Colab notebooks for an example and give it a try for yourself. Once you get beyond simple projects, you’ll find that using cloud-based services will become prohibitively expensive, and you’ll quickly want to start running your training jobs on-prem. The downside is the need to manage the infrastructure elements yourself, and this assumes that you can even get the right GPUs, although there are reports that supply issues are easing in this space. The bottom line: whether to run on-prem or in the cloud still comes down to one question: can you realistically get the same ease of use and freedom from hardware maintenance from your own infrastructure as you could from a CSP? Sometimes the answer is yes.

Q: Does an AI accelerator in a PC (recently advertised for new CPUs) have any impact/benefit on using large public AI models?
A: AI accelerators in PCs will be a boon for all of us, as they will enable inference at the edge. They will also allow exploration and experimentation on your local system for building your own AI work. You will, however, want to focus on small or mini models at this time. Without large amounts of dedicated GPU memory to help speed things up, only the small models will run well on your local PC. That being said, we will continue to see improvements in this area, and PCs are a great starting point for AI projects.

Q: Fundamentally, is AI radically changing what is required from storage? Or is it simply accelerating some of the existing trends: reducing power, higher-density SSDs, pushing faster on computational storage, new NVMe transport modes (such as RDMA), and ever more file system optimizations?
A: From the point of view of a typical enterprise storage deployment (e.g., block storage accessed over an FC SAN), AI storage is completely different. Storage is accessed as either Files or Objects, not as blocks, and the performance requirements already exceed the maximum speeds that FC can deliver today (i.e., 128GFC). This means most AI storage is using either Ethernet or IB as a transport. Raw performance seems to be the primary driver in this space right now, rather than reducing power consumption or increasing density. You can expect protocols such as GPUDirect and pNFS to become increasingly important to meet performance targets.

Q: What are the innovations in HDDs relative to AI workloads? This was mentioned in the SSD + HDD slide.
A: The point of the SSD + HDD slide was that the introduction of SSDs:
  1. dramatically improved overall storage system efficiency, leading to a significant performance boost. This boost increased the amount of data that a single storage port could transmit onto a SAN, which in turn made it much more important to monitor for congestion and congestion spreading.
  2. didn’t completely displace the need for HDDs, just as GPUs won’t replace the need for CPUs. They provide different functions and excel at different types of jobs.
Q: What is the difference between (1) Peak Inference, (2) Mainstream Inference, (3) Baseline Inference, and (4) Endpoint Inference, specifically from a cost perspective?
A: This question was answered live during the webinar (see timestamp 44:27); the following is a summary of the responses. Endpoint inference is inference that happens on client devices (e.g., laptops, smartphones), where much smaller models are used that have been optimized for the very constrained power envelopes of those devices. Peak inference can be thought of as something like ChatGPT or Bing’s AI chatbot, where you need large, specialized infrastructure (e.g., GPUs, specialized AI hardware accelerators). Mainstream and baseline inference are somewhere in between, where you’re using much smaller or specialized models. For example, you could have a Mistral 7B model that you have fine-tuned for an enterprise use case such as document summarization or finding insights in a sales pipeline; these use cases can employ much smaller models, and hence the requirements can vary. In terms of cost, deploying these models for edge inference would be low compared to peak inference like ChatGPT, which would be much higher. In terms of infrastructure requirements, some baseline and mainstream inference models can be served by a CPU alone, a CPU plus a GPU, a CPU plus a few GPUs, or a CPU plus a few AI accelerators. CPUs available today do have built-in AI accelerators, which can provide a cost-optimized solution for baseline and mainstream inference, and that will be the typical scenario in many enterprise environments.

Q: You said utilization of network and hardware is changing significantly, but compared to what? Traditional enterprise workloads or HPC workloads?
A: AI workloads will drive network utilization unlike anything the enterprise has ever experienced before. Each GPU (of which there are currently up to eight in a server) can generate 900 GB/s (7,200 Gbps) of GPU-to-GPU traffic. To be fair, this GPU-to-GPU traffic can and should be isolated to a dedicated “AI Fabric” that has been specifically designed for this use. Along these lines, new types of network topologies are being used; Rob mentioned one of them during his portion of the presentation (i.e., the Rail topology). Those end users already familiar with HPC will find that many of the same constraints and scalability issues that need to be dealt with in HPC environments also impact AI infrastructure.

Q: What are the key networking considerations for AI deployed at the edge (e.g., stores, branch offices)?
A: AI at the edge is a talk all on its own. Much like we see large differences between training, fine-tuning, and inference in the data center, inference at the edge has many flavors and performance requirements that differ from use case to use case. Some examples are a centralized set of servers ingesting the camera feeds for a large retail store, aggregating them, and making inferences, as compared to a single camera watching an intersection and using an on-chip AI accelerator to make streaming inferences. Devices ranging from medical test equipment to your car or your phone are all edge devices with wildly different capabilities.

The post Hidden Costs of AI Q&A first appeared on SNIA on Network Storage.


AIOps: The Undeniable Paradigm Shift

Michael Hoard

Mar 4, 2024

AI has entered every aspect of today’s digital world. For IT, AIOps is creating a dramatic shift that redefines how IT approaches operations. On April 9, 2024, the SNIA Cloud Storage Technologies Initiative will host a live webinar, “AIOps: Reactive to Proactive – Revolutionizing the IT Mindset.” In this webinar, Pratik Gupta, one of the industry’s leading experts in AIOps, will delve beyond the tools of AIOps to reveal how AIOps introduces intelligence into the very fabric of IT thinking and processes, discussing:
  • From Dev to Production and Reactive to Proactive: Revolutionizing the IT Mindset: We’ll move beyond the “fix it when it breaks” mentality, embracing a future-proof approach where AI analyzes risk, anticipates issues, prescribes solutions, and learns continuously.
  • Beyond Siloed Solutions: Embracing Holistic Collaboration:  AIOps fosters seamless integration across departments, applications, and infrastructure, promoting real-time visibility and unified action.
  • Automating the Process: From Insights to Intelligent Action: Dive into the world of self-healing IT, where AI-powered workflows and automation resolve issues and optimize performance without human intervention.
Register here to join us on April 9, 2024 for what will surely be a fascinating discussion on the impact of AIOps.


Accelerating Generative AI

David McIntyre

Jan 2, 2024

Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers. On January 24, 2024, the SNIA Networking Storage Forum (NSF) is convening a panel of experts for a discussion on how to tackle Gen AI challenges at our live webinar, “Accelerating Generative AI – Options for Conquering the Dataflow Bottlenecks,” where a broad cross-section of industry veterans will provide insight into the following:
  • Defining the Gen AI dataflow bottlenecks
  • Tools and methods for identifying acceleration options
  • Matchmaking the right xPU solution to the target Gen AI workload(s)
  • Optimizing the network to support acceleration options
  • Moving data closer to processing, or processing closer to data
  • The role of the software stack in determining Gen AI performance
This is a session you don’t want to miss! Register today to save your spot. The post Accelerating Generative AI first appeared on SNIA on Network Storage.

