As data demands shift from traditional enterprise workloads to massive AI/ML pipelines, the storage software stack has undergone a radical transformation. As storage systems have scaled from single servers to hyperscale and AI workloads, where storage software intelligence is placed and virtualized within the stack has become the dominant determinant of performance, cost, and operational agility.
This webinar addresses a central question. Given that storage access semantics are already abstracted into files, blocks, and objects, the remaining challenge is architectural: as storage systems evolve from raw hardware to application-facing services, where in the storage software stack should virtualization be implemented to optimize performance, scalability, and resiliency across modern workloads and vastly different physical deployments?
This webinar will help the audience to:
- Gain a comprehensive understanding of the storage software stack, from the application and logical abstraction layers all the way down to the physical interface
- Gain insights into storage virtualization and networking interconnects
- Understand scaling terminology (scale-out, scale-up, scale-across)
- Appreciate how technological evolution in these areas is helping scale storage capacity and data services, making them suitable for modern enterprise and cloud/AI applications
Participants will walk away with a clear understanding of the storage software stack, updates to storage functionality, and how storage is evolving to keep pace with modern applications such as HPC and AI.
In today’s evolving IT landscape, selecting the right storage architecture is critical for optimal performance, scalability, data governance, and cost-efficiency. Furthermore, AI workloads have uniquely influenced how we meet these demands from our storage infrastructure. This webinar provides a technical deep dive into three fundamental storage deployment models – on-premises, cloud, and hybrid – examining their architectures and operational trade-offs through the lens of two key concepts: indirection (accessing data through mapping layers that provide flexibility and abstraction) and redirection (rerouting data requests to enable failover, load balancing, and optimized performance).
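To make the two concepts concrete, here is a minimal, hypothetical sketch (not from the webinar) of indirection as a mapping layer between logical and physical addresses, and redirection as rerouting a read to a healthy replica on failover. All class and device names are invented for illustration.

```python
class IndirectionMap:
    """Indirection: the application addresses logical blocks; a mapping
    layer decides (and can later change) the physical placement."""
    def __init__(self):
        self._map = {}

    def write(self, lba, device, offset):
        self._map[lba] = (device, offset)

    def resolve(self, lba):
        return self._map[lba]


class RedirectingClient:
    """Redirection: if the primary replica is unavailable, the read
    request is rerouted to the next healthy replica."""
    def __init__(self, replicas):
        self.replicas = replicas  # dicts acting as stand-in storage targets

    def read(self, key):
        for replica in self.replicas:
            if replica is not None and key in replica:
                return replica[key]
        raise IOError("all replicas unavailable")


# Indirection: logical block 7 resolves through the map, not directly.
imap = IndirectionMap()
imap.write(7, device="ssd0", offset=4096)
print(imap.resolve(7))  # ('ssd0', 4096)

# Redirection: the primary (None) is down; the read lands on a replica.
client = RedirectingClient([None, {"obj1": b"data"}])
print(client.read("obj1"))  # b'data'
```

The same two mechanisms underlie snapshots and thin provisioning (indirection) and failover and load balancing (redirection) in real deployments.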
As power densities continue to rise in servers, thermal management is becoming a critical challenge. Traditional air cooling is increasingly strained in high-density environments, driving interest in more efficient approaches that touch every aspect of the server, including the SSD.
In this webinar, Anthony Constantine and Scott Shadley examine the role of liquid cooling in modern SSD deployments. The session will cover key liquid cooling methods, their advantages in heat transfer and performance, and the emerging specification changes shaping the industry.
Join us to better understand how liquid cooling is shaping the future of SSDs and what it means for your storage infrastructure.