SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Today, storage and memory hierarchies are manually tuned and sized at design time, but tomorrow's workloads are increasingly dynamic, multi-tenant, and variable. Can we build autonomous storage systems that adapt to changing application workloads? In this session, we demonstrate how breakthroughs in autonomous storage systems research can deliver impressive gains in cost, performance, latency control, and out-of-the-box customer experience. Attendees will see results from the latest research and development and learn:
- Why static memory hierarchies leave so much performance on the floor.
- What a fully autonomous storage hierarchy is, and how it automatically adapts to changing application workloads.
- The efficiency, QoS, performance SLA/SLO, and cost tradeoffs that fully autonomous caches and hierarchies juggle.
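As a flavor of what "autonomous" means here, the sketch below shows one way a self-tuning cache might close the loop between observed hit rate and allocated capacity. It is a minimal illustration under assumed names and a simple resize policy, not the system presented in the session:

```python
# Minimal sketch of the feedback loop a self-tuning cache might run.
# The class name, step size, and target hit rate are illustrative assumptions.
import collections

class AdaptiveCache:
    def __init__(self, capacity, step=64):
        self.capacity = capacity            # current size, in blocks
        self.step = step                    # resize granularity
        self.lru = collections.OrderedDict()
        self.hits = self.misses = 0

    def access(self, key):
        if key in self.lru:
            self.lru.move_to_end(key)       # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.lru[key] = True
            while len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict least-recently-used

    def adapt(self, target_hit_rate=0.9):
        """Periodically grow or shrink capacity based on the observed hit rate."""
        total = self.hits + self.misses
        if total == 0:
            return
        if self.hits / total < target_hit_rate:
            self.capacity += self.step      # spend more capacity on this workload
        elif self.capacity > self.step:
            self.capacity -= self.step      # reclaim capacity, e.g. for other tenants
        self.hits = self.misses = 0         # start a fresh observation window
```

A real autonomous hierarchy would replace the fixed target with policies derived from SLOs and cost, but the observe/decide/resize loop is the core idea.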
In mechanical engineering, CAD has enabled engineers, architects, and construction professionals to create fully featured designs and visualize them, which in turn enables the development, modification, and optimization of the design process. Why is this missing from the world of performance engineering? Until now, modeling the exponential complexity of storage and memory hierarchies has been seen as an intractable problem. That's no longer the case. Join our session to learn about the engineering behind StorageLab and how it enables end-to-end storage architecture performance engineering, in real time.
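To make the modeling idea concrete, here is a toy model of end-to-end hierarchy latency of the kind such a tool must evaluate (at far greater fidelity). The tier names, hit rates, and latencies are illustrative assumptions only:

```python
# Toy end-to-end latency model for a tiered storage/memory hierarchy.
# Each tier serves a fraction of the requests that reach it; misses fall through.
TIERS = [
    # (name, hit rate of requests reaching this tier, latency in microseconds)
    ("DRAM cache", 0.80, 0.1),
    ("NVMe SSD",   0.95, 80.0),
    ("HDD pool",   1.00, 8000.0),   # backing tier absorbs everything left
]

def mean_latency(tiers):
    """Expected service time across the hierarchy."""
    expected, reach = 0.0, 1.0      # reach = fraction of requests arriving here
    for _name, hit_rate, latency in tiers:
        expected += reach * hit_rate * latency
        reach *= (1.0 - hit_rate)
    return expected

print(f"mean latency: {mean_latency(TIERS):.2f} us")
```

Even this crude model shows why sizing decisions interact: a small change in an upper tier's hit rate shifts traffic onto tiers that are orders of magnitude slower.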
Extreme compute needs extreme IO. The convergence of HPC and AI is putting GPUs to work in a wider range of applications than ever before, on platforms ranging from edge devices and commodity hardware to high-performance supercomputers. Larger datasets enable more accurate AI models, which extract deeper insights and in turn drive enterprises to collect more and more data. This virtuous cycle is fueling explosive demand for processing larger amounts of data, and the need to reduce IO bottlenecks is greater than ever. With a strong and growing ecosystem and a 1.0 GA release, GPUDirect Storage brings a wealth of capabilities to traditional HPC applications, to applications at the convergence of HPC and AI, and to data analytics everywhere. In this session, we will talk about what's new in this GA release, what value GDS brings to customers, and how storage partners are helping NVIDIA grow the ecosystem and developer community.
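For readers who want to see what a GDS data path looks like in code, below is a minimal sketch using the RAPIDS KvikIO Python bindings to the cuFile API. The file path and buffer size are placeholders, and a GDS-capable driver and filesystem are assumed:

```python
# Minimal GPU-direct read via KvikIO (Python bindings over cuFile/GDS).
# Path and size are placeholders; requires a GDS-enabled driver and filesystem.
import cupy
import kvikio

NBYTES = 1 << 20
buf = cupy.empty(NBYTES, dtype=cupy.uint8)   # destination buffer in GPU memory

f = kvikio.CuFile("/mnt/gds/data.bin", "r")
# With GDS, the DMA path is storage -> GPU, bypassing the CPU bounce buffer.
n = f.read(buf)
f.close()
print(f"read {n} bytes directly into device memory")
```

The point of the bypass is that data lands in device memory without staging through host RAM, which is where the IO bottleneck typically sits for GPU-bound pipelines.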
In the DellEMC Enterprise Server/Storage Validation Organization, we perform load testing using different workloads (web, file, FTP, database, mail, etc.) on servers to characterize how the systems perform under heavy load. Knowing how DellEMC enterprise systems behave under heavy load (% CPU, % memory, % network, % disk) is extremely valuable and critical, and it is achieved with the help of load testing tools. The load testing tools available in the market come with their own challenges, such as cost, learning curve, and workload support. In this talk we will demonstrate how we built JAAS (JMeter as a Service), a distributed workload testing solution using containers and open-source tools, and how this solution plays a crucial role in delivering our server validation efforts.
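As a rough illustration of the idea (not our production JAAS code), the sketch below launches a distributed JMeter test with containers. The image name, network name, and paths are hypothetical, and RMI/SSL configuration details are omitted; the JMeter flags themselves (-n non-GUI, -t test plan, -R remote workers, -l results) are standard:

```python
# Rough sketch: distributed JMeter run orchestrated over Docker.
# Image name, network, and paths are hypothetical placeholders.
import subprocess

IMAGE = "example/jmeter:5.6"                 # hypothetical image name
WORKERS = ["jmeter-worker-1", "jmeter-worker-2"]

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 0. A user-defined network lets the controller resolve workers by name.
run(["docker", "network", "create", "jaas"])

# 1. Start one jmeter-server process per worker container.
for name in WORKERS:
    run(["docker", "run", "-d", "--name", name, "--network", "jaas",
         IMAGE, "jmeter-server"])

# 2. Drive the test plan from a controller container against all workers.
run(["docker", "run", "--rm", "--network", "jaas",
     "-v", "/tests:/tests", IMAGE,
     "jmeter", "-n", "-t", "/tests/web_workload.jmx",
     "-R", ",".join(WORKERS), "-l", "/tests/results.jtl"])
```

Wrapping this orchestration behind a service API is what turns plain JMeter into "JMeter as a Service": testers submit a plan and a scale factor, and the containers come and go on demand.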
It is broadly known that when a file is deleted in an operating system, a discard is issued to the underlying storage device. The file is not physically erased from the storage medium; rather, its data is marked invalid but remains in the unmapped address space. Similarly, when the host overwrites a previously written logical range, the previously written space can be invalidated by a discard operation. All of these cases can create heavy fragmentation in the device and eventually slow the system down, i.e., users start seeing application lag, performance drops, high write latency, and so on. To handle this unmapped address space effectively, the JEDEC specification provides an operation called "Sanitize". In a nutshell, the sanitize process removes data from the unmapped address space, either by physically erasing all the affected blocks or by a vendor-defined method. To unearth the impact of sanitize, various real-world workloads taken from different automotive host patterns were examined alongside FTL data extracted using debugging firmware. This study helps to understand how timely use of sanitize reduces latency, improves the user experience with applications, and improves performance. Sanitize, utilized in accordance with the storage device's policy, will significantly improve QoS (Quality of Service), i.e., better consistency and predictability of latency (storage response time) and performance while serving read/write commands.
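For context, the discard mechanism itself is visible from user space on Linux via the BLKDISCARD ioctl from <linux/fs.h>. The minimal (and destructive) sketch below shows the mechanism, with a placeholder device path:

```python
# Minimal sketch: issue a discard to a block device from user space, the same
# mechanism a filesystem uses when a file is deleted.
# WARNING: this irrecoverably unmaps the given range. Requires root.
import fcntl
import os
import struct

BLKDISCARD = 0x1277                  # _IO(0x12, 119) from <linux/fs.h>

def discard(device, offset, length):
    """Tell the device that [offset, offset+length) no longer holds valid data."""
    fd = os.open(device, os.O_WRONLY)
    try:
        # The ioctl takes a uint64_t range[2] = {start, length}.
        fcntl.ioctl(fd, BLKDISCARD, struct.pack("QQ", offset, length))
    finally:
        os.close(fd)

# Example (destructive!): discard the first 1 MiB of a scratch device.
# discard("/dev/sdX", 0, 1 << 20)
```

Sanitize then operates on the pool of space that such discards have unmapped, reclaiming it according to the device's policy.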
Current enterprise storage devices have to service many diverse and continuously evolving application workloads (e.g., OLTP, big data/analytics, and virtualization). These workloads, combined with additional enterprise storage services such as deduplication, compression, snapshots, clones, replication, and tiering, result in complex I/O to the underlying storage. Traditional storage system tests make use of benchmarking tools that generate a fixed, constant workload comprised of one or a few I/O access patterns, which is not sufficient for enterprise storage testing. Workload-simulation tools available in the market come with their own challenges, such as cost, learning curve, and workload support. It has therefore become a very big challenge to generate, debug, and reproduce these workloads, which can eventually lead to many customer-found defects. This gives rise to the need for a robust testing methodology that closely emulates the production environment and helps identify issues early in testing. In our solution testing lab, we have been working on a unique test framework design that leverages software-defined services and helps uncover complex production issues while reducing their turnaround time. In our talk, we will show how we built our test framework using containers and various open-source tools, and the role it plays in our solution testing efforts for our next-generation storage products.
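As a simplified illustration of the kind of workload blending involved (not our actual framework), the sketch below mixes sequential and random reads and writes against a test file. The path, sizes, and mix ratios are illustrative assumptions:

```python
# Simplified mixed-workload generator: blends sequential/random reads and
# writes, loosely in the spirit of an OLTP-plus-scan profile.
import os
import random

PATH, FILE_SIZE, IO_SIZE = "/tmp/testfile", 1 << 30, 4096

def run_mix(ops=10000, read_ratio=0.7, random_ratio=0.5):
    """Issue `ops` I/Os: read_ratio reads vs. writes, random_ratio random vs. sequential."""
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, FILE_SIZE)
    seq_off = 0
    for _ in range(ops):
        if random.random() < random_ratio:
            off = random.randrange(0, FILE_SIZE - IO_SIZE, IO_SIZE)  # aligned random offset
        else:
            off = seq_off
            seq_off = (seq_off + IO_SIZE) % (FILE_SIZE - IO_SIZE)
        if random.random() < read_ratio:
            os.pread(fd, IO_SIZE, off)
        else:
            os.pwrite(fd, os.urandom(IO_SIZE), off)
    os.close(fd)

run_mix()
```

Containerizing many such generators with different ratios, block sizes, and concurrency levels, and scheduling them together, is what lets a test framework approximate the layered, overlapping I/O seen in production.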