Any discussion about storage systems is incomplete without mention of Throughput, IOPS, and Latency. But what exactly do these terms mean, and why are they important?
Collectively, these three terms are often referred to as storage performance metrics. Performance can be defined as the effectiveness of a storage system in addressing the I/O needs of an application or workload. Different application workloads have different I/O patterns, and with them come different bottlenecks, so there is no “one-size-fits-all” storage system. These storage performance metrics help with storage solution design and selection based on application/workload demands.
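As a rough, back-of-the-envelope illustration of how the three metrics relate (a simplified sketch, not webinar material), the snippet below converts an IOPS figure and an average I/O size into throughput, and shows the per-request latency implied when only one request is in flight.

```python
# Rough relationships between the three storage performance metrics:
#   throughput (MB/s) ~= IOPS * average I/O size (MB)
#   with a single outstanding request, latency (ms) ~= 1000 / IOPS

def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Approximate throughput implied by an IOPS figure and I/O size."""
    return iops * io_size_kb / 1024

def latency_ms_single_queue(iops: float) -> float:
    """Average service time if only one request is in flight at a time."""
    return 1000.0 / iops

# Example: a small-block transactional workload vs. a large-block streaming workload.
print(throughput_mb_s(iops=20_000, io_size_kb=4))    # ~78 MB/s despite high IOPS
print(throughput_mb_s(iops=2_000, io_size_kb=1024))  # ~2000 MB/s with far fewer IOPS
print(latency_ms_single_queue(iops=20_000))          # ~0.05 ms per I/O
```

The same system can look fast or slow depending on which of the three numbers a workload actually stresses, which is why the metrics are best considered together.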
In this webinar, we’ll cover:
- What storage performance metrics mean – understanding key terminology nuances
- Why users/storage administrators should care about them
- How these metrics impact application performance
- Real-world use cases

Keeping data secure is one of the prime concerns of any organisation today, given that data is the new oil and attacks on data are on the rise. Addressing security concerns is therefore a key imperative for any enterprise-class block storage. We will discuss various aspects of keeping data secure on enterprise-class block storage controllers, along with some of the latest trends in that space:
- Storage architects will get a view on what is involved in building a secure and robust block storage controller
- Developers will get a view on the latest trends in this space
- Solution and ecosystem partners will get a view on what security aspects need to be considered for data security on a block storage controller
Industry experts from SNIA, Applied Research, Microsoft, and Objective Analysis will participate in a panel discussion to explore trends in memory and storage, CXL, and high bandwidth memory innovations, and how they will affect our future applications. This webinar will give you a preview of the topics and sessions at MemCon on March 26-27, 2024 in Mountain View, CA, where SNIA is an Association Partner. This webinar is hosted by MemCon in partnership with SNIA.

Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute, or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels.
Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers. Join this webinar to hear a broad panel of industry veterans discuss the topic of accelerating Gen AI.

Emerging memories are now found in multiple applications both as stand-alone chips and embedded into systems on chips (SoCs) as they replace established technologies, including SRAM, NOR flash, and DRAM. In this webinar, SNIA CMSI members and leading experts Tom Coughlin (Coughlin Associates/IEEE President) and Jim Handy (Objective Analysis) will discuss the latest developments in MRAM, ReRAM, FRAM, PCM, and other new memory technologies to explain why, how, and when these technologies will grow, and how their success will impact both the semiconductor and the capital equipment markets.

Object Storage has firmly established itself as a cornerstone of modern data centers and cloud infrastructure. Ensuring API compatibility has become crucial for object storage developers who want to benefit from the wide ecosystem of existing applications. However, achieving compatibility can be challenging due to the complexity and variety of the APIs, access control mechanisms, and performance and scalability requirements.
In this webinar, we'll highlight real-world incompatibilities found in various object storage implementations. We'll discuss specific examples of existing discrepancies, such as missing or incorrect response headers, unsupported API calls, and unexpected behavior. We’ll also describe the implications these have on actual client applications.
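As a hedged illustration of the kind of discrepancy involved (not part of the webinar material; the endpoint, bucket name, and credentials below are placeholders), the sketch probes one common compatibility point against an S3-compatible endpoint: whether a simple PUT returns an ETag equal to the MD5 of the body, as AWS S3 does for non-multipart, unencrypted uploads, and whether expected response headers are present.

```python
# Minimal compatibility probe against an S3-compatible object store.
# Assumes boto3 is installed and that the placeholder endpoint, bucket,
# and ambient credentials exist.
import hashlib
import boto3

ENDPOINT = "https://objectstore.example.com"   # placeholder endpoint
BUCKET = "compat-test"                          # placeholder bucket

s3 = boto3.client("s3", endpoint_url=ENDPOINT)

body = b"hello object storage"
put_resp = s3.put_object(Bucket=BUCKET, Key="probe.txt", Body=body)

# AWS S3 returns the body's MD5 (quoted) as the ETag for simple uploads;
# some implementations return a different value, which can break clients
# that rely on the ETag for integrity checks.
expected = '"%s"' % hashlib.md5(body).hexdigest()
print("ETag matches MD5:", put_resp["ETag"] == expected)

# Response headers are another frequent source of discrepancies.
head_resp = s3.head_object(Bucket=BUCKET, Key="probe.txt")
headers = head_resp["ResponseMetadata"]["HTTPHeaders"]
print("Content-Length header present:", "content-length" in headers)
```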

OK, I'll be honest, the title is a little dramatic, but ask yourself this question, how much data do you have today, and have you ever considered what impact you are having on the environment? The reality is, data centers have been shown to account for approximately 1 – 1.5% of global electricity use. In this session, we will explore what sustainability is, and I will give you a hint, it’s not just environmental. We will look at the impact digitization can have on energy consumption, and therefore the environment, and some practical applications that you can do to help address data challenges to address environmental sustainability.

The enterprise storage market is rapidly expanding to include NVMe® and NVMe-oF products pervasively. This presents a challenge: how do you manage these devices as part of your enterprise data center?
As the NVM Express family of specifications continues to develop, the corresponding Swordfish management capabilities are also evolving. The SNIA Swordfish® management bundle (including the specification, schema, documentation, and more) has expanded to include full NVMe and NVMe-oF technology enablement, with alignment across DMTF, NVMe, and SNIA for NVMe and NVMe-oF technology use cases.
In conjunction with Redfish®, Swordfish's capabilities to manage NVMe and NVMe-oF devices in the enterprise provide a seamless management ecosystem.
This presentation will introduce management of NVMe and NVMe-oF technology with SNIA Swordfish. Using an example of SNIA Swordfish functionality, the presenters will show how to manage the complexity of discovery controllers using the simplified model presented to Swordfish clients.
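As a rough sketch of what Swordfish/Redfish management looks like from a client's point of view (the host, credentials, and exact resource paths are placeholders and vary by implementation), the example below reads the Redfish service root over HTTPS and lists the top-level collections the service advertises.

```python
# Minimal Redfish/Swordfish client sketch: read the service root and list
# the top-level resource collections the service exposes.
# Host, credentials, and TLS handling are placeholders for illustration.
import requests

BASE = "https://storage-mgmt.example.com"   # placeholder management endpoint
AUTH = ("admin", "password")                # placeholder credentials

# Every Redfish/Swordfish service exposes its service root at /redfish/v1/.
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()

# Top-level members are links ({"@odata.id": "..."}) to collections such as
# Systems, Chassis, Fabrics, or storage-related collections, depending on
# what the implementation models.
for name, value in root.items():
    if isinstance(value, dict) and "@odata.id" in value:
        print(f"{name}: {value['@odata.id']}")
```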

The days of simple, static, self-contained file systems have long passed. Today, we have complex, dynamic namespaces, mixing block, file, object, key-value, queue, and graph-based resources, accessed via multiple protocols, and distributed across multiple systems and geographic sites. These complexities result in new challenges for simplifying management.
The good news is that the SNIA Cloud Data Management Interface (CDMI™), an open ISO standard (ISO/IEC 17826:2022) for managing data objects and containers, already includes extensive capabilities for simplifying the management of complex namespaces. In this webinar, attendees will learn how to simplify namespace management – the open standards way – including namespace discovery, introspection, exports, imports, and more.
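As a hedged illustration of the open-standards approach (the host and container path below are placeholders), the sketch reads a CDMI container over HTTP using the content type and version header defined in ISO/IEC 17826, which is one way a client can introspect a namespace.

```python
# Minimal CDMI namespace introspection sketch: read a container resource
# using the CDMI content type and specification-version header.
# Host and container path are placeholders for illustration.
import requests

BASE = "https://cdmi.example.com"      # placeholder CDMI endpoint
CONTAINER = "/cdmi/projects/"          # placeholder container path

headers = {
    "Accept": "application/cdmi-container",
    "X-CDMI-Specification-Version": "1.1",
}

resp = requests.get(BASE + CONTAINER, headers=headers)
container = resp.json()

# A CDMI container response carries both the child names and the container's
# metadata, so a client can discover a namespace by walking the "children"
# lists recursively.
print("children:", container.get("children", []))
print("metadata keys:", list(container.get("metadata", {}).keys()))
```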

AI is disrupting many domains and industries, and in the process AI models and algorithms are becoming increasingly large and complex. This complexity is driven by the proliferation in size and diversity of localized data everywhere, which creates the need for a unified data fabric and/or federated learning. It could be argued that whoever wins the data race will win the AI race, a race inherently built on two premises: 1) data is available in a central location for AI to have full access to it, and 2) compute is centralized and abundant.
Edge AI, though, defies these assumptions. If centralized (or cloud) AI is a superpower and super expert, edge AI is a community of many smart wizards. As humans, we can appreciate the power of cumulative knowledge over a central superpower. In this webinar, we will touch on:
- The value and use cases of distributed edge AI
- How data fabric on the edge differs from the cloud and its impact on AI
- Edge device data privacy trade-offs and distributed agency trends
- Privacy mechanisms for federated learning, inference, and analytics
- How interoperability between cloud and edge AI can happen
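As background for the federated learning item above (a minimal illustrative sketch, not the webinar's material, with a placeholder stand-in for local training), federated averaging lets many edge devices improve a shared model by exchanging only model weights, never raw data; the server simply takes a sample-count-weighted average of the locally trained weights.

```python
# Minimal federated averaging (FedAvg) sketch: each edge device trains on its
# own local data and sends only model weights; the server aggregates them
# weighted by the number of local samples. Raw data never leaves the device,
# which is the core privacy property of this setup.
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """Stand-in for one round of local training on an edge device."""
    # Placeholder "training": nudge the weights toward the local data mean.
    return weights + 0.1 * (local_data.mean(axis=0) - weights)

def federated_average(updates: list[np.ndarray], sample_counts: list[int]) -> np.ndarray:
    """Server-side aggregation: sample-count-weighted mean of device weights."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(updates, sample_counts))

# Three edge devices with differently sized local datasets.
global_weights = np.zeros(4)
devices = [np.random.randn(n, 4) for n in (100, 30, 500)]

updates = [local_update(global_weights, d) for d in devices]
global_weights = federated_average(updates, [len(d) for d in devices])
print(global_weights)
```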
