A decade ago, the market was aggressively embracing the public cloud for storage because of its agility and scalability. Many organizations are now rethinking that approach and moving toward on-premises storage with cloud consumption models. This new cloud-native, on-premises architecture promises the traditional data center’s security and reliability together with cloud agility and scalability. In this webinar, we will describe how Ceph is uniquely qualified to satisfy this architecture and how the technology community is investing to enable the vision of “Ceph, the Linux of Storage Today”.

Hear from industry experts Jeff Janukowicz, Research Vice President at IDC; Brian Beeler, Owner and Editor in Chief of StorageReview.com; and Cameron T. Brett, SNIA STA Forum Chair, as they discuss the new storage trends developing in the coming year, the applications and other factors driving those trends, and the market data that supports these assertions.

Training large language models (LLMs) is a complex task that requires substantial computational resources and infrastructure. Fine-tuning LLMs on domain-specific data has emerged as a crucial technique for improving their performance on specialized tasks and in specialized industries. In this talk we give an overview of the basic concepts of LLMs and their pre-training process, highlighting the transfer learning paradigm that forms the basis of fine-tuning.
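To make the fine-tuning step concrete, here is a minimal sketch of domain-specific fine-tuning with the Hugging Face transformers Trainer; the model checkpoint ("gpt2") and the corpus file name are illustrative placeholders, not material from the talk.

```python
# Minimal sketch of fine-tuning a pre-trained LLM on domain-specific text.
# The checkpoint name and corpus file are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"                        # any causal-LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus, one document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-finetuned",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```

In practice, parameter-efficient techniques such as LoRA are often layered on top of this basic loop to reduce the compute and memory cost of fine-tuning.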

The latest buzz around generative artificial intelligence (AI) ignores the massive costs to run and power the technology. Without any guardrails in place, what are the impacts of AI on sustainability and on costs across our technology resources? This webinar will offer insights into the potentially hidden technical and infrastructure costs associated with generative AI, along with best practices and potential solutions to consider.

Any discussion about storage systems is incomplete without mention of Throughput, IOPS, and Latency. But what exactly do these terms mean, and why are they important?
Collectively, these three terms are often referred to as storage performance metrics. Performance can be defined as the effectiveness of a storage system in addressing the I/O needs of an application or workload. Different application workloads have different I/O patterns, and with them come different bottlenecks, so there is no “one-size-fits-all” in storage systems. These storage performance metrics help with storage solution design and selection based on application and workload demands.
In this webinar, we’ll cover:
- What storage performance metrics mean – understanding key terminology nuances
- Why users/storage administrators should care about them
- How these metrics impact application performance
- Real-world use cases
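To make the relationships between the three metrics concrete, here is a small worked example; the workload numbers are illustrative, not from the webinar. Throughput follows from IOPS and block size, and Little's Law ties IOPS to latency and the number of outstanding I/Os.

```python
# Illustrative relationships between the three metrics (numbers are made up):
#   throughput (MB/s) = IOPS * block size (MB)
#   outstanding I/Os  = IOPS * average latency   (Little's Law)

def throughput_mbps(iops: float, block_size_kb: float) -> float:
    """Bandwidth implied by an IOPS figure at a given block size."""
    return iops * block_size_kb / 1024

def outstanding_ios(iops: float, latency_ms: float) -> float:
    """Average queue depth needed to sustain the IOPS at the given latency."""
    return iops * latency_ms / 1000

# A hypothetical OLTP-style workload: 8 KiB blocks, 20,000 IOPS, 0.5 ms latency.
print(throughput_mbps(20_000, 8))    # ~156 MB/s
print(outstanding_ios(20_000, 0.5))  # ~10 outstanding I/Os
```

The same IOPS figure implies very different bandwidth at different block sizes, which is one reason a single headline number rarely characterizes a workload.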

Keeping data secure is one of the prime concerns of any organization today, given that data is the new oil and attacks on data are on the rise. Addressing security concerns is therefore a key imperative for any enterprise-class block storage. We will discuss various aspects of keeping data secure on enterprise-class block storage controllers, along with some of the latest trends in that space:
- Storage architects will get a view of what is involved in building a secure and robust block storage controller
- Developers will get a view of the latest trends in this space
- Solution and ecosystem partners will get a view of the security aspects that need to be considered for data security on a block storage controller
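As one concrete illustration of the security aspects listed above, the sketch below shows per-block authenticated encryption of data at rest using the Python cryptography library. It is a simplification under stated assumptions, not material from the webinar, and it leaves out key management, key rotation, and hardware offload entirely.

```python
# Minimal sketch of one data-at-rest building block a block storage controller
# might apply per logical block: authenticated encryption (AES-256-GCM).
# Key management, rotation, and hardware offload are out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, sourced from a KMS/HSM
aes = AESGCM(key)

def encrypt_block(lba: int, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    # Bind the ciphertext to its logical block address as associated data.
    return nonce + aes.encrypt(nonce, plaintext, lba.to_bytes(8, "big"))

def decrypt_block(lba: int, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aes.decrypt(nonce, ciphertext, lba.to_bytes(8, "big"))

data = os.urandom(4096)                     # a 4 KiB block
assert decrypt_block(7, encrypt_block(7, data)) == data
```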
Industry experts from SNIA, Applied Research, Microsoft, and Objective Analysis will participate in a panel discussion exploring trends in memory and storage, CXL, and high-bandwidth memory innovations, and how they will affect our future applications. This webinar will give you a preview of the topics and sessions at MemCon, March 26-27, 2024 in Mountain View, CA, where SNIA is an Association Partner. This webinar is hosted by MemCon in partnership with SNIA.

Generative artificial intelligence workloads built on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute, or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels.
Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, and FPGAs, plus memory and storage controllers. Join this webinar to hear a broad panel of industry veterans discuss the topic of accelerating Gen AI.
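As a rough illustration of how such bottlenecks can be reasoned about, the sketch below applies a simple roofline-style check to decide whether a Gen AI step is compute-bound or dataflow-bound; the accelerator figures and arithmetic intensities are made-up examples, not data from the panel.

```python
# Back-of-the-envelope roofline check: is a Gen AI step limited by compute
# or by dataflow (memory/storage/network bandwidth)?  All numbers illustrative.

def attainable_tflops(flops_per_byte: float, peak_tflops: float,
                      bandwidth_tb_s: float) -> float:
    """Roofline model: min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

# Hypothetical accelerator: 300 TFLOP/s peak, 2 TB/s memory bandwidth.
PEAK_TFLOPS, MEM_BW_TB_S = 300.0, 2.0

# Example: a large prefill matrix multiply (high arithmetic intensity) versus a
# token-generation step dominated by streaming weights (low intensity).
for name, flops_per_byte in [("prefill GEMM", 300.0), ("decode step", 2.0)]:
    perf = attainable_tflops(flops_per_byte, PEAK_TFLOPS, MEM_BW_TB_S)
    bound = "compute-bound" if perf >= PEAK_TFLOPS else "dataflow-bound"
    print(f"{name}: ~{perf:.0f} TFLOP/s attainable ({bound})")
```

In this made-up example the decode step reaches only a few TFLOP/s despite the accelerator's much higher peak, which is the kind of dataflow bottleneck the panel will discuss.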

Emerging memories are now found in multiple applications both as stand-alone chips and embedded into systems on chips (SoCs) as they replace established technologies, including SRAM, NOR flash, and DRAM. In this webinar, SNIA CMSI members and leading experts Tom Coughlin (Coughlin Associates/IEEE President) and Jim Handy (Objective Analysis) will discuss the latest developments in MRAM, ReRAM, FRAM, PCM, and other new memory technologies to explain why, how, and when these technologies will grow, and how their success will impact both the semiconductor and the capital equipment markets.

Object Storage has firmly established itself as a cornerstone of modern data centers and cloud infrastructure. Ensuring API compatibility has become crucial for object storage developers who want to benefit from the wide ecosystem of existing applications. However, achieving compatibility can be challenging due to the complexity and variety of the APIs, access control mechanisms, and performance and scalability requirements.
In this webinar, we'll highlight real-world incompatibilities found in various object storage implementations. We'll discuss specific examples of existing discrepancies, such as missing or incorrect response headers, unsupported API calls, and unexpected behavior. We’ll also describe the implications these have on actual client applications.
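For a flavor of what such a compatibility check can look like in practice, here is a small sketch that probes an S3-compatible endpoint with boto3 for a missing response header and an unsupported call; the endpoint URL, bucket name, and credentials are placeholders, and the specific discrepancies covered in the webinar may differ.

```python
# Sketch of probing an S3-compatible endpoint for small API discrepancies,
# e.g. missing response headers or unsupported calls.  Endpoint, bucket,
# and credentials below are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "compat-test", "probe.txt"   # assumes the bucket already exists
s3.put_object(Bucket=bucket, Key=key, Body=b"hello")

# 1. Check that common response headers are present on HEAD.
resp = s3.head_object(Bucket=bucket, Key=key)
headers = resp["ResponseMetadata"]["HTTPHeaders"]
for expected in ("etag", "last-modified", "content-length"):
    if expected not in headers:
        print(f"missing response header: {expected}")

# 2. Check whether a less common API call is implemented at all.
try:
    s3.get_object_tagging(Bucket=bucket, Key=key)
except ClientError as err:
    print(f"get_object_tagging not supported: {err.response['Error']['Code']}")
```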
