The automotive industry is effectively transforming the vehicle into a data center on wheels. Connectivity, autonomous driving, and media & entertainment are bringing more and more storage onboard and into the networked data centers behind it. But not all of the storage in (and for) a car is created equal. There are tens, if not hundreds, of different processors in a car; some are attached to storage and some are not, and each application demands different characteristics from its storage device. Let’s explore all of this in an informational journey with industry experts from both the storage and automotive worlds.
- What’s driving growth in automotive storage?
- Special requirements for autonomous vehicles
- Where automotive data is typically stored
- Special use cases
- Vehicle networking & compute changes and challenges

Storing objects has become commonplace. Object storage provides bulk, undifferentiated storage for unstructured data like photos, video and audio, DNA sequences, files, and backups, and it can even help protect against ransomware. Object access is also simplified: there are no built-in hierarchies or filesystems of objects, and no disk-like devices to manage.
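As a rough illustration of that flat namespace, here is a minimal sketch in which keys that merely look hierarchical are treated as opaque strings; the ObjectStore class and its put/get interface are hypothetical, loosely modeled on S3-style semantics rather than any particular product’s API:

```python
# Toy object store: a flat namespace mapping opaque keys to data + metadata.
# There are no directories, volumes, or block devices to manage.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # Keys may *look* hierarchical ("photos/2023/cat.jpg"), but the
        # store treats them as opaque strings -- there is no real directory.
        self._objects[key] = (bytes(data), metadata or {})

    def get(self, key):
        data, _metadata = self._objects[key]
        return data

store = ObjectStore()
store.put("photos/2023/cat.jpg", b"...jpeg bytes...",
          {"content-type": "image/jpeg"})
print(store.get("photos/2023/cat.jpg"))
```

The rich per-object metadata is one of the main things that distinguishes object storage from a bare key-value interface, a distinction this presentation explores.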
So, what’s new? Object storage has traditionally been implemented in the software stack, and it is now moving directly onto the media. In this presentation, we’ll highlight how this is happening and discuss:
- Object storage characteristics
- The differences and similarities between object and key value storage
- Security options unique to object storage including ransomware mitigation
- Why use object storage: Use cases and applications
- Object storage and containers: Why Kubernetes’ COSI (Container Object Storage Interface)?

NVMe® IP-based SANs (including TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock that potential, we first need to overcome the NVMe over Fabrics (NVMe-oF™) discovery problem. This problem, specific to IP-based fabrics, can force host administrators to explicitly configure each host to access each of the NVM subsystems in their environment. In addition, any time an NVM subsystem interface is added or removed, the administrator may need to explicitly update the configuration of the impacted hosts. This process does not scale beyond a few host and NVM subsystem interfaces, and, due to its decentralized nature, it adds complexity in environments that require a high degree of automation.
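To make the scaling problem concrete, here is a minimal sketch of how the number of manually maintained configuration entries grows multiplicatively with hosts and subsystem interfaces; the host and subsystem names are hypothetical placeholders:

```python
# Without automated discovery, every host must be told about every NVM
# subsystem interface, so manual configuration grows multiplicatively.

hosts = [f"host-{i}" for i in range(50)]
subsystem_interfaces = [f"subsys-{j}.example:4420" for j in range(20)]

# Each (host, interface) pair is one entry an administrator must create,
# and must revisit whenever an interface is added or removed.
config_entries = [(h, s) for h in hosts for s in subsystem_interfaces]

print(len(config_entries))  # 1,000 entries for just 50 hosts x 20 interfaces
```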
For these and other reasons, several companies have been collaborating on innovations that simplify and automate the discovery process used with NVMe IP-based SANs.
During this session we will explain:
- The NVMe IP-based SAN discovery problem
- The types of network topologies that can support the automated discovery of NVMe-oF Discovery controllers
- Direct Discovery versus Centralized Discovery
- An overview of the discovery protocol

SNIA develops a wide range of standards to enhance the interoperability of storage systems. For new technologies like computational storage, however, standards do not yet exist. As companies develop solutions, questions arise: Should computational storage have standards recommending behavior for hardware and software? Should an application programming interface be defined?
At SNIA, over 250 volunteers answered yes, and new work is being defined both within SNIA and in collaboration with other industry standards bodies. Join leaders of the Computational Storage Technical Work Group as they discuss how they define and develop standards with input from many different companies and users, what they perceive as important today and moving forward, and how you can participate.

This talk will focus on the history of “Big Data” and how it has pushed the storage envelope, eventually resulting in a seemingly perfect relationship with Cloud Storage. But local storage is the third wheel in this relationship, and it won’t go down easily. Can this marriage survive when Big Data is being pulled in two directions? Should Big Data pick one, or can the three of them live happily ever after? This webcast will cover:
- The impact of edge computing
- The erosion of the data center
- Managing data-on-the-fly
- Grid management
- Next-gen Hadoop and related technologies
- Supporting AI workloads
- Data gravity and distributed data

Modern data center systems consist of hundreds of sub-systems, all connected with optical transceivers, copper cables, and industry-standard connectors and cages. For interconnecting storage subsystems, two things are happening at once: speeds are radically increasing, shrinking the maximum reach of copper interconnects, while storage systems are growing larger and spreading physically further apart. This is making longer-reach optical technologies much more popular. However, optical interconnect technologies are more costly and complex than copper, and they come with a plethora of new buzzwords and technology concepts.
The huge uptick in data demand is accelerating new product development at an incredible pace. While much of the enterprise industry is still on 10G/40G/100GbE speeds, the next-generation optics groups are already commercializing 800G, with 1.6T transceivers under discussion! Today, it’s all about power, cost, and upgrade paths.
In this SNIA Networking Storage Forum webinar, we’ll cover the latest in the impressive array of data center infrastructure solutions designed to address expanding requirements for higher bandwidth and lower power. This will include next-generation solutions leveraging copper and optics to deliver high signal integrity, lower latency, and lower insertion loss to achieve maximum efficiency, speed, and density.
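To see why insertion loss matters, here is a back-of-the-envelope link-budget sketch; every number below is an illustrative assumption, not a figure from any transceiver specification or from the webinar:

```python
# Illustrative optical link budget: whatever margin is left after all losses
# determines whether a link closes. All numbers are made-up examples.

tx_power_min_dbm = -2.0      # assumed minimum transmitter launch power
rx_sensitivity_dbm = -10.0   # assumed receiver sensitivity
power_budget_db = tx_power_min_dbm - rx_sensitivity_dbm   # 8.0 dB to spend

fiber_loss_db_per_km = 0.5   # assumed fiber attenuation
link_length_km = 0.5
connector_loss_db = 0.75     # assumed insertion loss per mated connector
num_connectors = 4

total_loss_db = (fiber_loss_db_per_km * link_length_km
                 + connector_loss_db * num_connectors)
margin_db = power_budget_db - total_loss_db
print(f"budget {power_budget_db} dB, loss {total_loss_db} dB, "
      f"margin {margin_db:.2f} dB")
# Shaving insertion loss per connector directly buys back margin, which can
# be spent on longer reach or more connectors in the channel.
```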

Everyone enjoys storage that is fast, reliable, scalable, and affordable. But different applications have different storage needs in terms of I/O requirements, capacity, data sharing, and security. Some need local storage, some need a centralized storage array, and others need distributed storage, which itself could be local or networked. One application might excel with block storage while another performs best with file or object storage. With limited resources, it’s important to understand the storage intent of each application in order to choose the right storage and storage networking strategy, rather than discovering the hard way that you’ve chosen the wrong solution.
Artificial intelligence (AI) encompasses a broad range of use cases, largely divided into training and inference. In this webcast, we’ll look at what types of storage are typically needed for different aspects of AI, including different types of access (local vs. networked, block vs. file vs. object) and different performance requirements. We will also discuss how different AI implementations balance the use of on-premises vs. cloud storage. Tune in to this SNIA Networking Storage Forum (NSF) webcast to boost your natural (not artificial) intelligence about application-specific storage.
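As a tiny illustration of the local-file vs. networked-object trade-off from a training job’s point of view, here is a hedged sketch; the path, bucket, and key names are hypothetical, and the object variant assumes an S3-compatible store reachable through boto3:

```python
# Two ways a training job might fetch the same sample: a local file read
# vs. a networked object GET. Path, bucket, and key names are made up.

import boto3

def load_sample_local(path="/data/train/sample-000.bin"):
    with open(path, "rb") as f:
        return f.read()

def load_sample_object(bucket="training-data", key="train/sample-000.bin"):
    s3 = boto3.client("s3")  # any S3-compatible object store
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Local reads usually win on latency; object storage wins on capacity,
# sharing, and durability. Which matters more depends on whether the job
# is latency-bound inference or throughput-bound training.
```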

Each SAN transport has its own way to initialize and transfer data. So how do initiators (hosts) and targets (storage arrays) communicate in Fibre Channel (FC) Storage Area Networks (SANs)?
Find out in this live webcast where Fibre Channel experts will answer:
- How do FC links activate?
- Is FC routable?
- What kind of flow control is present in FC?
- How do initiators find targets and set up their communication?
- Finally, how does actual data get transferred, since that is the ultimate goal?
This session will introduce these concepts to demystify the FC SAN for the network professional.
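For a flavor of the sequence the webcast will walk through, here is a heavily simplified simulation of the standard FC bring-up steps (FLOGI, name-server discovery, PLOGI/PRLI) and buffer-to-buffer credit flow control; the class and method names are illustrative, not a real HBA or switch API:

```python
# Heavily simplified simulation of FC SAN bring-up: fabric login (FLOGI),
# name-server discovery, port login (PLOGI), process login (PRLI), and
# credit-based flow control. Names are illustrative, not a real HBA API.

class Fabric:
    def __init__(self):
        self.name_server = {}        # WWPN -> assigned N_Port ID (FCID)
        self._next_fcid = 0x010001

    def flogi(self, wwpn):
        """Fabric login: the switch assigns an address and registers the port."""
        fcid = self._next_fcid
        self._next_fcid += 1
        self.name_server[wwpn] = fcid
        return fcid

    def query_name_server(self):
        """Initiators discover targets by querying the fabric name server."""
        return dict(self.name_server)

class Initiator:
    def __init__(self, wwpn, fabric):
        self.fcid = fabric.flogi(wwpn)                 # 1. link up, then FLOGI
        self.known_ports = fabric.query_name_server()  # 2. discover targets
        self.sessions = set()
        self.bb_credit = 8                             # buffer-to-buffer credits

    def login(self, target_fcid):
        # 3. PLOGI establishes a port-to-port session; 4. PRLI layers the
        #    FCP (SCSI) service on top. Both are collapsed into one step here.
        self.sessions.add(target_fcid)

    def send_frame(self, target_fcid, payload):
        assert target_fcid in self.sessions, "PLOGI/PRLI must happen first"
        if self.bb_credit == 0:
            raise RuntimeError("out of credit; wait for R_RDY")
        self.bb_credit -= 1   # each transmitted frame consumes one credit
        # ...frame travels the wire; the receiver's R_RDY returns the credit.
        self.bb_credit += 1

fabric = Fabric()
target_fcid = fabric.flogi(wwpn="50:00:00:c0:ff:ee:00:01")  # target logs in
host = Initiator(wwpn="10:00:00:c0:ff:ee:00:02", fabric=fabric)
host.login(target_fcid)
host.send_frame(target_fcid, b"FCP write ...")
```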

The use of genomics in modern biology has revolutionized the pace of innovation in the discovery of medicines. The COVID pandemic response has accelerated genetic research and driven the rapid development of vaccines. Genomics, however, requires a significant amount of compute and data storage to aid discovery. This session is for IT professionals who are faced with delivering and supporting IT solutions for the compute and data storage that genomics workflows require. It will feature viewpoints from both the bioinformatics and technology perspectives, with a focus on some of these compute and data storage challenges.
We will discuss:
- How to best store and manage these large genomics datasets
- Methods for sharing these large datasets for collaborative analysis
- Legal and ethical implications of storing shareable data in the cloud
- Transferring large datasets and the impact on storage and networking (see the rough sizing sketch after this list)
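As a rough sizing illustration for that last point, the genome size, cohort size, link speed, and efficiency below are illustrative assumptions, not figures from the session:

```python
# Back-of-the-envelope transfer times for large genomics datasets.
# Dataset sizes, link speed, and efficiency are illustrative assumptions.

def transfer_hours(dataset_bytes, link_gbps, efficiency=0.8):
    seconds = dataset_bytes * 8 / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

one_genome = 100e9           # ~100 GB for one aligned whole genome (assumed)
cohort = 5_000 * one_genome  # a hypothetical 5,000-genome cohort

print(f"one genome over 10 GbE:  {transfer_hours(one_genome, 10):.2f} h")
print(f"whole cohort over 10 GbE: {transfer_hours(cohort, 10):.0f} h")
# ~0.03 h for one genome, but roughly 139 h (almost six days) for the
# cohort -- which is why transfer strategy shapes storage and networking.
```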

Data gravity has pulled computing to the Edge and enabled significant advances in hybrid cloud deployments. The ability to run analytics from the datacenter to the Edge, where the data is created and lives, also creates new use cases for nearly every industry and company. However, this movement of compute to the Edge is not the only pattern to have emerged. How might these other use cases impact your storage strategy?
This interactive webcast by the SNIA CSTI will focus on the following topics:
- Emerging patterns of data movement and the use cases that drive them
- Cloud Bursting
- Federated Learning across the Edge and Hybrid Cloud (a minimal sketch follows this list)
- Considerations for distributed cloud storage architectures to match these emerging patterns
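For the federated-learning pattern above, here is a minimal FedAvg-style sketch in which edge sites train locally and only model weights, not raw data, travel to the hub; the data, model, and update rule are deliberately toy-sized stand-ins:

```python
# Toy federated averaging: each edge site computes a local update on its own
# data; only weights move to the hub, which averages them into a global model.

import numpy as np

def local_update(weights, local_data, lr=0.1):
    # Placeholder "training": nudge weights toward the local data mean.
    return weights + lr * (local_data.mean(axis=0) - weights)

rng = np.random.default_rng(0)
global_weights = np.zeros(4)
edge_sites = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]  # skewed data

for _round in range(20):
    site_weights = [local_update(global_weights, data) for data in edge_sites]
    global_weights = np.mean(site_weights, axis=0)  # aggregation at the hub

print(global_weights)  # drifts toward the average of the per-site means (~1.0)
```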
