Organizations inevitably store multiple copies of the same data. Users and applications store the same files over and over, intentionally or inadvertently. Developers, testers and analysts keep many similar copies of the same data. And backup programs copy the same or similar files daily, often to multiple locations or storage devices. It’s not unusual to end up with some data replicated thousands of times.
So how do we stop the duplication madness? Join this webcast where we’ll discuss how to reduce the number of copies of data that get stored, mirrored, or backed up.
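As a rough illustration of the core idea behind deduplication (a sketch for orientation, not a method taken from the webcast), the snippet below fingerprints files by content hash so identical copies can be spotted regardless of file name or location; the "/data" directory is a placeholder.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return a SHA-256 digest of the file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: Path) -> dict[str, list[Path]]:
    """Group files under `root` by content hash; groups with more than one entry are duplicates."""
    groups: dict[str, list[Path]] = {}
    for path in root.rglob("*"):
        if path.is_file():
            groups.setdefault(file_digest(path), []).append(path)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates(Path("/data")).items():  # "/data" is a placeholder root
        print(digest[:12], [str(p) for p in paths])
```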

Whether traveling by car, plane or train, it is critical to get from here to there safely and securely. Just like you, your data must be safe and sound as it makes its journey across an internal network or to an external cloud storage service. In this webcast, we'll cover the threats to your data as it's transmitted, how attackers can interfere with data along its journey, and methods of putting effective protection measures in place for data in transit.
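As one concrete (and hedged) example of a protection measure for data in transit, the sketch below uses Python's standard ssl module to wrap a socket in TLS so the payload is encrypted on the wire; the endpoint is a placeholder.

```python
import socket
import ssl

HOST, PORT = "storage.example.com", 443  # placeholder endpoint

# Validate the server certificate against the system trust store.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())
        tls_sock.sendall(b"example payload protected in transit")
```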

The broad adoption of 5G, the Internet of Things (IoT) and edge computing will reshape the nature and role of enterprise and cloud storage over the next several years. What building blocks, capabilities and integration methods are needed to make this happen?
Join this webcast for a discussion on:
- With 5G, IoT and edge computing - how much data are we talking about?
- What will be the first applications leading to collaborative data-intelligence streaming?
- How can low latency microservices and AI quickly extract insights from large amounts of data?
- What are the emerging requirements for scalable stream storage - from peta- to zetta-scale?
- How do yesterday's object-based batch analytic processing (Hadoop) and today's streaming messaging capabilities (Apache Kafka and RabbitMQ) work together?
- What are the best approaches for getting data from the Edge to the Cloud?

Electronic payments, once the purview of a few companies, have expanded to include a variety of financial and technology companies. Internet of Payment (IoP) enables payment processing over many kinds of IoT devices and has also led to the emergence of the micro-transaction. The growth of independent payment services offering e-commerce solutions, such as Square, and the entry of new ways to pay, such as Apple Pay, mean that a variety of devices and technologies have also come into wide use.
In this talk we look at the impact of all of these new developments across multiple use cases, not only on the consumers driving this behavior but also on the underlying infrastructure that supports and enables it.

The pandemic has taught data professionals one essential thing: data is like water; when it escapes, it reaches every aspect of the community it inhabits. This fact has become apparent as the general public gains access to statistics, assessments, analysis and even medical journals related to the pandemic, at a scale never seen before.
Insight is understanding information in context, to the degree that you can go beyond just the facts presented and make reasonable predictions and suppositions about new instances of that data.
Having access to data does not automatically grant the reader knowledge of how to interpret that data or the ability to derive insight from it. It can even be challenging to judge the accuracy or value of that data.
The skill required is known as data literacy, and in this presentation, we will look at how access to one data source will inevitably drive the need to access more.

NVMe over Fabrics technology is gaining momentum and traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP. How do we optimize NVMe over Fabrics performance with the different Ethernet transports?
This discussion won't tell you which transport is best. Instead, we unfold the performance of each transport and explain what it takes for each one to achieve its best performance, so that you can choose the transport best suited to your NVMe over Fabrics solution.
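For orientation only, the sketch below shows how a host might attach to an NVMe over Fabrics target over the TCP or RDMA (RoCEv2/iWARP) transports using the Linux nvme-cli tool, invoked here from Python; the target address and NQN are placeholders, and the webcast is about comparing the transports' performance, not about these commands.

```python
import subprocess

TARGET_ADDR = "192.0.2.10"                       # placeholder target IP
TARGET_NQN = "nqn.2014-08.org.example:subsys1"   # placeholder subsystem NQN

def nvmf_connect(transport: str) -> None:
    """Connect to the NVMe-oF target using 'tcp' or 'rdma' (RoCEv2/iWARP)."""
    subprocess.run(
        ["nvme", "connect",
         "-t", transport,
         "-a", TARGET_ADDR,
         "-s", "4420",          # common NVMe-oF port
         "-n", TARGET_NQN],
        check=True,
    )

# Example: attach over plain TCP, then benchmark and compare with an RDMA transport.
nvmf_connect("tcp")
```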

In this webcast, SNIA experts will discuss what composable infrastructure is, what prompted its development, available solutions, enabling technologies, standards and products, and where computational storage fits in.

RAID on CPU is an enterprise RAID solution specifically designed for NVMe-based solid state drives (SSDs). This innovative technology provides the ability to directly connect NVMe-based SSDs to PCIe lanes and build RAID arrays from those SSDs without a RAID Host Bus Adapter (HBA). As a result, customers gain NVMe SSD performance and data availability without the need for a traditional RAID HBA.
This webcast will recall key concepts for NVMe SSDs and RAID levels, then take a deep dive into RAID on CPU technology and how it compares to traditional software and hardware RAID solutions. Learn more about this new technology and how it is implemented, and gain clear insight into the advantages of RAID on the CPU.
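As a quick refresher on one of the RAID concepts the webcast revisits, the sketch below illustrates the XOR parity idea behind RAID 5: parity computed across a stripe lets any single missing member be reconstructed from the survivors. The stripe layout and block sizes here are purely illustrative.

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks byte by byte (the RAID 5 parity operation)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks in one stripe (sizes are illustrative).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose data[1]: XOR the surviving blocks with the parity to rebuild it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("reconstructed block:", rebuilt)
```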

Everyone knows data volumes are exploding faster than IT budgets. And customers are increasingly moving to flash storage, which is faster and easier to use than hard drives, but still more expensive. To cope with this conundrum and squeeze more efficiency from storage, vendors and customers can turn to data reduction techniques such as compression, deduplication, thin provisioning and snapshots. This webcast will focus specifically on data compression, which can be applied at different times and stages in the storage process, and with different techniques.
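To make the compression point concrete, here is a small, hedged illustration using Python's zlib: highly redundant data shrinks dramatically, while random-looking data barely compresses at all, which is one reason where and when compression runs in the storage path matters.

```python
import os
import zlib

def ratio(payload: bytes) -> float:
    """Return original size divided by compressed size."""
    return len(payload) / len(zlib.compress(payload))

redundant = b"the same log line repeats\n" * 4096   # highly compressible
random_like = os.urandom(len(redundant))            # effectively incompressible

print(f"redundant data:   {ratio(redundant):.1f}:1")
print(f"random-like data: {ratio(random_like):.2f}:1")
```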

The storage industry has many applications that rely on storing data as objects. In fact, it’s the most popular way that unstructured data is accessed. At the drive level, however, the devil is in the details. Normally, storage devices store information as blocks, not objects. This means that there is some translation that goes on between the data as it is consumed (i.e., objects) and the data that is stored (i.e., blocks).
Naturally, greater efficiency means performance boosts, and simplicity means fewer things that can go wrong. Moving toward storing key-value pairs, and away from the traditional block storage paradigm, makes it easier and simpler to access objects.
What does this mean? And why should you care? That’s what this webinar is going to cover! This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards. It will also cover the SNIA KV-API and open source work that is available to take advantage of Key Value storage.
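As a loose, in-memory analogy for what a key-value interface offers compared with block addressing (the method and class names below are illustrative, not the actual NVMe-KV commands), this sketch exposes store/retrieve/exist/delete operations keyed directly by application-chosen keys, with no object-to-block translation layer in between.

```python
class KeyValueStore:
    """Toy in-memory key-value store; keys and values are raw bytes."""

    def __init__(self) -> None:
        self._data: dict[bytes, bytes] = {}

    def store(self, key: bytes, value: bytes) -> None:
        self._data[key] = value

    def retrieve(self, key: bytes) -> bytes:
        return self._data[key]

    def exist(self, key: bytes) -> bool:
        return key in self._data

    def delete(self, key: bytes) -> None:
        self._data.pop(key, None)

kv = KeyValueStore()
kv.store(b"sensor-42/2021-06-01", b'{"temp": 21.5}')
print(kv.exist(b"sensor-42/2021-06-01"), kv.retrieve(b"sensor-42/2021-06-01"))
```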
