To counter the ever-increasing likelihood of catastrophic disruption and cost from enterprise IT security threats, data center decision makers need to be vigilant in protecting their organization’s data. Confidential Computing is architected to secure data in use, meeting this critical need for enterprises today.
This webcast provides insight into how data center, cloud, and edge applications can easily benefit from cost-effective, real-world Confidential Computing solutions. This educational discussion will provide end-user examples, tips on how to assess systems before and after deployment, and key steps to complete along the journey to mitigate threat exposure. Presenting are Steve Van Lare (Anjuna), Anand Kashyap (Fortanix), and Michael Hoard (Intel), who will discuss:
- What would it take to build your own Confidential Computing solution?
- Emergence of easily deployable, cost-effective Confidential Computing solutions
- Real-world usage examples and key technical, business, and investment insights

Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. This rapid growth creates storage- and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and in-storage computational processing to offload and accelerate data-intensive workloads.
Join technology innovators as they highlight how to drive value and accelerate SSD storage through specialized key-value technology: a Data Processing Unit (DPU) that removes inefficiencies by accelerating the storage stack in hardware, and a hardware-enabled Storage Data Processor that accelerates compute-intensive functions. By joining, you will learn why SSDs are a staple in modern storage architectures, and how these disaggregated systems use just a fraction of the computational load and power while unlocking the full potential of networked flash storage.
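To make the key-value idea concrete, here is a minimal Python sketch contrasting a host-managed block interface with the key-value interface a KV SSD exposes; the class and method names are illustrative stand-ins, not a real device API.

```python
# Illustrative sketch only: contrasts a host-managed block interface with
# the key-value interface a KV SSD exposes. Names are hypothetical.

class BlockDevice:
    """Traditional block interface: fixed-size sectors, host maps objects to LBAs."""
    def __init__(self, sector_size: int = 4096):
        self.sector_size = sector_size
        self.sectors: dict[int, bytes] = {}

    def write_sector(self, lba: int, data: bytes) -> None:
        assert len(data) == self.sector_size  # host must pad/split values itself
        self.sectors[lba] = data

class KeyValueDevice:
    """KV interface: variable-length values, placement managed by the drive."""
    def __init__(self):
        self._store: dict[bytes, bytes] = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value  # one command, no filesystem or block translation

    def get(self, key: bytes) -> bytes:
        return self._store[key]

kv = KeyValueDevice()
kv.put(b"user:42", b'{"name": "example"}')
print(kv.get(b"user:42").decode())
```

The point of the contrast: with a block device the host must pad, split, and map every object onto fixed sectors (typically through a filesystem), while a KV device accepts variable-length values directly and manages placement internally.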

As noted in our panel discussion “What is Confidential Computing and Why Should I Care?,” Confidential Computing is an emerging industry initiative focused on helping to secure data in use. These efforts enable encrypted data to be processed in memory while lowering the risk of exposing it to the rest of the system, reducing the potential for sensitive data exposure and providing a higher degree of control and transparency for users.
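As a conceptual illustration of "data in use" protection, the Python sketch below simulates the trust boundary: keys and plaintext exist only inside a simulated enclave object, and the surrounding host handles only ciphertext. Real Confidential Computing relies on hardware TEEs (e.g., Intel SGX, AMD SEV); this sketch assumes the pyca/cryptography package purely as an encryption stand-in.

```python
# Conceptual simulation only: real Confidential Computing uses hardware
# TEEs; this class merely models the trust boundary to make it concrete.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class SimulatedEnclave:
    """Keys and plaintext exist only inside this object, the 'enclave'."""
    def __init__(self):
        self._aead = AESGCM(AESGCM.generate_key(bit_length=256))  # never exported

    def seal(self, plaintext: bytes) -> bytes:
        # Stands in for the data owner encrypting data to the enclave
        # after attestation; only ciphertext crosses the boundary.
        nonce = os.urandom(12)
        return nonce + self._aead.encrypt(nonce, plaintext, None)

    def process(self, sealed: bytes) -> bytes:
        # Decrypt, compute, and re-seal entirely inside the boundary.
        nonce, ct = sealed[:12], sealed[12:]
        plaintext = self._aead.decrypt(nonce, ct, None)
        return self.seal(plaintext.upper())  # the 'computation' on data in use

enclave = SimulatedEnclave()
sealed = enclave.seal(b"sensitive record")
result = enclave.process(sealed)  # the untrusted host handles only sealed blobs
```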
As computing moves to span multiple environments from on-premises to public cloud to edge, organizations need protection controls that help safeguard sensitive IP and workload data wherever the data resides. In this live webcast we’ll cover:
- How Confidential Computing works in multi-tenant cloud environments
- How sensitive data can be isolated from other privileged portions of the stack
- Applications in financial services, healthcare industries, and broader enterprise applications
- Contributing to the Confidential Computing Consortium

In the second installment of our Storage Technologies & Practices Ripe for Refresh series, we will cover older HDD device interfaces and file systems, explain why replacing them is recommended, and offer advice on how to replace them in production environments. We will also cover protocols you should consider removing from your networks: older protocol versions where only newer versions should be used, and protocols that have been supplanted by superior options and should be discontinued entirely.
Finally, we will look at physical networking interfaces and cabling that are popular today but face an uncertain future as networking speeds grow ever faster.

In the "arms race" of security, new defensive tactics are always needed. One significant approach is Confidential Computing: a technology that can isolate data and execution in a secure space on a system, which takes the concept of security to new levels. This SNIA Cloud Storage Technologies Initiative (CSTI) webcast will provide an introduction and explanation of Confidential Computing and will feature a panel of industry architects responsible for defining Confidential Compute. It will be a lively conversation on topics including:
- The basics of hardware-based Trusted Execution Environments (TEEs) and how they work to enable Confidential Computing (a minimal attestation sketch follows this list)
- How to architect solutions based around TEEs
- How this foundation fits with other security technologies
- Adjacencies to storage technologies
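As a rough sketch of the attestation flow that underpins TEEs (see the first bullet above), the simulation below uses an HMAC as a stand-in for a CPU-held signing key; real TEEs issue public-key "quotes" verified against vendor infrastructure, so every name here is illustrative.

```python
# Simulated remote attestation: a verifier checks that the exact expected
# code is running before releasing secrets. Real TEEs use asymmetric,
# hardware-rooted signatures; the shared HMAC key here is a simplification.
import hashlib, hmac, os

HARDWARE_KEY = os.urandom(32)  # stands in for a key fused into the CPU

def measure(code: bytes) -> bytes:
    """Hash of the code loaded into the enclave (its 'measurement')."""
    return hashlib.sha256(code).digest()

def quote(code: bytes, nonce: bytes) -> bytes:
    """Hardware-signed statement: 'this exact code is running here'."""
    return hmac.new(HARDWARE_KEY, measure(code) + nonce, hashlib.sha256).digest()

def verify(expected_code: bytes, nonce: bytes, presented: bytes) -> bool:
    expected = hmac.new(HARDWARE_KEY, measure(expected_code) + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, presented)

nonce = os.urandom(16)  # freshness: prevents replay of an old quote
q = quote(b"def handler(): ...", nonce)
print(verify(b"def handler(): ...", nonce, q))  # True -> safe to provision secrets
```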

In modern analytics deployments, latency is the fatal flaw that limits the efficacy of the overall system. Solutions move at the speed of decision, and microseconds could mean the difference between success and failure against competitive offerings. Artificial Intelligence, Machine Learning, and In-Memory Analytics solutions have significantly reduced latency, but the sheer volume of data and its potential broad distribution across the globe prevent a single analytics node from efficiently harvesting and processing data. This panel discussion will feature industry experts discussing different approaches to distributed analytics in the network and storage nodes. How do the providers of HDDs and SSDs view data creation and movement between edge compute and the cloud? And how can computational storage be a solution to reduce data movement?
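As a toy illustration of the data-movement argument, the Python sketch below compares the bytes shipped when every record is hauled to a central analytics node versus filtering near the data on a computational storage node; the dataset and record sizes are invented for illustration.

```python
# Toy simulation: centralized filtering vs. near-data ("pushdown") filtering.
records = [{"sensor": i, "temp": 20 + (i % 50)} for i in range(100_000)]
RECORD_BYTES = 64  # assumed wire size per record

# Centralized: move everything to the analytics node, then filter.
moved_central = len(records) * RECORD_BYTES

# Computational storage: filter on the storage node, move only matches.
hot = [r for r in records if r["temp"] > 60]  # runs near the data
moved_pushdown = len(hot) * RECORD_BYTES

print(f"centralized: {moved_central / 1e6:.1f} MB moved")
print(f"pushdown:    {moved_pushdown / 1e6:.1f} MB moved")
```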

With ever-increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in lost business data, eroded trust, significant penalties, and potential lawsuits. Vulnerabilities exist at every point when scaling out NVMe, requiring data to be secured every time it leaves a server or the storage media, not only when it leaves the data center. NVMe over Fabrics is poised to be one of the most dominant storage transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount.
Join the webcast to hear industry experts discuss current and future strategies to secure and protect your mission critical data. You will learn:
- Industry trends and regulations around data security
- Potential threats and vulnerabilities
- Existing security mechanisms and best practices
- How to secure NVMe in flight and at rest (see the sketch after this list)
- Ecosystem and market dynamics
- Upcoming standards
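To give the "in flight" item above some shape, here is a minimal sketch of authenticated encryption applied to a payload before it crosses the fabric. The standards direction for NVMe/TCP is TLS; the raw AES-GCM below (assuming the pyca/cryptography package) simply makes the mechanics visible, and the command-ID binding is an illustrative detail, not the spec's framing.

```python
# Conceptual sketch: authenticated encryption of a fabric payload.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice: negotiated per session
aead = AESGCM(key)

def protect(payload: bytes, command_id: int) -> bytes:
    nonce = os.urandom(12)
    # Bind the ciphertext to its command ID so a captured payload cannot
    # be replayed against a different command.
    aad = command_id.to_bytes(4, "big")
    return nonce + aead.encrypt(nonce, payload, aad)

def unprotect(wire: bytes, command_id: int) -> bytes:
    nonce, ct = wire[:12], wire[12:]
    return aead.decrypt(nonce, ct, command_id.to_bytes(4, "big"))  # raises on tampering

wire = protect(b"4KiB block payload...", command_id=7)
assert unprotect(wire, command_id=7) == b"4KiB block payload..."
```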

In the ongoing evolution of the data center, a popular debate involves how storage is allocated and managed. There are three competing visions of how storage should be done: Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage.
IT architects, storage vendors, and industry analysts argue constantly over which is the best approach and even the exact definition of each. Isn’t Hyperconverged constrained? Is Disaggregated designed only for large cloud service providers? Is Centralized storage only for legacy applications?
Tune in to debate these questions and more:
- What is the difference between centralized, hyperconverged, and disaggregated infrastructure, when it comes to storage?
- Where does the storage controller or storage intelligence live in each?
- How and where can the storage capacity and intelligence be distributed?
- What is the difference between distributing the compute or application and distributing the storage?
- What is the role of a JBOF or EBOF (Just a Bunch of Flash or Ethernet Bunch of Flash) in these storage models?
- What are the implications for data center, cloud, and edge?
Join us for another SNIA Networking Storage Forum Great Storage Debate as leading storage minds converge to argue the definitions and merits of where to put the storage and storage intelligence.

The storage industry is working on ways to meet the demand for the very high throughput required by the volume of transactions per second in Blockchain operations.
There have been numerous advancements in Field Programmable Gate Array (FPGA) and Application Specific Integrated Circuit (ASIC) logic to improve the number of transactions per second for Blockchain operations. But these FPGA/ASIC improvements alone will not be sufficient to meet the increasing demand for the hardware resources Blockchain requires. Smart network interface cards (NICs) offload low-level functions from server CPUs, dramatically increasing network and application performance by taking over all network-related processing.
In this webcast, you will learn:
- The features of a Smart Network Interface Card (SmartNIC) and how they will improve Blockchain transactions
- Why using Storage Class Memory (SCM) is ideal for in-memory databases
- Advantages of direct data movement without involving filesystems (see the sketch after this list)
- Benefits of using SCM to improve Blockchain transactions
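As a sketch of the "direct data movement" item above, the snippet below memory-maps a file to show byte-addressable load/store access with no per-access filesystem I/O, which is the access model SCM enables; an ordinary file stands in here for a real DAX-mapped persistent-memory region, so treat it as conceptual.

```python
# Conceptual sketch of SCM-style direct access via memory mapping.
import mmap, os

path = "demo.pmem"  # illustrative stand-in for a persistent-memory region
with open(path, "wb") as f:
    f.truncate(4096)

fd = os.open(path, os.O_RDWR)
buf = mmap.mmap(fd, 4096)   # byte-addressable view: no read()/write() per access
buf[0:5] = b"hello"         # a plain store, not a block I/O request
buf.flush()                 # with real pmem: cache-line flush, not page writeback
print(bytes(buf[0:5]))
buf.close()
os.close(fd)
```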

In the world of cloud services development, it's necessary to gain an edge over the myriad of competitors facing your product or service. Volume and variety are not just characteristics of cloud data, but also of the software needed to deliver accurate decisions. While a variety of software techniques exist to create effective development teams, sometimes it's worthwhile to look elsewhere for additional success factors. In this webcast, we'll focus on adapting some of the principles of modern manufacturing to add to the development toolbox. A Continuous Delivery methodology ensures that the product is streamlined in its feature set while delivering constant value to the customer via the cloud (a minimal pipeline-gate sketch follows the list below). Attendees will learn the following:
- Structuring development and testing resources for Continuous Delivery
- A flexible software planning cycle for driving new features throughout the process
- A set of simple guidelines for tracking success
- Ways to ensure new features are delivered before moving to the next plan
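As one way to picture a Continuous Delivery gate like the ones described above, here is a minimal Python sketch; the stage names and test commands are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch of a Continuous Delivery gate: a release candidate only
# proceeds when every automated check passes. Stage commands are examples.
import subprocess
import sys

STAGES = [
    ("unit tests",        ["pytest", "-q"]),
    ("integration tests", ["pytest", "-q", "tests/integration"]),
]

def gate() -> bool:
    for name, cmd in STAGES:
        print(f"running {name} ...")
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed at: {name} -- candidate not released")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if gate() else 1)  # CI wires this exit code to the deploy step
```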
