SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Microservices architectures enhance the ability of modern software teams to deliver applications at scale, but they also expand the distributed nature of those applications. As an application’s footprint grows, the challenge is to understand and control the interactions among services within these environments. A service mesh controls the communication, configuration, and behavior of the microservices in an application: it is a dedicated infrastructure layer for handling service-to-service communication in any microservice, public cloud, or Kubernetes architecture. Istio, an open source project complementary to Kubernetes, allows you to set up a service mesh of your own and start learning how it works. It offers service (request) discovery, monitoring, reliability, and security to applications.
Stateful applications currently account for the majority (more than 50%) of enterprise containerized application deployments. To keep these applications continuously available, enterprises need a strategy for backing them up and recovering them so that they are DR ready. One important aspect is the ability to back up and recover into an alternative environment, that is, across a hybrid environment: for example, recovering a Cloud1-managed K8S environment into a Cloud2-managed K8S environment. Multiple proprietary solutions are available today, but an open source solution driven by community development adds both insight and flexibility. This session aims to provide: a) the background of data protection for stateful applications in K8S; b) the strategy for developing such solutions, along with key considerations and important aspects to watch for; c) a look at some of the available and upcoming open source solutions, such as Velero and a future solution from the SODA Foundation, and how the SODA Foundation is creating community-driven development of this solution.
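As a concrete illustration of the open source approach, the sketch below uses the Kubernetes dynamic client to request a Velero backup of a single application namespace. It is a minimal sketch rather than a complete DR workflow: the "wordpress" namespace is a hypothetical example application, and error handling is reduced to panics.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig for the source (Cloud1) cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Velero's Backup custom resource lives in the velero.io/v1 API group.
	backupGVR := schema.GroupVersionResource{
		Group: "velero.io", Version: "v1", Resource: "backups",
	}

	// A Backup covering only the application namespace; "wordpress" is a
	// hypothetical stateful app used here for illustration.
	backup := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "velero.io/v1",
		"kind":       "Backup",
		"metadata":   map[string]interface{}{"name": "wordpress-backup"},
		"spec": map[string]interface{}{
			"includedNamespaces": []interface{}{"wordpress"},
			"snapshotVolumes":    true, // capture PV data, not just manifests
		},
	}}

	// Velero watches its own namespace for Backup objects and acts on them.
	created, err := client.Resource(backupGVR).Namespace("velero").
		Create(context.TODO(), backup, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("requested backup:", created.GetName())
}
```

Recovering into the alternative (Cloud2) cluster then amounts to pointing a Velero instance in that cluster at the same object-store backup location and creating a matching Restore resource.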
Most of the top storage vendors are moving to storage-as-a-service solutions, exploring all of the ecosystem components, including open source solutions, to build a complete data management solution on top of their existing storage capabilities. This move is accelerating as more cloud-native and hybrid use cases are in demand. In this session, we will present an overall architecture for a typical stack with key data management functions, including data protection and monitoring, across existing storage infrastructure. We will also see how this trend is shaping container and cloud-native storage, and we will give a working demo of data protection and monitoring illustrating how the overall stack works with some of the open source projects.
The advent of cloud and the everything-as-a-service (XaaS) model requires storage developers to rethink how their products are consumed. Organizations are looking to develop infrastructure and processes that are agnostic to any one cloud vendor, including their own on-premises datacenters. Container orchestrators (COs) like Kubernetes enable this ideal by allowing entire application deployments to be packaged up (in containers and manifests) and moved from environment to environment with relative ease. The Container Storage Interface (CSI) provides a standard for exposing arbitrary block and file storage to these COs, but the user experience of a particular CSI driver depends heavily on its implementation. Storage developers must carefully consider the key attributes of a storage product when developing a driver to ensure a cloud-like experience. Parallel file systems provide an excellent opportunity to take storage software often perceived as complex and expose its advanced functionality and capabilities through a CO’s simple and familiar interfaces. In this presentation, Eric Weber and Joe McCormick will break down the thought processes and design considerations that went into developing a CSI driver for BeeGFS. We will walk through how the CSI spec’s gRPC endpoints were mapped onto BeeGFS-specific functions and discuss specific CSI/CO functionality and sub-features that do or do not make sense in the context of BeeGFS. While the CSI spec is intended to be CO agnostic, we will discuss our design primarily in the context of Kubernetes. In addition to existing BeeGFS CSI driver functionality, we will also discuss potential future features in order to give our audience a well-rounded view of the capabilities available for container storage.
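To make this endpoint mapping concrete, the sketch below shows how a CSI controller's CreateVolume RPC might translate into a parallel file system operation, where a "volume" is a directory subtree rather than a block device. This is a minimal illustration, not the actual BeeGFS CSI driver code: the beegfsCreateDir helper and the sysMgmtdHost parameter are hypothetical stand-ins.

```go
package driver

import (
	"context"
	"path"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controllerServer implements the CSI controller service RPCs
// (only CreateVolume is sketched here).
type controllerServer struct {
	volBasePath string // parent directory for all driver-managed volumes
}

// CreateVolume maps the CSI RPC onto a parallel file system operation:
// a "volume" is a directory subtree, so provisioning is directory creation.
func (s *controllerServer) CreateVolume(ctx context.Context,
	req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {

	if req.GetName() == "" {
		return nil, status.Error(codes.InvalidArgument, "volume name is required")
	}

	// Parameters come from the StorageClass; sysMgmtdHost is an illustrative
	// parameter identifying the target file system instance.
	mgmtd := req.GetParameters()["sysMgmtdHost"]
	volPath := path.Join(s.volBasePath, req.GetName())

	if err := beegfsCreateDir(ctx, mgmtd, volPath); err != nil {
		return nil, status.Errorf(codes.Internal, "provisioning failed: %v", err)
	}

	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			// Directory-backed volumes have no hard size limit, so the
			// requested capacity is recorded rather than enforced.
			CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
			VolumeId:      volPath, // the path uniquely identifies the volume
			VolumeContext: req.GetParameters(),
		},
	}, nil
}

// beegfsCreateDir is a hypothetical stand-in for the file-system-specific
// provisioning call (e.g. creating a directory with striping settings).
func beegfsCreateDir(ctx context.Context, mgmtdHost, dirPath string) error {
	// Real code would talk to the file system here; this stub only
	// illustrates where that call sits in the CSI flow.
	return nil
}
```

DeleteVolume, NodePublishVolume, and the other RPCs follow the same pattern of translating CSI's generic contract into file-system-specific calls.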
Continuous Integration/Continuous Delivery (CI/CD) platforms such as Jenkins, Bamboo, CircleCI, and the like dramatically speed deployments by allowing administrators to automate virtually every process step. Automating the data layer is the final frontier: today, hours are wasted manipulating datasets in the CI/CD pipeline. Allowing Kubernetes to orchestrate the creation, movement, reset, and replication of data eliminates dozens of wasted hours from each deployment, accelerating time to market by 500X or more. Using real-life examples, Jacob will describe the architecture and implementation of a true Data as Code approach that allows users to:
- Automatically provision compute, networking, and storage resources in requested cloud provider / bare-metal environments
- Automatically deploy the required environment-specific quirks (specific kernels, kernel modules, NIC configurations)
- Automatically deploy multiple flavors of Kubernetes distributions (kubeadm and OCP supported today; additional flavors as pluggable modules)
- Apply customized Kubernetes deployment settings (feature gates, API server tunables, etc.)
- Automate recovery following destructive testing
- Replicate datasets to worker nodes instantly
All of this is based on predefined presets/profiles (with customization/overrides as required by the user), across multiple cloud environments (today AWS and bare metal, with additional environments as pluggable modules in the architecture).
Machine learning (ML) is the study and development of algorithms that improve with the use of data: as an algorithm processes training data, the model changes and grows. Most machine learning models begin with "training data" that the machine processes and begins to "understand" statistically. Machine learning models are resource intensive: to anticipate, validate, and recalibrate millions of times, they demand a significant amount of processing power, and training an ML model can slow down a machine and hog local resources. The proposed solution is to containerize ML with NVMe, putting your ML models in containers; this talk covers the aspects of containerizing ML with NVMe and its benefits. Containers are lightweight software packages that run in isolation on the host computing environment. They are predictable, repeatable, and immutable, and they are also easy to coordinate (or "orchestrate"). Containerizing the ML workflow means putting your ML models in a container (Docker is sufficient) and then deploying it on a machine, and we can create a cluster of containers with a configuration suited to machine learning requirements. Artificial Intelligence (AI) at scale sets the standard for storage infrastructure in terms of capacity and performance, making storage one of the most crucial factors for containers.
NVMe is a new storage access and transport protocol for flash and next-generation solid-state drives (SSDs) that provides the highest throughput, improved system-level CPU utilization, and the fastest response times. Combining NVMe over Fabrics (NVMe-oF) with an NVMe SSD solution allows Kubernetes orchestration to scale data-intensive workloads and increase data mining speed.
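As a minimal sketch of how a containerized ML workload might claim NVMe-backed storage in Kubernetes, the following uses client-go to create a PersistentVolumeClaim. The "nvme-of" StorageClass name, the "ml" namespace, and the 500Gi size are hypothetical assumptions; a training pod would then mount the resulting claim as a volume.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical StorageClass whose provisioner exposes NVMe-oF volumes.
	storageClass := "nvme-of"
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "training-data"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			// Resources is VolumeResourceRequirements in client-go v0.29+
			// (ResourceRequirements in earlier releases).
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("500Gi"),
				},
			},
		},
	}

	created, err := clientset.CoreV1().PersistentVolumeClaims("ml").
		Create(context.TODO(), pvc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed NVMe-backed volume:", created.Name)
}
```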