
Continuous Integration/Continuous Delivery (CI/CD) platforms such as Jenkins, Bamboo, and CircleCI dramatically speed deployments by allowing administrators to automate virtually every step of the process. Automating the data layer is the final frontier: today, hours are wasted manipulating datasets in the CI/CD pipeline. Allowing Kubernetes to orchestrate the creation, movement, reset, and replication of data eliminates dozens of wasted hours from each deployment, accelerating time to market by 500x or more. Using real-life examples, Jacob will describe the architecture and implementation of a true Data as Code approach that allows users to:
  • Automatically provision compute, networking, and storage resources in the requested cloud provider or bare-metal (BM) environment
  • Automatically deploy the required environment-specific quirks (specific kernels, kernel modules, NIC configurations)
  • Automatically deploy multiple flavors of Kubernetes distributions (Kubeadm and OCP supported today; additional flavors as pluggable modules)
  • Apply customized Kubernetes deployments (feature gates, API server tunables, etc.)
  • Automate recovery following destructive testing
  • Replicate datasets to worker nodes instantly

All of this is based on predefined presets/profiles (with customization/overrides as required by the user), across multiple cloud environments (today AWS and BM, with additional environments as pluggable modules in the architecture).
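The preset/profile mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the talk's actual implementation: the function name `resolve_profile`, the preset `AWS_KUBEADM_PRESET`, and all field names are assumptions chosen to mirror the abstract's description of predefined presets with user overrides.

```python
# Hypothetical sketch of resolving a predefined deployment preset with
# user-supplied customizations/overrides, as the abstract describes.
# All names and fields here are illustrative assumptions.

def resolve_profile(preset: dict, overrides: dict) -> dict:
    """Deep-merge user overrides onto a predefined preset, returning a new dict."""
    merged = dict(preset)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = resolve_profile(merged[key], value)
        else:
            merged[key] = value
    return merged

# An illustrative preset for an AWS + Kubeadm environment.
AWS_KUBEADM_PRESET = {
    "cloud": "aws",
    "distribution": "kubeadm",
    "kubernetes": {"feature_gates": {}, "apiserver_args": {}},
}

# A user override enabling a (hypothetical) feature gate; everything else
# falls through from the preset.
profile = resolve_profile(
    AWS_KUBEADM_PRESET,
    {"kubernetes": {"feature_gates": {"SomeGate": True}}},
)
```

A pluggable-module architecture like the one described would presumably register one such preset per supported cloud (AWS, BM) and distribution (Kubeadm, OCP), with the merge step applying the user's feature gates and API server tunables on top.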

Bonus Content
Off
Presentation Type
Presentation
Learning Objectives
  • Recognize data delays in existing pipelines
  • Implement automation to instantly reset datasets following destructive testing
  • Orchestrate instant dataset copy to worker nodes
Display Order
26
Start Date/Time
YouTube Video ID
DbPDX9H-ddY
Zoom Meeting Completed
Off
Main Speaker / Moderator
Webform Submission ID
75