Data Protection of Kubernetes Workloads: A Performance Engineering Case Study

Publish Date: 
Saturday, December 5, 2020
Abstract: 

Data Protection baselining of emerging workloads is both interesting and challenging. The majority of customer workloads migrating to Cloud-Native use Kubernetes as their orchestration engine, and understanding the primary and secondary storage expectations of the Kubernetes stack is critical to providing the right assurance of Data Protection availability.
 
As CIOs across the tech world invest in moving their legacy workloads to the Cloud-Native ecosystem, it becomes important for Data Protection vendors to deliver predictable performance for recovery and restore operations. From a Performance engineering standpoint, measuring primary and protection baseline metrics is key to ensuring that customer data protection needs are met within the defined RPO and RTO. The many moving parts involved in setting up a production-like Kubernetes stack, spanning both bare-metal hosts and virtualized workloads, make this a challenging and quite bumpy ride.
 
The traditional monitoring and observability toolset provided by the OS does not fit well in the Kubernetes world. This led us to explore and assemble a measurement stack composed of native Kubernetes offerings. Linux host-level stats give measurements at the PID/process level; in the Kubernetes world, however, resource usage must be expressed in Pod/Deployment-based metrics, which is what a Kubernetes administrator is used to. Once this understanding is solidified, one can digest the actual Performance engineering findings, which we have captured as a case study that will be of interest to the SNIA audience.
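To make the PID-level versus Pod-level distinction concrete, the sketch below parses the plain-text output of `kubectl top pod` (which requires the metrics-server add-on) into per-pod CPU and memory figures that can then be rolled up the way a Kubernetes admin reasons about usage. The pod names and numbers in the sample are purely illustrative assumptions, not data from the case study.

```python
# Minimal sketch: turning `kubectl top pod` text output into per-pod metrics.
# SAMPLE is a made-up example of the command's typical output format.
SAMPLE = """\
NAME                        CPU(cores)   MEMORY(bytes)
backup-agent-7d9f4b-x2lq8   250m         512Mi
etcd-main-0                 120m         1024Mi
restore-job-kf8s2           900m         256Mi
"""

def parse_top_pods(text):
    """Parse `kubectl top pod` output into {pod_name: (millicores, mebibytes)}."""
    rows = {}
    for line in text.strip().splitlines()[1:]:  # skip the header row
        name, cpu, mem = line.split()
        rows[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return rows

pods = parse_top_pods(SAMPLE)
# Roll the pod-level numbers up into an aggregate view for baselining.
total_mcores = sum(cpu for cpu, _ in pods.values())
print(total_mcores)  # 250 + 120 + 900 = 1270
```

In practice the same rollup is usually done by a metrics pipeline (e.g. metrics-server feeding `kubectl top`, or Prometheus with cAdvisor), but the unit of accounting is the Pod, not the PID.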
