Abstract
Quality of Service (QoS) in IO controllers is a mechanism by which Cloud Service Providers (CSPs) conform to a latency discipline for the IO requests received from their clients. Statically, there are multiple ways to associate or allocate hardware resources to service requests. One key challenge CSPs face is associating those components with the IO requests on the client's I-T-L (Initiator-Target-LUN) nexus. The problem is compounded when backend IOs are serviced through a multitude of devices whose latency characteristics range from tens of microseconds to several milliseconds.
An IO request or an Object Data Store configuration may generate additional auxiliary IOs to the system components. For example, we have seen RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements drive IO latency expectations for clients.
We propose a neural network layer that can classify IO requests based on IO patterns and enable data placement strategies/policies on the data stores. This can preemptively help storage controllers achieve their QoS objectives and drive down costs.
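As a minimal sketch of the proposed classification layer, the snippet below maps a per-request feature vector to placement-tier probabilities with a one-hidden-layer network. The feature set (IO size, inter-arrival time, read fraction, queue depth), the three tiers, and the randomly initialized weights are all illustrative assumptions; in practice the weights would be learned from profiled IO traces.

```python
import numpy as np

# Hypothetical per-request features: [io_size_kb, inter_arrival_us, read_fraction, queue_depth]
# Illustrative tiers: 0 = low-latency (tens of us), 1 = mid, 2 = high-latency (ms) backend
rng = np.random.default_rng(0)

# One hidden layer; weights here are random placeholders, not trained values.
W1 = rng.normal(scale=0.1, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3))
b2 = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify(features):
    """Forward pass: IO feature vector -> probability over placement tiers."""
    h = np.tanh(features @ W1 + b1)
    return softmax(h @ W2 + b2)

# Example: a 64 KiB, mostly-read request arriving every ~120 us at queue depth 8
probs = classify(np.array([64.0, 120.0, 0.9, 8.0]))
```

The controller could stage or place the request on the tier with the highest probability, falling back to a default policy when the classifier's confidence is low.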
Learning Outcomes
a. Dependable parameters for neural-network analysis of IO profiles
b. Tuning hyperparameters to improve classification
c. Tuning and/or calibrating IO staging strategies in the storage controller