Containerized Machine Learning Models using NVMe

Publish Date: Tuesday, September 28, 2021

Machine learning (ML) is the study and development of algorithms that improve with the use of data: as an algorithm processes training data, its model changes and grows. Most machine learning models begin with “training data” that the machine processes and begins to “understand” statistically. Machine learning models are resource intensive; to anticipate, validate, and recalibrate millions of times, they demand a significant amount of processing power, and training an ML model can slow down your machine and hog local resources. The proposed solution is to containerize ML with NVMe, putting your ML models in a container. This talk covers containerizing ML with NVMe and its benefits.

Containers are lightweight software packages that run in isolation on the host computing environment. Containers are predictable, repeatable, and immutable, and they are also easy to coordinate (or “orchestrate”). Containerizing the ML workflow means putting your ML models in a container (Docker is sufficient), then deploying it on a machine. We can create a cluster of containers with a configuration suited to machine learning requirements. Artificial Intelligence (AI) at scale sets the standard for storage infrastructure in terms of capacity and performance, making storage one of the most crucial factors for containers.
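As a concrete illustration, containerizing a model can be as simple as a short Dockerfile. The sketch below is an assumption for illustration, not part of the talk: the file names (`requirements.txt`, `model.pkl`, `serve.py`), the base image, and the port are all placeholders.

```dockerfile
# Minimal sketch of packaging an ML model behind an API (assumed file layout).
FROM python:3.11-slim

WORKDIR /app

# Install pinned inference dependencies (placeholder requirements file).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serialized model and the serving code into the image.
COPY model.pkl serve.py ./

# Expose the prediction API and start the server when the container launches.
EXPOSE 8080
CMD ["python", "serve.py"]
```

Once built (for example with `docker build -t ml-model .`), the same immutable image runs identically on a laptop, a server, or a Kubernetes cluster.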

NVMe is a storage access and transport protocol for flash and next-generation solid-state drives (SSDs) that provides high throughput, improved system-level CPU utilization, and fast response times. Combining NVMe over Fabrics (NVMe-oF) with NVMe SSDs allows Kubernetes orchestration to scale data-intensive workloads and increase data-mining speed.
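In Kubernetes, NVMe-backed storage is typically consumed through a StorageClass and a PersistentVolumeClaim. A hedged sketch follows; the class name `local-nvme`, the claim name, and the sizes are placeholders, and the actual provisioner depends on which NVMe or NVMe-oF CSI driver your cluster uses.

```yaml
# Hypothetical StorageClass for local NVMe drives (or an NVMe-oF target).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme                          # placeholder name
provisioner: kubernetes.io/no-provisioner   # static local volumes; an NVMe-oF CSI driver would go here
volumeBindingMode: WaitForFirstConsumer
---
# Claim NVMe-backed storage for a training or data-mining workload.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-nvme
  resources:
    requests:
      storage: 500Gi                        # placeholder dataset size
```

A training pod then mounts the claim as a volume, so the container cluster reads its training data at NVMe speeds.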

  • Containerizing ML with NVMe and its benefits: putting ML models in a container.
  • Machine learning models can be resource heavy; with containers, ML models can be trained faster and turned into APIs.
  • NVMe over Fabrics (NVMe-oF) with NVMe SSDs scales the data-intensive workloads that data mining in ML requires.
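To make “turned into APIs with containers” concrete, here is a minimal sketch of serving predictions over HTTP using only the Python standard library. The “model” is a stand-in hard-coded linear function; in a real container you would load serialized weights from the image or from an NVMe-backed volume, and `main()` would be the container entry point.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Stand-in model: a fixed linear combination of the input features."""
    weights = [0.4, 0.6]  # placeholder weights; real weights come from training
    return sum(w * x for w, x in zip(weights, features))


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"features": [1.0, 2.0]} and return a prediction.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def main():
    # Bind to 0.0.0.0 so the API is reachable from outside the container.
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

Because the server and model ship together in one image, the same API can be replicated across a cluster of containers as demand grows.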
