
AI Model Inferencing and Deployment Options

Library Content Type

Webinar

Focus Areas

Networked Storage

Abstract

This webinar will explain the model inferencing process and the available deployment options. We will discuss how trained models are used to make predictions and the considerations involved in deploying them to production. Attendees will learn about the challenges of, and strategies for, optimizing model performance. Topics covered include:

• What model inferencing is
• The model inferencing process
• How inferencing deployment differs between generative AI and visual inspection
        - Inferencing deployment software options
        - Inferencing hardware considerations (e.g., at the edge, on-prem)
• Best practices and lessons learned for deploying AI models to production
• Strategies for maintaining model performance post-deployment
• Real-world examples of successful model deployments
• The importance of monitoring and maintaining deployed models

View webcast