According to Gartner, only 53% of machine learning proofs of concept (POCs) are ever scaled to production, and even fewer deliver the intended, measurable business value. On the bright side, Gartner projects that by 2024, 75% of enterprises will shift from piloting to operationalizing AI, driving a 5x increase in streaming data and analytics infrastructures.
Machine learning operations (ML Ops) is a practice and methodology for preparing, deploying, and managing machine learning models in production. Operationalizing ML Ops is hard, but it is a requirement for AI scalability and success.
In this Q&A session, Sapta Girisa, Senior Director at Lohika and Capgemini Engineering, discusses key considerations for ML Ops, why it is essential to AI scalability, and how to accelerate your AI engineering.
Sapta is a Senior Director at Capgemini and leads the Technical Presales and Consulting function for the Capgemini Engineering US West region. His focus is software product engineering services spanning full-stack cloud-native architectures, data engineering, MLOps, and automation. He has more than two decades of experience leading presales and engineering delivery across telecommunications, industrial IoT, and cloud/SaaS solutions for clients in multiple domains. Sapta holds a master's degree in Computer Science.
Roger Jie Luo is Director of Machine Learning at Niantic. Prior to joining Niantic, Roger was a serial entrepreneur who co-founded two AI SaaS startups. Previously, he led applied machine learning teams at both Snapchat and Yahoo. He is also an active angel investor and has been investing in early-stage startups since 2016. Roger obtained his PhD in Machine Learning from the Swiss Federal Institute of Technology in Lausanne (EPFL).