By Mordechai Worch on October 25, 2022

Deploying machine learning models is hard. Deploying multiple machine learning models in parallel is harder.

We had the pleasure of hosting an MLOps Community IRL meetup this month. If you are a machine learning practitioner and aren't familiar with the MLOps Community, it's worth checking out: it's the world's largest community dedicated to the unique technical and operational challenges of production machine learning systems.

At this meetup, I gave a talk on the Machine Learning Workflow for Parallel Model Improvements. In the talk, I discuss IRONSCALES' strategy for decomposing a machine learning system into components that enable highly parallelized development (a rough sketch of the decomposition idea follows the list below), covering questions such as:

  • How does an ML product get better?
  • How does an ML product get better faster?
  • What are the main bottlenecks in parallelizing machine learning development?
  • What machine learning design patterns can we introduce to minimize these bottlenecks?
  • How can we design a development workflow that is best suited for introducing new improvements?
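To make the decomposition idea concrete, here is a minimal, hypothetical Python sketch (not IRONSCALES' actual code; all class and function names are invented for illustration). It shows how a shared scoring interface lets several model components be developed, improved, and swapped in parallel, while the downstream pipeline depends only on the interface.

```python
from abc import ABC, abstractmethod
from typing import List


class PhishingScorer(ABC):
    """Shared interface: any component that scores an email can plug in here."""

    @abstractmethod
    def score(self, email_text: str) -> float:
        """Return a phishing likelihood in [0, 1]."""


class KeywordScorer(PhishingScorer):
    """Toy baseline component, developed independently of the other scorers."""

    SUSPICIOUS = ("urgent", "verify your account", "wire transfer")

    def score(self, email_text: str) -> float:
        text = email_text.lower()
        hits = sum(term in text for term in self.SUSPICIOUS)
        return min(1.0, hits / len(self.SUSPICIOUS))


class LengthScorer(PhishingScorer):
    """Another independent component; adding or replacing scorers
    does not require touching the pipeline below."""

    def score(self, email_text: str) -> float:
        return 1.0 if len(email_text) < 40 else 0.2


def ensemble_score(email_text: str, scorers: List[PhishingScorer]) -> float:
    """The pipeline only depends on the PhishingScorer interface, so each
    scorer can be improved in parallel by a different person or team."""
    return sum(s.score(email_text) for s in scorers) / len(scorers)


if __name__ == "__main__":
    scorers = [KeywordScorer(), LengthScorer()]
    print(ensemble_score("URGENT: verify your account now", scorers))
```

Because each component only has to honor the interface contract, experiments on one model can ship without blocking, or being blocked by, work on another.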


[Preview: ML Workflow for Parallel Model Improvements]

Learn more about engineering at IRONSCALES, the team, and some of their work here.

