Monitoring is critical to the success of ML models deployed in production. Because ML models are not static pieces of code but dynamic predictors that depend on data, hyperparameters, evaluation metrics, and many other variables, it is vital to have insight into the training and deployment process in order to prevent model drift and predictive stasis. That said, not all monitoring solutions are created equal. Here are the three must-haves for a machine learning monitoring tool, whether you decide to build or buy a solution.
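To make "preventing model drift" concrete, here is a minimal sketch of one common drift check: comparing a production feature distribution against its training-time baseline with the Population Stability Index (PSI). This is an illustrative, generic implementation, not the approach of any particular monitoring product, and the 0.2 threshold mentioned in the comment is a common rule of thumb rather than a universal standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production sample ('actual') against a training
    baseline ('expected'). PSI > 0.2 is a common rule-of-thumb
    signal of meaningful drift (illustrative, not universal)."""
    # Bin both samples on the baseline's quantile edges
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature values seen in training
shifted = rng.normal(0.5, 1.0, 10_000)    # production values after a shift
print(population_stability_index(baseline, shifted))
```

A monitoring tool would run a check like this continuously per feature (and per output), rather than as a one-off script.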
For the past three years, I've worked with teams implementing automated workflows using ML/DL, NLP, RPA, and many other techniques, across business functions ranging from fraud detection and audio transcription to satellite imagery classification. At various points, all of these teams realized that alongside the benefits of automation, they had also taken on additional risk. They had lost their "eyes and ears on the field": the natural oversight you get from having humans in the process. Now, if something goes wrong, there isn't a human to notify them, and if there's a place where an improvement could be made, there may not be a human to think about it and recommend it. Put differently, they realized that humans weren't only performing the task that is now automated; they were also there, at least partially, to monitor and QA the workflow itself. While every business function is different, and every automation or AI system has its own intricacies that require monitoring and observing, one common thread binds all of these use cases: issues and opportunities for improvement usually appear in pockets of the data, rather than as grand, sweeping, across-the-board failures.
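The "issues appear in pockets" observation can be illustrated with a short sketch: a model whose overall accuracy looks healthy while one data segment is quietly failing. The data, column names ("region", "correct"), and the 15-point flagging threshold below are all hypothetical, chosen only to demonstrate the idea of segment-level monitoring.

```python
import pandas as pd

# Toy prediction log: one row per scored example, with a categorical
# segment (here, customer region) and whether the prediction was correct.
log = pd.DataFrame({
    "region":  ["US"] * 80 + ["EU"] * 15 + ["APAC"] * 5,
    "correct": [True] * 76 + [False] * 4     # US: 95% accuracy
             + [True] * 14 + [False] * 1     # EU: ~93% accuracy
             + [True] * 1 + [False] * 4,     # APAC: 20% accuracy
})

global_acc = log["correct"].mean()           # 0.91 overall -- looks fine
by_segment = log.groupby("region")["correct"].mean()

# Flag any segment whose accuracy falls well below the global figure
flagged = by_segment[by_segment < global_acc - 0.15]
print(f"global accuracy: {global_acc:.2f}")
print(flagged)
```

A global metric would never surface the APAC pocket on its own, which is exactly why segment-level views matter.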
Deploying AI has brought immediate value and growth to many businesses. However, it is well established that sustaining that value over time, let alone maximizing it, can be quite challenging. Continuous optimization is the key to successful AI deployments: begin with a product that's good enough, learn from how it performs in the real world, especially as the world (read: the data environment) changes, and then improve; then learn and improve again, and so on. It's a somewhat obvious insight, but AI-driven products are rarely perfect from day one.
As our customer base grows and the number of production AI use cases monitored by Mona increases, our team has been working tirelessly to advance our product into a best-in-class AI observability solution.
Last week, a draft of the EU’s highly anticipated Regulation on A European Approach For Artificial Intelligence was leaked. The official version is expected this week.
We recently published an important update on our growth, from new customers to an expanding team. Today, I'd like to go a little deeper into our current product and share how we've been expanding it in multiple areas to create value for our customers.