Data and concept drift are frequently mentioned in the context of ML monitoring, but what exactly are they, and how are they detected? Furthermore, given the common misconceptions surrounding them, are data and concept drift things to be avoided at all costs, or natural and acceptable consequences of running models in production? Read on to find out. In this article, we provide a granular breakdown of data and concept drift, along with methods for detecting them and best practices for dealing with them when they occur.
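To make "detecting drift" concrete, here is a minimal pure-Python sketch of one common technique, the Population Stability Index (PSI), which scores how far a production feature's distribution has moved from its training distribution. This is an illustrative example only, not Mona's implementation; the bin count, the 1e-4 floor, and the rule-of-thumb thresholds in the comments are conventional choices, not prescriptions.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.
    Rule of thumb: < 0.1 suggests no significant drift, > 0.25 suggests
    a meaningful shift (conventional thresholds, not hard rules)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Floor at 1e-4 so empty bins don't produce log(0).
        return [max(c / len(values), 1e-4) for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time feature
live = [random.gauss(1.0, 1.0) for _ in range(5000)]   # shifted production feature
print(psi(train, train[:2500]))  # same distribution: low score
print(psi(train, live))          # mean-shifted distribution: high score
```

In practice a score like this would be computed per feature on a rolling window of production data and compared against the training baseline.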
Mona is happy to announce that the use cases supported by its intelligent monitoring platform for AI/ML now include Robotic Process Automation (“RPA”) workflows. Currently providing customers across 8 different industries with actionable insights into their AI systems, Mona excels at delivering complete process visibility and detecting issues within specific segments of data. Because Mona is a highly extensible platform covering many use cases, including machine learning, NLU/NLP, speech recognition, and vision, extending it to support intelligent automations and RPA is seamless.
Monitoring is critical to the success of ML models deployed in production. Because ML models are not static pieces of code but, rather, dynamic predictors that depend on data, hyperparameters, evaluation metrics, and many other variables, it is vital to have insight into the training and deployment process in order to prevent model drift and predictive stasis. That said, not all monitoring solutions are created equal. These are the three must-haves for a machine learning monitoring tool, whether you decide to build or buy a solution.
Before you launch a project to build an AI monitoring system from scratch, consider whether this would be a good use of your resources. When does it make sense to buy instead? Let’s discuss. This post explores the advantages and disadvantages of both alternatives so that you can make an informed decision about what’s best for your organization.
We hope that everyone had a fantastic holiday season and is now ready to tackle the 2022 New Year! Looking back from where we started in 2018 to where we are now, we have grown so much as a company. From three (⅓ balding) guys with a crazy idea nobody understood, through assembling a team of passionate trailblazers, to building advanced features for Mona’s platform, now leveraged by incredible AI/ML teams at industry leaders and even recognized by Gartner, we are continuing to strengthen our position as the leading monitoring solution for AI, providing the most flexible and comprehensive insight engine.
In the past three years I’ve been working with teams implementing automated workflows using ML/DL, NLP, RPA, and many other techniques, for a myriad of business functions ranging from fraud detection and audio transcription all the way to satellite imagery classification. At various points in time, all of these teams realized that, alongside the benefits of automation, they had also added risk. They had lost their “eyes and ears in the field”, the natural oversight you get from having humans in the process. Now, if something goes wrong, there isn’t a human to notice and notify them, and if there’s a place where an improvement could be made, there might not be a human to think about it and recommend it. Put differently, they realized that humans weren’t only performing the task that is now automated; they were also there, at least partially, to monitor and QA the workflow itself. While each business function is different, and every automation or AI system has its own myriad of intricacies and things requiring monitoring and observation, one common thread binding all of these use cases is that issues and opportunities for improvement usually appear in pockets of the data, rather than as grand, sweeping, across-the-board failures.
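The “issues appear in pockets” observation is what segment-level monitoring targets: rather than tracking one global metric, you compare each segment’s behavior against the whole. Here is a minimal sketch of that idea; the function name, thresholds, and region segments are all illustrative, not part of any real API.

```python
import random
from collections import defaultdict

def flag_segments(records, min_size=50, threshold=0.15):
    """Flag segments whose error rate exceeds the global error rate
    by more than `threshold`. `records` is an iterable of
    (segment_name, is_error) pairs from production traffic."""
    by_segment = defaultdict(list)
    for segment, is_error in records:
        by_segment[segment].append(is_error)

    all_errors = [e for errs in by_segment.values() for e in errs]
    global_rate = sum(all_errors) / len(all_errors)

    flagged = {}
    for segment, errs in by_segment.items():
        if len(errs) < min_size:
            continue  # too few examples to judge this segment reliably
        rate = sum(errs) / len(errs)
        if rate - global_rate > threshold:
            flagged[segment] = rate
    return flagged

# Synthetic traffic: two healthy regions and one degraded pocket.
random.seed(1)
records = []
for _ in range(500):
    records.append(("US", random.random() < 0.05))
    records.append(("EU", random.random() < 0.05))
    records.append(("APAC", random.random() < 0.40))  # degraded pocket
print(flag_segments(records))  # only the degraded segment should surface
```

A global error rate here would sit around 17% and look merely mediocre, while the segment view immediately isolates the one region that is actually broken, which is the point of the paragraph above.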