Over the past three years I’ve worked with teams implementing automated workflows using ML/DL, NLP, RPA, and many other techniques, across a myriad of business functions ranging from fraud detection and audio transcription all the way to satellite imagery classification. At various points, all of these teams realized that alongside the benefits of automation they had also taken on new risk. They had lost their “eyes and ears in the field”, the natural oversight you get from having humans in the process. Now, if something goes wrong, there isn’t a human to notify them, and if there’s a place where an improvement could be made, there might not be a human to think about it and recommend it. Put differently, they realized that humans weren’t only performing the task that is now automated; they were also there, at least partially, to monitor and QA the actual workflow. While each business function is different, and every automation or AI system has its own intricacies requiring monitoring and observation, one common thread binding all of these use cases is that issues and opportunities for improvement usually appear in pockets, rather than in grand sweeps across the board.
Posts about AI Monitoring:
Deploying AI has brought instant value and growth to many businesses. However, it is well established that sustaining that value over time, not to mention maximizing it, can be quite challenging. Continuous optimization is the key to successful AI deployments: beginning with a product that’s good enough, learning from how it performs in the real world, especially as the world (read: the data environment) changes, and then improving; then learning and improving again, and so on. It’s a somewhat obvious insight, but it is rare for AI-driven products to be perfect from day one.
Here at Mona, we now allow new users to try our leading AI monitoring platform with a free 30-day trial! No credit card required, no strings attached. That’s right: you get instant access to our full platform, including all features!
At Mona, we strive to enable better visibility into AI systems in order to reduce the risk associated with production AI, optimize the operational processes around versions and releases, and plan better AI roadmaps using feedback based on production data.
As our customer base grows and the number of production AI use cases monitored by Mona increases, our team has been working tirelessly to advance our product into a best-in-class AI observability solution.
Last week, a draft of the EU’s highly anticipated Regulation on A European Approach For Artificial Intelligence was leaked. The official version is expected this week.