We hope everyone had a fantastic holiday season and is now ready to tackle the 2022 New Year! Looking back from where we started in 2018 to where we are now, we have grown so much as a company. From three (⅓ balding) guys with a crazy idea nobody understood, to assembling a team of passionate trailblazers, to building advanced features for Mona’s platform - now leveraged by incredible AI/ML teams at industry leaders, and even recognized by Gartner - we are continuing to strengthen our position as the leading monitoring solution for AI, providing the most flexible and comprehensive insight engine.
In the past three years I’ve been working with teams implementing automated workflows using ML/DL, NLP, RPA, and many other techniques, across a myriad of business functions ranging from fraud detection and audio transcription all the way to satellite imagery classification. At various points, all of these teams realized that alongside the benefits of automation, they had also taken on additional risk. They had lost their “eyes and ears in the field” - the natural oversight you get from having humans in the process. Now, if something goes wrong, there isn’t a human to notify them, and if there’s a place where an improvement could be made, there might not be a human to think of it and recommend it. Put differently, they realized that humans weren’t only performing the task that is now automated; they were also there, at least partially, to monitor and QA the workflow itself. While each business function is different, and every automation or AI system has its own intricacies and things requiring monitoring, one common thread binding all of these use cases is that issues and opportunities for improvement usually appear in pockets, rather than as grand, sweeping, across-the-board shifts.
Last week, we caught up with Michael Tambe, Head of Data Science for Amazon Advertising Field Sales, to discuss his views on some of the challenges and opportunities data scientists face when working with real-world AI/ML solutions. Michael has been a data science leader for over eight years, focused on building data science teams in go-to-market areas spanning sales, marketing, and pricing. Here’s the advice Michael has for other business leaders looking to deploy AI solutions, along with best practices for optimizing model performance.
Deploying AI has brought instant value and growth to many businesses. However, it is well established that sustaining that value over time, not to mention maximizing it, can be quite challenging. Continuous optimization is the key to successful AI deployments: begin with a product that’s good enough, learn from how it performs in the real world - especially as the world (read: the data environment) changes - and then improve; then learn and improve again, and so on. It’s a somewhat obvious insight, but it is rare for AI-driven products to be perfect from day one.
Here at Mona, we now allow new users to try our leading AI monitoring platform with a free 30-day trial! No credit card required, no strings attached. That’s right - you get instant access to our full platform, including all features!
At Mona, we strive to enable better visibility into AI systems in order to reduce the risk associated with production AI, optimize the operational processes around versions and releases, and plan better AI roadmaps using feedback from production data.