Recent posts by Mona

Be the first to know about top trends within the AI / ML monitoring industry through Mona's blog. Read about our company and product updates.

Case Study: Best Practices for Monitoring GPT-Based Applications

This is a guest post by Hyro - a Mona customer. 

What we learned at Hyro about our production GPT usage after using Mona, a free GPT monitoring platform

At Hyro, we’re building the world’s best conversational AI platform, one that enables businesses to handle extremely high call volumes, provide end-to-end resolution without a human involved, deal with staff shortages in the call center, and mine analytical insights from conversational data, all at the push of a button. We’re bringing automation to customer support at a scale that’s never been seen before, and that brings with it a truly unique set of challenges. We recently partnered with Mona, an AI monitoring company, and used their free GPT monitoring platform to better understand our integration of OpenAI’s GPT into our own services. Because Hyro operates in highly regulated spaces, including the healthcare industry, it is essential for us to ensure control, explainability, and compliance in all our product deployments. We can’t risk LLM hallucinations, privacy leaks, or other GPT failure modes that could compromise the integrity of our applications. Additionally, we needed a way to monitor token usage and the latency of the OpenAI service in order to keep costs down and deliver the best possible experience to our customers.

Everything You Need to Know About Model Hallucinations


If you’ve worked with LLMs at all, you’ve probably heard the term “model hallucinations” tossed around. So what does it mean? Is your model ingesting psychedelic substances? Or are you the crazy one, hallucinating a model that doesn’t actually exist? Luckily, the term points to a problem that is less serious than it sounds. However, model hallucinations are something every LLM user will encounter, and they can cause problems for your AI-based systems if not properly dealt with. Read on to learn what model hallucinations are, how you can detect them, and the steps you can take to remediate them when they inevitably arise.

Overcome cultural shifts from data science to prompt engineering


The widespread use of large language models such as ChatGPT, LLaMa, and LaMDA has the tech world wondering whether data science and software engineering jobs will at some point be replaced by prompt engineering roles, rendering existing teams obsolete. While the complete obsolescence of data science and software engineering seems unlikely anytime soon, there’s no denying that prompt engineering is becoming an important role in its own right. Prompt engineering blends the skills of data science, such as knowledge of LLMs and their unique quirks, with the creativity of artistic positions. Prompt engineers are tasked with devising prompts for LLMs that elicit a desired response. In doing so, they rely on some techniques used by data scientists, such as A/B testing and data cleaning, yet must also have a finely developed aesthetic sense for what constitutes a “good” LLM response. Furthermore, they need the ability to make iterative tweaks to a prompt in order to nudge a model in the correct direction. Integrating prompt engineers into an existing data science and engineering org therefore requires some distinct shifts in culture and mindset. Read on to find out how the prompt engineering role can be integrated into existing teams and how organizations can better make the shift toward a prompt engineering mindset.

Mona launches free, self-service monitoring solution for GPT-based applications


In the rapidly evolving landscape of AI, staying ahead of the curve is crucial for data scientists and engineers. With the increasing adoption of large language models (LLMs) such as OpenAI’s GPT, monitoring the performance, quality, and efficiency of the applications that leverage these models has become essential for businesses. As a leader in intelligent monitoring solutions for AI, we have leveraged our industry expertise and existing platform to develop a monitoring solution specifically tailored for GPT-based products, enabling teams to optimize the performance of their applications and improve their usage of LLMs over time.