Microsoft is making a new tool to block and rewrite ungrounded info for better AI responses

When Microsoft first launched Copilot (then known as Bing Chat) to the public in February 2023, the media was full of reports about strange answers and responses from the generative AI chatbot, along with outright factual errors.

These responses were quickly labeled "hallucinations." A few days after the chatbot's launch, Microsoft imposed hard limits on the number of chat turns allowed per session and per day so its development team could try to reduce the errors and odd responses people were getting.

While Microsoft eventually removed most of those chat turn limits, Copilot hallucinations can still crop up. In a new post this week on the Microsoft Source blog, the company detailed how these kinds of errors arise in generative AI and how it is working to cut down on them.

Microsoft says hallucinations generally appear in AI answers when those answers come from what it calls "ungrounded" content, meaning the AI model has changed or added to the data it was given. That can be a good thing for answers meant to be creative, such as asking a chatbot like Copilot or ChatGPT to write a story, but businesses need AI models that use grounded data so they can get correct answers to their questions.

Microsoft says it has been working on tools to help AI models stick to grounded data. It stated:

Company engineers spent months grounding Copilot’s model with Bing search data through retrieval augmented generation, a technique that adds extra knowledge to a model without having to retrain it. Bing’s answers, index and ranking data help Copilot deliver more accurate and relevant responses, along with citations that allow users to look up and verify information.
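
As a general illustration of how retrieval augmented generation works, here is a minimal, self-contained Python sketch. The keyword-overlap retriever and the prompt format are toy assumptions for demonstration, not Microsoft's actual Bing pipeline:

```python
# A minimal sketch of retrieval augmented generation (RAG). The keyword-overlap
# retriever and prompt format are toy assumptions, not Microsoft's Bing pipeline.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_grounded_prompt(query: str, sources: list[str]) -> str:
    """Inject retrieved passages so the model answers from them, not from memory."""
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer using only the numbered sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Bing Chat launched publicly in February 2023 and was later renamed Copilot.",
    "Retrieval augmented generation adds external knowledge without retraining.",
]
query = "When did Bing Chat launch?"
print(build_grounded_prompt(query, retrieve(query, docs)))
```

In a production system the toy retriever would be replaced by a real search index (in Copilot's case, Bing's), and the grounded prompt would be sent to the language model together with citation instructions.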

For outside customers, Microsoft points to the Azure OpenAI Service and a feature called On Your Data, which lets businesses and organizations connect their in-house data to their AI applications. There is also a real-time tool customers can use to detect how grounded their chatbots' responses are.
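
Microsoft doesn't describe how the real-time detection tool works under the hood, but conceptually a groundedness check scores a response against its source material. The word-overlap heuristic below is a hypothetical stand-in for whatever the actual Azure service does:

```python
# A hypothetical real-time groundedness check: score each sentence of a
# response against the source text. This word-overlap heuristic is a stand-in,
# not the detection method Azure's service actually uses.

def groundedness_score(sentence: str, source: str) -> float:
    """Fraction of a sentence's words that also appear in the source text."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    source_words = set(source.lower().split())
    return sum(w in source_words for w in words) / max(len(words), 1)

source = "the product launched in february 2023 and supports five languages"
response = "The product launched in February 2023. It supports twelve languages."

for sentence in response.split(". "):
    score = groundedness_score(sentence, source)
    label = "grounded" if score >= 0.8 else "possibly ungrounded"
    print(f"{score:.2f}  {label}  {sentence}")
```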

The company says it is working on an additional method to cut down on AI hallucinations:

Microsoft is also developing a new mitigation feature to block and correct ungrounded instances in real time. When a grounding error is detected, the feature will automatically rewrite the information based on the data.
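
No implementation details have been shared, but a block-and-correct loop could be wired together roughly as in the following sketch. The detector and rewriter functions are placeholders for a groundedness detector and a constrained rewriting model, not Microsoft's actual feature:

```python
# A sketch of a block-and-correct loop. detect_ungrounded_claims and
# rewrite_from_sources are placeholders for a groundedness detector and a
# constrained rewriting model; this is not Microsoft's actual implementation.

def detect_ungrounded_claims(response: str, sources: list[str]) -> list[str]:
    """Placeholder: flag sentences that don't appear in the source material."""
    source_text = " ".join(sources).lower()
    return [
        s for s in response.split(". ")
        if s and s.lower().rstrip(".") not in source_text
    ]

def rewrite_from_sources(claim: str, sources: list[str]) -> str:
    """Placeholder: a real system would regenerate the claim from the sources."""
    return f"[rewritten against sources] {claim}"

def mitigate(response: str, sources: list[str]) -> str:
    """Rewrite each ungrounded sentence instead of blocking the whole answer."""
    for claim in detect_ungrounded_claims(response, sources):
        response = response.replace(claim, rewrite_from_sources(claim, sources))
    return response

sources = ["The update ships in July 2024."]
print(mitigate("The update ships in July 2024. It also adds a dark mode.", sources))
```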

There's no word yet on when this new mitigation feature will be available.
