OpenAI has trained a new model, CriticGPT, to catch bugs in LLM-generated code. As the name suggests, it analyzes ChatGPT's code outputs during the training process and writes critiques to assist human trainers.
Hallucinations
Microsoft says it is trying to cut down on hallucinations in responses from generative AI tools like Copilot with an upcoming tool designed to both block and rewrite ungrounded information.
AI-based chatbots often produce factually incorrect answers, making themselves an easy target for criticism. But are LLMs really broken? One AI expert argues that hallucinating is their greatest feature.
Anthropic has announced the availability of Claude Instant 1.2 through an API. Interestingly, benchmarks show it to be the safest of the company's models so far, with fewer hallucinations.
Call of Duty's Ricochet Anti-Cheat team is now making cheaters see "hallucinations": decoy characters that mimic a regular player but are visible only to cheating software, flagging offenders in the system.
OpenAI is currently researching process supervision, a method for reducing the number of hallucinations produced by LLMs. It has shown success on math datasets but still needs to be tested more broadly.