AI-based chatbots often produce factually incorrect answers, making them an easy target for criticism. But are LLMs really broken? One AI expert argues that hallucination is their greatest feature.
Hallucinations
Anthropic has announced the availability of Claude Instant 1.2 through an API. Notably, benchmarks show it to be the safest of Anthropic's models to date, producing fewer hallucinations than its predecessors.
Call of Duty's Ricochet Anti-Cheat team is now showing cheaters "hallucinations": decoy characters that mimic regular players but are visible only to cheating software, flagging the offending accounts in the system.
OpenAI is currently researching process supervision, a method for reducing the number of hallucinations produced by LLMs. It has shown success on mathematics datasets but has yet to be tested more broadly.