Another day, another hiccup from AI. This time, it’s Apple getting a not-so-friendly nudge after Apple Intelligence made a pretty big mistake. Apple Intelligence, launched with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, brings AI-powered tools like notification summaries, along with other features.
These include typing queries to Siri instead of speaking, a photo clean-up tool, email summaries, Genmoji for custom emojis, Image Playground for creating visuals from photos, and visual intelligence on the iPhone 16 for advanced image recognition.
The trouble started when the AI summarized notifications from the BBC News app on iOS devices. One of the summaries claimed that Luigi Mangione had shot himself. But here’s the thing: that wasn’t true. The AI just got it wrong.
Now, let's back up for a second to give some context around this. Brian Thompson, the CEO of UnitedHealthcare, was tragically killed earlier this month. His death generated a lot of attention, especially after his alleged killer, Luigi Mangione, was arrested.
Mangione’s name was heavily reported in the news, but the AI misinterpreted the headlines, creating a serious mix-up in the summary it generated. This wasn’t just a simple typo or a small error; it was a significant mistake in how the news was presented, one that could easily lead to confusion or even panic among users.
Following this blunder, the BBC filed a complaint with Apple. Now, Reporters Without Borders (RSF) has weighed in, urging Apple to disable the Apple Intelligence notification feature altogether. RSF pointed to the incident as evidence of the limitations of AI in handling sensitive information. In a statement, they said:
This accident highlights the inability of AI systems to systematically publish quality information, even when it is based on journalistic sources. The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public.
This isn't the only AI mistake to make the news. Earlier this year, Google's chatbot Gemini sparked controversy when it generated images depicting white historical figures, such as the Founding Fathers, Nazi soldiers, and Vikings, as other races. It also refused to process prompts like "happy white people" and "ideal nuclear family." Google later apologized for "missing the mark" and temporarily blocked Gemini from generating images of people.