Update - We received this statement from a Google spokesperson:
The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.
Original story - Earlier this month, Google held its annual I/O developers conference. As expected, AI was at the top of the company's agenda during the event as it revealed new and upcoming generative AI features for a number of its products and services.
One of the features announced at I/O is AI Overviews for Search, which began rolling out to Search users in the US soon after the event. The idea is that when you type in a search query, you get an AI-generated answer to your question based on information from a number of sources on the web, with credit given to those sources.
However, as CNBC reports, since AI Overviews became part of Google Search in the US, many people have posted obvious factual errors that the feature has generated.
One person, "@heavenrend" on X, showed that when they typed "cheese not sticking to pizza" into Google Search, the AI Overview summary suggested adding "about 1/8 cup of nontoxic glue to the sauce."
— SG-r01 (@heavenrend) May 22, 2024
Another X user, "@napalmtrees," asked if it is normal to leave a dog in a hot car. The AI Overview summary from Google Search said simply, "Yes, it’s always safe to leave a dog in a hot car." The apparent source of this flat-out wrong answer was the lyrics to a fictional Beatles song.
— napalm (@napalmtrees) May 23, 2024
The criticism of AI Overviews echoes the reception of Google's first AI chatbot, Bard, which launched in March 2023 and was faulted for shipping without enough safety and ethical guardrails. In February 2024, Google disabled the generation of images of people in Bard's successor, Gemini, after the feature produced images that sometimes depicted people with darker skin in inappropriate contexts. At the time, Google said it would improve the feature before restoring it to Gemini, but so far, that has yet to happen.