On Sunday, CBS News devoted the majority of its latest 60 Minutes episode to Google's efforts in AI technology. That included a chat with the company's CEO, Sundar Pichai, who stated that there was a pressing need for regulation of AI.
However, a new report from Bloomberg claims Google rushed out new AI products like Bard with little effort to put ethical guardrails in place. The company reportedly felt threatened by the sudden rise of ChatGPT, and Google's AI ethics team was allegedly sidelined when the company decided to rush Bard out.
The article states:
The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said. The staffers who are responsible for the safety and ethical implications of new products have been told not to get in the way or to try to kill any of the generative AI tools in development, they said.
The article says that before Bard was launched to the general public, Google employees were asked to test it. Some of their assessments were sharply critical:
One worker’s conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”
In the end, Google launched Bard as an "experiment," telling users that it could make mistakes in its answers. Even so, the report suggests that AI still has a long way to go before it reliably offers truthful answers. A Google spokesperson responded to Bloomberg's request for comment, stating, "We are continuing to invest in the teams that work on applying our AI Principles to our technology."