Nick Clegg, the former UK deputy prime minister, has downplayed the risk of generative AI, labelling it ‘quite stupid’. Clegg, who is also Meta’s president of global affairs, was speaking to the BBC’s Today Programme when he said that the hype ‘has somewhat run ahead of the technology’.
His comments come as Meta releases its open-source generative AI model, Llama 2. Meta said Llama 2 is pre-trained on 40% more content than Llama 1 and that it can be used for free for research and commercial purposes.
Clegg’s comment about large language models (LLMs) being ‘stupid’ is partially true, especially when it comes to extracting factual information. LLMs currently have a bad habit of hallucinating, meaning that in some cases they confidently state things that are totally false. OpenAI has proposed a method for reducing incidents of hallucination.
Another risk the BBC raised in its coverage is that, because Llama 2 is open source, anyone can edit the code. Malicious actors could then potentially remove the guardrails that prevent the model from saying harmful things.
Clegg dismissed this as hyperbole and insisted that Llama 2 couldn’t even generate images and definitely couldn’t ‘build a bioweapon’.
He went on to explain that LLMs are being open-sourced all the time, and claimed that Llama 2 is safer than any of them. Only time will tell whether these models end up being used maliciously.
One of the benefits of open-sourcing the models, however, is that it democratises access to generative AI. Startups could take the code, build upon it and compete against the big players such as ChatGPT and Google Bard, spurring competition and innovation.
What do you think? Should Llama 2 be open source or is it too dangerous to open it up for anyone to manipulate?
Source: BBC News