Sam Altman, the CEO of OpenAI, has confirmed to CNBC that the company no longer uses API customer data to train its large language models. OpenAI updated its Terms of Service to reflect this at the start of March but didn't make a song and dance about it. If you use ChatGPT directly, however, your data will still be used for training unless you opt out.
In an interview, Sam Altman told CNBC that customers "clearly want us not to train on their data, so we've changed our plans: We will not do that." Unfortunately for those using ChatGPT directly, this is not the case by default. Data collection has become such an issue that Samsung has banned employees from using chatbots like ChatGPT following security leaks.
Generative AI is an entirely new category of software, and companies like OpenAI, as well as wider society, are still getting to grips with best practices. Earlier today, Neowin reported that the Competition and Markets Authority will begin investigating how these generative AI products could affect competition and consumers.
Another way in which these bots have had to be improved after launch is their guardrails. Since release, they have been tweaked to ensure they don't say offensive things: when users try to get the bots to produce something offensive, the bots respond with pre-written messages letting the user know they can't help with that request.
Source: CNBC