The UK government is pushing firms like OpenAI, Anthropic, and Google to explain the internal workings of their large language models (LLMs). While the code for some models is publicly available, others, such as GPT-3.5 and GPT-4, remain closed, and OpenAI has been very reluctant to share many of the details.
The UK is preparing to host a new global AI summit that will bring together governments, companies, and researchers to examine the risks posed by AI and discuss ways to mitigate them.
One of the reasons companies are reluctant to share the internals of their LLMs is that doing so could reveal proprietary information about their products. If malicious actors learn more about the internals, it could also leave AI models more vulnerable to cyber attacks.
According to the FT, one of the things the government would like to inspect is model weights, which define the strength of the connections between neurons across the different layers of a model. AI companies are not currently required to share these details, but there are some calls for greater transparency on the matter.
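To give a rough sense of what that means, here is a minimal, hypothetical sketch in plain NumPy, not drawn from any company's actual model: the weights are simply learned numbers that scale the connections between neurons in one layer and the next, and a production LLM contains billions of them.

```python
import numpy as np

# Toy illustration (not any real LLM): weights[i][j] is the strength of
# the connection between input neuron j and output neuron i.
rng = np.random.default_rng(0)

# Hypothetical layer: 4 input neurons feeding 3 output neurons.
weights = rng.normal(size=(3, 4))  # in a real model, learned during training
biases = np.zeros(3)

def layer(x):
    # Each output neuron sums its weighted inputs, then applies a
    # non-linearity (ReLU here).
    return np.maximum(0, weights @ x + biases)

x = np.array([0.5, -1.0, 0.25, 2.0])  # example input activations
print(layer(x))
```

Inspecting a model's weights means inspecting matrices like this one, just at a vastly larger scale, which is why the request touches directly on what companies consider proprietary.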
The UK will hold its first summit at Bletchley Park in November. Bletchley Park holds a significant place in computing history as the site where encrypted Nazi messages were decrypted during the Second World War. Alan Turing, after whom the AI-related Turing Test is named, also worked there breaking codes.
The Financial Times pointed out that Google’s DeepMind, OpenAI, and Anthropic all agreed in June to open up their models to the UK government for research and safety purposes. However, the extent and technical details of that access were never settled at the time. Now, the government is asking for quite a deep level of access.
Ultimately, for the summit to be a success, attendees will need to learn enough about how the models work for the dangers to be properly assessed. Whether they will get enough access to do so is another question.
Source: Financial Times