Biden signed a new national security memorandum directing the Pentagon and intelligence agencies to embrace AI technology. The move aims to maintain American dominance over rivals such as China.
AI Safety
OpenAI has released the system card and preparedness framework scorecard for its newest multimodal model, GPT-4o. The company has implemented safeguards at both the model and system levels.
OpenAI CEO Sam Altman revealed that his company is working with the US AI Safety Institute, which will get early access to its next foundation model for safety testing.
OpenAI has outlined a new way to make large language models safer. Its method, called Rule-Based Rewards (RBR), can help reduce incorrect refusals and speed up model safety training.
A new study has outlined an attack that uses flowchart images to get vision-language models like GPT-4o to produce unsafe outputs. The researchers even managed to automate the flowchart creation.
A study has found that multimodal AI models perform poorly at giving safe responses when users provide combined inputs such as an image and text together. The new SIUO benchmark was created as a result.
Former OpenAI safety lead Jan Leike has taken up a job at rival firm Anthropic. There, he will continue his work on superalignment and try to find a way for humans to control powerful AIs.
Major technology companies came together to promote responsible development of AI and address security concerns. Representatives established frameworks and principles for evaluating models.
Sam Altman and Greg Brockman have issued a response to Jan Leike, who raised questions over OpenAI's commitment to safety. The pair said they do care about safety.
Major AI creators, academics, government officials, and researchers have formed a consortium to address the risks of deploying AI. Tech giants like Apple and Microsoft have also joined the movement.
Researchers discovered a technique that asks ChatGPT to repeat words endlessly, which could cause it to reveal private details. The chatbot now refuses some repetitive requests, even though its terms allow repetition.
Microsoft briefly blocked employees from using OpenAI's ChatGPT chatbot due to a testing bug that limited LLMs more broadly. Microsoft recommends using its own AI chatbot, Bing Chat.
Samsung unveiled its first generative AI model, Gauss. It includes three key models: Gauss Language for writing support, Gauss Code for software development support, and Gauss Image for image generation.
Which? has warned that it's relatively easy to get ChatGPT and Bard to craft text for scam emails and text messages. Its warning comes just days before the UK is set to host a summit on AI safety.
British Prime Minister Rishi Sunak has announced plans to establish the world's first AI Safety Institute. The announcement comes ahead of next week's AI Safety Summit in the UK.
Microsoft's president has called for a regulatory blueprint to control AI, emphasizing transparency, shared standards, and coordination across sectors so that organizations know how AI is being used.
Tech giants such as Microsoft and Google met with the Biden-Harris administration on AI development. They committed to ensuring security, transparency, and risk sharing, and to addressing concerns about abuse.
Microsoft has revealed plans to give its third-party customers commitments on responsible handling of AI products and services, including sharing information on its research, providing resources to govern their use, and more.
The UK government will hold the first global summit on AI safety this autumn. It will invite countries, tech companies, and researchers to discuss safety measures against potential AI risks.