Microsoft-backed OpenAI has announced a new cybersecurity grant program aimed at funding AI-powered defensive security work. The maker of ChatGPT said it is also developing methods to quantify the cybersecurity capabilities of AI models in order to better understand and improve their effectiveness.
OpenAI has started accepting applications for the funding program on a rolling basis. The $1 million fund will be distributed in $10,000 increments, in forms including API credits and direct funding. The research lab said it will give strong preference to practical applications of AI in defensive cybersecurity, such as tools, methods, and processes.
"Our goal is to work with defenders across the globe to change the power dynamics of cybersecurity through the application of AI and the coordination of like-minded individuals working for our collective safety," OpenAI said in a blog post.
It suggested a wide range of project ideas, such as mitigating social engineering tactics, assisting with network or device forensics, automatically patching vulnerabilities, creating honeypots and deception technology to misdirect or trap attackers, helping end users adopt security best practices, and helping developers port code to memory-safe languages.
For now, OpenAI won't consider offensive-security projects. It will prioritize applications that include a clear plan for licensing and distributing their work for "maximal public benefit and sharing."
OpenAI's cybersecurity grant comes just days after it announced ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow while staying within the bounds defined by the law.
Separately, OpenAI recently published a research paper discussing how it can address the common problem of hallucinations, in which AI models fabricate information. In one widely reported incident last month, a lawyer used ChatGPT for legal research, and the chatbot invented citations to cases that never existed.