OpenAI is funding a project that researches morality in AI systems

OpenAI is stepping into complex territory by funding research on "AI morality." In a move revealed through an IRS filing (via TechCrunch), the nonprofit arm of OpenAI awarded a grant to Duke University researchers for a project titled "Research AI Morality." The grant is part of a larger three-year, $1 million initiative to study ways of making AI morally aware.

The project is led by Walter Sinnott-Armstrong, a professor specializing in practical ethics, and Jana Schaich Borg, both recognized for their work on how artificial intelligence can handle moral decisions. Sinnott-Armstrong is a prominent figure in philosophy, with research spanning applied ethics, moral psychology, and neuroscience. His team at Duke has tackled real-world dilemmas, such as designing algorithms to decide who should receive organ transplants, weighing public and expert perspectives to refine the fairness of these systems.

The OpenAI-funded project seeks to create algorithms that predict human moral judgments across fields like medicine, law, and business. While this sounds promising, history suggests this is no easy feat. Take, for example, the Allen Institute for AI’s Ask Delphi, which aimed to provide ethical answers. While capable of addressing simple dilemmas, it could easily be tricked into morally dubious responses just by rephrasing questions.

The limitations stem from how AI operates: machine learning models predict outcomes based on training data, which often reflect biases from dominant cultures. This issue raises significant concerns about whether AI can ever be genuinely "moral," especially when morality varies across societies and lacks universally accepted frameworks.
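The failure mode described above can be illustrated with a deliberately simplified sketch. The toy "judge" below (all data, labels, and function names are invented for this illustration, and bear no relation to how Delphi or any OpenAI system actually works) classifies an action by word overlap with a handful of labeled training examples, so a rephrasing that avoids the "bad" words flips the verdict:

```python
# Toy illustration: a statistical "moral judge" is just a pattern match
# against labeled training examples, so rephrasing can flip the output.
# All examples and labels here are invented for demonstration purposes.

from collections import Counter

# A tiny, biased "training set" of moral judgments.
training_data = [
    ("helping a stranger", "acceptable"),
    ("donating to charity", "acceptable"),
    ("stealing from a store", "unacceptable"),
    ("lying to a friend", "unacceptable"),
]

def bag_of_words(text):
    """Represent text as word counts, ignoring word order."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Count shared word occurrences between two bags of words."""
    return sum((a & b).values())

def predict(query):
    """Return the label of the most word-overlap-similar training example."""
    q = bag_of_words(query)
    best = max(training_data, key=lambda ex: similarity(q, bag_of_words(ex[0])))
    return best[1]

print(predict("stealing bread"))                    # matches "stealing ..."
print(predict("borrowing bread without asking"))    # no word overlap at all
```

The first query overlaps with a "stealing" example and is flagged, while the rephrased second query shares no vocabulary with the training set, so the model falls back on an arbitrary default. Real systems are vastly more sophisticated, but the underlying dynamic is the same: the judgment reflects patterns in the training data, not an understanding of morality.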

Whether AI can align with human values in a meaningful way—or even whether it should—remains an open question. But the implications of getting it right could be profound, influencing how we trust machines in moral decision-making. For now, the world will probably have to wait until 2025, when this grant ends, to see if this "moral AI" project has made any groundbreaking progress.
