The Peace Research Institute Oslo (PRIO) explores what kind of human involvement is necessary in AI-enabled weapons systems.
PRIO Senior Researcher Jovana Davidovic argues that most policy proposals for managing AI risks in warfare rely on vague calls for 'meaningful human control' or 'appropriate human judgment'.
Her central claim is that we cannot build effective governance for warfighting AI unless we are explicit about why we want humans involved, what kind of involvement we seek, and what exactly our policies are meant to govern.
Davidovic's article is part of the PRIO project 'Ethical Risk Management for AI-Enabled Weapons: A Systems Approach (ERM)', which examines responsible governance of emerging technologies in the military.
Author's summary: Rethinking human roles in AI warfare for effective governance.