Austria Urges Global Action on AI Weapons

Austria on Monday called for fresh efforts to regulate the use of artificial intelligence in weapons systems, which could lead to the development of autonomous “killer robots.”

The call came at a conference hosted by Austria that aimed to revive discussions on the issue, which have largely stalled.

With AI technology advancing rapidly, weapons systems that could kill without human intervention are coming ever closer, posing ethical and legal challenges that most countries say need addressing soon.

“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the meeting of non-governmental and international organisations as well as envoys from 143 countries.

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” he said in an opening speech to the conference entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.”

Years of discussion at the United Nations have produced few tangible results and many participants at the two-day conference in Vienna said the window for action was closing rapidly.

“It is so important to act and to act very fast,” the president of the International Committee of the Red Cross, Mirjana Spoljaric, told a panel discussion at the conference.

“What we see today in the different contexts of violence are moral failures in the face of the international community. And we do not want to see such failures accelerating by giving the responsibility for violence, for the control over violence, over to machines and algorithms,” she added.

AI is already being used on the battlefield. Drones in Ukraine are designed to find their way to their target when signal-jamming technology cuts them off from their operator, diplomats say.

The United States said this month it was looking into a media report that the Israeli military has been using AI to help identify bombing targets in Gaza.

“We have already seen AI making selection errors in ways both large and small, from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize jaywalking,” Jaan Tallinn, a software programmer and tech investor, said in a keynote speech.

Reuters