
The amorality of AI vs human dignity

As a teenager in the 1980s I grew up on a diet of Terminator, RoboCop and, of course, Michael Knight and his trusty car KITT.

The dystopian future of what we now call AI was already a concern. I loved Isaac Asimov’s Foundation Trilogy – and let’s not forget he also penned I, Robot. And, whilst reminiscing, who can forget the finale of 2001: A Space Odyssey and the calm voice of HAL – “I’m sorry, Dave. I’m afraid I can’t do that”?

Here in 2020 we have the communications and technology to build highly efficient killing machines that have no morals, can be fully autonomous and require no human interaction. As a chap with a decent level of imagination, I find this very disturbing. Imagine an autonomous drone with a machine gun that simply kills anything with a human biomarker. No decision, no consideration – human biomarker = exterminate.

An Associate Professor of International Relations at the University of Southern Denmark has written this excellent piece, which brings the issue into focus.

She writes: “Unlike machines, human decisions to use force cannot be pre-programmed. Indeed, the brunt of international humanitarian law obligations applies to actual, specific battlefield decisions to use force, rather than to earlier stages of a weapons system’s lifecycle. This was highlighted by a member of the Brazilian delegation at the recent UN meetings.

Adhering to international humanitarian law in the fast-changing context of warfare also requires constant human assessment. This cannot simply be done with an algorithm. This is especially the case in urban warfare, where civilians and combatants are in the same space.

Ultimately, to have machines that are able to make the decision to end people’s lives violates human dignity by reducing people to objects. As Peter Asaro, a philosopher of science and technology, argues: “Distinguishing a ‘target’ in a field of data is not recognising a human person as someone with rights.” Indeed, a machine cannot be programmed to appreciate the value of human life.

Many states have argued for new legal rules to ensure human control over autonomous weapons systems. But a few others, including the US, hold that existing international law is sufficient. Yet the uncertainty surrounding what meaningful human control actually is shows that more clarity, in the form of new international law, is needed.

This must focus on the essential qualities that make human control meaningful, while retaining human judgement in the context of specific use-of-force decisions. Without it, there’s a risk of undercutting the value of new international law aimed at curbing weaponised AI.

This is important because without specific regulations, current practices in military decision-making will continue to shape what’s considered “appropriate” – without being critically discussed.”

Over the years, we at Polestar have worked with many defence and manufacturing companies. I recall one occasion when a client of ours was hauled over the coals after an over-enthusiastic salesperson explained to an undercover reporter how a product could, in effect, be used as an anti-personnel mine.

Billions of dollars are being invested in AI solutions in this industry segment. Defence companies, be they manufacturers, software designers or consultants, will, more than ever, need to ensure they do not open the Pandora’s box of fully automated, data-driven killers.

Outsourcing use-of-force decisions to machines violates human dignity. It is also incompatible with international law, which requires human judgement in context.

By Charles Whelan on 19/10/2020