A.I., artificial intelligence, can be a real benefit, but like many past inventions never intended for military use, AI too can become "weaponised". (image via CBC)

Weaponised A.I.: Google says no, but is that enough?

One of the world’s leading tech developers, Google, says it will no longer develop artificial intelligence programmes for the military once its current contract expires. The announcement comes on the heels of internal resignations over the military work, along with a protest letter signed by other employees.

But is this really going to have any effect?

Ramona Pringle is an associate professor in the Faculty of Communication and Design and Director of the Transmedia Zone at Ryerson University in Toronto. I reached her by mobile phone in California.

Weaponised or militarised artificial intelligence is a scary thought, says Professor Pringle: it creates autonomous machines that would make life-and-death determinations.

Ramona Pringle, associate professor, Ryerson University (CBC)

The tech giant Google was involved in developing AI for the U.S. military, and that bothered a great many employees at the company. Several quit over the issue, and thousands of others signed a petition to management against such work.

Robots with AI are becoming so sophisticated that they can sense and negotiate uneven terrain and obstacles. It wouldn’t take much to arm them with weaponry. (Boston Dynamics-YouTube)

In response, Google said it would cease military contracts involving platforms or other technology that could do harm. However, it did not entirely rule out working with the military on other programmes.
Pringle says even if Google gets out, other firms will fill the gap, so it’s not going to be a game changer in terms of the ethics of A.I. development or how it’s used.

Google: Objectives for AI applications
We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.

AI applications we will not pursue
- Technologies that cause or are likely to cause overall harm.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Professor Pringle says militarisation, or other questionable uses, will be difficult to control: even if corporations say they won’t get involved in certain ethically fraught work, they are currently self-governing, with no higher oversight.
Questions also arise as to who, or what, determines where the ethical limits should lie.
We are at the early stages of a new era, she says, but it’s not the robots or other machines we need to be concerned about; rather, it is the people creating them.
