One of the world’s leading tech developers, Google, says it will no longer develop artificial intelligence programmes for the military once the current contract expires. This comes on the heels of internal resignations over military work along with a protest letter from other employees.
But will this really have any effect?
Ramona Pringle is an associate professor in the Faculty of Communication and Design and Director of the Transmedia Zone at Ryerson University in Toronto. I reached her by mobile phone in California.
Weaponised or militarised artificial intelligence is a scary thought, says Professor Pringle: it creates autonomous machines that would make life-and-death determinations.
The tech giant Google was involved in developing AI for the U.S. military, and that bothered a great many employees at the company. Several quit over the issue, and many others signed a petition to management against such work.
In response, Google said it would cease military contracts involving platforms or other technology that could do harm. However, it did not entirely rule out working with the military on other programmes.
Pringle says even if Google gets out, other firms will fill the gap, so it's not going to be a game changer in terms of the ethics of A.I. development or how it's used.
Google: Objectives for AI applications
We will assess AI applications in view of the following objectives. We believe that AI should:
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
AI applications we will not pursue
- Technologies that cause or are likely to cause overall harm.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
Professor Pringle says militarisation and other questionable uses will be difficult to control: even if corporations pledge to stay out of ethically questionable work, they are currently self-governing, with no higher oversight.
Other questions also arise as to who, or what, should determine where the ethical limits lie.
We are at the early stages of a new era, she says, but it's not the robots or other machines we need to be concerned about, but rather the people creating them.
Additional information
- Google AI principles (expanded statement)
- Technology Review: Apr 10/18: US military wants to weaponise A.I.
- MIT Tech Review: W Knight: Jun 7/18: Google AI- no harm
- Vanguard Canada: V Findlay: Jun 8/18: AI, safety, ethics
- Futurist Speaker: Aug 8/17: weaponized AI- 36 examples
- Forbes: B Marr: Apr 23/18: weaponising AI- terrorism