Google’s Code Of Ethics For AI Bans Its Use In Weaponry

Google's decision to restrict military work has inspired criticism from members of Congress.

In a separate blog post from Google Cloud CEO Diane Greene, the company confirmed reports that it will terminate its involvement with Project Maven when the contract expires in 2019.

Google has just released a set of principles and practices to guide its work and development in artificial intelligence.

After pressure from its employees, Google officially announced its AI technology will not be used in weapons.

Google expects to have talks with the Pentagon over how it can fulfil its contract obligations without violating the principles outlined Thursday.

It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the US military to use its AI technology to analyse drone footage.

On Saturday, Google announced it won't renew a contract to provide AI technology to the U.S. Department of Defense to hasten analysis of drone footage by automatically interpreting video images.

The following month, over 4,000 employees signed a petition demanding that Google's management cease work on Project Maven and promise to never again "build warfare technology". Should this prove to be more than window dressing, it is a refreshing change to see a company taking corporate responsibility for the repercussions of its work. Last Friday, Diane Greene, Google Cloud chief, reportedly told employees that Google had chosen to stop working with the military on AI.

He said the principles also called for AI applications to be "built and tested for safety", to be "accountable to people" and to "incorporate privacy design principles".

Pichai also said that Google will avoid developing any surveillance technology that would violate internationally accepted norms of human rights or break international law.

Pichai's pledge regarding weapons was "really strong", Peter Asaro, associate professor of media studies at the New School in NY, told Business Insider.

Google's pledge to stop doing military work involving its AI technology does not cover its current work helping the Pentagon with drone surveillance.

These no-nos include AI specifically for weaponry, as well as surveillance tools that would violate "internationally accepted norms".

"Who and what dictates the norm?"

The advances made by robotics companies like Boston Dynamics have many people nervous about the potential for AI to be weaponised.

Kirk Hanson, the executive director of the Markkula Center for Applied Ethics at Santa Clara University, which counts Google as a financial supporter, said Google's contract highlights a larger debate about AI and military applications.
