Google vows not to use AI for weapons, sets up ethical rules
Google is banning the development of artificial-intelligence software that can be used in weapons, CEO Sundar Pichai said Thursday, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI.
The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote in a blog post. “As a leader in AI, we feel a special responsibility to get this right.”
The ethical principles are a response to a firestorm of employee resignations and public criticism over a Google contract with the Defense Department for software that could help analyze drone video, which critics argued had nudged the company one step closer to the “business of war.”
Google executives said last week that they would not renew the deal for the military’s AI endeavor, known as Project Maven, when it expires next year.
Google, Pichai said, will not pursue the development of AI when it could be used to break international law, cause overall harm or surveil people in violation of “internationally accepted norms of human rights.”
The company will, however, continue to work with governments and the military in areas including cybersecurity, training, veterans' health care, search and rescue, and military recruitment, he said.