Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.
In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled.
Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”
US President Donald Trump’s return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer.
Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”
After the initial release of its AI principles roughly seven years ago, Google created two teams tasked with reviewing whether projects across the company were living up to the commitments. One focused on Google’s core operations, such as search, ads, Assistant, and Maps. Another focused on Google Cloud offerings and deals with customers. The unit focused on Google’s consumer business was split up early last year as the company raced to develop chatbots and other generative AI tools to compete with OpenAI.
Timnit Gebru, a former co-lead of Google’s ethical AI research team who was later fired from that position, says the company’s commitment to the principles had always been in question. “I would say that it’s better to not pretend that you have any of these principles than write them out and do the opposite,” she says.
Three former Google employees who had been involved in reviewing projects to ensure they aligned with the company’s principles say the work was challenging at times because of the varying interpretations of the principles and pressure from higher-ups to prioritize business imperatives.
Google still has language about preventing harm in its official Cloud Platform Acceptable Use Policy, which includes various AI-driven products. The policy forbids violating “the legal rights of others” and engaging in or promoting illegal activity, such as “terrorism or violence that can cause death, serious harm, or injury to individuals or groups of individuals.”
However, when pressed about how this policy squares with Project Nimbus, a cloud computing contract with the Israeli government that has benefited the country’s military, Google has said that the agreement “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”
“The Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy,” Google spokesperson Anna Kowalczyk told WIRED in July.
Google Cloud’s Terms of Service similarly forbid any applications that violate the law or “lead to death or serious physical harm to an individual.” Rules for some of Google’s consumer-focused AI services also ban illegal uses and some potentially harmful or offensive uses.