Google has removed long-standing commitments not to apply AI technology to weapons and surveillance. The change, implemented yesterday, is a dramatic reversal of the ethical stance on artificial intelligence the company first established in 2018.
The company’s updated AI principles no longer include the section “Applications we will not pursue,” which previously ruled out AI development for weapons, surveillance, and technologies likely to cause harm or violate international law and human rights.
This policy change aligns Google with other major AI companies like OpenAI and Anthropic, which have already established partnerships with defense contractors. OpenAI recently partnered with defense technology company Anduril on Pentagon projects, while Anthropic collaborated with Palantir to give US intelligence agencies access to its Claude chatbot.
Google’s shift in policy raises concerns given the escalating global AI race and rising geopolitical tensions. Demis Hassabis, CEO of Google DeepMind, and James Manyika, Google’s senior vice president for research, labs, technology and society, justified the change in a blog post, emphasizing the importance of democratic countries leading AI development.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” they wrote, advocating for collaboration between companies and governments that share democratic values.
In 2018, Google faced significant internal resistance when thousands of employees protested Project Maven, a Pentagon contract to analyze drone footage with AI. The backlash led Google to publish its original AI principles and to decline renewing the contract. That same year, Google also withdrew from bidding on the Pentagon’s $10 billion JEDI cloud computing contract, citing concerns that the work might not align with its AI principles.
The updated principles still include commitments to human oversight and to testing intended to mitigate harmful outcomes. However, critics like Lilly Irani, a former Google employee and current UC San Diego professor, question the substance of these promises, noting that similar past commitments to international law and human rights have not prevented controversial applications.
Even before this policy change, Google had been expanding its military and defense relationships. Recent reports indicate the company increased the Israeli Defense Ministry’s access to its AI tools following the October 7, 2023, Hamas attack, despite employee protests over potential harm to Palestinians.