Google Shifts AI Ethics, Opens Door To Military Applications

As Google revises its AI principles, the line between tech innovation and military use becomes increasingly blurred, sparking concerns over the ethical implications of AI in warfare


14 Feb 2025 9:45 AM IST

In a significant shift, Google has removed key statements from its AI principles, opening the way for potential military applications of its technology. The move follows a broader trend of major tech companies, such as Meta, Anthropic and OpenAI, aligning with US national security interests. While these decisions are framed as necessary for global competition and defence, the integration of AI into military contexts raises serious questions about human rights and the growing risks of AI-enabled warfare.

Last week, Google quietly abandoned a long-standing commitment not to use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, which were first published in 2018, the tech giant removed statements promising not to pursue:

- technologies that cause or are likely to cause overall harm
- weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- technologies that gather or use information for surveillance violating internationally accepted norms
- technologies whose purpose contravenes widely accepted principles of international law and human rights.

The update came after United States President Donald Trump revoked former President Joe Biden's executive order aimed at promoting the safe, secure and trustworthy development and use of AI. Google's decision follows a recent trend of big tech entering the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military use of AI?

The growing trend of militarised AI

In September 2024, senior officials from the Biden government met with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centres, while weighing economic, national security and environmental goals. The following month, the Biden government published a memo that in part dealt with "harnessing AI to fulfil national security objectives". Big tech companies quickly heeded the message. In November 2024, tech giant Meta announced it would make its "Llama" AI models available to government agencies and private companies involved in defence and national security.

This was despite Meta's own policy, which prohibits the use of Llama for "military, warfare, nuclear industries or applications". Around the same time, AI company Anthropic also announced it was teaming up with data analytics firm Palantir and Amazon Web Services to provide US intelligence and defence agencies access to its AI models.

The following month, OpenAI announced it had partnered with defence startup Anduril Industries to develop AI for the US Department of Defense. The companies claim they will combine OpenAI's GPT-4o and o1 models with Anduril's systems and software to improve the US military's defences against drone attacks.

Defending national security

The three companies defended the changes to their policies on the basis of US national security interests. Take Google. In a blog post published earlier this month, the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles. In October 2022, the US issued export controls restricting China's access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures on high-tech metals, which are crucial for the AI chip industry.

The tensions from this trade war escalated in recent weeks with the release of highly efficient AI models by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips prior to the US export control measures and allegedly used these to develop its AI models. It has not been made clear how the militarisation of commercial AI would protect US national interests. But there are clear indications that tensions with the US's biggest geopolitical rival, China, are influencing the decisions being made.

A large toll on human life

What is already clear is that the use of AI in military contexts has a demonstrated toll on human life. For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools. These tools require huge volumes of data and greater computing and storage services, which are being provided by Microsoft and Google.

These AI tools are used to identify potential targets, but are often inaccurate. Israeli soldiers have said these inaccuracies have accelerated the death toll in the war, which now stands at more than 61,000, according to authorities in Gaza. Google's removal of the "harm" clause from its AI principles contravenes international human rights law, which identifies "security of person" as a key measure. It is concerning to consider why a commercial tech company would need to remove a clause around harm.

Avoiding the risks of AI-enabled warfare

In its updated principles, Google does say its products will still align with “widely accepted principles of international law and human rights”. Despite this, Human Rights Watch has criticised the removal of the more explicit statements regarding weapons development in the original principles. The organisation also points out that Google has not explained exactly how its products will align with human rights.

This is something Joe Biden's revoked executive order on AI was also concerned with. Biden's initiative wasn't perfect, but it was a step towards establishing guardrails for the responsible development and use of AI technologies. Such guardrails are needed now more than ever, as big tech becomes more enmeshed with military organisations and the risks that come with AI-enabled warfare and breaches of human rights increase.
