Google has expanded its contract with the U.S. Department of Defense (DoD) to include its Gemini AI system, according to a recent announcement. The new agreement allows the Pentagon to use Gemini for "any lawful purpose," a phrase that has sparked debate over the company's past commitment to ethical AI principles.
This development follows Google's 2018 decision not to renew its contract for Project Maven, a Pentagon program that used AI to analyze military drone footage, after employee protests. At that time, Google adopted AI principles that prohibited using its technology for weapons or surveillance. The company has since revised those principles, removing the explicit bans on military applications.
Google stated it is "proud" to support the DoD, emphasizing that the contract complies with all applicable laws and regulations. Critics argue the move contradicts the company's former motto, "Don't be evil," which was largely removed from its code of conduct in 2018.
The financial terms of the contract expansion have not been disclosed, and Google has not specified which projects Gemini will support. The broad "any lawful purpose" language has nonetheless raised concerns among AI ethics researchers about potential use in autonomous weapons or surveillance systems.