Tech Nonprofits Warn Against Weaponizing AI Procurement

A coalition of tech nonprofits urges the U.S. government not to use procurement rules to pressure companies on AI safety policies.

A coalition of technology and civil society nonprofits has sent a letter to the U.S. Office of Management and Budget (OMB), urging the federal government not to use its procurement power to pressure companies into weakening their artificial intelligence (AI) trust and safety policies. The letter, dated March 31, 2026, was signed by organizations including the Center for Democracy & Technology, the Electronic Frontier Foundation, and the Mozilla Foundation.

The groups are responding to a proposed update to federal procurement rules, which they argue could be interpreted to penalize companies for having policies that restrict how their AI tools can be used. Specifically, they warn against rules that would treat a company's refusal to allow its technology to be used for controversial purposes—such as mass surveillance or lethal autonomous weapons—as an unfair competitive practice.

The letter states that such an approach would undermine corporate responsibility and ethical commitments, forcing companies to choose between federal contracts and their own safety principles. The debate comes amid a separate, public dispute between the Department of Defense and AI company Anthropic over similar contractual terms governing AI use in military applications.

The nonprofit coalition is calling for the OMB to issue clear guidance that protects a company's right to establish and enforce responsible use policies for its AI systems, ensuring that procurement fosters innovation without compromising ethical standards.

Source: eff.org