Google Signs Classified AI Deal With Pentagon, Expanding Gemini’s Military Role
Google has signed a new agreement with the U.S. Department of Defense allowing its Gemini AI models to be deployed on classified government systems, marking a significant expansion of the company’s role in U.S. national security infrastructure.
The deal, signed on April 27, 2026, gives the Pentagon access to Gemini for “any lawful government purpose,” including applications such as mission planning, intelligence analysis, and operational support. It builds on an earlier contract worth about $200 million awarded in 2025 and places Google alongside other frontier AI providers now working within classified defense environments.
Under the agreement, Google may be required to adjust its AI safety settings and filters to meet government requirements. While the contract includes language stating the technology should not be used for domestic mass surveillance or fully autonomous weapons without human oversight, these provisions are described as non-binding commitments rather than enforceable restrictions.
The move reflects a broader Pentagon strategy to integrate advanced AI tools across defense operations while diversifying suppliers. Alongside Google, companies such as OpenAI and xAI have secured similar access, following the exclusion of Anthropic earlier in 2026 over disagreements about usage limits and risk controls.
Inside Google, the deal has triggered internal debate. More than 600 employees—including staff from DeepMind and Google Cloud—signed an open letter to CEO Sundar Pichai urging the company to reject classified AI work. The letter argued that restricted environments limit oversight and raise the risk of harmful or unintended applications.
The development marks a notable shift from Google’s earlier stance on military AI. In 2018, the company withdrew from the Pentagon’s Project Maven after widespread employee protests and later adopted AI principles that limited certain defense uses. Those guidelines have since been revised to allow broader participation in national security projects.
For the Pentagon, the agreement supports a rapid push to embed AI across military systems amid intensifying global competition. Officials have emphasized that the goal is to enhance decision-making and operational efficiency, while maintaining human oversight over critical functions.
The deal also highlights a growing tension between private-sector AI ethics frameworks and government authority. Once deployed in classified environments, companies typically lose visibility into how their systems are used, raising questions about accountability and enforcement of safeguards.
As governments and tech firms deepen cooperation on AI, the Google-Pentagon agreement underscores a broader trend: frontier AI is increasingly becoming part of national defense infrastructure, even as debates over its risks and boundaries continue.