Trump Administration Bans Federal Use of Anthropic as Pentagon Announces OpenAI Partnership
The executive branch cites national security risks in ordering agencies off Anthropic products, while the Department of Defense taps OpenAI to anchor its next-generation AI architecture.

The intersection of artificial intelligence and national security has reached a critical inflection point. In a move that signals a seismic shift in how the United States government intends to integrate frontier technologies into its defense apparatus, the Trump administration has formally banned federal use of Anthropic products. Citing deep-seated national security risks tied to the firm's model development and corporate provenance, the executive branch has begun a rapid transition of critical workloads away from the provider. Simultaneously, the Pentagon has formally announced a new, extensive partnership with OpenAI to anchor the Department of Defense's next-generation AI architecture.
At Creati.ai, we have monitored the evolving relationship between private sector AI developers and state entities for years. However, the events of February 2026 represent more than just a vendor switch; they reflect a hardening of policy regarding "Constitutional AI," data sovereignty, and the permissible bounds of algorithmic transparency. This move essentially resets the competitive landscape for federal contracts in Washington, elevating the stakes for AI developers aiming to secure future government work.
The executive order regarding Anthropic follows a period of mounting scrutiny over the company's reliance on specific, yet-to-be-disclosed cross-border dependencies and over the rigidity of its "Constitutional AI" guardrails. Government oversight bodies expressed concern that interpreting and enforcing these guardrails in a high-stakes combat or tactical intelligence scenario could introduce latency or ambiguity, factors that are unacceptable for mission-critical operations.
For the Department of Defense, the primary issue lies in the lack of alignment between external AI development philosophies and the explicit strategic objectives of national security. As administration officials noted, the inability to ensure that model training sets are not only audited for safety but also structurally optimized for federal-grade data sensitivity has created an intolerable attack surface. Consequently, all federal agencies have been ordered to begin an immediate divestiture from any platform reliant on Anthropic-hosted infrastructure or model weights, a process scheduled for completion by the third quarter of 2026.
In the wake of this ban, the Pentagon has pivoted toward a collaboration with OpenAI, marking a victory for the company’s ongoing efforts to demonstrate government-ready resilience and operational security. This deal signifies that OpenAI’s architectural philosophy—emphasizing robust red-teaming, scalable multimodal reasoning, and data isolation—has satisfied the stringent, revised requirements established by the Department of Defense's procurement officers.
The implications for OpenAI are transformative. Beyond the revenue inherent in federal contracts, this partnership positions the company as the foundational engine for the next generation of American tactical AI systems. Analysts expect the deal to include specific provisions for air-gapped model deployment, allowing Pentagon personnel to leverage sophisticated generative intelligence without exposing sensitive tactical data to public-cloud vulnerabilities.
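To make the data-isolation point concrete, the sketch below shows how an application running entirely inside an isolated enclave might query a locally hosted, OpenAI-compatible inference endpoint instead of a public cloud API. The endpoint address, credential, and model name are hypothetical placeholders, not details of any disclosed deployment.

```python
# Minimal sketch of an air-gapped inference call, assuming an OpenAI-compatible
# server is hosted inside the isolated network. No traffic leaves the enclave;
# only the base_url distinguishes this from a public-cloud integration.
# The endpoint, credential, and model name below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.enclave.local/v1",  # hypothetical in-enclave endpoint
    api_key="locally-issued-credential",            # hypothetical locally managed key
)

response = client.chat.completions.create(
    model="local-deployment-model",  # placeholder name for the on-premises model
    messages=[
        {"role": "user", "content": "Summarize the day's logistics report."}
    ],
)

print(response.choices[0].message.content)
```

The design point is that isolation is enforced at the network boundary rather than in application code: the calling pattern is identical to a standard cloud integration, which is what makes air-gapped hosting attractive for workloads that cannot tolerate public-cloud exposure.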
To understand how this market dynamic is shifting, we have evaluated the diverging paths of the two primary players. The following table provides a breakdown of their current standing regarding federal defense integration:
Table: Comparative Analysis of Defense AI Posture (February 2026)
| Feature | OpenAI | Anthropic |
|---|---|---|
| Federal Contracting Status | Active Strategic Partner | Banned Pending Security Review |
| Primary Integration Focus | Defense Tactical Reasoning | Constitutional AI Safety |
| Cloud Dependency | Managed Federal Infrastructure | Designated National Security Risk |
| Key Compliance Advantage | Model Transparency & Audits | Systemic Constraints (Deprecated) |
This development sends a clear, chilling message to the broader Silicon Valley landscape: the era of laissez-faire deployment for large-scale language models is ending within the government sector. Moving forward, "responsible AI" will no longer be interpreted simply as safety against bias or hallucination, but as "operational readiness." Companies seeking to work with the U.S. government must now guarantee that their models adhere to strict sovereignty standards and provide government agencies with unprecedented access to the "weights" and inner mechanics of their underlying algorithms.
Industry observers should anticipate an uptick in procurement rigor. Moving beyond standard cloud service provider certifications, future AI tenders will likely require developers to host their models within authorized, sovereign environments. This creates a formidable barrier to entry, effectively favoring larger incumbents that possess the operational bandwidth to manage federal compliance audits alongside consumer-grade product launches.
As the federal government tightens its grip on its technology supply chain, the implications for the wider private sector are profound. Organizations will need to choose their allegiances carefully, or face being excluded from a vital economic driver: the military and federal government market.
At Creati.ai, we foresee a fracturing of the "General-Purpose AI" ecosystem. On one side, developers will lean into the requirements set by the defense establishment—prioritizing reliability, interpretability, and localized hosting. On the other, companies that retain their autonomy—or their specific ethical or technical methodologies—will likely face reduced, or potentially eliminated, access to government resources.
The deal between OpenAI and the Pentagon effectively establishes a benchmark for success in 2026. Developers, enterprises, and regulators will undoubtedly look to this partnership to gauge where the lines are drawn. Will the ban on Anthropic be a temporary obstacle or a structural turning point for its business model? How will OpenAI manage the pressure of becoming the backbone of government-grade artificial intelligence? These remain the open questions that will define the rest of this year in the rapidly accelerating field of AI development. We continue to monitor the technical specifications of these deployments as they materialize into active systems.