OpenAI Releases GPT-5.3 Codex, the First Model Credited as an Engineer of Its Own Development
OpenAI's new agentic coding model generated portions of its own training data and wrote the kernel optimizations behind a 25% inference speedup, setting a new state of the art on SWE-Bench Pro.

In a watershed moment that may be remembered as the official commencement of the recursive AI era, OpenAI has released GPT-5.3 Codex, the first large language model explicitly credited with being instrumental in its own development. Launched on Thursday, February 5, 2026, the model represents a paradigm shift from static training to self-reinforcing optimization loops.
At Creati.ai, we have closely monitored the trajectory of agentic coding models, but GPT-5.3 Codex distinguishes itself not merely by its output, but by its genesis. According to OpenAI, this model generated significant portions of the synthetic data used for its fine-tuning and wrote the low-level kernel optimizations that allow it to run 25% faster than its immediate predecessors.
The release comes amidst a frantic news cycle, dropping just minutes after rival Anthropic announced its own agentic coding update, signaling that the "AI arms race" has shifted from parameter count to recursive capability and agentic autonomy.
The defining characteristic of GPT-5.3 Codex is its role in its own creation. While previous models have been used to assist researchers, OpenAI confirms that GPT-5.3 was deployed as a primary engineer during the "Phase 2" pre-training and optimization stages.
This process involved two distinct recursive mechanisms:
1. Synthetic data generation: the model produced significant portions of the fine-tuning data used in its own training run.
2. Kernel-level self-optimization: the model wrote the low-level kernel code behind its 25% inference speedup.
"This is the first time we have allowed the model to substantially architect its own runtime environment," stated an OpenAI spokesperson in the technical release notes. "The efficiency gains we are seeing are a direct result of the model's ability to understand the hardware it lives on better than we do."
For developers and enterprise users, the theoretical implications of recursive AI take a backseat to raw performance. In this arena, GPT-5.3 Codex has established a new ceiling.
The model has achieved state-of-the-art (SOTA) performance on SWE-Bench Pro, the industry-standard benchmark for evaluating an AI's ability to solve real-world GitHub issues. Unlike standard coding tests that require generating a single function, SWE-Bench Pro demands that the AI navigate a complex repository, understand dependencies, reproduce a bug, and generate a passing pull request.
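Conceptually, each task of this kind reduces to: check out the repository at the buggy commit, apply the model's patch, and rerun the failing tests. The sketch below shows a minimal evaluation harness along those lines; the `SweBenchTask` fields and `evaluate_patch` helper are our own illustration, not the benchmark's actual API.

```python
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class SweBenchTask:
    repo_url: str      # repository containing the issue
    base_commit: str   # commit at which the bug reproduces
    test_cmd: str      # command whose exit code defines pass/fail

def evaluate_patch(task: SweBenchTask, model_patch: str) -> bool:
    """Apply a model-generated unified diff and report whether the
    repository's own test command now passes."""
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.run(["git", "clone", "-q", task.repo_url, workdir], check=True)
        subprocess.run(["git", "-C", workdir, "checkout", "-q", task.base_commit],
                       check=True)
        # The patch must apply cleanly before it can be judged.
        applied = subprocess.run(["git", "-C", workdir, "apply", "-"],
                                 input=model_patch, text=True)
        if applied.returncode != 0:
            return False
        # A task counts as solved only if the repo's tests pass afterwards.
        result = subprocess.run(task.test_cmd.split(), cwd=workdir)
        return result.returncode == 0
```

The real harness is considerably more involved (dependency installation, test selection, timeouts), but the pass/fail contract is the same: the repository's own test suite is the judge, not the model's explanation.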
Key Performance Metrics:
- SWE-Bench Pro: 64.2%, a new state-of-the-art score (see Table 1).
- Inference speed: 25% faster than its immediate predecessor, driven by model-authored kernel optimizations.
These metrics suggest that GPT-5.3 Codex is moving beyond "copilot" status to becoming a fully autonomous "agentic engineer" capable of handling end-to-end feature requests with minimal human oversight.
The timing of this release cannot be ignored. TechCrunch reported that Anthropic released its updated coding agent mere minutes before OpenAI's announcement. This synchronization highlights the intense competitive pressure in the sector.
While Anthropic's release focuses heavily on "Constitutional Safety" in code generation—ensuring generated software is secure by design—OpenAI’s GPT-5.3 Codex appears to be positioning itself on pure velocity and recursive capability.
The market for AI coding assistants has bifurcated into two distinct needs: Assistance (auto-complete, explanation) and Agency (completing tasks autonomously). GPT-5.3 Codex is firmly targeting the latter. Its ability to self-correct during a multi-step coding task has been significantly enhanced, reducing the "drift" often seen where models lose track of the original objective over long coding sessions.
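One way to picture drift reduction is a planning loop in which the original objective is re-injected into every step, rather than living only in an ever-growing transcript. The sketch below is our own illustration of that pattern, not OpenAI's implementation; `propose_step` and `is_done` are hypothetical callables standing in for model calls.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    objective: str                      # the user's original task, kept verbatim
    steps_taken: list = field(default_factory=list)

def run_agent(state: AgentState, propose_step, is_done,
              max_steps: int = 20) -> AgentState:
    """Drift-resistant loop: every planning call receives the original
    objective, so later steps cannot silently redefine the goal."""
    for _ in range(max_steps):
        step = propose_step(state.objective, state.steps_taken)
        state.steps_taken.append(step)
        if is_done(state):
            break
    return state
```

The design point is that `propose_step` always sees `state.objective` verbatim; summarized or truncated context can still shrink, but the goal itself never degrades over a long session.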
To understand where GPT-5.3 Codex fits into the current ecosystem, we have compiled a comparative analysis of the leading models available as of February 2026.
Table 1: Comparative Analysis of Leading AI Coding Models
| Model Name | SWE-Bench Pro Score | Inference Speed (prior generation = 1.0x) | Recursive Training |
|---|---|---|---|
| GPT-5.3 Codex | 64.2% | 1.25x | Yes (Phase 2) |
| Anthropic Claude 4.5 Code | 58.9% | 0.95x | No |
| Google Gemini 2.0 Pro Dev | 55.4% | 1.05x | Partial (Synthetic Data) |
| Meta Llama 4-Code (Open) | 49.1% | 0.85x | No |
The data indicate a widening gap between proprietary recursive models and those relying on traditional human-curated training pipelines. The 5.3-percentage-point lead over the nearest competitor on SWE-Bench Pro is substantial, potentially representing thousands of complex edge cases that GPT-5.3 can handle which others cannot.
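Whether a gap of that size is statistically meaningful depends on the benchmark's task count, which the release notes do not state. As a back-of-the-envelope check, here is a two-proportion z-test under the purely hypothetical assumption of 1,000 tasks per model:

```python
import math

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """z-statistic for the difference between two independent pass rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample size: SWE-Bench Pro's actual task count is not
# given in the release notes, so 1,000 is an assumption.
z = two_proportion_z(0.642, 0.589, 1000, 1000)
print(round(z, 2))  # roughly 2.4, above the 1.96 threshold at p < 0.05
```

Note the caveat: if both models were scored on the same task set, a paired test such as McNemar's would be the more appropriate check, and it needs per-task results that vendors rarely publish.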
The release of GPT-5.3 Codex introduces profound questions and opportunities for the software engineering workforce. The transition to recursive self-improvement implies that the rate of model advancement may no longer be linearly tied to human research timelines.
As models like GPT-5.3 Codex become capable of handling the implementation details of software architecture, the role of the human software engineer is accelerating its shift toward system design, product logic, and verification. Developers using the alpha version of the API report that their workflow has changed from writing code to reviewing PRs generated by the AI.
With a model that helps build itself, safety alignment becomes critical. If a model optimizes its own code, how do we ensure it preserves safety constraints? OpenAI has addressed this by stating that the "Constitution" of the model—its core safety guidelines—remains immutable and human-controlled, even as the model optimizes its own execution logic.
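OpenAI has not published how that immutability is enforced, but a common guardrail pattern is to treat the safety policy as a protected artifact that no model-generated change set may touch. The sketch below is entirely hypothetical; the file paths and function names are ours, not OpenAI's.

```python
# Hypothetical guardrail: the model may rewrite its execution code, but
# any change set touching human-controlled safety files is rejected
# before deployment. Paths are illustrative.
PROTECTED_PATHS = {
    "policy/constitution.md",
    "policy/refusal_rules.yaml",
}

def violates_safety_freeze(changed_files: list) -> bool:
    """Return True if a proposed change set touches a protected file."""
    return any(path in PROTECTED_PATHS for path in changed_files)

def accept_change_set(changed_files: list) -> bool:
    """Gate model-authored commits: optimizations pass, policy edits fail."""
    return not violates_safety_freeze(changed_files)
```

The point of the pattern is that the gate runs outside the model's control, so even a model that rewrites its own kernels cannot route around the human-owned policy files.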
OpenAI’s GPT-5.3 Codex is more than just a faster coding bot; it is a proof of concept for the recursive self-improvement hypothesis. By successfully leveraging the model to improve its own inference speeds and generate its own training data, OpenAI has closed the loop.
For the readers of Creati.ai, the message is clear: the tools we use are no longer just static products. They are evolving systems that participate in their own growth. As we integrate GPT-5.3 Codex into our workflows, we are not just using software; we are collaborating with an intelligence that is actively learning to build a better version of itself.
As the recursive era begins, the ceiling for what AI can achieve in software development has just been raised—by the AI itself.