OpenAI Releases GPT-5.3-Codex, the First Deployed Model to Assist in Its Own Development
OpenAI says GPT-5.3-Codex helped debug its own training run, marking a tangible step toward recursive self-improvement and sharpening its rivalry with Anthropic in the developer tools market.

In a watershed moment for the artificial intelligence industry, OpenAI has officially released GPT-5.3-Codex, a groundbreaking coding model that distinguishes itself not just by its performance, but by its origin story. According to the announcement made earlier today, this is the first deployed model that significantly assisted in its own development, effectively marking the industry's tangible entry into the era of recursive self-improvement.
The release arrives amid a fever pitch of industry activity. In a dramatic sequence of events, OpenAI's announcement dropped only minutes after rival Anthropic released its own agentic coding model, setting the stage for a high-stakes confrontation in the developer tools market. However, OpenAI's claim that GPT-5.3-Codex successfully "debugged its own training run" has captured the primary focus of the tech community, shifting the conversation from simple code generation to autonomous system stewardship.
The concept of an AI contributing to its own creation has long been a theoretical milestone—often referred to as the "singularity" in science fiction. While GPT-5.3-Codex does not represent a runaway intelligence, it demonstrates a functional, controlled version of this feedback loop. OpenAI’s technical report reveals that the model was integrated into the company’s internal DevOps and research pipelines during its pre-deployment phase.
Unlike its predecessors, which required human engineers to meticulously diagnose evaluation failures or optimize training kernels, GPT-5.3-Codex was granted "agentic" privileges. It successfully identified inefficiencies in its training data ingestion, wrote patches to fix them, and diagnosed specific anomalies in its evaluation metrics.
This capability represents a shift from Passive Tooling to Active Collaboration. The model did not merely suggest code snippets for human review; it managed the deployment of its own sub-modules, reducing the operational overhead for OpenAI’s human researchers. This internal "dogfooding"—where the AI builds the AI—has resulted in a system that is intimately tuned to the nuances of complex software architecture.
Beyond its recursive development capabilities, GPT-5.3-Codex boasts significant performance upgrades. The most immediate benefit for developers is a 25% increase in execution speed compared to the previous flagship model.
This speed enhancement is reportedly a direct result of the model's self-optimization. During its development, the system analyzed its own inference pathways and suggested optimizations for the underlying CUDA kernels used in its operation.
Key Performance Improvements:
- A 25% increase in execution speed over the previous flagship model
- Self-suggested optimizations to the underlying CUDA inference kernels
- Lower latency and API costs for user-facing applications
The implications for enterprise clients are profound. Faster inference translates directly to lower API costs and reduced latency in user-facing applications, making GPT-5.3-Codex a formidable engine for real-time software development environments.
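The arithmetic behind that claim is worth making explicit. Assuming the quoted 25% figure refers to throughput (tokens per second), per-token latency falls by 1 − 1/1.25, i.e. 20%. A quick back-of-envelope check (the baseline number is hypothetical):

```python
# Back-of-envelope check: a 25% throughput gain implies a 20% latency cut.
# Assumes the quoted figure refers to tokens-per-second throughput; the
# baseline value below is hypothetical, chosen only for illustration.

baseline_tps = 100.0            # hypothetical baseline throughput (tokens/sec)
new_tps = baseline_tps * 1.25   # "25% increase in execution speed"

baseline_latency = 1.0 / baseline_tps   # seconds per token
new_latency = 1.0 / new_tps

reduction = 1.0 - new_latency / baseline_latency
print(f"Per-token latency drops by {reduction:.0%}")  # → drops by 20%
```

This is why a speed headline translates directly into the cost and latency gains enterprises care about: fewer GPU-seconds per token served.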
The release of GPT-5.3-Codex underscores the industry's pivot toward "Agentic AI." While previous models like GPT-4 served as sophisticated autocomplete engines, agentic models are designed to pursue goals. They plan, execute, observe results, and iterate.
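The plan-execute-observe-iterate loop that defines agentic models can be sketched in a few lines. Every name below (`plan`, `execute`, `Observation`, `run_agent`) is illustrative, not part of any real OpenAI or Anthropic API:

```python
# Minimal sketch of an agentic loop: plan, execute, observe, iterate.
# All names here are hypothetical; no real vendor API is implied.
from dataclasses import dataclass

@dataclass
class Observation:
    success: bool
    detail: str

def plan(goal: str, history: list) -> str:
    # A real agent would call a model here; we stub out a numbered step.
    return f"step-{len(history) + 1} towards: {goal}"

def execute(action: str) -> Observation:
    # Stand-in for running code or tests; succeeds on the third attempt.
    return Observation(success=action.startswith("step-3"), detail=action)

def run_agent(goal: str, max_iters: int = 5) -> list:
    history = []
    for _ in range(max_iters):
        action = plan(goal, history)   # plan
        obs = execute(action)          # execute
        history.append(obs)            # observe
        if obs.success:                # iterate until the goal is met
            break
    return history

trace = run_agent("make the test suite pass")
print(len(trace))  # → 3 (succeeds on the third planned step)
```

The structural difference from an autocomplete engine is the feedback edge: each observation feeds the next planning step, so the model pursues a goal rather than completing a prompt.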
For software engineers, this signals a transformation in daily workflows. The role of the human developer is increasingly shifting towards high-level architecture and supervision, while the "grunt work" of syntax, testing, and deployment pipeline management is offloaded to the AI.
To illustrate this shift, the following table compares the capabilities of traditional Large Language Models (LLMs) against the new agentic standard set by GPT-5.3-Codex.
Comparison: Traditional vs. Agentic Coding Models
| Feature | Traditional Coding LLMs | GPT-5.3-Codex (Agentic) |
|---|---|---|
| Error Handling | Highlights errors; suggests fixes | Diagnoses, patches, and re-runs code automatically |
| Scope | Function or file-level generation | Repository-level architecture and deployment |
| Development Role | Assistant (Copilot) | Collaborator (DevOps & Engineering) |
| Training Input | Static datasets | Dynamic feedback from self-diagnostics |
| Optimization | Requires human tuning | Self-optimizes runtime kernels |
The timing of this release cannot be overlooked. TechCrunch reports that Anthropic launched its competing agentic coding model mere minutes before OpenAI’s announcement. This synchronization suggests a fierce "cold war" of development velocity between the two San Francisco-based labs.
While details on Anthropic’s model are still emerging, the simultaneous release forces the market to choose between two distinct philosophies. Anthropic has historically emphasized "Constitutional AI" and safety rails, often resulting in more conservative model behaviors. OpenAI, with GPT-5.3-Codex, appears to be pushing the envelope of autonomy, betting that the productivity gains from a self-improving model will outweigh the risks inherent in granting AI more control over code execution.
Analysts predict that the "Model Wars" of 2026 will not be fought over benchmark scores on standardized tests, but on utility—specifically, how much autonomy a model can safely be granted within a corporate firewall.
The introduction of a model that "helped build itself" inevitably raises safety concerns. If an AI can modify its own training code, what prevents it from introducing biases or overriding safety protocols?
OpenAI has addressed this in its system card, emphasizing that while GPT-5.3-Codex assisted in debugging and optimization, all critical architectural decisions and final code commits remained under strict human review. The "self-improvement" was scoped strictly to efficiency and error correction, rather than goal modification.
Nevertheless, the trajectory is clear. As these models become better at coding, they become better at improving the software that runs them. The release of GPT-5.3-Codex is likely to accelerate regulatory discussions regarding "Recursive AI Oversight," a topic that has moved from academic papers to legislative halls in Washington and Brussels.
GPT-5.3-Codex represents more than just a version increment; it validates the hypothesis that AI can accelerate its own progress. By successfully employing the model to debug its training and manage its deployment, OpenAI has demonstrated a practical flywheel effect.
For the developers and enterprises relying on Creati.ai for the latest insights, the takeaway is actionable: the tool stack is becoming active. We are moving away from writing code with AI, and towards supervising AI as it writes—and improves—itself. As we evaluate GPT-5.3-Codex over the coming weeks, the primary metric to watch will be trust: not just whether the code runs, but whether we trust the agent that wrote it to manage the system it inhabits.