
The landscape of artificial intelligence shifted fundamentally yesterday. In a move that observers are calling the official beginning of the "recursive era," OpenAI has unveiled GPT-5.3-Codex, the first commercially released AI model explicitly credited with helping to engineer its own architecture.
Released just minutes after competitor Anthropic announced their latest coding agent, GPT-5.3-Codex represents more than just a performance bump. According to OpenAI’s technical documentation, the model successfully identified inefficiencies in its predecessor’s training data and wrote the optimization scripts required to fix them, effectively acting as a co-researcher in its own development.
For the team at Creati.ai, this release signals a transition from AI as a static tool to AI as an active collaborator in its own evolution.
The headline feature of GPT-5.3-Codex is its capability for recursive self-improvement. While AI models have long been used to generate synthetic data for training, this model goes a step further. OpenAI confirmed that during the pre-training phase of version 5.3, an intermediate version of the model was tasked with analyzing the codebase of the training infrastructure.
The model reportedly identified a redundancy in the data ingestion pipeline that human engineers had overlooked. It then proposed a patch, generated the unit tests to verify the fix, and, upon human approval, implemented the code. This resulted in a 14% reduction in compute requirements for the final training run.
This specific capability distinguishes GPT-5.3-Codex from standard LLMs (Large Language Models). It possesses a high degree of "agentic" reasoning—the ability to plan, execute, and evaluate multi-step engineering tasks without constant human hand-holding.
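The plan-execute-evaluate pattern described above can be sketched in a few lines. This is an illustrative toy, not OpenAI's actual control loop (which has not been published); the decomposition and evaluation logic here are hard-coded stand-ins for model calls.

```python
# Minimal sketch of an agentic plan-execute-evaluate loop.
# All names and logic are illustrative stand-ins for model calls.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    steps: list = field(default_factory=list)
    results: list = field(default_factory=list)

def plan(goal):
    # A real agent would ask the model to decompose the goal;
    # here we hard-code a plausible decomposition.
    return ["locate inefficiency", "draft patch", "write unit tests"]

def execute(step):
    # Stand-in for tool use / code execution.
    return f"done: {step}"

def evaluate(result):
    # Accept any step that reports completion.
    return result.startswith("done")

def run_agent(goal):
    task = Task(goal, plan(goal))
    for step in task.steps:
        result = execute(step)
        if not evaluate(result):
            break  # a full implementation would re-plan here
        task.results.append(result)
    return task
```

The key structural point is the evaluation gate after each step, which is what separates agentic execution from one-shot generation.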
To understand how GPT-5.3-Codex differs from the standard GPT-5 model released late last year, we have broken down the core technical specifications and behavioral shifts.
Comparison: GPT-5 vs. GPT-5.3-Codex
| Feature | GPT-5 (Standard) | GPT-5.3-Codex |
|---|---|---|
| Primary Function | General Purpose Multimodal | Specialized Agentic Coding & Architecture |
| Self-Correction | Requires User Prompting | Autonomous Error Detection Loops |
| Context Window | 1 Million Tokens | 5 Million Tokens (Optimized for Repositories) |
| Recursion Level | None | Level 1 (Can optimize own training scripts) |
| Inference Speed | Standard Latency | High-Speed "Thought" Stream for Debugging |
The expanded context window of 5 million tokens allows the model to ingest massive, monolithic codebases in a single pass, making it especially valuable to enterprise-level software architects.
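Before feeding a repository into any long-context model, it is worth estimating whether it fits the window at all. The sketch below uses the rough 4-characters-per-token rule of thumb; real counts require the model's own tokenizer, and the file-extension filter is an arbitrary choice for illustration.

```python
# Rough check of whether a codebase fits a 5-million-token context
# window, using the common 4-chars-per-token heuristic (not an
# official tokenizer).
import os

CONTEXT_WINDOW = 5_000_000
CHARS_PER_TOKEN = 4  # heuristic, varies by language and tokenizer

def estimate_tokens(root, exts=(".py", ".rs", ".go", ".js")):
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root):
    return estimate_tokens(root) <= CONTEXT_WINDOW
```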
The timing of this release was anything but coincidental. TechCrunch reports that OpenAI’s announcement dropped exactly 18 minutes after Anthropic released their updated "Claude-Code-Next" model. This tight grouping of releases highlights the ferocious intensity of the AI arms race in early 2026.
While Anthropic continues to focus on "Constitutional AI" and safety-first coding practices, OpenAI is pushing the envelope on autonomy. The industry consensus suggests that while Anthropic’s model is favored for highly regulated industries like healthcare and fintech due to its interpretability, GPT-5.3-Codex has taken the lead in raw development velocity and creative problem-solving.
Analysts suggest that this "minute-by-minute" competition benefits developers the most, as pricing for API tokens continues to drop while capabilities skyrocket. However, it also places immense pressure on engineering teams to constantly migrate to the newest models to maintain a competitive edge.
OpenAI’s blog post details the "Loop-and-Verify" architecture embedded in GPT-5.3-Codex. Unlike previous models that generated code linearly (token by token), the new Codex operates in an internal loop, repeatedly drafting, verifying, and revising its output before returning an answer.
This internal loop mimics the workflow of a human developer but occurs in a fraction of a second. This significantly reduces the hallucination rate in code generation, a persistent issue in models like GPT-4 and early GPT-5 versions.
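A generate-verify-refine loop in the spirit of "Loop-and-Verify" can be sketched as follows. The generator and verifier here are stubs standing in for a model call and a unit-test run; OpenAI has not published the real mechanism.

```python
# Sketch of a generate-verify-refine loop. generate() and verify()
# are stand-ins for a model call and a unit-test harness.

def generate(spec, feedback=None):
    # Stand-in for a model call: the first draft is deliberately wrong,
    # the revised draft (after feedback) is correct.
    if feedback is None:
        return lambda x: x + 2   # buggy first draft
    return lambda x: x + 1       # revised draft

def verify(candidate):
    # A unit test acting as the verifier.
    return candidate(1) == 2

def loop_and_verify(spec, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(spec, feedback)
        if verify(candidate):
            return candidate
        feedback = "failed unit test"
    raise RuntimeError("no verified candidate within budget")

inc = loop_and_verify("increment an integer")
print(inc(41))  # 42
```

The verification step is what suppresses hallucinated code: a candidate only leaves the loop once it passes its checks.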
The release has triggered a polarized response across the tech sector. On one side, CTOs and startup founders are celebrating the potential for 10x developer productivity. NBC News highlighted several Silicon Valley startups that have already integrated the beta version of GPT-5.3-Codex, claiming it allowed them to ship features in days that previously took months.
"It’s like having a senior engineer who never sleeps and knows every library in existence," noted a lead developer from a prominent fintech unicorn.
However, the "self-improving" aspect has raised eyebrows among AI safety researchers. The concept of an AI improving its own code touches upon the theoretical risks of an intelligence explosion, or "singularity." While OpenAI assures the public that human oversight is strictly enforced—the model cannot deploy changes to its core weights without cryptographic signing by human researchers—skeptics argue that the line is becoming increasingly blurred.
The major concerns cited by experts center on the erosion of meaningful human oversight and the theoretical risk of runaway self-improvement.
For the Creati.ai community of builders and creators, GPT-5.3-Codex offers immediate practical utilities that go beyond theoretical debates.
One of the most touted use cases is the modernization of legacy systems. The model’s massive context window allows it to read COBOL or Fortran mainframe codebases and refactor them into modern Python or Rust architectures with a high degree of accuracy, preserving business logic that might otherwise have been lost to time.
Developers can point GPT-5.3-Codex at a GitHub repository and ask it to "find vulnerabilities." The model acts as a "Red Team," aggressively trying to break the application and then suggesting patches for the holes it finds.
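As a toy illustration of that kind of audit, the snippet below flags a few well-known dangerous patterns in Python source. A model-driven "Red Team" would reason about data flow and actually attempt exploits; this is plain pattern matching, and the pattern list is an arbitrary sample.

```python
# Toy vulnerability scan: flag a few well-known risky patterns.
# Pattern matching only -- not a substitute for a real security audit.
import re

RISKY_PATTERNS = {
    r"\beval\(": "arbitrary code execution via eval()",
    r"\bpickle\.loads\(": "unsafe deserialization",
    r"subprocess\..*shell=True": "shell injection risk",
}

def scan(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, issue))
    return findings

code = "import pickle\ndata = pickle.loads(blob)\n"
print(scan(code))  # [(2, 'unsafe deserialization')]
```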
Documentation is rarely up to date. This model can be integrated into CI/CD pipelines to auto-generate documentation that evolves in real time as the code changes, ensuring that the README.md never lags behind the production build.
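A minimal version of pipeline-driven documentation needs no model at all: pull docstrings out of the source and render a Markdown section on every commit. The sketch below uses Python's standard `ast` module; the model-assisted workflow described above would go further and write prose itself.

```python
# Minimal auto-documentation: extract module and function docstrings
# from source and render a Markdown section. Suitable as a CI step.
import ast

def docs_from_source(source, title):
    tree = ast.parse(source)
    lines = [f"## {title}"]
    module_doc = ast.get_docstring(tree)
    if module_doc:
        lines.append(module_doc)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node) or "(undocumented)"
            lines.append(f"### `{node.name}`\n{doc}")
    return "\n\n".join(lines)

example = '''
"""Payment helpers."""

def charge(amount):
    """Charge a card in cents."""
    return amount
'''

print(docs_from_source(example, "payments.py"))
```

Because the output is derived from the code on every run, the rendered docs cannot drift from the source the way a hand-edited README can.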
The release of GPT-5.3-Codex marks a pivotal moment in 2026. We have moved past the era of AI that simply predicts the next word; we are now in the era of AI that evaluates the quality of its own thoughts.
While the "self-improving" tag is currently limited to optimization scripts and training data curation, the trajectory is clear. As these models become better at coding, they become better at building the next generation of models. For developers, the message is clear: adaptability is the only metric that matters.
OpenAI has promised a gradual rollout to Enterprise and Plus users over the coming weeks. As always, Creati.ai will continue to test these tools to determine how they best fit into the creative workflow.