AMD and Meta Sign Landmark 6-Gigawatt AI Compute Partnership
Meta will deploy custom versions of AMD's next-generation Instinct MI450 GPUs and EPYC "Venice" CPUs, with a performance-based warrant for up to 160 million AMD shares binding the two companies together.

In a move that fundamentally alters the landscape of global AI infrastructure, Advanced Micro Devices (AMD) and Meta have executed a definitive agreement to deploy a staggering 6 gigawatts (GW) of AI compute capacity. Announced earlier this week, this multi-year, multi-generational partnership is not merely a purchase order; it is a strategic alignment of two technology titans designed to break the existing hardware hegemony and power the next era of "personal superintelligence."
The deal, unprecedented in both scale and structure, will see Meta deploy custom versions of AMD's next-generation Instinct MI450 GPUs and 6th Generation EPYC "Venice" CPUs. Perhaps most significantly, the agreement includes a performance-based equity warrant allowing Meta to acquire up to 160 million shares of AMD common stock, effectively tying the chipmaker’s financial future to the successful execution of Meta’s AI roadmap.
For the AI industry, the implications are profound. Six gigawatts represents a power envelope roughly equivalent to the output of six standard nuclear reactors, dedicated entirely to artificial intelligence workloads. This commitment signals that Meta is moving beyond experimental diversification and is now aggressively building a sovereign hardware ecosystem capable of challenging Nvidia’s dominance.
At the heart of this deployment lies the AMD Instinct MI450, a GPU architecture that has been the subject of intense speculation until now. While the standard MI450 is poised to be a formidable competitor in the general market, the chips slated for Meta’s data centers are custom-engineered silicon.
According to technical disclosures surrounding the deal, these custom MI450s are optimized specifically for Meta’s recommendation engines and generative AI inference workloads, such as Llama 4 and its successors. The architecture reportedly leverages TSMC’s 2nm-class process technology and utilizes the CDNA 5 architecture, delivering a massive leap in energy efficiency—a critical metric when deploying at the multi-gigawatt scale.
The deployment will be orchestrated using the Helios rack-scale architecture, a collaborative design developed by AMD and Meta through the Open Compute Project (OCP). Helios is not just a server chassis; it is a fully integrated power, cooling, and interconnect standard designed to handle the extreme thermal density of next-gen AI silicon.
Key Technical Components of the Deal:
| Component | Architecture/Codename | Role in Infrastructure |
|---|---|---|
| AI Accelerator | Custom Instinct MI450 | Primary inference and training engine, optimized for PyTorch and Llama models |
| Host Processor | 6th Gen EPYC "Venice" | Orchestration and data preprocessing; features Zen 6 cores |
| Infrastructure | Helios Rack-Scale | OCP-compliant liquid cooling and power delivery system for high-density deployment |
| Software Stack | ROCm 7.0+ | Open software ecosystem, heavily optimized by Meta engineers for internal workloads |
The inclusion of 6th Gen AMD EPYC CPUs, codenamed "Venice," further strengthens AMD’s position in the data center CPU market. These processors, built on the Zen 6 architecture, are expected to offer the high core counts and PCIe Gen 6 connectivity required to feed data to the hungry MI450 clusters without becoming a bottleneck.
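To put the PCIe Gen 6 claim in perspective, a quick back-of-envelope calculation shows why it matters for host-to-accelerator feeding. PCIe 6.0 signals at 64 GT/s per lane; treating that as roughly 64 Gb/s of usable bandwidth per lane per direction (the spec's FLIT encoding keeps overhead low) gives the following upper bound for a typical x16 GPU link:

```python
# Back-of-envelope: PCIe 6.0 host-to-GPU link bandwidth.
# PCIe 6.0 runs at 64 GT/s per lane; we approximate usable
# bandwidth as the raw rate, an upper bound per direction.
GT_PER_LANE = 64   # gigatransfers/s (one bit per transfer)
LANES = 16         # a typical x16 GPU slot

gbps_per_direction = GT_PER_LANE * LANES   # 1024 Gb/s
gb_per_s = gbps_per_direction / 8          # 128 GB/s

print(f"PCIe 6.0 x16: ~{gb_per_s:.0f} GB/s per direction")
```

Roughly 128 GB/s per direction, double what PCIe 5.0 offers, is the kind of headroom needed to keep large GPU clusters fed during data preprocessing.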
While the hardware is impressive, the financial architecture of this deal is equally groundbreaking. AMD has issued a warrant to Meta to purchase up to 160 million shares of common stock. This is not a simple stock grant; it is a performance-based instrument designed to ensure mutual success.
The warrants are structured in tranches that vest only upon the achievement of specific milestones. The first tranche is tied to the successful shipment and deployment of the initial 1 gigawatt of compute capacity, scheduled for the second half of 2026. Subsequent tranches unlock as the deployment scales toward the full 6GW target and, crucially, as AMD’s stock price hits specific appreciation targets.
This structure serves two purposes: it ties any dilution of AMD shareholders to compute capacity actually delivered, and it gives Meta meaningful upside only if AMD’s stock appreciates alongside the rollout.
Market analysts view this as a "shared destiny" model. By holding a potential 10% stake in AMD (based on current outstanding shares), Meta effectively becomes a partner rather than just a customer, ensuring their engineering teams work in lockstep to optimize the ROCm software stack and resolve hardware bottlenecks.
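The tranche structure described above can be sketched as a simple vesting model. To be clear, the actual tranche sizes and price targets are not public; the share split, gigawatt thresholds, and price levels below are hypothetical placeholders chosen only to illustrate how milestone-plus-price vesting works:

```python
# Illustrative sketch only: real tranche sizes and triggers are not
# public. Models the article's description of a warrant that vests
# on deployed gigawatts and stock-price appreciation targets.
from dataclasses import dataclass

@dataclass
class Tranche:
    shares_m: float         # millions of shares (hypothetical split)
    min_gw_deployed: float  # compute milestone required to vest
    min_share_price: float  # appreciation target required to vest

# Hypothetical schedule summing to the 160M-share warrant.
SCHEDULE = [
    Tranche(40.0, 1.0, 0.0),    # first tranche: initial 1 GW shipped
    Tranche(40.0, 2.5, 300.0),  # later tranches also need price targets
    Tranche(40.0, 4.0, 400.0),
    Tranche(40.0, 6.0, 500.0),  # full 6 GW target
]

def vested_shares(gw_deployed: float, share_price: float) -> float:
    """Total vested shares (millions) given deployment and price."""
    return sum(
        t.shares_m
        for t in SCHEDULE
        if gw_deployed >= t.min_gw_deployed
        and share_price >= t.min_share_price
    )

print(vested_shares(1.0, 250.0))  # only the first tranche vests -> 40.0
```

The design choice worth noting is that full deployment alone is not enough: under this structure, the later tranches stay locked unless the stock also appreciates, which is exactly the "shared destiny" alignment analysts describe.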
To understand the magnitude of 6 gigawatts, one must look at the current state of global data centers. A typical hyperscale data center campus might consume between 100 and 300 megawatts. This deal alone represents the energy equivalent of 20 to 60 massive data center campuses.
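The arithmetic behind those equivalences is straightforward and worth checking:

```python
# Sanity-checking the scale comparisons in the article.
TOTAL_MW = 6_000                    # 6 GW expressed in megawatts

campus_low, campus_high = 100, 300  # typical hyperscale campus (MW)
reactor_mw = 1_000                  # ~1 GW per standard nuclear reactor

campuses_min = TOTAL_MW // campus_high  # fewest equivalent campuses
campuses_max = TOTAL_MW // campus_low   # most equivalent campuses
reactors = TOTAL_MW // reactor_mw       # equivalent nuclear reactors

print(f"{campuses_min}-{campuses_max} campuses, {reactors} reactors")
```

The 20-to-60-campus range and the six-reactor comparison both fall directly out of these round numbers.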
Mark Zuckerberg, Meta’s CEO, framed this investment as necessary to deliver "personal superintelligence." As AI models transition from text-based chat assistants to multimodal agents capable of reasoning, video generation, and real-time world interaction, the inference costs skyrocket.
Current localized inference on smartphones is insufficient for the scale of models Meta envisions. The 6GW infrastructure suggests a future where billions of users have continuous access to always-on, high-intelligence agents. The energy density required for this led to the adoption of the Helios architecture, which likely utilizes direct-to-chip liquid cooling to manage the heat generated by millions of MI450 GPUs running in parallel.
From the perspective of Creati.ai, this deal marks the official end of the "Nvidia monopoly" narrative and the beginning of a true hardware duopoly in the AI space. While Nvidia remains the leader in training frontier models, AMD has successfully carved out a massive stronghold in high-volume inference and fine-tuning.
This partnership also validates the open-source software approach. Meta has long championed open compute and open frameworks (PyTorch). By betting 6GW on AMD, they are casting a massive vote of confidence in the ROCm ecosystem, signaling to the rest of the industry that the software barrier to entry for AMD hardware has been dismantled.
For the broader AI ecosystem, this increases competition, which should theoretically drive down the cost of compute. If AMD can deliver on the performance promises of the MI450, other hyperscalers like Microsoft and Oracle—who have already begun piloting MI300 clusters—may feel emboldened to expand their AMD footprints, further diversifying the supply chain.
The AMD-Meta 6-gigawatt agreement is more than a procurement contract; it is a defining moment for the AI hardware industry in 2026. With custom MI450 silicon, the Venice CPU platform, and a 160-million-share warrant binding the two companies together, AMD has secured its position as a central pillar of the global AI buildout. As the first 1GW comes online later this year, the tech world will be watching closely to see if this alliance can deliver the efficiency and scale required to power the next generation of artificial intelligence.