
In a decisive move that underscores the industry's shift from training large models to deploying them in real-time environments, LiveKit has secured $100 million in Series C funding, propelling its valuation to $1 billion. The round was led by Index Ventures, with significant participation from Salesforce Ventures and returning backers Altimeter Capital, Redpoint Ventures, and Hanabi Capital.
From Creati.ai's perspective, this valuation is more than a financial milestone: it signals the maturation of the AI infrastructure layer. If 2024 and 2025 were defined by the arms race among foundation model providers like OpenAI and Anthropic, 2026 is fast shaping up to be the year of the application layer, the year of multimodal agents that can see, hear, and speak. Founded in 2021 by Russ d'Sa and David Zhao, LiveKit has been quietly building the core infrastructure needed to make those interactions feel instantaneous and natural.
The new capital will fund the expansion of LiveKit's global network of edge nodes and the hardening of its "Agents" framework, which simplifies the orchestration of complex AI pipelines. As enterprises shift from text-based chatbots to voice-driven assistants, demand for specialized low-latency infrastructure has surged, cementing LiveKit's position as the default transport layer for the next generation of computing.
To understand LiveKit's rapid ascent, one must first understand the technical bottlenecks of conversational AI. Building a voice agent is not merely a matter of connecting a speech-to-text (STT) engine to a large language model (LLM) and a text-to-speech (TTS) synthesizer. The real challenge lies in latency and state management.
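The STT-to-LLM-to-TTS flow described above can be sketched as three simulated stages. The stage names and timings below are hypothetical placeholders, and a production pipeline would stream partial results between stages rather than running them strictly in sequence, but the sketch shows why each stage's delay compounds into the user-perceived round trip:

```python
import asyncio
import time

# Illustrative sketch of the STT -> LLM -> TTS pipeline. Timings are
# invented to show how per-stage delays add up into total latency.

async def stt(audio_chunk: bytes) -> str:
    await asyncio.sleep(0.05)          # simulated transcription delay
    return "what is the weather"

async def llm(transcript: str) -> str:
    await asyncio.sleep(0.12)          # simulated model inference delay
    return f"Answering: {transcript}"

async def tts(text: str) -> bytes:
    await asyncio.sleep(0.04)          # simulated synthesis delay
    return text.encode()

async def handle_turn(audio_chunk: bytes) -> tuple[bytes, float]:
    """Run one user turn through the pipeline and measure total latency."""
    start = time.perf_counter()
    transcript = await stt(audio_chunk)
    reply = await llm(transcript)
    audio_out = await tts(reply)
    return audio_out, time.perf_counter() - start

audio, latency = asyncio.run(handle_turn(b"\x00" * 320))
print(f"round trip: {latency * 1000:.0f} ms")
```

Even in this toy version the sequential stages sum to roughly 200 ms before any network transport is counted, which is why purpose-built streaming infrastructure matters.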
LiveKit’s infrastructure operates as a high-performance programmable network. It manages the ingestion of audio streams, processes them through an ultra-low-latency pipeline, and delivers the AI's response back to the user in milliseconds.
By handling the "turn-taking" logic—knowing when a user has stopped speaking or is interrupting the AI—LiveKit allows developers to build experiences that feel like natural phone calls rather than walkie-talkie exchanges. This capability is critical for the new wave of "Voice Mode" applications where fluidity is the primary metric of success.
The company's technology abstracts away the messy work of managing jitter buffers, echo cancellation, and connection drops, freeing AI engineers to focus entirely on agent logic. This developer-first philosophy has driven broad adoption: the platform now supports billions of minutes of AI interaction per year.
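For readers unfamiliar with one of those abstracted-away pieces, a toy jitter buffer might look like the sketch below (a simplified illustration, not LiveKit's implementation). Packets arrive out of order over the network, and the buffer releases them in sequence once enough have queued; real buffers also adapt their depth to observed network jitter:

```python
import heapq

class JitterBuffer:
    """Minimal jitter-buffer sketch: reorder packets by sequence number."""

    def __init__(self, depth: int = 3):
        self.depth = depth                       # packets to hold before release
        self.heap: list[tuple[int, bytes]] = []  # min-heap keyed on seq number
        self.next_seq = 0

    def push(self, seq: int, payload: bytes) -> None:
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self) -> list[bytes]:
        """Once the buffer is deep enough, release all in-order packets."""
        out: list[bytes] = []
        if len(self.heap) < self.depth:
            return out
        while self.heap and self.heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self.heap)
            out.append(payload)
            self.next_seq += 1
        return out

buf = JitterBuffer(depth=2)
buf.push(1, b"B")        # arrives out of order
buf.push(0, b"A")
print(buf.pop_ready())   # → [b'A', b'B']
```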
Perhaps the most significant endorsement of LiveKit’s technology comes from its partnership with OpenAI. LiveKit serves as the backbone for ChatGPT’s Advanced Voice Mode, a feature that stunned the tech world with its ability to hold emotionally nuanced, real-time conversations.
For enterprise buyers, the logic is simple: if LiveKit’s infrastructure is robust enough to handle the massive concurrent load of ChatGPT’s global user base, it is more than capable of handling customer support agents, telehealth consults, or internal enterprise tools. This "OpenAI Effect" has accelerated LiveKit's adoption across the Fortune 500, with companies like Salesforce and Tesla integrating the technology into their own AI strategies.
The distinction between trying to build voice AI on legacy communications stacks versus using purpose-built infrastructure is stark. The following table outlines the key technical differences that are driving developers toward LiveKit.
| Feature | Traditional WebRTC | LiveKit AI Infrastructure |
|---|---|---|
| Latency Management | Variable and often unpredictable | Transport optimized for sub-100 ms delivery |
| AI Integration | Manual glue code required | Native pipelines for STT/LLM/TTS |
| Interruption Handling | Difficult to implement | Built-in turn-detection logic |
| Scalability | High operational overhead | Managed global edge network |
| Protocol Architecture | Peer-to-peer centric | Server-side forwarding (SFU) |
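The last row deserves a note: a selective forwarding unit (SFU) means each publisher uploads a single stream and the server fans it out to subscribers, instead of every peer sending copies to every other peer. A toy sketch of that fan-out (class and method names are invented for illustration):

```python
from collections import defaultdict

class SelectiveForwardingUnit:
    """Sketch of SFU-style routing: one upstream copy, server-side fan-out."""

    def __init__(self):
        self.subscribers: dict[str, set[str]] = defaultdict(set)
        self.delivered: list[tuple[str, str, bytes]] = []

    def subscribe(self, subscriber: str, publisher: str) -> None:
        self.subscribers[publisher].add(subscriber)

    def publish(self, publisher: str, packet: bytes) -> int:
        """Forward one upstream packet to every subscriber; return fan-out."""
        for sub in self.subscribers[publisher]:
            self.delivered.append((publisher, sub, packet))
        return len(self.subscribers[publisher])

sfu = SelectiveForwardingUnit()
for viewer in ("bob", "carol", "dave"):
    sfu.subscribe(viewer, "alice")

# Alice uploads one packet; the SFU forwards three copies.
fanout = sfu.publish("alice", b"frame-0")
print(fanout)  # → 3
```

In a peer-to-peer mesh, Alice's own uplink would have carried all three copies, which is why P2P topologies stop scaling after a handful of participants.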
While conversational AI is the current driver of growth, LiveKit’s roadmap extends into the broader realm of multimodal agents. The ability to stream video data in real-time allows AI models to "see" and reason about the physical world.
This capability is opening new frontiers in robotics and industrial automation. For instance, teleoperation startups are using LiveKit to transmit low-latency video from robots to human operators or AI supervisors. In the healthcare sector, mental health providers are utilizing the platform to power autonomous therapy assistants that can detect subtle emotional cues in a patient's voice, a task that requires high-fidelity audio transmission that standard telephony cannot provide.
Furthermore, the involvement of Salesforce Ventures in this Series C round suggests a deep integration into customer relationship management (CRM) workflows. We can expect to see "Agentic CRM" systems where AI voice agents not only handle support calls but also autonomously update customer records and trigger workflows in real-time, all powered by LiveKit’s data rails.
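A hypothetical sketch of that agentic-CRM pattern follows. Every class and method name below is invented for illustration, not a Salesforce or LiveKit API: the voice agent logs a call summary, and registered follow-up workflows fire automatically on the updated record:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CustomerRecord:
    customer_id: str
    notes: list[str] = field(default_factory=list)

class AgenticCRM:
    """Toy CRM: call summaries are written back and trigger workflows."""

    def __init__(self):
        self.records: dict[str, CustomerRecord] = {}
        self.workflows: list[Callable[[CustomerRecord], None]] = []

    def on_call_ended(self, workflow: Callable[[CustomerRecord], None]) -> None:
        self.workflows.append(workflow)

    def log_call(self, customer_id: str, summary: str) -> CustomerRecord:
        record = self.records.setdefault(customer_id, CustomerRecord(customer_id))
        record.notes.append(summary)
        for workflow in self.workflows:   # trigger follow-ups in real time
            workflow(record)
        return record

crm = AgenticCRM()
followups: list[str] = []
crm.on_call_ended(lambda r: followups.append(f"email summary to {r.customer_id}"))
crm.log_call("cust-42", "Resolved billing question via voice agent")
print(followups)  # → ['email summary to cust-42']
```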
Despite its unicorn valuation and enterprise focus, LiveKit remains deeply rooted in the open-source community. The core of its technology is accessible to developers, fostering a vibrant ecosystem of plugins and integrations.
The "LiveKit Agents" framework allows developers to write agent logic in Python or Node.js, treating the complex audio/video processing as a standard library import. This democratization of real-time media technology is lowering the barrier to entry for building sophisticated AI applications. A single developer can now prototype a voice assistant in an afternoon that would have previously required a team of VoIP engineers and months of development.
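To illustrate the division of labor such a framework offers (this is a stand-in sketch, not the actual LiveKit Agents API), the developer writes only the conversational callback while the runtime owns the media stack:

```python
from typing import Callable

class VoiceAgentRuntime:
    """Stand-in for a framework that owns audio transport and STT/TTS."""

    def __init__(self, handle_utterance: Callable[[str], str]):
        self.handle_utterance = handle_utterance

    def simulate_call(self, utterances: list[str]) -> list[str]:
        # A real runtime would decode audio, run STT, then synthesize the
        # reply via TTS; here we pass text straight through for clarity.
        return [self.handle_utterance(u) for u in utterances]

# The developer's entire "agent" is one function:
def my_agent(user_said: str) -> str:
    if "hours" in user_said:
        return "We are open 9am to 5pm, Monday through Friday."
    return "Could you rephrase that?"

runtime = VoiceAgentRuntime(my_agent)
print(runtime.simulate_call(["What are your hours?"]))
```

The point of the pattern is that swapping the toy runtime for real infrastructure changes nothing in `my_agent`, which is what makes afternoon prototypes possible.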
As we move deeper into 2026, LiveKit's funding round validates a broader trend: the AI stack is solidifying. The era of building bespoke infrastructure for every AI application is ending. Just as Twilio became the default API for SMS and Stripe for payments, LiveKit is positioning itself as the default API for AI-to-human communication.
For Creati.ai readers, the takeaway is clear. The constraint on AI utility is no longer model intelligence—it is the speed and reliability of the interface. With a $1 billion valuation and a war chest of $100 million, LiveKit is ensuring that the interface of the future is instant, seamless, and everywhere.