
Documentation Index

Fetch the complete documentation index at: https://docs.livepeer.org/llms.txt

Use this file to discover all available pages before exploring further.

Livepeer is a decentralised serverless GPU fabric with a cryptoeconomic control plane, where services are exposed through a set of developer-friendly products and applications, enabling real-time compute infrastructure.
Protocol vs Network vs Platform

The protocol provides trust, coordination, and payment mechanisms; the network supplies compute, routing, and verification; and platforms expose the network’s capabilities in a usable way.

Infrastructure Layers

Livepeer’s crypto-economic primitives and decentralised compute mesh provide additional benefits to the system such as censorship resistance, economic security, and trustless coordination.

Livepeer Protocol and Network Architecture


Protocol contracts

The Livepeer Protocol is a set of Solidity contracts deployed to Arbitrum One. Five contracts carry the load: BondingManager tracks stake and delegation, TicketBroker issues and redeems probabilistic micropayment tickets, RoundsManager advances the protocol clock, Minter issues LPT inflation, and Controller is the upgrade authority that registers all the others. Every node in the network transacts against the same deployed contracts. An Orchestrator earns by redeeming winning tickets through TicketBroker and by calling reward on BondingManager; a Delegator earns by bonding LPT through BondingManager. See Protocol Architecture for contract addresses and ABIs.

Network nodes

The network layer is a single binary, go-livepeer, run in different modes. One mode is the Gateway: it accepts video and AI jobs from clients, selects an Orchestrator, and settles payment in tickets. Another is the Orchestrator: it advertises capabilities, receives jobs, runs them, and redeems winning tickets on-chain. The transcoder mode is a worker that an Orchestrator can split off onto a separate machine to scale horizontally. Newer modes – redeemer and remote signer – separate ticket redemption and key custody from the live job path so that Gateway implementations in other languages can integrate. A small operator runs a single binary that fills both Gateway and Orchestrator roles. A larger operator splits the modes onto separate machines: Orchestrator on the network edge, transcoders or AI workers behind it on a private subnet. See Network Architecture for the deployment topology.
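The role split above can be sketched as one binary choosing what to run from its flags. This is a toy illustration of the idea, not go-livepeer's actual startup logic; the function and role names below are made up for the sketch (see the go-livepeer documentation for the real flag set):

```go
package main

import "fmt"

// roleFor mirrors the idea that a single go-livepeer binary fills one
// role, or combines Gateway and Orchestrator for a small operator.
// Illustrative only; not go-livepeer's real mode-selection code.
func roleFor(gateway, orchestrator, transcoder bool) string {
	switch {
	case gateway && orchestrator:
		return "combined" // small operator: both roles in one process
	case orchestrator:
		return "orchestrator" // network edge, workers behind it
	case gateway:
		return "gateway" // accepts jobs, selects Orchestrators, pays
	case transcoder:
		return "transcoder" // split-off worker on a private subnet
	default:
		return "none"
	}
}

func main() {
	fmt.Println(roleFor(true, true, false))
}
```

A larger operator would start separate processes, each with exactly one role flag set, which is the split-topology deployment described above.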

Off-chain coordination

Most of what happens on the network never touches a contract. Gateways discover Orchestrators through the on-chain registry’s subgraph, direct configuration, a webhook, or the Network Capabilities API. Orchestrators advertise capabilities and prices in OrchestratorInfo messages. Payment runs in probabilistic micropayment tickets that batch off-chain until a winning ticket triggers an on-chain redemption. This off-chain loop is what makes per-frame, per-pixel pricing economical and what lets the network scale: thousands of jobs and tickets per second flow between Gateway and Orchestrator, with on-chain settlement only when a ticket wins. See Marketplace and Discovery for the full discovery and selection algorithm.
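The ticket economics can be sketched numerically. A ticket carries an expected value equal to its face value times its win probability, so many tiny payments batch off-chain and only roughly one in every 1/winProb tickets triggers an on-chain redemption. The face value and win probability below are made-up illustration values, not actual network parameters:

```go
package main

import "fmt"

// expectedWei returns the expected value a ticket carries: its face
// value divided by the reciprocal of its win probability (winProb =
// 1/oneIn). Illustrative model only; the real protocol encodes these
// as fixed-point on-chain parameters.
func expectedWei(faceValueWei, oneIn int64) int64 {
	return faceValueWei / oneIn
}

func main() {
	face := int64(1_000_000_000_000_000) // 0.001 ETH face value (made up)
	oneIn := int64(1000)                 // each ticket wins 1 in 1000 (made up)

	fmt.Printf("expected value per ticket: %d wei\n", expectedWei(face, oneIn))
	fmt.Printf("on-chain redemptions: ~1 per %d tickets\n", oneIn)
}
```

With these illustrative numbers, a Gateway can pay per job in sub-cent increments while the chain only sees one redemption per thousand tickets.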

AI Runtime

The AI runtime sits inside the Orchestrator. ai-worker is a Go subsystem in go-livepeer that owns the Orchestrator-side job lifecycle: it receives an AI job from the Gateway, picks a registered pipeline, starts or wakes the corresponding container, and streams frames in and out. Each pipeline runs as a separate ai-runner Python container, isolated by GPU and model. The transport between ai-worker and ai-runner for real-time work is the trickle protocol; for batch work it is a request/response HTTP API. ComfyStream is one such container, offering a ComfyUI-graph runtime for real-time video-to-video pipelines. An Orchestrator declares which pipelines it serves through aiModels.json, which sets per-pipeline pricing and warm-model strategy. See AI Capabilities for the pipeline catalogue and pricing units.
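An aiModels.json entry ties a pipeline to a model, a price per unit, and a warm/cold strategy. The sketch below is an abridged, illustrative entry: the model ID and price are placeholders, and the real file supports additional fields (see AI Capabilities for the full reference):

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4768371,
    "warm": true
  }
]
```

Setting "warm" keeps the model loaded on a GPU so real-time jobs skip the container cold start, at the cost of holding that GPU's memory.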

Applications and integrations

Applications are the products built on the layers below. Livepeer Studio is the hosted video product run by Livepeer Inc: streaming, VOD, transcoding, and the embeddable player behind an API key. Daydream is the hosted real-time AI video product, built on ComfyStream and the AI Gateway API. Storyboard is an AI agent workspace that uses Daydream as its inference backend. Streamplace is the live-video product for the AT Protocol and Bluesky. BYOC (“bring your own container”) is the path for a partner to plug a custom AI pipeline into the network as an Orchestrator-side container. A reader choosing where to build picks the highest layer that meets their needs: Studio for managed video, Daydream for managed AI video, the AI Gateway API for direct network calls, ComfyStream for custom real-time pipelines, and BYOC for new pipeline types. See Solutions for the full product catalogue.

How the layers interact

A job is the test: it touches every layer in one round trip. The job descends from an application call to platform routing to network compute, then settles back up in payment to the operator and a small redemption fee on-chain. Governance acts across, not through, the live job path. The protocol does not see most jobs – it only sees the ones whose tickets win and the per-round inflation distribution. That is the design that makes per-pixel and per-token pricing viable on Arbitrum.
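The per-pixel viability claim can be made concrete with back-of-the-envelope arithmetic. All numbers here are made up for illustration, not network prices:

```go
package main

import "fmt"

// pixelsPerSecond counts raw pixels in one second of video; used here
// only to make the per-pixel pricing arithmetic concrete.
func pixelsPerSecond(width, height, fps int) int {
	return width * height * fps
}

func main() {
	px := pixelsPerSecond(1920, 1080, 30) // one second of 1080p30 video
	pricePerPixelWei := 100               // made-up per-pixel price
	fmt.Println("pixels per second:", px)
	fmt.Println("cost per second:", px*pricePerPixelWei, "wei")
}
```

Even at 62 million pixels per second, a second of video is worth only a few gwei at this illustrative price: far too little to settle individually on-chain, and exactly the scale the batched ticket layer absorbs.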
Last modified on May 4, 2026