Documentation Index
Fetch the complete documentation index at: https://docs.livepeer.org/llms.txt
Use this file to discover all available pages before exploring further.
Summary: Real Promises of DePIN for AI
Real promises of DePIN for AI that the panel articulates:
- Unlock idle GPU capacity. Huge amounts of hardware sit underutilised across data centres, crypto miners, and consumer machines while AI demand outstrips supply. DePIN networks aggregate that latent supply into usable clusters (Aline: “we cannot have idle GPUs while we have a lot of AI demand”).
- Pay-per-job pricing instead of pay-for-blocks. Centralised clouds force you to rent time windows; decentralised inference networks can charge per inference task or per minute transcoded, which is far more efficient for bursty AI workloads (Shannon).
- Order-of-magnitude cost reduction. Livepeer transcoding already runs around 10x cheaper than Web2 cloud equivalents, and the same economics now extend to generative inference.
- Make compute a tradable financial asset. Crypto’s core strength is building efficient financial markets; compute may be the largest market in the world, so putting it on-chain drives down the cost of capital and lets compute, AI models, and adjacent assets be financialised (Vincent).
- Aggregate the cheapest GPUs globally. With new training methods (DiLoCo-style infrequent sync), a 20 to 30 percent training inefficiency is fine if you’re sourcing GPUs 3 to 4 times cheaper than hyperscalers.
- Novel funding models for open-source AI. Today AI has only two business models (closed labs raising billions, or fully open with no revenue); DePIN plus on-chain IP enables a Stable Diffusion-style middle ground where open models are collectively funded and earn licensing revenue from enterprise users (Vincent).
- Cryptographic guarantees for AI. Privacy, integrity, authenticity, and provenance for data, models, and agent-to-agent interactions (Stepan).
- Incentive alignment via mechanism design. Token rewards plus smart contracts to fairly compensate contributors (data, compute, training) and resist fraud, directly relevant to ongoing lawsuits about uncompensated training data.
- Anti-monopoly and fair access. Keep AI from being owned by a handful of corporations (Meta, OpenAI) or a single state (US, CCP) whose preferences override everyone else’s.
- Programmable governance for AI systems. DAOs and on-chain agents allowing users to express preferences over what models get trained, what they optimise for, and how they’re used; futarchy and prediction-market governance as an experimental frontier.
- Sandbox for governance experimentation. Crypto lets you test governance mechanisms in weeks with real value at stake, instead of waiting decades (and risking revolutions) for nation-state-scale change.
- Geopolitical resilience. Distributed supply across jurisdictions, not concentrated in one cluster or one country (Aline, briefly).
- Repurposability of existing DePIN infrastructure. Networks built for one workload (e.g. Livepeer’s video transcoding) can absorb adjacent AI workloads on the same nodes with the same payment rails, compounding value from already-deployed hardware.
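The cost argument in the "aggregate the cheapest GPUs" point above reduces to simple arithmetic. A sketch using mid-points of the ranges quoted above (3.5x cheaper GPUs, 25% training overhead); the dollar rates are hypothetical placeholders, not real market prices:

```python
# Illustrative cost comparison: tolerating DiLoCo-style training inefficiency
# is worthwhile when GPU supply is sourced far below hyperscaler rates.
# All prices are hypothetical placeholders.

hyperscaler_rate = 4.00                # $/GPU-hour (hypothetical)
depin_rate = hyperscaler_rate / 3.5    # "3 to 4 times cheaper"; use 3.5x

gpu_hours_needed = 10_000              # effective compute the job requires
inefficiency = 0.25                    # "20 to 30 percent" overhead; use 25%

hyperscaler_cost = gpu_hours_needed * hyperscaler_rate
depin_cost = gpu_hours_needed * (1 + inefficiency) * depin_rate
savings = 1 - depin_cost / hyperscaler_cost

print(f"hyperscaler: ${hyperscaler_cost:,.0f}")   # $40,000
print(f"depin:       ${depin_cost:,.0f}")         # $14,286
print(f"savings:     {savings:.0%}")              # 64%
```

Even after paying the full training-efficiency penalty, the cheaper supply wins by roughly two thirds under these assumptions.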
Livepeer Actors
The protocol defines three core actors, and implements mechanisms to coordinate their interactions and align incentives for the desired network outcomes:
Gateways
Function: Ingests source video or AI inference requests from end users and routes the work to Orchestrators.
Purpose: Connects demand-side applications and users to the Livepeer Network.
Economics: Pays for transcoding services using ETH through the ticket system.
Technical role: Probably the most demanding integration work of the three. Run Gateway software, ingest video or AI requests from end users, handle Orchestrator discovery and selection, manage payment channels and ticket issuance, deal with failover, and bridge the protocol to whatever application is actually consuming the work. This is where most of the engineering surface area lives.
Economic role: Gateways are the demand side and the only source of external revenue. They fund deposits and reserves to back probabilistic tickets, set their own quality/price tradeoffs by selecting Orchestrators, and ultimately decide whether the network’s pricing is competitive against centralised alternatives. If Gateways aren’t economically satisfied, no one else gets paid.
Community role: Historically the quietest of the three, but increasingly important. Gateway operators are often building products on top – streaming platforms, AI apps, developer tools – and their feedback drives what capabilities Orchestrators add and what the protocol prioritises. The Livepeer AI subnet exists in large part because Gateway-side demand pulled it into existence.
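The probabilistic tickets a Gateway funds can be sketched as follows. This is a simplified model with hypothetical parameters, not the actual go-livepeer implementation (real tickets involve hash commitments, recipient randomness, and on-chain redemption):

```python
from dataclasses import dataclass
import random

# Simplified model of probabilistic micropayments: the Gateway's effective
# cost per ticket is its expected value, even though only the rare winning
# tickets actually move funds on-chain.

@dataclass
class Ticket:
    face_value_wei: int   # payout an Orchestrator receives if the ticket wins
    win_prob: float       # probability this ticket is a winner

    @property
    def expected_value_wei(self) -> float:
        return self.face_value_wei * self.win_prob

def send_ticket(t: Ticket, rng: random.Random) -> bool:
    """Return True if this ticket wins (i.e. the Orchestrator can redeem it)."""
    return rng.random() < t.win_prob

rng = random.Random(42)
t = Ticket(face_value_wei=10**15, win_prob=0.001)  # hypothetical parameters
wins = sum(send_ticket(t, rng) for _ in range(100_000))
print(f"expected cost per ticket: {t.expected_value_wei:.0f} wei, winners: {wins}")
```

Over 100,000 tickets the number of winners clusters tightly around 100, so the Gateway's deposit and reserve only need to cover the occasional face-value payout while the average cost stays tiny per job.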
Delegators
Function: Bonds LPT to Orchestrators without running infrastructure directly.
Purpose: Secures the network by delegating stake to Orchestrators.
Economics: Earns rewards by sharing in the fees and inflationary rewards of the Orchestrator they delegate to.
Technical role: Light but real – Delegators need to evaluate Orchestrators (uptime, performance, fee history, reliability), manage their own wallet security, and periodically claim earnings or move stake. No infrastructure, but non-trivial diligence if they’re doing it well.
Economic role: Capital allocators. They route LPT toward Orchestrators they believe will earn the most fees and rewards relative to risk, and away from underperformers. In aggregate this is the price signal that shapes the Active Set. They also bear the opportunity cost of locked stake and unbonding periods.
Community role: Lower visibility than Orchestrators but real influence – they’re the constituency Orchestrators are courting. Active Delegators participate in governance polls, weigh in on Orchestrator behaviour, and (especially the larger ones) shape sentiment about which Orchestrators are trustworthy. They’re the network’s accountability layer.
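A back-of-envelope sketch of how a Delegator's share of one round's earnings follows from the Orchestrator's `rewardCut` and `feeShare` settings. All numbers are hypothetical, and the real protocol tracks earnings through cumulative factors rather than this direct per-round split:

```python
def delegator_earnings(delegated_lpt: float, total_stake: float,
                       round_reward_lpt: float, round_fees_eth: float,
                       reward_cut: float, fee_share: float):
    """Simplified per-round earnings for one Delegator.

    reward_cut: fraction of LPT inflationary rewards the Orchestrator keeps.
    fee_share:  fraction of ETH fees the Orchestrator passes to Delegators.
    """
    stake_fraction = delegated_lpt / total_stake
    lpt = round_reward_lpt * (1 - reward_cut) * stake_fraction
    eth = round_fees_eth * fee_share * stake_fraction
    return lpt, eth

# Hypothetical round: 5k of 100k LPT delegated, 200 LPT rewards, 1.5 ETH fees.
lpt, eth = delegator_earnings(
    delegated_lpt=5_000, total_stake=100_000,
    round_reward_lpt=200, round_fees_eth=1.5,
    reward_cut=0.25, fee_share=0.40)
print(f"LPT: {lpt:.2f}, ETH: {eth:.4f}")  # LPT: 7.50, ETH: 0.0300
```

The split makes the diligence concrete: a Delegator comparing two Orchestrators is effectively comparing these two parameters against expected fee volume and reliability.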
Orchestrators
Function: Runs the hardware that performs transcoding and AI inference work.
Purpose: Registers on-chain, stakes LPT, and bids for work. The top-ranked subset by total stake forms the Active Set, which is eligible to receive jobs in a given round.
Economics: Earns rewards and fees for performing work, after the Orchestrator’s configured fee share and Reward Cut are applied.
Technical role: Operate the infrastructure. Run go-livepeer, manage transcoders and GPUs, maintain uptime, keep model containers current for AI work, advertise capabilities, and actually perform jobs. This is the only role in the protocol that requires running hardware.
Economic role: Price the work (pricePerUnit), set the split with Delegators (feeShare, rewardCut), bond self-stake as collateral, redeem winning tickets, and call reward each round. They’re a small business inside the network – capex on hardware, opex on power and bandwidth, revenue in ETH and LPT.
Community role: They’re the most visible public face of the network. Reputation matters because Delegators are choosing them – so Orchestrators run websites, publish stats, show up on Discord, write guides, help newer operators, and participate in governance discussions. The serious ones are effectively brands.
- Transcoders and AI Workers – often co-located on Orchestrator nodes – execute video transcoding and AI inference tasks in Docker containers or as external endpoints.
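To make `pricePerUnit` concrete, a rough sketch of how a transcoding job maps to Orchestrator revenue. Units here are pixels, as in transcoding pricing; the rate below is a hypothetical placeholder, not a quoted network price:

```python
# Sketch: revenue for one minute of 1080p30 transcoding at a hypothetical
# pricePerUnit, where the unit is a pixel of output video.

price_per_unit_wei = 1_200            # hypothetical pricePerUnit (wei/pixel)
pixels_per_frame = 1920 * 1080        # 1080p frame
frames = 30 * 60                      # one minute at 30 fps

job_units = pixels_per_frame * frames
revenue_wei = job_units * price_per_unit_wei

print(f"units: {job_units:,}")                       # units: 3,732,480,000
print(f"revenue: {revenue_wei / 1e9:,.1f} gwei")     # revenue: 4,479.0 gwei
```

The per-job amounts are tiny, which is exactly why settlement runs through probabilistic tickets rather than one on-chain transaction per job.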
Livepeer Capabilities
The network’s workload-agnostic design, enabled by crypto-economic primitives, allows it to evolve organically based on demand and supply dynamics without requiring protocol changes for each new capability.
Video & Media Streaming
Adaptive bitrate transcoding, live streaming, and VOD ingest at sub-cloud cost. The original Livepeer workload, now serving production traffic for projects such as Livepeer Studio and Streamplace.
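Adaptive bitrate transcoding produces a ladder of renditions and lets the player switch between them as measured bandwidth changes. A minimal sketch; the rendition values are typical industry choices, not a Livepeer-mandated profile set:

```python
# Illustrative adaptive-bitrate ladder and a simple player-side selection
# rule with 20% bandwidth headroom.

ladder = [  # ordered highest-quality first
    {"name": "1080p", "resolution": (1920, 1080), "bitrate_kbps": 6000},
    {"name": "720p",  "resolution": (1280, 720),  "bitrate_kbps": 3000},
    {"name": "480p",  "resolution": (854, 480),   "bitrate_kbps": 1200},
    {"name": "360p",  "resolution": (640, 360),   "bitrate_kbps": 600},
]

def pick_rendition(bandwidth_kbps: int) -> str:
    """Pick the highest rendition that fits within 80% of measured bandwidth."""
    for r in ladder:
        if r["bitrate_kbps"] <= bandwidth_kbps * 0.8:
            return r["name"]
    return ladder[-1]["name"]  # fall back to the lowest rendition

print(pick_rendition(4500))  # 720p
print(pick_rendition(500))   # 360p
```

Each source stream fans out into one transcode job per rendition, which is the unit of work Orchestrators are paid for.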
AI and Agents
Open and proprietary models running on Orchestrator GPUs: text-to-image, image-to-image, image-to-video, audio-to-text, text-to-speech, segment-anything-2, LLM, upscale, and frame interpolation pipelines.
Real-time AI Video
Live video-to-video (LV2V) pipelines that apply AI effects to a webcam or stream at sub-second latency. Powers Daydream, ComfyStream, pytrickle, and creator tooling such as Storyboard.
Generalised Real-time Compute (BYOC)
BYOC (Bring Your Own Container) lets builders ship a containerised workload that runs on Orchestrator GPUs and pays through the standard payment flow.
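To illustrate the idea, a hypothetical shape for a BYOC job description. The field names below are invented for this sketch and are not the actual Livepeer BYOC schema:

```python
# Hypothetical BYOC job description: a builder-supplied container plus the
# pricing terms under which Orchestrators run it. All field names are
# illustrative, not the real schema.

byoc_job = {
    "capability": "my-custom-pipeline",          # name Orchestrators advertise
    "container": {
        "image": "ghcr.io/example/worker:1.0",   # builder-supplied image
        "gpu": True,                             # needs an Orchestrator GPU
    },
    "payment": {
        "price_per_unit_wei": 5_000,             # settled via the standard
        "unit": "compute-second",                # probabilistic-ticket flow
    },
    "input": {"url": "https://example.com/input.bin"},
}

# The key property: the payment block is the same shape regardless of what
# the container does, so new workloads need no protocol changes.
print(byoc_job["capability"], byoc_job["payment"]["unit"])
```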
Livepeer Capability Frontier
The forward roadmap is a function of the same three properties that define current capability: supply growth, settlement scalability, and workload-runtime extensibility. Each direction below is grounded in active engineering or LIP discussion.
Provenance and metadata for AI outputs
Cryptographic attestation of which Orchestrator ran which model with which inputs, anchored to on-chain receipts. Combined with watermarking and signed-output flows, this makes Livepeer a candidate substrate for compliant AI generation in markets that require traceability (advertising, news, regulated industries). The building blocks already exist in the AI subnet’s job receipts; the remaining work is standardising the provenance schema and exposing it through the AI Gateway API.
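A minimal sketch of what a signed job receipt could look like, assuming a keyed hash (HMAC) stands in for the Orchestrator's real key signature; the actual AI subnet receipt schema and signature scheme may differ:

```python
import hashlib
import hmac
import json

# Sketch of attestation: hash the (model, input, output) triple into a
# receipt and sign it, so anyone holding the key material can verify which
# Orchestrator produced which output from which inputs.

def attest(orchestrator_key: bytes, model_id: str,
           input_hash: str, output_hash: str) -> dict:
    receipt = {
        "model_id": model_id,
        "input_hash": input_hash,
        "output_hash": output_hash,
    }
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(orchestrator_key, payload,
                                    hashlib.sha256).hexdigest()
    return receipt

def verify(orchestrator_key: bytes, receipt: dict) -> bool:
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(orchestrator_key, payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

key = b"orchestrator-secret"  # placeholder for a real signing key
r = attest(key, "text-to-image/sdxl", "a1b2", "c3d4")
print(verify(key, r))  # True: untampered receipt verifies
```

Any change to the output hash (or any other field) invalidates the signature, which is the property a standardised provenance schema would anchor on-chain.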
Data oracles and external integration
Inference workloads increasingly need live external data: market prices, sensor feeds, real-time event streams. The same Gateway pattern that routes a video segment can route a job that fuses on-network compute with off-network data, with the oracle relationship secured by the same stake and slashing model. This extends the network from a static-input compute provider into a real-time data-aware inference layer.
Cross-chain interoperability
LPT and ETH already span Ethereum mainnet and Arbitrum One, with the bridge handling deposits, withdrawals, and round-trip stake flows. Cross-chain extensions in scope include bridging payment surfaces to other rollups, exposing job settlement on alternative L2s, and enabling LPT delegation from holders on chains where they already hold positions. The protocol’s modular design keeps these extensions additive.
Agent and creative-pipeline infrastructure
Storyboard’s agent runtime, Daydream’s hosted creative API, and the NaaP plugin platform together point at a network surface that resembles a programmable creative studio. The combination of LV2V, BYOC, and a stable payment rail makes Livepeer a natural backend for autonomous creative agents, where model orchestration, asset persistence, and rights tracking happen across composed services on top of the same compute layer.
Generalised real-time compute
Real-time is the niche the network is uniquely positioned for. Sub-second-latency workloads tolerate decentralised supply far better than batch training does, and the probabilistic micropayment design was built for high-frequency, low-value-per-job traffic. Future workload classes that fit this profile include real-time translation and dubbing, multi-agent coordination loops, embodied-AI control surfaces, and gaming-grade interactive inference. The protocol layer does not need to change to absorb them.