

Why Pipelines Matter

The Network’s capability set is defined by the pipelines its orchestrators run. A pipeline is a workload an orchestrator can accept, with a defined input format, output format, and pricing unit. Pipelines are the unit of capability growth: when a new model or workload becomes available, it ships as a new pipeline that operators choose to load. The consequence is that the capability set is open and grows by participation. New AI models, new transcoding profiles, and new custom workloads all enter the Network the same way: an orchestrator declares the pipeline in its configuration, loads the model, and starts advertising the capability. No protocol upgrade is required.
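The paragraph above defines a pipeline by three properties: input format, output format, and pricing unit. A minimal sketch of that declaration shape, with field names invented for illustration (this is not the actual go-livepeer or ai-runner schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineDecl:
    """Illustrative only: the three properties the text says define a pipeline."""
    name: str           # e.g. "text-to-image"
    input_format: str   # what the gateway sends to the orchestrator
    output_format: str  # what the orchestrator returns
    pricing_unit: str   # unit the job is priced in, e.g. "pixel"

# A hypothetical declaration an orchestrator might advertise.
decl = PipelineDecl("text-to-image", "JSON prompt", "PNG image", "pixel")
```

Because the declaration is just data an orchestrator advertises, adding a capability is additive: no other participant's configuration changes.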

Three Workload Classes

Every pipeline falls into one of three classes according to how the work is shaped. The class determines what an orchestrator must run, how the gateway dispatches work, and how much latency the workload tolerates. Every pipeline follows a state machine with the same shape: ingest, dispatch, compute, return, settle. The differences are in cadence, transport, and compute path. Operator-side detail on how the state machine is implemented lives in the orchestrator and gateway tabs.

Pipeline Lifecycle

Every pipeline has the same lifecycle from job intake to settlement, regardless of class. Cadence and transport differ; the lifecycle does not.

Built-In Pipelines

The Network ships a set of built-in pipelines maintained in the ai-runner repository. Each runs as a Python container an orchestrator can load and serve. The set covers the most common AI workloads and evolves continuously: new pipelines ship through the ai-runner release cadence, and orchestrators choose which to load based on hardware, demand, and operator strategy.
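The hardware-driven choice described above can be sketched as a simple filter. The pipeline names and VRAM figures below are made up for the sketch; real requirements come from the ai-runner releases:

```python
# Hypothetical built-in pipelines and the GiB of VRAM each needs.
CANDIDATES = {
    "text-to-image": 16,
    "image-to-video": 40,
    "audio-to-text": 8,
}

def loadable(vram_gib: int) -> list[str]:
    """Return the pipelines an operator's GPU could serve, by VRAM alone."""
    return sorted(p for p, need in CANDIDATES.items() if need <= vram_gib)
```

Under these invented numbers, an operator with a 24 GiB GPU would load the two smaller pipelines and skip image-to-video; in practice demand and pricing strategy weigh in alongside hardware.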

Custom Pipelines (BYOC)

Pipelines outside the built-in set arrive through Bring-Your-Own-Container. A developer or operator packages a custom pipeline as a container, declares its interface, and either runs it themselves as an orchestrator or partners with operators to run it for them. BYOC pipelines use the same payment flow, the same dispatch shape, and the same settlement boundary as built-in pipelines. The difference is that the pipeline definition lives outside the ai-runner repository, controlled by the BYOC author. This is how new workload types reach the Network without coordination through Livepeer Inc.
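A BYOC pipeline ultimately exposes one job-handling entry point inside its container. The handler below is a hedged sketch of that shape only; the payload fields, framing, and pipeline name are invented, and the real interface is whatever the BYOC author declares:

```python
import json

def handle_job(raw_body: bytes) -> bytes:
    """Illustrative BYOC job handler: decode the job, run the custom
    workload, return a result in the declared output format."""
    job = json.loads(raw_body)
    # Stand-in for the custom model: uppercase the input text.
    result = {
        "pipeline": "my-custom-pipeline",  # hypothetical name
        "output": job["input"].upper(),
    }
    return json.dumps(result).encode()
```

Because payment, dispatch, and settlement are shared with built-in pipelines, the author's work reduces to the container and its declared interface.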

Last modified on May 4, 2026