Pick Your Path
Workload Provider
Create workloads that run on Livepeer orchestrators - build containers, deploy pipelines, and leverage the network’s GPU compute.
Workload Consumer
Consume existing pipeline workloads running on the Livepeer network - no infrastructure setup required.
Core Contributor
Contribute directly to go-livepeer, the Go implementation that powers the Livepeer network.
Path 1: Workload Provider
As a Workload Provider, you create workloads that run on Livepeer orchestrators. You build the containers and pipelines - orchestrators on the network provide the GPU compute to execute them. Whether it’s an AI inference pipeline, a video transcoding job, or something entirely custom, you define the workload and the network runs it. There are two approaches depending on how much control you need.

Option A: Traditional Route (Gateway + BYOC)
The standard path for getting your workloads running on orchestrators. You develop a BYOC (Bring Your Own Container) workload, run a gateway to route jobs, and orchestrators pick up and execute your containers on their GPUs.

Understand the BYOC model
BYOC lets you package your workload as a sidecar container that runs alongside the go-livepeer main container on orchestrator nodes. You define what the container does - the orchestrators provide the compute.
BYOC Documentation
Learn how BYOC containers work and how to build one.
Build your BYOC container
Develop and test your sidecar container locally. This is where your workload logic lives - inference models, processing pipelines, or any custom compute task.
BYOC Examples & Integrations
Reference implementations and example pipelines for building BYOC containers.
Run your own gateway
Set up a Livepeer gateway node. The gateway is how you submit jobs to orchestrators and receive results back.
Gateway Quickstart
Get your gateway node running.
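Once the gateway is up, submitting a job is an HTTP request from your application to your own gateway node. The sketch below shows the shape of such a client; the URL path, port, and payload format are hypothetical placeholders - consult the gateway and BYOC documentation for the real request contract.

```python
# Sketch of submitting a job through your own gateway. The route,
# port, and payload shape are hypothetical placeholders.
import json
import urllib.request


def build_job_request(gateway_url: str, capability: str,
                      payload: dict) -> urllib.request.Request:
    """Assemble (but do not send) a job submission request."""
    body = json.dumps(payload).encode()
    return urllib.request.Request(
        url=f"{gateway_url}/process/request/{capability}",  # hypothetical route
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def submit_job(req: urllib.request.Request) -> dict:
    """Send the request and decode the result returned by the
    orchestrator via the gateway."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example: point at a local gateway (placeholder address and capability).
req = build_job_request("http://localhost:8935", "my-capability",
                        {"input": "frame-0"})
```

Separating request construction from sending keeps the job format testable without a live gateway.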
Coordinate with orchestrators
Contact orchestrators directly to get your BYOC container running on their nodes. Once they’re running your container, you can route jobs to them through your gateway.
AI Pipelines Overview
Understand the full pipeline architecture.
Option B: Direct Smart Contract Interaction
If you want full control over orchestrator management, you can interact with Livepeer’s smart contracts directly using your own tooling. This lets you onboard orchestrators, control nodes remotely, manage payments, and build custom orchestration logic - all without going through the standard gateway flow. A good starting point is forking livepeer-ops, which provides infrastructure tooling for exactly this: onboarding orchestrators, remote node management, and payment handling through direct smart contract interaction.

livepeer-ops
Fork this to get started - includes orchestrator onboarding, remote node control, and smart contract payment tooling.
Embody Pipeline
A reference implementation that uses direct smart contract interaction to run a real-time avatar pipeline on Livepeer.
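At its lowest level, "direct smart contract interaction" means sending raw JSON-RPC calls to an Ethereum node. The sketch below builds an `eth_call` payload (a standard, read-only Ethereum RPC method); the contract address and call data are zeroed placeholders, not real Livepeer contract values - real call data starts with the first 4 bytes of the keccak256 hash of the function signature, which tooling like a livepeer-ops fork computes for you.

```python
# Sketch of a raw, read-only contract call via Ethereum JSON-RPC.
# Address and calldata below are zeroed placeholders.
import json


def eth_call_payload(contract: str, calldata: str,
                     request_id: int = 1) -> dict:
    """Build an eth_call JSON-RPC payload for a read-only
    contract call against the latest block."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_call",
        "params": [{"to": contract, "data": calldata}, "latest"],
    }


# Placeholder 20-byte address and 4-byte selector.
payload = eth_call_payload("0x" + "00" * 20, "0x" + "00" * 4)
wire = json.dumps(payload)  # POST this body to your node's RPC endpoint
```

State-changing operations (staking, payments) follow the same pattern but go through signed transactions rather than `eth_call`, which is where dedicated tooling earns its keep.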
Path 2: Workload Consumer
As a Workload Consumer, you use existing pipeline workloads that are already running on the Livepeer network. You don’t need to set up infrastructure or deploy containers - you connect to available pipelines and consume their output.

Available Pipelines
Daydream (DaS Scope)
Consume Daydream pipeline workloads on the Livepeer network.
Embody Pipeline
Use Embody workloads for real-time avatar and VTuber applications by giving your agent the SKILL.md file.
The SKILL.md contains the full instructions needed to consume and use Embody workloads.