Livepeer offers multiple paths for developers depending on how you want to engage with the network. Whether you’re bringing compute workloads, consuming existing AI pipelines, or contributing to the core Go implementation, there’s a clear path for you.
Looking to run an orchestrator? Head to the Orchestrator section for setup guides and options.

Pick Your Path



Path 1: Workload Provider

As a Workload Provider, you create workloads that run on Livepeer orchestrators. You build the containers and pipelines - orchestrators on the network provide the GPU compute to execute them. Whether it’s an AI inference pipeline, a video transcoding job, or something entirely custom, you define the workload and the network runs it. There are two approaches depending on how much control you need.

Option A: Traditional Route (Gateway + BYOC)

The standard path for getting your workloads running on orchestrators. You develop a BYOC (Bring Your Own Container) workload, run a gateway to route jobs, and orchestrators pick up and execute your containers on their GPUs.

Understand the BYOC model

BYOC lets you package your workload as a sidecar container that runs alongside the go-livepeer main container on orchestrator nodes. You define what the container does - the orchestrators provide the compute.
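To make the sidecar model concrete, here is a minimal sketch of what a BYOC-style worker could look like: an HTTP service inside your container that accepts a job payload and returns a result. The endpoint shape, port, and payload fields here are assumptions for illustration, not the official BYOC contract - consult the BYOC documentation for the real interface.

```python
# Minimal sketch of a BYOC-style sidecar worker (illustrative only).
# The payload shape and port are assumptions, not the official BYOC
# contract - see the BYOC documentation for the real spec.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def process_job(payload: dict) -> dict:
    """Hypothetical workload logic: stand-in for real inference or processing."""
    text = payload.get("input", "")
    return {"status": "ok", "result": text.upper()}


class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the job payload the orchestrator forwards to this container.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(process_job(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve jobs inside the container (port 9876 is an arbitrary choice):
# HTTPServer(("0.0.0.0", 9876), WorkerHandler).serve_forever()
```

The key idea is the split of responsibilities: everything inside `process_job` is yours; the transport to and from the orchestrator is handled by the network.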

BYOC Documentation

Learn how BYOC containers work and how to build one.

Build your BYOC container

Develop and test your sidecar container locally. This is where your workload logic lives - inference models, processing pipelines, or any custom compute task.

BYOC Examples & Integrations

Reference implementations and example pipelines for building BYOC containers.

Run your own gateway

Set up a Livepeer gateway node. The gateway is how you submit jobs to orchestrators and receive results back.

Gateway Quickstart

Get your gateway node running.

Coordinate with orchestrators

Contact orchestrators directly to get your BYOC container running on their nodes. Once they’re running your container, you can route jobs to them through your gateway.
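Once orchestrators are running your container, submitting a job is an HTTP call to your own gateway. The sketch below shows the general shape; the URL path and payload fields are assumptions (port 8935 is go-livepeer's default HTTP port, but verify the request interface against your gateway's configuration).

```python
# Sketch of submitting a BYOC job through your own gateway (illustrative).
# The path and payload shape are assumptions - match them to your
# gateway's actual BYOC request interface.
import json
import urllib.request


def build_job_request(gateway_url: str, capability: str, payload: dict) -> urllib.request.Request:
    """Build an HTTP request asking the gateway to run `capability` on an orchestrator."""
    return urllib.request.Request(
        url=f"{gateway_url}/process/request/{capability}",  # hypothetical path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_job_request("http://localhost:8935", "my-capability", {"input": "hello"})
# urllib.request.urlopen(req)  # uncomment once your gateway and orchestrators are live
```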

AI Pipelines Overview

Understand the full pipeline architecture.

Option B: Direct Smart Contract Interaction

If you want full control over orchestrator management, you can interact with Livepeer’s smart contracts directly using your own tooling. This lets you onboard orchestrators, control nodes remotely, manage payments, and build custom orchestration logic - all without going through the standard gateway flow. A good starting point is forking livepeer-ops, which provides infrastructure tooling for exactly these tasks through direct smart contract interaction.
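At its simplest, direct contract interaction means issuing JSON-RPC calls against the chain yourself. The sketch below assembles a read-only `eth_call` with no framework at all; the contract address and function selector are placeholders (derive the real selector from the contract ABI - the first four bytes of the keccak256 hash of the function signature). Livepeer's contracts live on Arbitrum One.

```python
# Sketch of a raw JSON-RPC eth_call against a Livepeer contract (illustrative).
# CONTRACT_ADDR and SELECTOR are placeholders, not real values - compute the
# selector from the actual contract ABI.
import json
import urllib.request

CONTRACT_ADDR = "0x0000000000000000000000000000000000000000"  # placeholder
SELECTOR = "0x12345678"  # placeholder 4-byte function selector


def build_eth_call(rpc_id: int = 1) -> dict:
    """Assemble an eth_call request body for a read-only contract query."""
    return {
        "jsonrpc": "2.0",
        "id": rpc_id,
        "method": "eth_call",
        "params": [{"to": CONTRACT_ADDR, "data": SELECTOR}, "latest"],
    }


body = json.dumps(build_eth_call()).encode()
req = urllib.request.Request(
    "https://arb1.arbitrum.io/rpc",  # public Arbitrum One RPC endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually issue the call
```

Tooling like livepeer-ops wraps this same primitive in higher-level onboarding and payment workflows; state-changing calls additionally require signing transactions with a funded key.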
You’re not limited to these two options. The smart contract interface is open - you can fork livepeer-ops as a foundation, extend the Embody pipeline, or build your own tooling from scratch. Use whatever fits your architecture.

Path 2: Workload Consumer

As a Workload Consumer, you use existing pipeline workloads that are already running on the Livepeer network. You don’t need to set up infrastructure or deploy containers - you connect to available pipelines and consume their output.
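Consuming a pipeline is a single HTTP request to a gateway that serves it. The sketch below uses a text-to-image pipeline as an example; treat the path, field names, gateway URL, and model ID as assumptions and check the pipeline's documentation for the current request schema.

```python
# Sketch of consuming an existing AI pipeline through a gateway (illustrative).
# The /text-to-image path, field names, gateway URL, and model_id are
# assumptions - verify against the pipeline's documented request schema.
import json
import urllib.request


def text_to_image_request(gateway_url: str, prompt: str, model_id: str) -> urllib.request.Request:
    """Build a pipeline request; the gateway routes it to an orchestrator running the model."""
    payload = {"prompt": prompt, "model_id": model_id}
    return urllib.request.Request(
        f"{gateway_url}/text-to-image",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = text_to_image_request("https://my-gateway.example", "a sunrise over mountains", "placeholder/model")
# urllib.request.urlopen(req)  # uncomment to send the request to a live gateway
```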

Available Pipelines


Path 3: Core Contributor

As a Core Contributor, you work directly on go-livepeer - the Go implementation that powers gateways, orchestrators, and the protocol itself. This path is for developers who want to improve the network at the infrastructure level.
Last modified on March 3, 2026