Proposed By: Rick Staa
Proposed On: 2026-05-11
Owner: Rick (Technical Director, Livepeer Foundation)
Funding Mechanism: RFP, Network Engineering SPE
The BYOC work delivered under the AI SPE made it possible for the community to run custom containers on the Livepeer network for the first time. The gap that remains is the one above the protocol: packaging a new AI pipeline into a BYOC-compatible container is still core-developer work. A builder must either rebuild the orchestrator-side plumbing (trickle channels, healthcheck state machine, capability registration, etc.) from scratch, or merge pipeline-specific code into go-livepeer / ai-runner and wait for a core review. There is no abstraction layer in between, which blocks the quick demand experiments the community wants to run, exactly the experiments needed to gather real-world data and inform improvements to the core network stack.
The Foundation's mission is to help the community build out Livepeer's vision: an open network the community itself extends. Any developer or community member should be able to ship demand-side experiments without being bottlenecked by core engineering reviews or having to write custom plumbing for every new pipeline. The end state we're aiming for:
A developer, builder, or provider with no prior Livepeer experience makes their first Video AI API call in under 5 minutes from Claude, Cursor, or any MCP-compatible tool. Any orchestrator deploys new pipelines through a self-serve, well-documented BYOC interface. No Foundation in the loop at any step.
Getting there requires several enablers: strong developer documentation, easy-to-use SDKs, payment abstractions, live network data, a performant runtime, and strong agent compatibility. Livepeer Cloud has improved the data landscape under their last proposal. Under the Transformation SPE, the community has already shipped a number of these on the demand side: the remote signer separated payments from the gateway, and the Python gateway SDK removed the need for gateway-specific code in go-livepeer. A builder can now send paid jobs to orchestrators without running the go-livepeer gateway — Scope is already using this path for onboarding their workflow. Further work on the gateway side of the SDK will be proposed under the new Developer SPE as its own opportunity, and payment abstractions will be addressed by John in a separate SPE proposal.
Creating a new pipeline, however, still requires deep knowledge of the core stack and a fair bit of engineering work to get the plumbing right. The Pipeline SDK is the abstraction that closes that gap: an opinionated, class-based Python interface — similar to how Cog and comparable ecosystems handle this — that lets a builder produce a BYOC-compatible container an orchestrator can pull and run, with the communication schemas set by the core software and no plumbing to rebuild or core review to wait on. It is an opinionated path designed to speed up the developer journey on top of an unopinionated core — builders with more complex requirements can still target the BYOC communication schemas directly.
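To make the intended ergonomics concrete, the sketch below shows the general shape of such a class-based contract: a builder subclasses a base pipeline, fills in lifecycle hooks, and the SDK handles the BYOC plumbing. The class and method names are illustrative assumptions, not the prototype's actual interface; locking that interface is one of the open questions below.

```python
# Illustrative sketch only: the real v1 interface is still an open design
# question. This shows the general shape of a Cog-style, class-based contract
# using only the standard library; names and signatures are assumptions.
from abc import ABC, abstractmethod
from typing import Any, Dict


class BasePipeline(ABC):
    """Hypothetical base class a builder would subclass to define a pipeline."""

    def setup(self) -> None:
        """Load models and allocate resources once, at container start."""

    @abstractmethod
    def predict(self, request: Dict[str, Any]) -> Dict[str, Any]:
        """Handle a single job; the SDK would wire this to the BYOC channels."""

    def teardown(self) -> None:
        """Release resources when the orchestrator stops the container."""


class SentimentPipeline(BasePipeline):
    """Toy pipeline in the spirit of the 'sentiment' reference example."""

    def setup(self) -> None:
        # A real pipeline would load a model here; this toy keeps a word list.
        self.negative = {"bad", "broken", "slow"}

    def predict(self, request: Dict[str, Any]) -> Dict[str, Any]:
        words = set(str(request.get("text", "")).lower().split())
        return {"label": "negative" if words & self.negative else "positive"}
```

Under a shape like this, packaging only needs the subclass; everything orchestrator-facing stays inside the SDK and the builder never touches the plumbing listed above.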
A small tech spec on this approach lives here, and an initial prototype has been built to show that it works. Before the SDK is ready to put in front of the wider developer community for real-world testing, gaps remain around documentation, stability, ease of use, and a CLI surface for benchmarking and publishing. Cost of waiting: without this SDK running in parallel with the runtime and core improvements, the demand experiments that would inform that work never happen — leaving the next wave of network changes to be designed without the real-world data they need.
Out of scope for this opportunity:
More complex pipelines beyond the opinionated SDK pattern — multi-endpoint routing, non-standard schemas, or anything requiring runtime-side support. Some can be built today using the SDK's lower-level communication packages inside a custom BYOC container; others depend on the orchestrator runtime improvements below.
Client-side SDK and payment abstractions — both tracked under separate SPE proposals (see above).
Orchestrator runtime improvements — auto-loading containers, hot-swapping between pipelines, capability-based routing. Tracked separately. This SDK uses the runtime's communication schemas to provide a higher-level abstraction; the runtime evolves independently.
Live network data improvements — part of a separate opportunity.
Broader developer-experience and agent enablement work beyond what is directly tied to the SDK and its containers — wider documentation overhauls, agent integrations outside the SDK surface, and tooling not part of the pipeline lifecycle.
Success criteria for this opportunity:
A community builder can go from pip install to a Livepeer-compatible container that an orchestrator can pull and run in under five minutes, single take, fresh environment.
A community-agreed, stable v1 of the developer interface is published and locked — the MVP every other deliverable builds against. Once locked, community builders can adopt the SDK without fearing breaking changes, and future runtime improvements stay backward-compatible with it.
A new pipeline can be packaged without hand-written Dockerfiles or registration boilerplate, and the resulting container attaches to orchestrators automatically when pulled — no manual registration step.
The SDK ships with a livepeer CLI that lets a builder (a) benchmark a pipeline container's performance locally against a synthetic workload before publishing, and (b) push the built image to Docker Hub in the format orchestrators can discover and pull — no hand-rolled Dockerfiles, no manual tagging conventions. A sketch of what the benchmark step could measure follows this list.
The SDK is agent-native: a published AGENTS.md + LLMs.txt (or equivalent agent-readable convention files) describe the SDK to coding agents, and an optional local SDK server exposes the SDK to MCP-compatible tools so a developer can build, test, and publish a new pipeline using AI assistance in under five minutes.
A dedicated example-pipelines repository carries a curated suite of reference pipelines (covering the current hello_world, live_tint, live_detect, live_transcribe, image_upscale, llm, sentiment cases plus broader workloads added over time). Every example builds and runs against a real orchestrator using only the SDK — no go-livepeer fork, no ai-runner change — and serves as the canonical "copy this to start" surface for new builders.
Scope and at least one additional external builder confirm the end-to-end path from their own workstations.
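As a rough illustration of the benchmarking deliverable above, the sketch below shows the kind of measurement the CLI's local benchmark step could report for a pipeline's predict() hook against a synthetic workload. The function name, metrics, and run count are assumptions; the actual CLI surface is a deliverable of this opportunity, not an existing command.

```python
# Hedged sketch of a local benchmark pass over a pipeline's predict() hook,
# assuming the class-based interface sketched earlier. Function name, metrics,
# and run count are illustrative assumptions, not the CLI's real behaviour.
import statistics
import time
from typing import Any, Callable, Dict


def benchmark_predict(predict: Callable[[Dict[str, Any]], Dict[str, Any]],
                      synthetic_request: Dict[str, Any],
                      runs: int = 50) -> Dict[str, float]:
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        predict(synthetic_request)
        latencies.append(time.perf_counter() - start)
    # Report median and tail latency in milliseconds.
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": statistics.quantiles(latencies, n=20)[18] * 1000,
    }
```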
Open questions for this opportunity:
What is the exact shape of the developer interface? What should the base classes and lifecycle hooks expose — method names, return contracts, parameter-update patterns, error handling, init/teardown surface? The current prototype is one shape; locking v1 means agreeing on the ergonomics, especially across builders with different pipeline patterns (streaming vs batch vs multi-modal).
Should capability registration be auto-wrapped in the container? Two design options: (a) the wrapped container self-registers on start using env variables the orchestrator sets at runtime, or (b) registration stays an explicit orchestrator-side step. Affects how seamless the "attaches automatically when pulled" claim is in practice.
Is heartbeat the right uptime signal? Today the SDK uses heartbeat-based reporting to let orchestrators know the container is alive and serving (a minimal sketch of this pattern appears after this list of questions). Open whether that is the best mechanism — or whether pull-based health probes, event-driven status, or a hybrid would be more reliable and lower-overhead for orchestrators running many pipelines.
How do we coordinate schema evolution with the core software? The SDK's communication schemas are set by go-livepeer / ai-runner, which will keep evolving under the orchestrator runtime opportunity. Need a coordination protocol between this opportunity and that one so v1 doesn't fracture when the runtime changes.
Should v1 be designed with multi-language SDKs in mind? Python-only at v1. Whether the v1 contract should be designed for TypeScript / Go SDKs from the start, or stay Python-first and abstract later, is a design choice that affects how much portability the contract bakes in.
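For context on the heartbeat question above, a minimal sketch of the current pattern follows, assuming the orchestrator hands the container a status endpoint via an environment variable. The variable name, URL, payload shape, and interval are assumptions for illustration; weighing this mechanism against pull-based probes is exactly what the open question asks.

```python
# Minimal sketch of heartbeat-style liveness reporting, assuming the
# orchestrator sets a status endpoint via an environment variable. The
# variable name, default URL, payload shape, and interval are assumptions.
import json
import os
import time
import urllib.request


def heartbeat_loop(interval_s: float = 5.0) -> None:
    url = os.environ.get("ORCHESTRATOR_STATUS_URL", "http://localhost:8935/status")
    while True:
        payload = json.dumps({"status": "serving", "ts": time.time()}).encode()
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        try:
            urllib.request.urlopen(req, timeout=2)
        except OSError:
            # Orchestrator unreachable; keep retrying rather than crash the pipeline.
            pass
        time.sleep(interval_s)
```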