Would you like to create a Suggestion for the roadmap?

When creating a Suggestion for the roadmap, please give it a clear, specific title — avoid vague labels like 'improve governance'. Then make sure you answer the following questions:

(1) What is the problem?

(2) Why does this matter to the ecosystem?

(3) What could success look like?

The Suggestion will then be added to the pipeline, which you can learn more about here.

Explorer: permissionless participation portal

The problem

The Explorer now sits on a stable, maintainable foundation following recent RFP work. However, it remains visually and functionally disconnected from the Foundation’s surfaces and limited in what it surfaces to operators and delegators. It was built for an earlier era of the network and has not yet been fully realigned with current needs. Delegators making stake decisions today have less information than they should. Operators don't have the observability they need. The surface does not reflect the current state of the network or the direction the Foundation is moving.

For the largest cohort of existing Livepeer participants — the people who stake, delegate, and operate — the Explorer is the Foundation's front door. Leaving it in its current state is a visibility liability that undercuts the trust the rest of the quarter's work is trying to build. Operators with better observability can demonstrate performance; delegators with better data can reward it. Both sides of that loop are currently underserved.

The direction

Restore the Explorer as the permissionless participation portal: the canonical Foundation-owned surface for operators and delegators, aligned with the Foundation's design system and current network reality.

Q2 scope:
• Tailwind restyle aligned with the Foundation's design system and the Developer Portal surface.
• Improved default network stats — the data delegators and operators actually need to make decisions, drawing on the subgraph upgrades and metrics framework developed through the NAAP metrics/SLA process.
• An AI-native interface for stakeholders to query network data, surface custom dashboards, and get answers grounded in live on-chain state — not generic context (#584).
• LIP voting transparency — surfacing proposal status, voting history, and participation rates directly in the Explorer (#482).
• A native notification system for delegators and operators — reward events, proposal activity, stake changes (#319).
This list is a starting point — community members are invited to propose additions during the scoping window. Work begins once design-system consistency between the Developer Portal and the Explorer is specified: the two surfaces share design tokens, but they do not share information architecture.

Why now

The Explorer is the most visible artifact of the network for the audience with the largest standing investment in it. Every week the Explorer remains in its current state is a week the Foundation signals that existing participants are not the priority. That is not the signal the quarter's strategy warrants. Funding this through the Network Engineering SPE provides a path to a retainer-based team that owns the repository and delivers against a clear participation-portal vision — rather than treating the Explorer as spot work between other priorities.

What the community helps scope

The problem statement is settled. The direction is settled. The scoping work ahead is:
• Which features matter most to delegators vs. operators. The current state assumes one surface serves both audiences equally; that assumption deserves testing.
• How AI chat is grounded in real network data: what data sources, what refresh cadence, what accuracy bar.
• Which observability surfaces serve decision-making vs. which are vanity. Specifically: what does a delegator need that they don't have, and what would an operator check daily?
• How LIP voting transparency is designed — what proposal data is surfaced, what participation metrics matter, and how the interface lowers friction without reducing quality (#482).
• What retainer structure best fits the work — a single team, a rotating set of contributors, or a hybrid model.

Funding path

Network Engineering SPE, Priority 2 — Explorer — Participation & Observability. The SPE supports retainer-based contributor work where scope warrants it, in addition to the RFP-and-retroactive structure.

Rick Staa 10 days ago

Developer Portal: capability discovery and activation

The problem

Builders evaluating Livepeer today don't have a clear path from discovery to first inference call. Capabilities across gateways are not surfaced in a consistent, product-oriented way. There is no interface that takes a builder from "what can this network do" to "I've made my first call" in a measurable, reproducible window.

The result: high-intent builders bounce before they activate. The network doesn't get the real-time feedback it needs to prioritize what to build next. Demand-side signal — the thing the Foundation's strategy depends on — stays thin.

This is not a documentation problem. It's a surface problem. The documentation, the SDK, the gateway infrastructure, and the payment primitives all exist. The activation layer that brings them into a single legible path for a developer does not.

The direction

A Developer Portal that serves as the demand-side interface for the Livepeer network: the Foundation-owned surface where a builder moves from discovery to first use. Scope includes documentation, a Python SDK, BYOC container tooling, payment and auth infrastructure, and the scaffolding required to deliver a measurable claim: docs to first inference call in under five minutes, from any MCP-compatible tool.

The five-minute API is the critical-path metric, not a slogan. It gets measured weekly on a fresh environment and published on Tuesdays. If it slips past 10 minutes, that triggers a scope review; past 15 minutes, a timeline review.

Why now

The engineering foundations are in place. The Payment Clearinghouse, the Python SDK, the gateway replacement — six months of work that the community has seen shipped, but whose demand-side implications have not been surfaced cleanly. The Developer Portal is the surface that makes that work legible.

Related work is also shipping from the broader ecosystem. The NaaP Epic 2 MVP delivers the dashboard, API manager, capacity planner, and SDK that any demand-side surface requires. NaaP and the Developer Portal point in the same direction.
How the two integrate — whether NaaP becomes the Developer Portal's backend, whether specific components are adopted, or how the surfaces relate — is part of the scoping work ahead. Funding this through the Network Engineering SPE lets it move at the pace the opportunity requires, with public accountability on deliverables and impact.

What the community helps scope

The problem statement is settled. The solution direction is settled. The scoping work ahead is:
• The specific RFPs and their sequencing: what ships first, what depends on what.
• How the Python SDK, BYOC tooling, payment infrastructure, and agentic harness components sequence against each other.
• How the Developer Portal integrates with existing demand-side infrastructure, including NaaP.
• Where orchestrator needs intersect with developer needs — and where the Developer Portal surface needs to expose both.
• What "five minutes, on a fresh environment, from any MCP-compatible tool" means in concrete test terms.

Funding path

Network Engineering SPE, Priority 1 — Developer Portal — The 5-Minute API. The SPE funds scoped RFPs in the $2k–$20k range, with a Review Team drawn from the orchestrator community, Technical Director sign-off on delivery, and all decisions published with written rationale. Pre-proposal: Network Engineering SPE — Pre-Proposal.
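The review thresholds above are concrete enough to encode. As a minimal sketch (the band names and the idea of an automated check are my assumptions, not a published Livepeer test spec), the measured docs-to-first-call time could be classified against the stated 5/10/15-minute thresholds like this:

```python
# Hypothetical sketch of the weekly "five-minute API" check.
# The band names below are illustrative assumptions; only the
# 5/10/15-minute thresholds come from the proposal text.

FIVE_MIN = 5 * 60        # target: docs to first inference call
SCOPE_REVIEW = 10 * 60   # slipping past this triggers a scope review
TIMELINE_REVIEW = 15 * 60  # slipping past this triggers a timeline review


def classify(elapsed_seconds: float) -> str:
    """Map a measured docs-to-first-call time onto the review bands."""
    if elapsed_seconds <= FIVE_MIN:
        return "pass"
    if elapsed_seconds <= SCOPE_REVIEW:
        return "warn: approaching scope review"
    if elapsed_seconds <= TIMELINE_REVIEW:
        return "scope review"
    return "timeline review"
```

Running such a classifier weekly against a fresh-environment timing would make the Tuesday publication mechanical rather than judgment-based.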

Rick Staa 10 days ago

Developer Community Activation Program (Emerging Markets Focus)

Suggestion: I’d like to propose a structured Developer Community Activation Program aimed at driving adoption of tools like Daydream across emerging markets, starting with Nigeria and expanding across Africa. There is a rapidly growing base of developers and creators in these regions who are actively exploring video, AI, and onchain tools, but there is currently low awareness and limited onboarding support for platforms like Daydream.

Proposed Approach:
• Launch grassroots initiatives led by local developer advocates
• Host webinars, bootcamps, and live demos focused on:
  – Video creation workflows using Daydream
  – API integrations into real-world applications
• Develop localized tutorials and starter projects
• Encourage community-led feedback loops to inform product direction

Why This Matters:
• Unlocks a high-growth, under-tapped developer market
• Drives real usage of Daydream APIs beyond passive awareness
• Provides direct feedback from new user segments
• Strengthens Livepeer’s ecosystem positioning globally

Execution Model (Lean):
• Start as a pilot in 1–2 regions (e.g., Nigeria)
• Community-led, low-cost experimentation
• Measure traction via:
  – Developer onboarding
  – API usage interest
  – Event participation
• Scale based on validated engagement

Additional Context: I’m currently engaging with developers in Nigeria and am willing to help initiate and document early traction from this region as part of a pilot.

Outcome: If successful, this can evolve into a repeatable model for global community-driven growth and developer adoption.

Gideon Jones 16 days ago

Payment Clearinghouse

Who Are You?

Name: John Mull
Your connection to this problem (Why did you spot it? Are you affected by it? Do you work in the area it touches?): Core NaaP engineer actively building with the SDK. Over the past three months, John and Josh have been the only developers using it — this is a direct blocker to wider adoption.

What Is The Problem?

(One specific sentence. Not a theme — the actual friction or gap.)
There is no general-purpose payment, usage metering, or authentication layer that multiple independent apps can rely on, which means non-core developers cannot build or monetize products on the Livepeer Network today.

Why Does It Matter To The Ecosystem?

(2–3 sentences. Who is affected and how? What is the cost of leaving it unsolved? Is there a window or urgency?)
Without a payment clearinghouse or remote signer, every developer is forced to use the existing go-livepeer gateway and long-lived API keys — a model that is incompatible with desktop apps, agentic tools (VS Code, Claude Code, BlueClaw), and OAuth 2.0 + OIDC device flows. This blocks the entire community from building with the SDK and makes it impossible to ship strategic initiatives like x402 payment support and MCP server tooling. The window is urgent: the NaaP roadmap and agent ecosystem integrations are waiting on this foundation.

Who Else Feels This?

(Name at least one other person, persona, or group who experiences this problem.)
All third-party app developers trying to build on Livepeer. Specifically: teams building desktop apps (e.g. Scope), agentic framework integrators (VS Code, Cursor, BlueClaw), and any developer who needs a billing or usage API for their product.

What Have You Already Tried or Seen?

(Prior attempts, related Forum threads, GitHub issues, Discord conversations, or past Advisory Board recommendations.)
Josh Allmann scoped a payments clearinghouse design document and roadmap as part of the Transformation SPE workstream (remote signer prototype merged into go-livepeer via PR#3822 and PR#3791). A TurnKey USDC pre-auth integration has been prototyped as a potential third-party auth option. DayDream’s current auth model (long-lived API keys, single-domain redirect) has been identified as insufficient for multi-app or desktop use cases.

What Does A Good Outcome Look Like?

(Concrete and observable. "X goes from Y to Z" or "teams can now do X without Y.")
• At least 2 demand partners onboarded (1 web app + 1 desktop app) using the clearinghouse and SDK to access the Livepeer Network.
• A working OAuth 2.0 + OIDC login flow demonstrated in a desktop or third-party integration (e.g. VS Code, BlueClaw, or Cursor).
• A billing and usage API that lets apps show users their consumption, and lets developers view usage in the clearinghouse dashboard.
• HTTP 402-driven automatic top-up flows working on the remote signer proxy for at least one integration.

What You Don't Know Yet

(2–4 genuine unknowns you’d want the group to help answer.)
• Where exactly is user prepay and ticket valuation measured — in the proxy, or in the value of the actual claimed ticket? This is critical for correct micropayment accounting on a shared remote signer.
• What is the right account model for metering and billing (e.g. per-user wallet addresses on the remote signer side)?
• Which third-party auth and signing providers (Turnkey, Privy, others) are viable, and what is the minimal interface needed if developers bring their own?
• How are payments applied to accounts in a developer-facing Account Management API (roles, permissions, app linkage)?
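The HTTP 402-driven top-up flow named in the outcomes is straightforward to sketch from the client's side. This is a hedged illustration only: `request_fn`, `topup_fn`, and the response shapes are my assumptions, not the actual clearinghouse or remote signer proxy API.

```python
# Illustrative sketch of an HTTP 402 "Payment Required" auto-top-up loop.
# The callables are stand-ins: request_fn would hit the remote signer
# proxy, topup_fn would fund the account (e.g. a USDC pre-auth capture).

def call_with_auto_topup(request_fn, topup_fn, max_topups: int = 1):
    """Issue a request; on a 402 response, fund the account and retry.

    request_fn() -> (status_code, body)
    topup_fn()   -> None

    max_topups bounds retries so a failing payment path cannot loop
    forever.
    """
    topups = 0
    while True:
        status, body = request_fn()
        if status != 402:
            return status, body
        if topups >= max_topups:
            # Payment path is failing; surface the 402 to the caller.
            return status, body
        topup_fn()
        topups += 1
```

The interesting design question this exposes is the one the post already raises: whether the 402 decision (and thus prepay accounting) lives in the proxy or in the value of the claimed ticket.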

John | Elite Encoder about 1 month ago

ComfyStream Multi-Pipeline Experiment + ComfyMeme Demo

1. What is the problem?

Today most AI pipelines on the network run using a one-container-per-pipeline setup. In practice this means:
• Each pipeline type runs in its own container environment
• Orchestrators must maintain multiple containers to support multiple pipelines
• Most orchestrators end up specialising in one pipeline type
• GPUs cannot easily pivot between different workloads without redeployment

This creates two ecosystem issues:
• Limited flexibility: even if GPUs have available capacity, orchestrators cannot easily switch between different pipeline types.
• Operational complexity: running multiple containers increases configuration overhead, environment drift, and maintenance burden.

As a result, orchestrators often choose a single pipeline to support rather than experimenting with multiple AI services. The constraint appears to be software orchestration, not GPU capability.

2. Why does this matter to the ecosystem?

This primarily affects the supply side of the network. The ecosystem currently has roughly 100 AI-capable orchestrators, meaning supply growth is constrained. When supply is capped, the key growth lever becomes revenue per GPU.

Multi-pipeline capability could improve:
• revenue per GPU
• revenue per orchestrator
• supply flexibility across pipeline types
• time-to-serve for new AI workloads

Without this flexibility, the network risks developing specialised supply that cannot adapt quickly to demand changes.

3. Proposed experiment

This proposal tests whether ComfyStream can enable multi-pipeline orchestrators by dynamically loading workflows on a single GPU. Instead of running multiple containers, an orchestrator would run one ComfyStream runtime capable of loading different workflows on demand. The experiment aims to determine whether this is technically viable and operationally useful.

4. Demonstration application: ComfyMeme

To test this capability in practice, the experiment includes building a small demonstration application called ComfyMeme. ComfyMeme generates AI-remixed animated memes using short GIF / WebP clips from the Giphy API.

Example pipeline: Giphy meme clip → frame extraction → Stable Diffusion + LoRA stylisation → animated meme output

Memes are intentionally chosen because they are:
• easy to understand
• fast to generate
• culturally shareable
• capable of producing organic traffic

The demo therefore acts as both:
• a public application
• a stress test for multi-pipeline orchestration

5. Relationship to ComfyStream Cloud

The longer-term concept discussed in the AI SPE roadmap is ComfyStream Cloud — a platform where creators could deploy Comfy workflows as hosted AI applications.

Instead of this model: creator → workflow JSON → user runs locally

Workflows could become hosted services: creator → workflow → hosted endpoint → users

ComfyMeme acts as a first proof-of-concept of this idea. If hosted workflows prove viable, future work could explore creator deployment tools and monetisation mechanisms.

6. Scope limitations

This experiment intentionally avoids solving several large problems:
• automatic model distribution
• arbitrary workflow compatibility
• creator monetisation infrastructure

Instead, it assumes a curated model set preloaded on orchestrator nodes. The goal is simply to validate dynamic workflow execution on the network.

7. Success criteria

Success should demonstrate both technical viability and economic potential.

Technical signals:
• one ComfyStream runtime successfully serving multiple workflows
• acceptable workflow-switching latency
• stable execution across multiple requests

Adoption signal:
• at least three orchestrators running ComfyStream

Economic signal:
• at least one orchestrator earning revenue from two distinct pipeline types on a single GPU

8. Deliverables

The experiment would produce:
• the ComfyMeme demonstration application
• ComfyStream configuration enabling multi-pipeline loading
• documentation for orchestrator setup
• a public demo endpoint
• written findings on performance, latency, and operational challenges

9. Expected duration

Estimated timeline: 4–6 weeks. This includes:
• building the demo application
• orchestrator deployment testing
• community demonstration
• documentation of results

10. Summary

This proposal tests whether dynamic workflow loading via ComfyStream can enable orchestrators to serve multiple AI pipelines from a single GPU. The experiment combines:
• infrastructure validation
• a public demonstration application
• measurable economic outcomes

The key outcome is simple: prove that a single orchestrator GPU can earn revenue from multiple pipelines. If successful, this would strengthen orchestrator incentives and lay the groundwork for future hosted workflow platforms such as ComfyStream Cloud.
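The "one runtime, many workflows" idea in section 3 can be sketched in a few lines. This is a hypothetical illustration, not ComfyStream's actual loading API: the class, the `loader` callable, and the use of an LRU bound as a stand-in for GPU memory are all my assumptions.

```python
from collections import OrderedDict


class WorkflowRuntime:
    """Single-process sketch: load workflow pipelines on demand and keep
    recently used ones warm, evicting least-recently-used ones when the
    resident limit (a proxy for GPU memory) is reached."""

    def __init__(self, loader, max_loaded: int = 2):
        # loader(workflow_id) -> a callable pipeline for that workflow
        self._loader = loader
        self._max_loaded = max_loaded
        self._loaded = OrderedDict()  # workflow_id -> pipeline, LRU order

    def run(self, workflow_id: str, frame):
        pipeline = self._loaded.get(workflow_id)
        if pipeline is None:
            if len(self._loaded) >= self._max_loaded:
                # Evict the least recently used workflow to make room.
                self._loaded.popitem(last=False)
            pipeline = self._loader(workflow_id)
        else:
            # Cache hit: refresh this workflow's LRU position.
            del self._loaded[workflow_id]
        self._loaded[workflow_id] = pipeline
        return pipeline(frame)
```

The experiment's "acceptable workflow switching latency" criterion maps directly onto the cost of the miss path here: how long `loader` takes when a requested workflow is not already resident.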

Peter Schroedl about 1 month ago

Funding Community Engineering Initiatives

This roadmap item was discussed during the latest Water Cooler. The overlap with Onchain Treasury Allocation Improvements should be recognised; this item potentially works alongside it as a short-term solution.

1. What is the problem to solve?

The current SPE model creates significant friction for smaller, community-driven engineering contributions:
• Writing a full proposal requires a major time investment before any work begins
• The onchain vote cycle adds weeks or months of delay for work that could start quickly
• For scoped initiatives in the $2k–$20k range, the overhead-to-value ratio is simply too poor — community contributors won't run a full SPE process at that scale
• This friction has become more acute as AI tools now allow contributors to build and test MVPs far faster than before

The result: genuinely useful, well-supported work doesn't happen — not because the community doesn't want it, but because there's no efficient path to fund it.

2. Why is solving this problem key to the Livepeer ecosystem?

Smaller experimental initiatives and quick engineering wins are often where early, high-signal progress happens. If the only funding path available requires months of process overhead, contributors either work for free, deprioritize the work, or move on entirely. Losing active contributors — and the compounding value of their momentum — is a real cost to the ecosystem. A more frictionless path to funding smaller initiatives would align disbursement with the natural cadence of community proposals and keep contributors engaged and productive.

3. What could success look like?

A delegated, standing funding pool for small-to-medium engineering initiatives, validated through the existing community roadmap process, where:
• Contributors can apply for funding for quick wins and experiments without the full weight of a standalone SPE
• Decisions are made transparently and without unnecessary bureaucracy
• Funding speed matches the speed at which good ideas can now be executed

The above is just a potential solution and is open for discussion.

4. What are the outstanding questions to discuss?

• Does this problem resonate — are there contributors who've shelved ideas because the SPE path felt too heavy?
• What's the right pool size on a quarterly basis to be meaningful without being wasteful?
• Should the funding mechanism be proactive, retroactive, or some mix depending on the type of initiative?
• What governance structure makes sense — a multi-sig of trusted core contributors, or direct community votes on individual projects?
• How does this fit into broader discussions on the onchain treasury, and is a short-term experiment worthwhile regardless?
• Who should be eligible — public goods only, or should demand bets and other initiative types be in scope too?

Note: this has been elaborated and proposed by Rich O’Grady based on prior Water Cooler discussions, but does not represent a Foundation priority.

Rich O'Grady about 2 months ago

Drive AI-centric Livepeer Brand

Purpose

Livepeer needs a new meme (realtime? realtime AI?). With Livepeer’s new focus on realtime AI & video, the creation and focus of the Daydream community and product, and the development of new gateway services (Streamplace, Frameworks, Embody), each part of the Livepeer story needs to be coherently woven together. The market for video is exploding as it merges with AI. This opportunity is to lead a team dedicated to advancing the Livepeer brand, connecting the network product, token, and ecosystem. As both a compute network and specialised video infrastructure, Livepeer needs to communicate its value propositions to new customers and investors.

Outcome

By the end of this 6-month period, Livepeer will have formed its own category and positioned itself as a market leader in providing specialised infrastructure for real-time video and AI. It will have identified its market and built the foundations of a go-to-market, which can be taken forward in collaboration with other teams.

Some key metrics include:
• Number of inbound, qualified developer leads
• Social followers and/or social engagement
• Increase in Discord members
• Discord member engagement

Admin Team about 2 months ago

Onchain Treasury Allocation Improvements

Establish norms, processes, and accountability mechanisms for how the onchain treasury is allocated — ensuring capital flows to the highest-impact ecosystem activities.

Problem Statement:

Now that the treasury rate cut is back online, we need prioritization criteria for the deployment of funds, a formal framework for projects where resource allocation is the key decision point, and a broader accountability framework that alleviates previous community concerns about the ROI of deployed funds.

Scope:
• Define what treasury funds should and shouldn't be used for (e.g. align treasury allocation priorities to Roadmap items)
• Establish norms and an evaluation framework for proposals (if needed)
• Establish a process for use of RFPs, such that funds can be secured before RFP teams are chosen
• Set performance, transparency, and reporting norms for funded projects

Key Questions to Answer:
• How are Roadmap items suggested and determined?
• What are the real differences between "Roadmap-aligned" and speculative proposals?
• What should be used to evaluate proposals?
• How can an RFP process work in advance of deployed funding, given the onchain execution of the treasury?
• What does performance, transparency, and reporting look like post-funding?
• What group should be engaged to best answer and propose action on the items above?

Out of Scope:
• The size of the treasury reward cut rate itself (a separate LIP can be created if needed)

Success Criteria:
Community-ratified norms published; new treasury proposals evaluated per those norms; future funded projects deliver per those norms.

Admin Team about 2 months ago
