Would you like to create a Suggestion for the roadmap?

When creating a Suggestion for the roadmap, please give a clear & specific title — avoid vague labels like 'improve governance'. Then make sure you answer the following questions:

(1) What is the problem?

(2) Why does this matter to the ecosystem?

(3) What could success look like?

The Suggestion will then be added to the pipeline, which you can learn more about here.

ComfyStream Multi-Pipeline Experiment + ComfyMeme Demo

1. What is the problem?

Today most AI pipelines on the network run using a one-container-per-pipeline setup. In practice this means:

• Each pipeline type runs in its own container environment
• Orchestrators must maintain multiple containers to support multiple pipelines
• Most orchestrators end up specialising in one pipeline type
• GPUs cannot easily pivot between different workloads without redeployment

This creates two ecosystem issues:

• Limited flexibility: even if GPUs have available capacity, orchestrators cannot easily switch between different pipeline types.
• Operational complexity: running multiple containers increases configuration overhead, environment drift, and maintenance burden.

As a result, orchestrators often choose a single pipeline to support rather than experimenting with multiple AI services. The constraint appears to be software orchestration, not GPU capability.

2. Why does this matter to the ecosystem?

This primarily affects the supply side of the network. The ecosystem currently has roughly 100 AI-capable orchestrators, meaning supply growth is constrained. When supply is capped, the key growth lever becomes revenue per GPU.

Multi-pipeline capability could improve:

• revenue per GPU
• revenue per orchestrator
• supply flexibility across pipeline types
• time-to-serve for new AI workloads

Without this flexibility, the network risks developing specialised supply that cannot adapt quickly to demand changes.

3. Proposed experiment

This proposal tests whether ComfyStream can enable multi-pipeline orchestrators by dynamically loading workflows on a single GPU. Instead of running multiple containers, an orchestrator would run one ComfyStream runtime capable of loading different workflows on demand. The experiment aims to determine whether this is technically viable and operationally useful.

4. Demonstration application: ComfyMeme

To test this capability in practice, the experiment includes building a small demonstration application called ComfyMeme.
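The single-runtime model in section 3 can be sketched as a small registry that loads and evicts workflows on demand instead of spawning one container per pipeline. This is an illustrative sketch only; `Workflow`, `WorkflowRegistry`, and the loader callable are hypothetical names, not part of ComfyStream's actual API.

```python
# Minimal sketch of dynamic workflow loading on a single runtime.
# Hypothetical names: Workflow and WorkflowRegistry are illustrations,
# not part of the real ComfyStream API.

class Workflow:
    """A loaded pipeline definition (e.g. a parsed Comfy workflow JSON)."""
    def __init__(self, name, graph):
        self.name = name
        self.graph = graph  # workflow graph, e.g. parsed JSON

    def run(self, request):
        # Real execution would dispatch the graph to the GPU;
        # here we just echo which workflow handled the request.
        return f"{self.name} processed {request}"


class WorkflowRegistry:
    """One runtime, many workflows: load on demand, keep a small cache."""
    def __init__(self, loader, max_loaded=2):
        self.loader = loader          # callable: name -> graph
        self.max_loaded = max_loaded  # cap on resident workflows (VRAM proxy)
        self.loaded = {}              # name -> Workflow
        self.switches = 0             # count of load events (latency proxy)

    def get(self, name):
        if name not in self.loaded:
            if len(self.loaded) >= self.max_loaded:
                # Evict the oldest resident workflow to stay under the cap.
                evicted = next(iter(self.loaded))
                del self.loaded[evicted]
            self.loaded[name] = Workflow(name, self.loader(name))
            self.switches += 1
        return self.loaded[name]

    def handle(self, name, request):
        return self.get(name).run(request)
```

In this sketch the `switches` counter marks every load event, which is roughly where the proposal's "workflow switching latency" criterion would be measured; the `max_loaded` cap stands in for the real constraint of GPU memory.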
ComfyMeme generates AI-remixed animated memes using short GIF / WebP clips from the Giphy API.

Example pipeline: Giphy meme clip → frame extraction → Stable Diffusion + LoRA stylisation → animated meme output

Memes are intentionally chosen because they are:

• easy to understand
• fast to generate
• culturally shareable
• capable of producing organic traffic

The demo therefore acts as both:

• a public application
• a stress test for multi-pipeline orchestration

5. Relationship to ComfyStream Cloud

The longer-term concept discussed in the AI SPE roadmap is ComfyStream Cloud: a platform where creators could deploy Comfy workflows as hosted AI applications.

Instead of this model: creator → workflow JSON → user runs locally

Workflows could become hosted services: creator → workflow → hosted endpoint → users

ComfyMeme acts as a first proof-of-concept of this idea. If hosted workflows prove viable, future work could explore creator deployment tools and monetisation mechanisms.

6. Scope limitations

This experiment intentionally avoids solving several large problems:

• automatic model distribution
• arbitrary workflow compatibility
• creator monetisation infrastructure

Instead, it assumes a curated model set preloaded on orchestrator nodes. The goal is simply to validate dynamic workflow execution on the network.

7. Success criteria

Success should demonstrate both technical viability and economic potential.

Technical signals:
• one ComfyStream runtime successfully serving multiple workflows
• acceptable workflow-switching latency
• stable execution across multiple requests

Adoption signal:
• at least three orchestrators running ComfyStream

Economic signal:
• at least one orchestrator earning revenue from two distinct pipeline types on a single GPU

8. Deliverables

The experiment would produce:

• ComfyMeme demonstration application
• ComfyStream configuration enabling multi-pipeline loading
• documentation for orchestrator setup
• public demo endpoint
• written findings on performance, latency, and operational challenges

9. Expected duration

Estimated timeline: 4–6 weeks. This includes:

• building the demo application
• orchestrator deployment testing
• community demonstration
• documentation of results

10. Summary

This proposal tests whether dynamic workflow loading via ComfyStream can enable orchestrators to serve multiple AI pipelines from a single GPU. The experiment combines:

• infrastructure validation
• a public demonstration application
• measurable economic outcomes

The key outcome is simple: prove that a single orchestrator GPU can earn revenue from multiple pipelines. If successful, this would strengthen orchestrator incentives and lay the groundwork for future hosted workflow platforms such as ComfyStream Cloud.
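The example pipeline in section 4 (Giphy clip → frame extraction → stylisation → animated output) reduces to a frame-by-frame map with a pluggable stylisation step. A minimal sketch, with plain strings standing in for frames and a stub in place of the Stable Diffusion + LoRA call:

```python
# Sketch of the ComfyMeme pipeline: clip -> frames -> stylise -> clip.
# `stylise` stands in for the Stable Diffusion + LoRA step; frames are
# modeled as plain strings to keep the sketch self-contained.

def extract_frames(clip):
    """Split a clip into frames (real code would decode GIF/WebP)."""
    return clip["frames"]

def reassemble(frames, fps):
    """Pack stylised frames back into an animated clip."""
    return {"frames": frames, "fps": fps}

def remix_meme(clip, stylise):
    """Run the full meme pipeline with a pluggable stylisation step."""
    frames = extract_frames(clip)
    styled = [stylise(f) for f in frames]
    return reassemble(styled, clip["fps"])
```

Keeping the stylisation step pluggable is what makes the demo double as a multi-pipeline stress test: the same outer loop can exercise different workflows loaded on the same runtime.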

Peter Schroedl about 1 month ago

Suggest Ecosystem Projects

Developer Community Activation Program (Emerging Markets Focus)

Suggestion: I’d like to propose a structured Developer Community Activation Program aimed at driving adoption of tools like Daydream across emerging markets, starting with Nigeria and expanding across Africa. There is a rapidly growing base of developers and creators in these regions who are actively exploring video, AI, and onchain tools, but there is currently low awareness and limited onboarding support for platforms like Daydream.

Proposed Approach:
• Launch grassroots initiatives led by local developer advocates
• Host webinars, bootcamps, and live demos focused on: video creation workflows using Daydream, and API integrations into real-world applications
• Develop localized tutorials and starter projects
• Encourage community-led feedback loops to inform product direction

Why This Matters:
• Unlocks a high-growth, under-tapped developer market
• Drives real usage of Daydream APIs beyond passive awareness
• Provides direct feedback from new user segments
• Strengthens Livepeer’s ecosystem positioning globally

Execution Model (Lean):
• Start as a pilot in 1–2 regions (e.g., Nigeria)
• Community-led, low-cost experimentation
• Measure traction via developer onboarding, API usage interest, and event participation
• Scale based on validated engagement

Additional Context: I’m currently engaging with developers in Nigeria and am willing to help initiate and document early traction from this region as part of a pilot.

Outcome: If successful, this can evolve into a repeatable model for global community-driven growth and developer adoption.

Gideon Jones 2 days ago

Suggest Ecosystem Projects

Payment Clearinghouse

Who Are You?
Name: John Mull
Your connection to this problem (Why did you spot it? Are you affected by it? Do you work in the area it touches?): Core NaaP engineer actively building with the SDK. Over the past three months, John and Josh have been the only developers using it; this is a direct blocker to wider adoption.

What Is The Problem?
(One specific sentence. Not a theme, but the actual friction or gap.)
There is no general-purpose payment, usage metering, or authentication layer that multiple independent apps can rely on, which means non-core developers cannot build or monetize products on the Livepeer Network today.

Why Does It Matter To The Ecosystem?
(2–3 sentences. Who is affected and how? What is the cost of leaving it unsolved? Is there a window or urgency?)
Without a payment clearinghouse or remote signer, every developer is forced to use the existing go-livepeer gateway and long-lived API keys, a model that is incompatible with desktop apps, agentic tools (VS Code, Claude Code, BlueClaw), and OAuth 2.0 + OIDC device flows. This blocks the entire community from building with the SDK and makes it impossible to ship strategic initiatives like x402 payment support and MCP server tooling. The window is urgent: the NaaP roadmap and agent ecosystem integrations are waiting on this foundation.

Who Else Feels This?
(Name at least one other person, persona, or group who experiences this problem.)
All third-party app developers trying to build on Livepeer. Specifically: teams building desktop apps (e.g. Scope), agentic framework integrators (VS Code, Cursor, BlueClaw), and any developer who needs a billing or usage API for their product.

What Have You Already Tried or Seen?
(Prior attempts, related Forum threads, GitHub issues, Discord conversations, or past Advisory Board recommendations.)
Josh Allmann scoped a payments clearinghouse design document and roadmap as part of the Transformation SPE workstream (remote signer prototype merged into go-livepeer via PR #3822 and PR #3791). A TurnKey USDC pre-auth integration has been prototyped as a potential third-party auth option. Daydream’s current auth model (long-lived API keys, single-domain redirect) has been identified as insufficient for multi-app or desktop use cases.

What Does A Good Outcome Look Like?
(Concrete and observable. "X goes from Y to Z" or "teams can now do X without Y.")
• At least 2 demand partners onboarded (1 web app + 1 desktop app) using the clearinghouse and SDK to access the Livepeer Network.
• A working OAuth 2.0 + OIDC login flow demonstrated in a desktop or third-party integration (e.g. VS Code, BlueClaw, or Cursor).
• A billing and usage API that lets apps show users their consumption, and lets developers view usage in the clearinghouse dashboard.
• HTTP 402-driven automatic top-up flows working on the remote signer proxy for at least one integration.

What You Don't Know Yet
(2–4 genuine unknowns you’d want the group to help answer.)
• Where exactly is user prepay and ticket valuation measured: in the proxy, or in the value of the actual claimed ticket? This is critical for correct micropayment accounting on a shared remote signer.
• What is the right account model for metering and billing (e.g. per-user wallet addresses on the remote signer side)?
• Which third-party auth and signing providers (Turnkey, Privy, others) are viable, and what is the minimal interface needed if developers bring their own?
• How are payments applied to accounts in a developer-facing Account Management API (roles, permissions, app linkage)?
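The "HTTP 402-driven automatic top-up" outcome describes a client-side pattern: when the remote signer proxy answers 402 Payment Required, the client replenishes its prepaid balance and retries. A minimal sketch under stated assumptions; `send_request` and `top_up` are hypothetical callables standing in for the proxy call and the payment action, and the real x402 semantics are still to be designed.

```python
# Sketch of an HTTP 402-driven automatic top-up loop.
# `send_request` and `top_up` are hypothetical callables standing in for
# the remote signer proxy call and the payment action.

def call_with_auto_topup(send_request, top_up, max_topups=3):
    """Retry a request, paying when the proxy answers 402 Payment Required."""
    body = None
    for attempt in range(max_topups + 1):
        status, body = send_request()
        if status != 402:
            return status, body
        if attempt == max_topups:
            break  # give up rather than paying forever
        top_up()  # replenish the prepaid balance, then retry
    return 402, body
```

Capping the number of top-ups matters for the accounting unknowns listed above: an unbounded retry loop on a shared remote signer could drain a user's balance before prepay measurement questions are settled.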

John | Elite Encoder 22 days ago

Suggest Ecosystem Projects

Now

Website & Primer Refresh

Proposed By: Adam Soffer, Steph Alinsug
Staging URL: https://livepeer-website.vercel.app

1. What Is The Problem?
Livepeer's website is out of date, falls short of professional standards, and the embedded Primer only tells the transcoding story, failing to communicate what Livepeer actually is today (a real-time AI video infrastructure network) and leaving developers, ecosystem partners, and community members without a credible entry point that reflects the project's current direction and ambition.

2. Why It Matters To the Livepeer Ecosystem
Livepeer is at an inflection point: the real-time AI video opportunity is real, Daydream is live, and the gateway platform is taking shape, but the primary public-facing surfaces tell a years-old story and don't reflect the quality of the work being done. Every week they stay up, they undermine trust with developers evaluating the network, partners doing due diligence, and community members trying to articulate what Livepeer is. The window matters because the AI video market is forming now, and first impressions with the right builders will compound.

3. What Does Success Look Like?
• Developers landing on livepeer.org understand within 60 seconds what Livepeer is today (a real-time AI video infrastructure network), who it's for, and how to get started
• The Primer functions as a standalone, shareable explainer covering the full scope of Livepeer's evolution, not just transcoding
• The Foundation has a live, professionally presented surface to iterate copy and positioning

Adam Soffer about 1 month ago

Live Projects

Funding Community Engineering Initiatives

This roadmap item was discussed during the latest Water Cooler. The overlap with Onchain Treasury Allocation Improvements should be recognised, but this potentially works alongside it as a short-term solution.

1. What is the problem to solve?
The current SPE model creates significant friction for smaller, community-driven engineering contributions:

• Writing a full proposal requires a major time investment before any work begins
• The onchain vote cycle adds weeks or months of delay for work that could start quickly
• For scoped initiatives in the $2k–$20k range, the overhead-to-value ratio is simply too poor; community contributors won't run a full SPE process at that scale
• This friction has become more acute as AI tools now allow contributors to build and test MVPs far faster than before

The result: genuinely useful, well-supported work doesn't happen, not because the community doesn't want it, but because there's no efficient path to fund it.

2. Why is solving this problem key to the Livepeer ecosystem?
Smaller experimental initiatives and quick engineering wins are often where early, high-signal progress happens. If the only funding path available requires months of process overhead, contributors either work for free, deprioritize the work, or move on entirely. Losing active contributors, and the compounding value of their momentum, is a real cost to the ecosystem. A more frictionless path to funding smaller initiatives would align disbursement with the natural cadence of community proposals and keep contributors engaged and productive.

3. What could success look like?
A delegated, standing funding pool for small-to-medium engineering initiatives, validated through the existing community roadmap process, where:

• Contributors can apply for funding for quick wins and experiments without the full weight of a standalone SPE
• Decisions are made transparently and without unnecessary bureaucracy
• Funding speed matches the speed at which good ideas can now be executed

The above is just a potential solution and it is all open for discussion.

4. What are the outstanding questions to discuss?
• Does this problem resonate? Are there contributors who've shelved ideas because the SPE path felt too heavy?
• What's the right pool size on a quarterly basis to be meaningful without being wasteful?
• Should the funding mechanism be proactive, retroactive, or some mix depending on the type of initiative?
• What governance structure makes sense: a multi-sig of trusted core contributors, or direct community votes on individual projects?
• How does this fit into broader discussions on the onchain treasury, and is a short-term experiment worthwhile regardless?
• Who should be eligible: public goods only, or should demand bets and other initiative types be in scope too?

Note: this has been elaborated and proposed by Rich O’Grady based on prior Water Cooler discussions, but does not represent a Foundation priority.

Rich O'Grady about 1 month ago

Suggest Ecosystem Projects

Drive AI-centric Livepeer Brand

Purpose
Livepeer needs a new meme (realtime? realtime AI?). With Livepeer’s new focus on realtime AI & video, the creation and focus of the Daydream community and product, and the development of new gateway services (Streamplace, Frameworks, Embody), each part of the Livepeer story needs to be coherently woven together. The market for video is exploding as it merges with AI. This opportunity is to lead a team dedicated to advancing the Livepeer brand, connecting the network product, token and ecosystem. As both a compute network and specialised video infrastructure, Livepeer needs to communicate its value propositions to new customers and investors.

Outcome
By the end of this 6-month period, Livepeer will have formed its own category and positioned itself as a market leader in providing specialised infrastructure for real-time video and AI. It will have identified its market and have the foundations of a go-to-market, which can be taken forward in collaboration with other teams.

Some key metrics include:
• Number of inbound, qualified developer leads
• Social followers and/or social engagement
• Discord members increase
• Discord member engagement

Admin Team about 1 month ago

Suggest Ecosystem Projects

Onchain Treasury Allocation Improvements

Establish norms, processes, and accountability mechanisms for how the onchain treasury is allocated, ensuring capital flows to the highest-impact ecosystem activities.

Problem Statement:
Now that the treasury rate cut is back online, we need prioritization criteria for the deployment of funds, a formal framework for projects where resource allocation is the key decision point, and a broader accountability framework that alleviates previous community concerns about the ROI of deployed funds.

Scope:
• Define what treasury funds should and shouldn't be used for, e.g. align treasury allocation priorities to Roadmap items
• Establish norms and an evaluation framework for proposals (if needed)
• Establish a process for use of RFPs, such that funds can be secured before RFP teams are chosen
• Set performance, transparency and reporting norms for funded projects

Key Questions to Answer:
• How are Roadmap items suggested and determined?
• What are the real differences between "Roadmap-aligned" vs. speculative proposals?
• What should be used to evaluate proposals?
• How can an RFP process work in advance of deployed funding given the onchain execution of the treasury?
• What does performance, transparency and reporting look like post-funding?
• What group should be engaged to best answer and propose action on the items above?

Out of Scope: Size of the treasury reward cut rate itself (a separate LIP can be created if needed)

Success Criteria: Community-ratified norms published; new treasury proposals evaluated as per the norms; future funded projects deliver as per the norms.

Admin Team about 1 month ago


Suggest Ecosystem Projects

In Progress

Foundation - Map Market Landscape

Purpose
What is the specific business problem that this solves? How does this align with and advance the vision? And why should we be working on this now?

Livepeer needs to define its core opportunity, which can form the foundation of a GTM effort. Since its launch in 2017, Livepeer’s mission has been to offer the world’s open video infrastructure, building a respected and trusted brand rooted in decentralized video technology enabled by cryptoeconomic primitives. More recently, Livepeer has expanded into AI through multiple ecosystem initiatives, but without a dedicated GTM effort to unify them around a clear network value proposition. Defining the core opportunity now is necessary to align these efforts, focus resources, and prevent further fragmentation as the ecosystem grows.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Success means delivering a detailed market intelligence report which can be used for lead generation. Building on detailed interviews with prospective customers, the report should equip the Livepeer ecosystem with a clear overview of the market landscape and a breakdown of ideal customer profiles (ICPs) with clear value propositions for each. All of the work should tee up the core teams behind the Livepeer network’s GTM. For Network ProdEng, the report should provide direction for core requirements and competitor benchmarking. For Livepeer Marketing, it should give clear ICPs to target. For BD, it should provide a pipeline of potential customers and the basis for a sales deck.

Key Metrics To Measure The Outcome Against:
• Engagement from content pieces published
• Number of new customer opportunities identified
• Total number of new customer interviews
• Leads generated by the report

Admin Team about 1 month ago

Live Projects

Completed

Transformation SPE - Improve Capital Management

Purpose
What is the specific business problem that this solves? How does this align with and advance the vision? And why should we be working on this now?

Livepeer has acute pain points around current capital. Onchain liquidity is low, leaking value from LPT holders through high slippage. Inflation is high against external benchmarks, which acts as a barrier to new participants. The treasury is not accumulating new LPT, and the current treasury is not actively deployed and so does not generate any yield. We need to address all of this in a holistic way now, as actions in one area can have unintended second-order consequences if not managed carefully.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Overview: Overall success is creating a working group that collaborates to assess the ecosystem holistically, enabling clear decision-making and actively managing ecosystem capital through more strategic deployment.

Key Metrics To Measure The Outcome Against:
• Improvements in liquidity depth and market efficiency.
• Reduction of treasury concentration risk and clearer capital allocation strategy.
• Measurable progress toward sustainable inflation and data-driven utility models.

Admin Team about 1 month ago

Live Projects

Completed

Raidguild RFP - Upgrade Explorer

Purpose
What specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?

Restore the Livepeer Explorer to a secure, maintainable, and high-performance state. The current deprecated and unreliable Explorer limits visibility into the network, introduces security and stability risks, and prevents informed decision-making; recent outages, poor performance, and data quality issues make this work urgent to re-establish a trusted source of truth and enable future network and governance dashboards.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Overview: Overall success means the Explorer becomes a clean, secure, well-tested, high-performance codebase with no critical bugs, modern dependencies and a clear backlog. It delivers faster UX, a simplified data layer, and integrated voting transparency. It emerges as trusted infrastructure with a 6-month roadmap and an active maintainer team.

Key Metrics To Measure The Outcome Against:
• Resolution of existing issues in the Explorer repository.
• Degree of improvement in voting transparency based on the new voting-transparency feature.
• Degree of improvement in data quality, performance, codebase health, and security compared to the initial state.

Admin Team about 1 month ago

Live Projects

In Progress

Cloud SPE - Observable Network Data

Purpose
What specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?

Lack of trusted, network-wide performance and demand metrics makes it difficult to assess reliability, surface bottlenecks, or confidently onboard real-time AI workloads. Without clear visibility into latency, success rates, capacity, and workload behavior, Livepeer cannot demonstrate production readiness, establish meaningful Service Level Agreements (SLAs), or give builders confidence to deploy real workloads. To advance the real-time AI vision, Livepeer must first measure what matters by establishing unified, trustworthy network observability. A shared data foundation enables consistent SLA tracking, informs scaling and incentive decisions, and gives operators, gateways, and developers a clear, objective view of network performance as demand grows.

Prior work: Builds on ecosystem research highlighting fragmented network data and the need for unified, network-wide observability.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Overall success is when Livepeer has a unified, trustworthy observability foundation for real-time AI and video workloads, defined by clear metric schemas and open data pipelines that aggregate network-wide performance and demand data into a single, queryable source. Orchestrators, gateways, and the community can access consistent real-time and historical views across key metrics, enabling analysis, insight, and action. This foundation directly enables SLA scoring, informed orchestrator selection, and continuous reliability improvements as demand grows.

Key Metrics To Measure The Outcome Against:
• Unified Metrics Coverage: Number of core performance and demand metrics defined in a standardized schema and made available network-wide.
• Data Source Integration: Number of distinct network data sources integrated into a unified aggregation layer and exposed through documented access paths.
• Observability Completeness & Reliability: Defined indicators showing freshness, completeness, and consistency of network telemetry (e.g. data latency, missing dimensions, ingestion success rates).
• Reproducible Test Load Results: Availability of standardized test load execution producing repeatable, independently verifiable measurements for key SLA metrics.
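A "standardized schema" in the sense used above means every data source emits the same record shape, so freshness and completeness can be checked uniformly. A sketch of what such a record and a freshness indicator might look like; the field names (node_id, metric, value, ts) are illustrative assumptions, not an agreed Livepeer schema.

```python
# Sketch of a standardized network metric record and a freshness check.
# Field names (node_id, metric, value, ts) are illustrative assumptions,
# not an agreed Livepeer schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class MetricRecord:
    node_id: str   # orchestrator or gateway identifier
    metric: str    # e.g. "e2e_latency_ms", "ticket_success_rate"
    value: float
    ts: float      # unix timestamp when the sample was taken

def freshness_lag(records, now):
    """Seconds since the newest sample per metric; large lags flag stale feeds."""
    latest = {}
    for r in records:
        latest[r.metric] = max(latest.get(r.metric, 0.0), r.ts)
    return {m: now - t for m, t in latest.items()}
```

A per-metric lag like this is one concrete form the "data latency" indicator in the Observability Completeness & Reliability metric could take.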

Admin Team about 1 month ago

Live Projects

Completed

Establish a Protocol Security SPE (Sidestream)

Purpose
Which specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?

Livepeer’s protocol secures meaningful on-chain value and increasingly supports real-time AI workloads that depend on reliability, safety, and predictable upgrades. Today, vulnerability response, protocol maintenance, and core development rely on limited shared resources, slowing iteration and making it harder to proactively evolve the protocol as the network grows. To strengthen Livepeer’s long-term foundations, the network needs a dedicated, always-available protocol development and security function with clear ownership, coordinated workflows, and the ability to respond quickly to issues, ship safe upgrades, and maintain essential infrastructure such as a public testnet. Establishing this structure now ensures the protocol remains secure, resilient, and scalable as demand increases.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Overall success is when Livepeer operates a clearly defined, continuously supported protocol development and security function. Immunefi response and upgrade workflows are formalized and resourced, a stable public testnet is launched for validation, and a lightweight triage and release pipeline is operating and used to continuously ship prioritized backlog upgrades. This creates a predictable, accountable foundation for safe protocol evolution as the network scales into transcoding and real-time AI workloads.

Key Metrics To Measure The Outcome Against:
• Immunefi Response Readiness: A defined and resourced workflow enabling timely triage and patch preparation.
• Public Testnet Preparedness: A stable testnet launched or fully scoped with a clear operational plan.
• Triage & Release Pipeline: A lightweight prioritization and release process established and in active use.
• Backlog Delivery Velocity: One or more prioritized backlog features or patches progressed through triage → testing → deployment per release cycle.

Admin Team about 1 month ago

Live Projects

Network-As-A-Product: First Solutions Onboarded

Epic 3: First Solutions Published on NaaP Platform (April → June)

Outcome: A successful program to incentivise and integrate ~3 Solutions Providers and their APIs to the network, tested and owned by members of the Livepeer community.

To Ship: Set up a scalable way to integrate the first solutions into the NaaP platform; work with 5 new Solutions Providers to scope needs; establish a way to track usage on a public dashboard per Solutions Provider.

Enables: Developers can build applications directly from NaaP, integrating with multiple Solutions Providers; validation of the concept of community-built plugins.

[WORK IN PROGRESS] Features & User Stories Coming Soon

Admin Team about 1 month ago

NaaP

Now

Network-As-A-Product: MVP Platform

(March → April)

Output: An MVP of the NaaP platform to prove the Livepeer network can perform popular workflows, at low latency, with the needed scale, at market-best prices, without a large amount of technical overhead for app developers.

To Ship: Shell App Foundation; dashboard overview; shared UI components; staging and production infra that is open to the community to build, test, and deploy Solutions to the NaaP; successful integration with design partner (Daydream).

Enables: Live NaaP platform with Solutions Providers marketplace; community ability to build and deploy custom solutions; hypothesis-driven development approach.

Features & User Stories

(1) Network Overview
As a network user, I can see the network's overall metrics using the overview dashboard
As a network user, I can get an overview of the pricing of different pipelines
As a network user, I can track the usage of top pipelines
As a network user, I can easily query network data to see more detailed performance metrics from the network

(2) Developer API
As an app developer, I can create an API key per billing provider.
As an app developer, I can see my overall usage tracking from all signers
As an app developer, I can see my usage tracking per signer (with a breakdown by project and API key)
As an app developer, I can view and search the list of models supported by the network.
As an app developer, I can see my usage tracking per model (with a breakdown by signer, project, and API key)

(3) Community Hub
As a network user, I can get feedback on new community-built plugins or features
As a network user, I can request new community-built features for the network product

(4) Capacity Planner
As a Service Provider or App Developer, I can request GPU capacity over an allotted time period at a given price.
As an Orchestrator, I can soft-commit and respond to a GPU capacity request.
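The per-signer and per-model usage stories in the Developer API section reduce to grouping usage events along several dimensions. A sketch of that aggregation; the event fields (signer, project, api_key, model, units) are illustrative assumptions, not the NaaP platform's actual data model.

```python
# Sketch of usage tracking with per-dimension breakdowns.
# Event fields are illustrative assumptions, not the NaaP data model.

from collections import defaultdict

def usage_breakdown(events, by):
    """Sum usage units grouped by a tuple of dimensions, e.g. ("signer", "model")."""
    totals = defaultdict(float)
    for e in events:
        key = tuple(e[d] for d in by)
        totals[key] += e["units"]
    return dict(totals)
```

The same events answer every story in the list by varying `by`: `("signer",)` for the per-signer view, `("signer", "project", "api_key")` for the full breakdown, `("model",)` for per-model tracking.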

Admin Team about 1 month ago

Live Projects