Suggestion - Website & Primer Refresh
Proposed By: Adam Soffer, Steph Alinsug
Staging URL: https://livepeer-website.vercel.app

1. What Is The Problem?
Livepeer's website is out of date, falls short of professional standards, and the embedded Primer only tells the transcoding story — failing to communicate what Livepeer actually is today (a real-time AI video infrastructure network) and leaving developers, ecosystem partners, and community members without a credible entry point that reflects the project's current direction and ambition.

2. Why It Matters To the Livepeer Ecosystem
Livepeer is at an inflection point: the real-time AI video opportunity is real, Daydream is live, and the gateway platform is taking shape — but the primary public-facing surfaces tell a years-old story and don't reflect the quality of the work being done. Every week they stay up, they undermine trust with developers evaluating the network, partners doing due diligence, and community members trying to articulate what Livepeer is. The window matters because the AI video market is forming now, and first impressions with the right builders will compound.

3. What Does Success Look Like?
• Developers landing on livepeer.org understand within 60 seconds what Livepeer is today — a real-time AI video infrastructure network — who it's for, and how to get started
• The Primer functions as a standalone, shareable explainer covering the full scope of Livepeer's evolution, not just transcoding
• The Foundation has a live, professionally presented surface to iterate copy and positioning

Adam Soffer 2 days ago
Community Feedback
Suggest Ecosystem Projects
ComfyStream Multi-Pipeline Experiment + ComfyMeme Demo
1. What is the problem?
Today most AI pipelines on the network run using a one-container-per-pipeline setup. In practice this means:
• Each pipeline type runs in its own container environment
• Orchestrators must maintain multiple containers to support multiple pipelines
• Most orchestrators end up specialising in one pipeline type
• GPUs cannot easily pivot between different workloads without redeployment

This creates two ecosystem issues:
Limited flexibility: Even if GPUs have available capacity, orchestrators cannot easily switch between different pipeline types.
Operational complexity: Running multiple containers increases configuration overhead, environment drift, and maintenance burden.

As a result, orchestrators often choose a single pipeline to support rather than experimenting with multiple AI services. The constraint appears to be software orchestration, not GPU capability.

2. Why does this matter to the ecosystem?
This primarily affects the supply side of the network. The ecosystem currently has roughly 100 AI-capable orchestrators, meaning supply growth is constrained. When supply is capped, the key growth lever becomes revenue per GPU.

Multi-pipeline capability could improve:
• revenue per GPU
• revenue per orchestrator
• supply flexibility across pipeline types
• time-to-serve new AI workloads

Without this flexibility, the network risks developing specialised supply that cannot adapt quickly to demand changes.

3. Proposed experiment
This proposal tests whether ComfyStream can enable multi-pipeline orchestrators by dynamically loading workflows on a single GPU. Instead of running multiple containers, an orchestrator would run one ComfyStream runtime capable of loading different workflows on demand. The experiment aims to determine whether this is technically viable and operationally useful.

4. Demonstration application: ComfyMeme
To test this capability in practice, the experiment includes building a small demonstration application called ComfyMeme. ComfyMeme generates AI-remixed animated memes using short GIF / WebP clips from the Giphy API.

Example pipeline: Giphy meme clip → frame extraction → Stable Diffusion + LoRA stylisation → animated meme output

Memes are intentionally chosen because they are:
• easy to understand
• fast to generate
• culturally shareable
• capable of producing organic traffic

The demo therefore acts as both:
• a public application
• a stress test for multi-pipeline orchestration

5. Relationship to ComfyStream Cloud
The longer-term concept discussed in the AI SPE roadmap is ComfyStream Cloud — a platform where creators could deploy Comfy workflows as hosted AI applications.

Instead of this model:
creator → workflow JSON → user runs locally

Workflows could become hosted services:
creator → workflow → hosted endpoint → users

ComfyMeme acts as a first proof-of-concept of this idea. If hosted workflows prove viable, future work could explore creator deployment tools and monetisation mechanisms.

6. Scope limitations
This experiment intentionally avoids solving several large problems:
• automatic model distribution
• arbitrary workflow compatibility
• creator monetisation infrastructure

Instead, it assumes a curated model set preloaded on orchestrator nodes. The goal is simply to validate dynamic workflow execution on the network.

7. Success criteria
Success should demonstrate both technical viability and economic potential.
Technical signals:
• one ComfyStream runtime successfully serving multiple workflows
• acceptable workflow switching latency
• stable execution across multiple requests
Adoption signal:
• at least three orchestrators running ComfyStream
Economic signal:
• at least one orchestrator earning revenue from two distinct pipeline types on a single GPU

8. Deliverables
The experiment would produce:
• ComfyMeme demonstration application
• ComfyStream configuration enabling multi-pipeline loading
• documentation for orchestrator setup
• public demo endpoint
• written findings on performance, latency, and operational challenges

9. Expected duration
Estimated timeline: 4–6 weeks. This includes:
• building the demo application
• orchestrator deployment testing
• community demonstration
• documentation of results

10. Summary
This proposal tests whether dynamic workflow loading via ComfyStream can enable orchestrators to serve multiple AI pipelines from a single GPU. The experiment combines:
• infrastructure validation
• a public demonstration application
• measurable economic outcomes

The key outcome is simple: prove that a single orchestrator GPU can earn revenue from multiple pipelines. If successful, this would strengthen orchestrator incentives and lay the groundwork for future hosted workflow platforms such as ComfyStream Cloud.
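The one-runtime, many-workflows idea at the heart of the experiment can be sketched in a few lines. Everything below (WorkflowRuntime, the toy workflow registry, the capacity limit standing in for GPU memory) is a hypothetical illustration of the approach, not ComfyStream's actual API:

```python
from collections import OrderedDict

class WorkflowRuntime:
    """One runtime that loads workflow definitions on demand, keeping at
    most `capacity` workflows resident (a stand-in for GPU memory limits).
    Hypothetical sketch only, not ComfyStream code."""

    def __init__(self, registry, capacity=2):
        self.registry = registry      # name -> workflow definition
        self.capacity = capacity
        self.loaded = OrderedDict()   # LRU cache of "loaded" workflows
        self.load_count = 0           # counts cold loads (switching cost)

    def _ensure_loaded(self, name):
        if name in self.loaded:
            self.loaded.move_to_end(name)    # mark as most recently used
            return
        if len(self.loaded) >= self.capacity:
            self.loaded.popitem(last=False)  # evict least recently used
        self.loaded[name] = self.registry[name]
        self.load_count += 1

    def run(self, name, frame):
        self._ensure_loaded(name)
        # Stand-in for executing the workflow graph on a frame.
        return self.loaded[name]["transform"](frame)

# Two toy "pipelines" served by a single runtime.
registry = {
    "meme-stylize": {"transform": lambda f: f"styled({f})"},
    "upscale":      {"transform": lambda f: f"upscaled({f})"},
}
rt = WorkflowRuntime(registry)
out1 = rt.run("meme-stylize", "frame0")
out2 = rt.run("upscale", "frame0")
out3 = rt.run("meme-stylize", "frame1")  # warm path: no reload needed
```

In a real deployment, the cold-load path (tracked here by `load_count`) is where the "acceptable workflow switching latency" success criterion would be measured.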

Peter Schroedl 2 days ago
Community Feedback
Suggest Ecosystem Projects
Funding Smaller Network Engineering Initiatives
This roadmap item was discussed during the latest Water Cooler. The overlap with Onchain Treasury Allocation Improvements should be recognised, but this could work alongside it as a short-term solution.

Objectives
Create a delegated, standing funding pool for small-to-medium engineering initiatives validated through the community roadmap process. The aim is to reduce the friction for individual contributors and small teams to access funding for smaller, experimental engineering initiatives, to align funding disbursement with the cadence of the community proposal and review process, and to maintain transparency and accountability without unnecessary bureaucracy.

Problem
The current SPE model works well for larger programs, but creates significant friction for smaller, community-driven engineering contributions:
• Writing a full SPE proposal requires a major time investment before any work begins
• The onchain vote cycle adds weeks or months of delay for work that could otherwise start quickly
• Community contributors are unlikely to run a full SPE process for a $2k–$20k scoped piece of work
• The overhead-to-value ratio is poor below a certain funding threshold

The result: genuinely useful, well-supported work doesn't happen — not because the community doesn't want it, but because there's no efficient path to fund it.

Potential Solution
A Network Engineering SPE — a standing, delegated pool of capital that finances a rolling portfolio of smaller engineering initiatives sourced from the Livepeer community roadmap process. An initial funding period could be ~3 months with a budget of around $80–100k. Initiatives range from $2,000 to $20,000 and must originate from roadmap.livepeer.org. Each initiative must include a clear impact hypothesis — a falsifiable statement of what should change in the network and why it matters — and a verifiable deliverable agreed upfront before work begins.

What gets funded: Protocol improvements, gateway and orchestrator tooling, observability, developer SDKs, integrations, and technical documentation — scoped for 1–3 contributors to complete in roughly 1–4 weeks.

How payment works: Scope and pricing are agreed before work starts; Tranche 1 is disbursed on verified technical completion, and Tranche 2 after a review period confirms the initiative's impact hypothesis has been borne out.

Governance: A 5-member SPE Council (including Orchestrators, the Foundation, and Livepeer Inc) reviews applications and signs off on payments, with all decisions published publicly with written rationale.

The above is just a potential solution and it is all open for discussion.

Outside of Scope
• Small proposals that have not gone through the community roadmap validation process
• Work that requires a dedicated team with ongoing commitments
• Large, sustained programs ($20k–$150k+) — these should have a standalone SPE
• Foundation strategic priorities — funded directly by the Foundation

Questions To Answer
• Does the problem resonate? Are there contributors with ideas they haven't pursued because the standalone SPE path felt too heavy?
• What's the right budget size? What 3-monthly pool would be meaningful without being wasteful?
• What governance works best? Should the community vote on individual disbursements, or is Foundation discretion within the SPE mandate sufficient?
• How does this fit in with broader discussions on onchain treasury? Is it worth an experiment in the short run?

Note: this has been elaborated and proposed by Rich O'Grady based on prior Water Cooler discussions, but does not represent a Foundation priority.

Rich O'Grady 8 days ago
Suggestion Proposed
Suggest Ecosystem Projects
Drive AI-centric Livepeer Brand
Purpose
Livepeer needs a new meme (realtime? realtime AI?). With Livepeer's new focus on realtime AI & video, the creation and focus of the Daydream community and product, and the development of new gateway services (Streamplace, Frameworks, Embody), each part of the Livepeer story needs to be coherently woven together. The market for video is exploding as it merges with AI. This opportunity is to lead a team dedicated to advancing the Livepeer brand, connecting the network product, token, and ecosystem. As both a compute network and specialised video infrastructure, Livepeer needs to communicate its value propositions to new customers and investors.

Outcome
By the end of this 6-month period, Livepeer will have formed its own category and positioned itself as a market leader in providing specialised infrastructure for real-time video and AI. It will have identified its market and have the foundations of a go-to-market, which can be taken forward in collaboration with other teams.

Some key metrics include:
• Number of inbound, qualified developer leads
• Social followers and/or social engagement
• Discord member growth
• Discord member engagement

Admin Team 14 days ago
Coming Soon
Suggest Ecosystem Projects
Onchain Treasury Allocation Improvements
Establish norms, processes, and accountability mechanisms for how the onchain treasury is allocated — ensuring capital flows to the highest-impact ecosystem activities.

Problem Statement
Now that the treasury rate cut is back online, we need prioritization criteria for the deployment of funds, a formal framework for projects where resource allocation is the key decision point, and a broader accountability framework that alleviates previous community concerns about the ROI of deployed funds.

Scope
• Define what treasury funds should and shouldn't be used for - e.g. align treasury allocation priorities to Roadmap items
• Establish norms and an evaluation framework for proposals (if needed)
• Establish a process for the use of RFPs, such that funds can be secured before RFP teams are chosen
• Set performance, transparency, and reporting norms for funded projects

Key Questions to Answer
• How are Roadmap items suggested and determined?
• What are the real differences between "Roadmap-aligned" vs. speculative proposals?
• What should be used to evaluate proposals?
• How can an RFP process work in advance of deployed funding, given the onchain execution of the treasury?
• What does performance, transparency, and reporting look like post-funding?
• What group should be engaged to best answer and propose action on the items above?

Out of Scope
• The size of the treasury reward cut rate itself (a separate LIP can be created if needed)

Success Criteria
Community-ratified norms published; new treasury proposals evaluated per those norms; future funded projects deliver per those norms.

Admin Team 14 days ago
Community Feedback
Suggest Ecosystem Projects
Foundation - Narrative Readiness
Understand the existing Livepeer system: map how decisions actually get made (vs. how stakeholders believe they're made), identify where power and narrative ownership currently sit, and surface the story across Foundation, Inc, Gateway and contributor touchpoints

Admin Team 14 days ago
In Progress
Live Projects
Foundation - Map Market Landscape
Purpose
What is the specific business problem that this solves? How does this align with and advance the vision? And why should we be working on this now?

Livepeer needs to define its core opportunity, which can form the foundation of a GTM effort. Since its launch in 2017, Livepeer's mission has been to offer the world's open video infrastructure, building a respected and trusted brand rooted in decentralized video technology enabled by cryptoeconomic primitives. More recently, Livepeer has expanded into AI through multiple ecosystem initiatives, but without a dedicated GTM effort to unify them around a clear network value proposition. Defining the core opportunity now is necessary to align these efforts, focus resources, and prevent further fragmentation as the ecosystem grows.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?

Success means delivering a detailed market intelligence report which can be used for lead generation. Building on detailed interviews with prospective customers, the report should equip the Livepeer ecosystem with a clear overview of the market landscape and a breakdown of ideal customer profiles (ICPs) with clear value propositions for each. All of the work should tee up the core teams behind the Livepeer network's GTM. For Network ProdEng, the report should provide direction for core requirements and competitor benchmarking. For Livepeer Marketing, it should give clear ICPs to target. For BD, it should provide a pipeline of potential customers and the basis for a sales deck.

Key Metrics To Measure The Outcome Against:
• Engagement from content pieces published
• Number of new customer opportunities identified
• Total number of new customer interviews
• Leads generated by the report

Admin Team 14 days ago
In Progress
Live Projects
NaaP Epic 1: Foundational Metrics & Platform
(November → February)
Output: A dashboard with an overview of core network data (aligned with Cloud SPE).
To Ship: Gateway audit; automated network testing and visibility; metrics collector; network-wide orchestrator data dashboard.
Enables: Dashboard overview with key network data; gateway manager dashboard with SLA configuration.

Features & User Stories
Network Data
• As a network user, I can see an overview of core metrics
NaaP Platform
• As a network user, I can understand what the NaaP Platform is and how to get involved
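As one illustration of what the metrics collector step could feed into the dashboard, the sketch below aggregates per-orchestrator probe samples into a success rate and median latency. The field names and schema here are invented for the example and are not the NaaP data model:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class OrchestratorSample:
    # Hypothetical probe record; the real collector would define its own schema.
    orchestrator: str
    region: str
    round_trip_ms: float
    success: bool

def summarize(samples):
    """Aggregate raw probe samples into network-wide, per-orchestrator
    metrics a dashboard could display. Sketch only, not the NaaP schema."""
    by_orch = {}
    for s in samples:
        by_orch.setdefault(s.orchestrator, []).append(s)
    summary = {}
    for orch, rows in by_orch.items():
        summary[orch] = {
            # Fraction of probes that succeeded.
            "success_rate": sum(r.success for r in rows) / len(rows),
            # Median latency over successful probes only.
            "median_latency_ms": median(r.round_trip_ms for r in rows if r.success),
        }
    return summary

samples = [
    OrchestratorSample("orch-a", "eu", 120.0, True),
    OrchestratorSample("orch-a", "eu", 80.0, True),
    OrchestratorSample("orch-a", "us", 0.0, False),
    OrchestratorSample("orch-b", "us", 45.0, True),
]
report = summarize(samples)
```

The same aggregation shape would extend naturally to per-region breakdowns or SLA thresholds for the gateway manager view.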

Admin Team 14 days ago
In Progress
NaaP
Completed
Transformation SPE - Improve Capital Management
Purpose
What is the specific business problem that this solves? How does this align with and advance the vision? And why should we be working on this now?
Livepeer has acute pain points around current capital. On-chain liquidity is low, leaking value from LPT holders through high slippage. Inflation is high against external benchmarks, which acts as a barrier to new participants. The treasury is not accumulating new LPT, and the current treasury is not actively deployed, so it does not generate any yield. We need to address all of this in a holistic way now, as actions in one area can have unintended second-order consequences if not managed carefully.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?
Overall success is creating a working group that collaborates to assess the ecosystem holistically, enabling clear decision-making and actively managing ecosystem capital through more strategic deployment.

Key Metrics To Measure The Outcome Against:
- Improvements in liquidity depth and market efficiency.
- Reduction of treasury concentration risk and a clearer capital allocation strategy.
- Measurable progress toward sustainable inflation and data-driven utility models.

Raidguild RFP - Upgrade Explorer
Purpose
What specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?
Restore the Livepeer Explorer to a secure, maintainable, and high-performance state. The current deprecated and unreliable Explorer limits visibility into the network, introduces security and stability risks, and prevents informed decision-making; recent outages, poor performance, and data quality issues make this work urgent to re-establish a trusted source of truth and enable future network and governance dashboards.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?
Overall success means the Explorer becomes a clean, secure, well-tested, high-performance codebase with no critical bugs, modern dependencies, and a clear backlog. It delivers faster UX, a simplified data layer, and integrated voting transparency. It emerges as trusted infrastructure with a 6-month roadmap and an active maintainer team.

Key Metrics To Measure The Outcome Against:
- Resolution of existing issues in the Explorer repository.
- Degree of improvement in voting transparency based on the new voting-transparency feature.
- Degree of improvement in data quality, performance, codebase health, and security compared to the initial state.

Cloud SPE - Observable Network Data
Purpose
What specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?
A lack of trusted, network-wide performance and demand metrics makes it difficult to assess reliability, surface bottlenecks, or confidently onboard real-time AI workloads. Without clear visibility into latency, success rates, capacity, and workload behavior, Livepeer cannot demonstrate production readiness, establish meaningful Service Level Agreements (SLAs), or give builders confidence to deploy real workloads. To advance the real-time AI vision, Livepeer must first measure what matters by establishing unified, trustworthy network observability. A shared data foundation enables consistent SLA tracking, informs scaling and incentive decisions, and gives operators, gateways, and developers a clear, objective view of network performance as demand grows.
Prior work: Builds on ecosystem research highlighting fragmented network data and the need for unified, network-wide observability.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?
Overall success is when Livepeer has a unified, trustworthy observability foundation for real-time AI and video workloads, defined by clear metric schemas and open data pipelines that aggregate network-wide performance and demand data into a single, queryable source. Orchestrators, gateways, and the community can access consistent real-time and historical views across key metrics, enabling analysis, insight, and action. This foundation directly enables SLA scoring, informed orchestrator selection, and continuous reliability improvements as demand grows.

Key Metrics To Measure The Outcome Against:
- Unified Metrics Coverage: Number of core performance and demand metrics defined in a standardized schema and made available network-wide.
- Data Source Integration: Number of distinct network data sources integrated into a unified aggregation layer and exposed through documented access paths.
- Observability Completeness & Reliability: Defined indicators showing freshness, completeness, and consistency of network telemetry (e.g. data latency, missing dimensions, ingestion success rates).
- Reproducible Test Load Results: Availability of standardized test load execution producing repeatable, independently verifiable measurements for key SLA metrics.
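To make the "standardized schema" idea concrete, here is a minimal sketch in Python. Every field name, the example metric, and the freshness indicator are hypothetical illustrations for this proposal's vocabulary, not an actual Livepeer specification:

```python
from dataclasses import dataclass

# Hypothetical unified metric record; all field names are illustrative only.
@dataclass(frozen=True)
class NetworkMetric:
    name: str           # e.g. "round_trip_latency_ms" (invented example)
    orchestrator: str   # orchestrator address the sample was taken against
    region: str         # gateway region reporting the sample
    value: float
    timestamp: int      # unix epoch seconds

def freshness_seconds(latest: NetworkMetric, now: int) -> int:
    """One possible 'data freshness' indicator: age of the newest sample."""
    return now - latest.timestamp

sample = NetworkMetric("round_trip_latency_ms", "0xabc", "eu-west",
                       182.5, 1_700_000_000)
print(freshness_seconds(sample, 1_700_000_060))  # 60
```

A shared record shape like this is what would let distinct data sources feed one aggregation layer: consumers can compute completeness and freshness indicators without knowing which gateway or collector produced the sample.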

Establish a Protocol Security SPE (Sidestream)
Purpose
Which specific business problem does this solve? How does this align with and advance the vision? And why should we be working on this now?
Livepeer’s protocol secures meaningful on-chain value and increasingly supports real-time AI workloads that depend on reliability, safety, and predictable upgrades. Today, vulnerability response, protocol maintenance, and core development rely on limited shared resources, slowing iteration and making it harder to proactively evolve the protocol as the network grows. To strengthen Livepeer’s long-term foundations, the network needs a dedicated, always-available protocol development and security function with clear ownership, coordinated workflows, and the ability to respond quickly to issues, ship safe upgrades, and maintain essential infrastructure such as a public testnet. Establishing this structure now ensures the protocol remains secure, resilient, and scalable as demand increases.

Outcome
What does overall success look like in one paragraph? And what are some of the tangible key results that the submissions should focus on?
Overall success is when Livepeer operates a clearly defined, continuously supported protocol development and security function. Immunefi response and upgrade workflows are formalized and resourced, a stable public testnet is launched for validation, and a lightweight triage and release pipeline is operating and used to continuously ship prioritized backlog upgrades. This creates a predictable, accountable foundation for safe protocol evolution as the network scales into transcoding and real-time AI workloads.

Key Metrics To Measure The Outcome Against:
- Immunefi Response Readiness: A defined and resourced workflow enabling timely triage and patch preparation.
- Public Testnet Preparedness: A stable testnet launched or fully scoped with a clear operational plan.
- Triage & Release Pipeline: A lightweight prioritization and release process established and in active use.
- Backlog Delivery Velocity: One or more prioritized backlog features or patches progressed through triage → testing → deployment per release cycle.
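The triage → testing → deployment flow can be pictured as a simple linear state machine. The stage names below follow the text; everything else (the item shape, the function) is an invented toy, not actual Livepeer tooling:

```python
# Toy model of a backlog item moving through the release pipeline.
# Stage names come from the proposal; the rest is illustrative only.
STAGES = ["triage", "testing", "deployment"]

def advance(item: dict) -> dict:
    """Move a backlog item to the next pipeline stage, if one remains."""
    idx = STAGES.index(item["stage"])
    if idx + 1 < len(STAGES):
        item["stage"] = STAGES[idx + 1]
    return item

patch = {"id": "example-fix-001", "stage": "triage"}
advance(patch)
print(patch["stage"])  # testing
```

The "Backlog Delivery Velocity" metric above would then count items that complete this full traversal per release cycle.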

NaaP Epic 3: First Solutions Published on NaaP Platform
(April → June)
Outcome: A successful program to incentivise and integrate ~3 Solutions Providers and their APIs to the network, tested and owned by members of the Livepeer community.
To Ship: Set up a scalable way to integrate the first Solutions into the NaaP platform; work with 5 new Solutions Providers to scope needs; establish a way to track usage on a public dashboard per Solutions Provider.
Enables: Developers can build applications directly from NaaP, integrating with multiple Solutions Providers; validation of the concept of community-built plugins.

[WORK IN PROGRESS]
Features & User Stories
Coming Soon

NaaP Epic 2: MVP NaaP Platform Launched With Core Features
(March → April)
Output: An MVP of the NaaP platform to prove the Livepeer network can perform popular workflows, at low latency, with the needed scale, at market-best prices, without a large amount of technical overhead for app developers.
To Ship: Shell App Foundation; dashboard overview; shared UI components; staging and production infrastructure that is open to the community to build, test, and deploy Solutions to the NaaP; successful integration with a design partner (Daydream).
Enables: Live NaaP platform with a Solutions Providers marketplace; community ability to build and deploy custom solutions; hypothesis-driven development approach.

Features & User Stories
(1) Network Overview
As a network user, I can see the network’s overall metrics using an overview dashboard.
As a network user, I can get an overview of the pricing of different pipelines.
As a network user, I can track the usage of top pipelines.
As a network user, I can easily query network data to see more detailed performance metrics from the network.
(2) Developer API
As an app developer, I can create an API key per billing provider.
As an app developer, I can see my overall usage tracking from all signers.
As an app developer, I can see my usage tracking per signer (with a breakdown by project and API key).
As an app developer, I can view and search the list of models supported by the network.
As an app developer, I can see my usage tracking per model (with a breakdown by signer, project, and API key).
(3) Community Hub
As a network user, I can get feedback on new community-built plugins or features.
As a network user, I can request new community-built features for the network product.
(4) Capacity Planner
As a Service Provider or App Developer, I can request GPU capacity over an allotted time period at a given price.
As an Orchestrator, I can soft-commit and respond to a GPU capacity request.
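As an illustration of the per-signer usage-tracking stories in the Developer API list, the breakdown could be computed along these lines. The event fields and values below are invented for the example and are not an actual NaaP schema or API:

```python
from collections import defaultdict

# Hypothetical usage events; fields mirror the breakdown in the user
# stories (signer -> project -> API key), not a real NaaP data model.
usage_events = [
    {"signer": "0xaaa", "project": "demo-app", "api_key": "key-1", "units": 120},
    {"signer": "0xaaa", "project": "demo-app", "api_key": "key-2", "units": 30},
    {"signer": "0xbbb", "project": "clips",    "api_key": "key-3", "units": 75},
]

def usage_by_signer(events):
    """Aggregate total usage per signer, with a per-(project, api_key) breakdown."""
    totals = defaultdict(lambda: {"total": 0, "breakdown": defaultdict(int)})
    for e in events:
        bucket = totals[e["signer"]]
        bucket["total"] += e["units"]
        bucket["breakdown"][(e["project"], e["api_key"])] += e["units"]
    return totals

report = usage_by_signer(usage_events)
print(report["0xaaa"]["total"])  # 150
```

The same grouping, keyed by model instead of signer, would serve the per-model usage story; the point is that one event stream can answer every breakdown in the list.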
