From Concept to Stage: Presenting “AI Agent Assistants for Live Production” Accelerator Project at IBC2025

Building on the Foundations of IBC2023

The story of the AI Agent Assistants for Live Production project did not start this year. The journey began in 2023 with the IBC Accelerator: Gallery Agnostic Live Broadcasting. The idea was simple: broadcast systems shouldn’t be tied to rigid protocols or one-off integrations.

The bigger issue in our industry is one of “glue” and protocols. You need an extraordinary number of protocols to connect things. We wanted to show that your rundown and your automator could control any device, whether on-prem, in the cloud, or hybrid.

The magic was that we focused on the API. By integrating different tools through APIs as a common language, regardless of platform or location, you can create a control layer that talks to any device, making your gallery truly agnostic.
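To make that idea a little more concrete, here is a minimal, purely illustrative Python sketch (the class and method names are invented for this article, not taken from any real product): as long as every device speaks the same small API, one control layer can drive on-prem and cloud devices alike.

```python
from abc import ABC, abstractmethod


class Device(ABC):
    """Hypothetical common interface: every device, wherever it runs,
    is driven through the same small set of API calls."""

    @abstractmethod
    def execute(self, command: str, **params) -> dict:
        ...


class CloudGraphicsEngine(Device):
    def execute(self, command: str, **params) -> dict:
        # In reality this might POST the command to a vendor's HTTP API.
        return {"device": "graphics", "command": command, "status": "ok"}


class OnPremVisionMixer(Device):
    def execute(self, command: str, **params) -> dict:
        # In reality this might translate the command to the mixer's native protocol.
        return {"device": "mixer", "command": command, "status": "ok"}


def run_rundown_step(devices: list[Device], command: str, **params) -> list[dict]:
    """The control layer doesn't care where a device lives,
    only that it speaks the common API."""
    return [device.execute(command, **params) for device in devices]


if __name__ == "__main__":
    gallery = [CloudGraphicsEngine(), OnPremVisionMixer()]
    print(run_rundown_step(gallery, "take_next_item"))
```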

We demonstrated that agnostic live broadcasting was not only possible but also highly effective. We urged the industry to embrace APIs and standard structures.

Evolution of the Control Room

But we did not stop there. Building on that foundation, the Evolution of the Control Room project launched in 2024. The project explored how XR, voice control, AI, and HTML graphics could transform live production, bringing together major names such as BBC, ITN, Yle, Channel 4, TV2 Denmark, and SVT, along with the European Broadcasting Union and academic institutions like Trinity College Dublin and HSLU Lucerne. Technology partners included Cuez, nxtedition, Loopic, SPX Graphics, CuePilot, and Erizos.tv.

We had shown that we could control everything in the gallery by using APIs as a common language. But how do we make sure we can actually interact with all these devices?

The main question became: what is the ideal way of interfacing with all these connected devices? Is it your laptop? Your phone? VR glasses? Voice commands? Or… something else?

Could AI be an answer?

It turned out that, yes, AI could be the control layer we were looking for. We connected ChatGPT to our Cuez Automator and gave it knowledge of our rundowns using retrieval-augmented generation (RAG). Once set up, it could interpret our commands and knew the context and content of our rundowns.

The results were groundbreaking: AI could read the entire rundown and respond to questions like “Are the clips ready?” or “Is there sound on tape?” in natural human language. Not only that, we could also voice-control the entire gallery, as the automator, in turn, controlled everything from video to graphics to cameras and beyond.

A single person could control the gallery with their voice. The main issues were interpretation and the delay in the AI’s answers and actions; the technology was still fairly fresh at the time.
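For readers curious about the mechanics, here is a deliberately simplified sketch of what RAG over a rundown can look like: retrieve the rundown items most relevant to the operator’s question and hand only those to the model. The rundown data, the keyword-based retrieval, and the stubbed LLM step are all hypothetical; the real 2024 setup connected ChatGPT to the Cuez Automator.

```python
# Simplified RAG-over-a-rundown sketch. All data and names are hypothetical.
RUNDOWN = [
    {"id": "A1", "title": "Headlines", "notes": "Clip ready, sound on tape OK"},
    {"id": "B2", "title": "Ukraine report", "notes": "Clip missing, awaiting edit"},
    {"id": "C3", "title": "Weather", "notes": "Graphics loaded"},
]


def retrieve(question: str, rundown: list[dict], top_k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())

    def score(item: dict) -> int:
        item_words = set((item["title"] + " " + item["notes"]).lower().split())
        return len(q_words & item_words)

    return sorted(rundown, key=score, reverse=True)[:top_k]


def build_prompt(question: str) -> str:
    """Combine the retrieved context with the question for the LLM.
    The actual LLM call (e.g. to ChatGPT) is left out of this sketch."""
    context = retrieve(question, RUNDOWN)
    return f"Rundown context: {context}\nOperator question: {question}"


print(build_prompt("Are the clips ready?"))
```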

The point was proven not in theory but in practice. The success of this project earned it the title of IBC Accelerator Project of the Year 2024. But even more importantly, it confirmed that the industry is headed toward smarter production environments, and there is nothing stopping it now.

IBC Accelerator Project of the Year award ceremony

We left this project with ambition and an even bigger question: if AI can control the control layer itself, could AI be the ‘glue’ that connects everything inside and outside the gallery, from pre-production to live production, and makes it all work together?

The success of this project laid strong foundations for what would follow. It was time to reimagine the control room.

Introducing the 2025 Project: AI Agent Assistants for Live Production

Fast forward to IBC2025, where the AI Agent Assistants for Live Production project was presented live on stage.

This project carried a simple but bold idea: what if AI could become an active teammate in the control room, handling routine tasks so that we humans can focus on creativity, decision-making, and storytelling?

Why Now?

Live production is changing: broadcasters now have to make more content for more platforms, on tighter budgets. Old workflows weren’t designed for this, and while new software tools bring more flexibility, they still don’t fully solve the challenge.

This is where AI comes in, and where we enter the Agentic Era. Instead of just looking at data, AI can now use the same tools people use in the control room. And control rooms are just the starting point: this shift will affect the way technology and people work together, going far beyond live production.

The timing is not random. Over the past year, three big technologies made this possible:

  • MCP (Model Context Protocol): like a ‘universal plug’, it lets AI connect to tools and data without custom work each time. (December 2024)
  • Google ADK (Agent Development Kit): a kit for building AI agents that are designed to collaborate. (April 2025)
  • A2A (Agent-to-Agent Protocol): a new standard introduced by Google that lets AI Agents from different companies talk to each other and share tasks, aka a ‘universal translator’ for AI Agents. (April 2025)

Put this all together, and you can really see why now is the tipping point – we finished our project in August 2025, right on the bleeding edge of this timeline.
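To give a feel for the ‘universal plug’ from the list above, here is a hedged sketch using the open-source MCP Python SDK: a tiny server that exposes one rundown check as a tool that any MCP-capable AI client can discover and call. The tool itself and its data are invented for illustration and are not part of the actual project.

```python
# Minimal MCP server sketch; the tool and its data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rundown-tools")


@mcp.tool()
def list_missing_clips(rundown_id: str) -> list[str]:
    """Return the items in a rundown that still have no media attached."""
    # A real integration would query the rundown system's API here;
    # placeholder data keeps the sketch self-contained.
    return ["B2: Ukraine report", "D4: Sports wrap"]


if __name__ == "__main__":
    # Any MCP-capable client can now discover and call this tool
    # without a custom, one-off integration.
    mcp.run()
```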

For the first time, humans and AI can truly work side by side, and it arrives just as live production, after years of building pressure, is craving more agile workflows.

Why This Project Matters

Live production has been under pressure for a while: the explosion of publishing endpoints, the need to deliver more live content faster, and increasing financial constraints. At the same time, we see another change developing. Software-first productions are rising. Why? Because they bring flexibility and new levels of integration never seen before. And the next big thing is already here: the Age of AI Agents.

As mentioned above, AI can now use the tools in the control room. This opens new opportunities for skilled operators: they get richer control in the studio, in simpler ways. And when technicalities are reduced, there is always more room for the big ‘C’, aka the Content.

From Theory to Practice

In this project, we moved from theory to practice: we tested whether a network of AI-driven ‘agents’ could support operators in real time and run real editorial and technical tasks in a control room. This resulted in two key breakthroughs:

  1. A new way of interacting with technologies. Instead of relying on static API integrations, the agents act as ‘digital colleagues’: reasoning, collaborating, remembering, and executing tasks. Operators issued instructions in natural language through text or voice, and a team of specialist agents, covering graphics, audio, rundown management, video analysis, TX monitoring, and more, worked together under a supervising AI orchestrator.
  2. A2A + MCP as the new integration fabric. In the past, using the Model Context Protocol (MCP) or APIs meant giving tools exact, step-by-step instructions. By combining the Agent-to-Agent protocol (A2A) and MCP, we showed a different approach: a “glue” that lets tools connect and cooperate without being told every detail. For example, a Checking Agent can scan the rundown for errors, a Content Discovery Agent can fetch missing media, or a Video Enhancer Agent can blur faces, and they coordinate all of this automatically, without detailed instructions from the operator (see the sketch after this list).
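Here is that sketch: a purely illustrative, plain-Python picture of two specialist agents handing work to each other instead of waiting for step-by-step instructions. All names are hypothetical, and the A2A transport that carried this kind of coordination in the project is omitted for brevity.

```python
# Illustration only: hypothetical agents coordinating without operator scripting.
class ContentDiscoveryAgent:
    def find_media(self, description: str) -> str:
        # Would search archives or a MAM system; stubbed for the sketch.
        return f"archive://{description.replace(' ', '_')}.mxf"


class CheckingAgent:
    def __init__(self, discovery: ContentDiscoveryAgent):
        self.discovery = discovery

    def check_rundown(self, rundown: list[dict]) -> list[str]:
        actions = []
        for item in rundown:
            if item.get("media") is None:
                # Instead of reporting a problem for a human to chase,
                # the Checking Agent delegates the fix to another agent.
                found = self.discovery.find_media(item["title"])
                item["media"] = found
                actions.append(f"Attached {found} to '{item['title']}'")
        return actions


rundown = [
    {"title": "Ukraine report", "media": None},
    {"title": "Weather", "media": "weather.mp4"},
]
print(CheckingAgent(ContentDiscoveryAgent()).check_rundown(rundown))
```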

But let’s get down to basics: what exactly are agents?

What Are Agents?

At the core of the project is the concept of the AI Agent, or, in other words, software with initiative. Unlike a basic LLM wrapper, an AI agent can take on tasks, not just prompts. Agents reason, remember, and interact with other systems; they are proactive teammates that work alongside you, keep track of what’s happening, and collaborate in real time.

That’s what an agent is: it has initiative and memory, and it collaborates rather than just waiting for prompts.

When you combine agents into an ‘agentic system’, you get a collaborative ‘team’, each agent with a clear task, communicating via agent-to-agent protocols and responding to voice requests from a human operator.

The best part is that it’s not set in stone — this system is a flexible mesh that can grow and adapt as new agents are added.

Meet the AI Agents

At the center of it all sits the Orchestrator Agent, the one that interacts with the human in the control room and delegates tasks to the other Agents as requested by the operator. In other words, it manages the Agents and tells them what tasks they need to do, ensuring the whole “agentic team” works in sync.

And now let’s put it into human language. Imagine the AI agents as a great team in a restaurant, where the Orchestrator Agent is the waiter, and the specialist agents and MCP tools are the chefs, bartenders, and kitchen staff. You (the human operator) place an order with the waiter (the Orchestrator Agent), and the team works together behind the scenes to deliver it. The Orchestrator Agent knows which specialized agents to call on to complete the task.

Below you can see the video demonstrating how AI agents can understand natural speech, check content for errors, and make changes to a rundown in real time, showing that the system can handle human-like, messy input and still deliver useful results.

That means all you have to do is request a task (be it searching for a certain clip, running a fact-check, or asking the system to blur a face in a video), and the Orchestrator will delegate it to a specialized Agent. Because all agents exist and communicate within one collaborative system, there is no need to address each Agent separately. All the work is done in the back end, leaving you with just one simple job: to voice the request.

This is what we call a collaborative AI framework.
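To show what this delegation pattern can look like in code, here is a hedged sketch using Google’s Agent Development Kit (ADK), following its public multi-agent pattern. The agent names, models, and instructions are invented for illustration and do not reflect the project’s actual configuration.

```python
# Hedged ADK sketch: an orchestrator with specialist sub-agents.
# Names, models, and instructions are illustrative only.
from google.adk.agents import LlmAgent

graphics_agent = LlmAgent(
    name="graphics_agent",
    model="gemini-2.0-flash",
    description="Triggers and updates on-screen graphics.",
    instruction="Handle requests about lower thirds, straps, and full-screen graphics.",
)

rundown_agent = LlmAgent(
    name="rundown_agent",
    model="gemini-2.0-flash",
    description="Reads and edits the rundown.",
    instruction="Answer questions about rundown items and apply requested changes.",
)

# The Orchestrator is itself just another agent, with the specialists
# attached as sub-agents so it can route each operator request to the
# most suitable one.
orchestrator = LlmAgent(
    name="orchestrator",
    model="gemini-2.0-flash",
    description="Single point of contact for the operator.",
    instruction="Understand the operator's request and delegate it to the right specialist agent.",
    sub_agents=[graphics_agent, rundown_agent],
)
```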

The List of Agents

The consortium behind this project brought together some of the industry’s biggest names, creating a ‘crew’ of specialized agents:

  • Google: Orchestrator Agent
  • Cuez: Rundown Agent & Automator Agent
  • ITN: Checking Agent
  • BBC: Graphics Agent, Automation Agent, Activity Agent
  • Moments Lab: Content Discovery Agent
  • NBCUniversal: Graphics Agent
  • Highfield AI: Graphics Agent
  • Monks: TX Agent
  • Shure: Audio Agent
  • EVS: Video Enhancer Agent
  • Amira Labs: Front End

Cuez’s Contribution

At Cuez, we see immense potential in the emerging ecosystem where tools are naturally interconnectable, and where our Rundown and Automator can work alongside new features, vendors, and technologies. It’s not only about building agents; it’s about future-proofing live production and ensuring that our tools can be part of the next-gen production system.

To that end, we contributed two key agents:

  • Rundown Agent: This agent exposes rundown functionality to any AI tool. Operators can interact with rundowns in natural language through tools like Gemini. They can simply ask questions (“Are there any missing clips in the rundown?”) or request tasks (“Create a rundown based on this webpage”), without needing to give step-by-step instructions. Any agent or tool can now interact with and make changes to the rundown.
  • Automator Agent: Automator already controls devices like vision mixers, graphics engines, and playout servers. The Automator Agent opens those controls to AI, allowing for requests like “Jump to the Ukraine story and trigger the lower third” or “Cue the item about measles.” This agent lets AI help human operators monitor and control the gallery in real time. Any agent or tool can now interact with and make changes in the automator (see the sketch after this list).
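Here is that sketch: a hypothetical illustration of the kind of gallery actions such an agent can expose as callable tools. None of these function names come from the real Cuez API; they only show the shape of the idea, where an AI decides which tool to call, and with what arguments, from a request like “Jump to the Ukraine story and trigger the lower third.”

```python
# Hypothetical tools only; not the real Cuez API.
def jump_to_item(rundown_id: str, item_title: str) -> dict:
    """Cue the automator to the rundown item whose title matches."""
    # Would call the automator's control API; stubbed for the sketch.
    return {"action": "jump", "rundown": rundown_id, "item": item_title}


def trigger_graphic(template: str, text: str) -> dict:
    """Fire a graphics template (e.g. a lower third) with the given text."""
    return {"action": "graphic", "template": template, "text": text}


# An agent framework (ADK, an MCP server, etc.) would register these
# functions as tools; the AI picks the tool and fills in the arguments.
TOOLS = [jump_to_item, trigger_graphic]
```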

What We Showed On Stage: Live Demo At IBC2025

The live demo at IBC2025 was the true proof point. Agents aren’t just theory. They delivered. The agents ran in practice, in front of a live audience, showing that they could perform real tasks: searching for the right clip within seconds, correcting mistakes, or even blurring faces in video instantly. Crazy, we know.

Among the presenters was our Product Manager for Automator, Jan De Wever, who showed exactly how these agents fit into real-world productions. The result? A cheering audience and a powerful message: AI is not here to replace the staff in the control room, but to become a trusted teammate inside it.

Watch the full presentation at IBC here.

Opportunities Unlocked

An important finding of this project is that AI Agents can, indeed, make people in the control room more effective by taking on routine tasks and working within one connected system. But along with effectiveness came a brand new way of running live productions:

  • Instead of complicated control infrastructures, operators can now use intelligent UI and simply talk to the system in natural human language, just as they would talk to a colleague.
  • AI Agents now ‘glue’ tools together, allowing them to talk to each other, instead of relying on complex custom integrations.
  • Each new agent doesn’t just add one extra function; it opens doors to new, ever-expanding workflows.

Curious how this project came together? Watch the full project documentary below.

Challenges

The project proved its main point: that a single orchestrator could drive end-to-end, multi-vendor workflows through simple voice commands. But working on the cutting edge came with real challenges.

First, the technology is not yet mature. We were working with draft standards like A2A, which changed as the project moved forward. That meant extra engineering work just to keep up. Resources and support were limited, as you’d expect when operating at the bleeding edge.

Latency was also an issue: the agents could complete tasks, but not yet at the speed required for live production. And when it comes to security and authentication, more work is needed to make sure data and commands can’t be intercepted.

It’s important to stress: this was not product building. It was a proof of concept to surface capabilities and ask one question: does this deliver value? The answer was yes. But it’s early, and the prototypes were sometimes unstable and slow, engineering challenges more than fundamental flaws.

The real opportunity lies in the future. Once A2A and other frameworks are stable, the groundwork we laid here will allow people to build on it quickly. This project opened the door and showed what’s possible, setting the stage for faster, more reliable, and production-ready systems in the years to come.

Looking Ahead

From its early foundations in 2023 to the successful live demo in 2025, the Accelerator journey proved that every project is all about collaboration and experimentation. We combined the strongest players in the industry: the expertise of broadcasters, the knowledge of technology providers, and the open-mindedness of innovators, proving that AI can become a trusted teammate in the gallery.

This is also a moment to express our monumental gratitude to everyone involved in the making of this project, spanning several years. In the end, it’s the people and the community that make such a project unforgettable.

Accelerators Hackathon

IBC2025 was more than a showcase. For us, it was a turning point. It proved that agentic systems are not a futuristic concept but a practical step toward a new standard of live production.

But the work does not stop here. We continue refining the agentic framework, testing it in wider production environments, and moving toward formal industry-wide standards. Broadcasting giants like ITN and BBC are already bringing these advances into live spaces.

And as for us at Cuez, we are working toward making this technology an integral part of our offering, so stay tuned for more AI magic.

This is only the beginning. The age of AI Agents in live production has arrived. And we couldn’t be more excited.

Please reach out to aaron@cuez.app if you’d like to connect your AI Agent to Cuez, or if you’d like to help co-develop the future with Cuez.


