---
category: reference
tags: [agents, irc, mcp, architecture]
last_updated: 2026-03-16
confidence: medium
---
# Agent IRC Architecture
An architecture for multi-agent coordination over IRC, where a human (the PM) and AI agents share a message bus. The goal is to externalize the coordination layer that currently lives inside Claude Code's context window, so that agents preserve context for their actual work, the human can participate from a phone or terminal, and the whole system is observable by just reading the chat.
## The problem
The current agent workflow runs everything inside a single Claude Code session tree. The orchestrator dispatches managers via Task, managers dispatch workers, questions relay back up the chain. This works, but:
- The orchestrator's context fills up relaying messages it doesn't need to reason about.
- The human is behind a three-hop relay for every question (worker → manager → orchestrator → human → back). You have to be at the terminal.
- There's no way to observe what agents are doing without being the orchestrator. No dashboard, no logs, no lurking.
- You can't intervene in a task without going through the orchestrator.
## The idea
Put everyone on IRC. The PM, the EM, the workers — all peers on a shared message bus. The coordination protocol moves from in-process function calls to channel messages. Everything is observable by joining the channel. The human can participate from a phone IRC client, a terminal, or both.
## Org structure
- **PM (you)** — sets priorities, answers product questions, makes scope decisions. Hangs out in #project-{slug}. Doesn't manage the sprint — that's the EM's job. Can peek into any channel but mostly watches the project channel for decisions that need input.
- **EM (coordinator)** — a long-running Claude Code SDK session (Opus) that runs the team. Breaks down requirements into tasks, assigns work, tracks progress, makes implementation decisions, surfaces product questions to the PM. Lives in #project-{slug} and #standup-{slug}. Shields the PM from implementation noise.
- **Managers** — Claude Code SDK sessions (Opus) that own individual tasks. Follow the proceed workflow: plan, implement, test, review, fix, document. Each manager gets a #work-{task-id} channel for its workers. Reports status and completions to #standup-{slug}.
- **Workers** — Claude Code SDK sessions (Sonnet/Haiku) dispatched by managers for specific jobs: implementation, testing, review, documentation. Operate in #work-{task-id} channels. Disposable — when context fills up, they summarize and exit.
The EM decides what needs the PM's input vs. what it can handle itself. Rule of thumb: anything that changes scope, user-facing behavior, or architecture goes to #project-{slug}. Anything that's purely implementation strategy, the EM decides. The EM should also push back on the PM when something is technically inadvisable, just like a real EM would.
## Channel topology
All channels exist on a single IRC server. Multiple projects share the server, namespaced by slug.
- #project-{slug} — PM + EM coordination. Product decisions, priority calls, scope questions. Low traffic, high signal. This is the channel you watch on your phone.
- #standup-{slug} — EM + managers. Task assignments, status updates, completion reports. The sprint board. PM can lurk here if they want more detail.
- #work-{task-id} — manager + workers for a specific task. Implementation discussion, test results, review feedback. Noisy and disposable. Created when a task starts, abandoned when it completes.
- #errors — dead-letter channel. Any agent that hits an unrecoverable failure posts here. Monitored by the EM and optionally by the PM.
## Agent naming
Agents get human names, not mechanical identifiers. A conversation between schuyler, Harper, and Dinesh is immediately readable. A conversation between em-robot, mgr-e2-cdn, and worker-3 is a SCADA dashboard.
Names also help with the shift-change problem. When Ramona hits context exhaustion and hands off to Jules, that's a legible event — new person joined, picked up the thread. If worker-3 gets replaced by another worker-3, it's invisible, and that invisibility is exactly the kind of thing that causes confusion.
A names file (`names.txt`, one name per line) lives in the repo. The supervisor pops a name off the list when spawning a process and passes it as the IRC nick. The name also goes into the agent's system prompt so it knows who it is. Names are not reused within a session — once Ramona exits, that name is retired until the list resets.
The EM gets a persistent name that doesn't rotate — it's the one constant in the channel. Think of it as the team lead who's always there. Managers and workers get fresh names each time they're spawned.
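The pop-and-retire behavior described above can be sketched in a few lines. This is illustrative, assuming a `names.txt` in the repo root; the `NamePool` class name is not from the source:

```python
from pathlib import Path

class NamePool:
    """Pops human names for freshly spawned agents. Retired names are not
    reused until the pool is reset; the EM's persistent name lives outside
    this pool entirely."""

    def __init__(self, names_file: Path):
        self._available = [n.strip() for n in names_file.read_text().splitlines() if n.strip()]
        self._retired: list[str] = []

    def checkout(self) -> str:
        """Take the next fresh name, retiring it for the rest of the session."""
        if not self._available:
            raise RuntimeError("names.txt exhausted; reset the pool")
        name = self._available.pop(0)
        self._retired.append(name)
        return name

    def reset(self) -> None:
        """Return retired names to circulation, e.g. at the start of a new session."""
        self._available.extend(self._retired)
        self._retired.clear()
```

The supervisor would call `checkout()` once per spawned manager or worker and pass the result as both the IRC nick and a line in the agent's system prompt.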
## Transport abstraction
IRC is the first backend, but the architecture shouldn't be welded to it. A thin transport interface keeps options open:
```python
from dataclasses import dataclass
from datetime import datetime
from typing import Protocol

@dataclass
class Message:
    channel: str
    sender: str
    text: str
    timestamp: datetime

class Transport(Protocol):
    async def send(self, channel: str, message: str, sender: str) -> None: ...
    async def read(self, channel: str, since: datetime | None = None) -> list[Message]: ...
    async def create_channel(self, name: str) -> None: ...
    async def list_channels(self) -> list[str]: ...
    async def get_members(self, channel: str) -> list[str]: ...
```
The IRC implementation wraps an async IRC client library (bottom or irc). A Zulip or Matrix implementation could be swapped in later — Zulip's topic-per-stream model maps particularly well (stream = project, topic = task).
## MCP bridge
A FastMCP server wraps the transport and exposes tools to agents. This is the only interface agents use — they never touch IRC directly.
### Design principle: conversational, not structured IPC
A core goal of this architecture is that a human can join any channel and immediately understand what's happening. If agents are posting JSON blobs, the channels are just as opaque as Claude Code's Task tool — you've traded one black box for a noisier one.
Agents communicate in natural language. The EM assigns a task by saying so in plain English. The manager reports a plan the same way. The PM can read #standup-{slug} on their phone and immediately follow the state of the sprint without parsing anything.
The only concession to machine-parseability is a lightweight convention for the supervisor — the EM prefixes task assignments with `TASK:` so the supervisor can pattern-match without NLP. Everything else is natural language.
### Tools
| Tool | Description |
|---|---|
| `send_message(channel, text)` | Post a message to a channel. |
| `read_messages(channel, since?, limit?)` | Read recent messages from a channel. Returns newest-first. |
| `create_channel(name)` | Create a new channel (used by EM when spinning up task channels). |
| `list_channels()` | List active channels. |
| `get_members(channel)` | List who's in a channel. |
That's it. No `post_task`, `claim_task`, `poll_for_task`. Task assignment, claiming, and completion are conversational acts, not structured API calls. The EM says "do this," the manager says "on it," the manager says "done."
Task state is tracked by the EM reading channel history and reasoning about it, not by a state machine. This is less reliable than a database but vastly more observable and simpler to build. If it breaks, you can see exactly where it broke by reading the channel.
### Configuration
```
TRANSPORT_TYPE=irc
IRC_SERVER=<proxmox-host-ip>
IRC_PORT=6667
IRC_NICK=mcp-bridge
MCP_PORT=8090
```
The MCP server maintains a single IRC connection and multiplexes tool calls from multiple agents. Agents identify themselves via a sender parameter so messages get the right nick attribution.
## Agent lifecycle: long-running with shift-changes
Agents are long-running Claude Code SDK sessions. They persist across tasks, preserving context — a worker that just finished refactoring the auth module still has that code in context when the next auth-related task comes in.
### Why the SDK, not the CLI
The Claude Code CLI is designed for a human at a terminal — prompt handling, display rendering, keybindings are all overhead when the consumer is a daemon. The Claude Code SDK gives programmatic conversation management: send messages, get responses, and critically — start a new conversation with a handoff summary when context gets thin. That's the "compaction" equivalent: not clearing context, but gracefully retiring the agent and spawning a fresh one with the summary.
### Polling
The supervisor injects periodic "check your channels" messages into each agent's SDK session. This is the polling heartbeat. Agents respond by reading their IRC channels via the MCP bridge and acting on anything new, or reporting idle.
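One sweep of that heartbeat might look like the sketch below. The SDK session is hidden behind a hypothetical `send_to_agent` callback, since the exact Claude Code SDK surface is still an open question; `is_idle` stands in for the Haiku idle-checker described next:

```python
import asyncio
from typing import Awaitable, Callable

# Wording of the nudge is illustrative, not from the source.
HEARTBEAT = "Check your channels via the MCP bridge and act on anything new, or report idle."

async def heartbeat_pass(
    agents: list[str],
    send_to_agent: Callable[[str, str], Awaitable[None]],  # hypothetical SDK wrapper
    is_idle: Callable[[str], Awaitable[bool]],             # the Haiku idle-checker
) -> list[str]:
    """One sweep: nudge every idle agent, return the names nudged."""
    nudged = []
    for nick in agents:
        if await is_idle(nick):
            await send_to_agent(nick, HEARTBEAT)
            nudged.append(nick)
    return nudged

async def heartbeat_loop(agents, send_to_agent, is_idle, interval: float = 30.0) -> None:
    """Run sweeps forever; 30s matches the starting cadence suggested below."""
    while True:
        await heartbeat_pass(agents, send_to_agent, is_idle)
        await asyncio.sleep(interval)
```

Busy agents are left alone; only agents that look idle get the nudge to poll their channels.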
### Idle detection
A Haiku-class classifier determines whether an agent is idle. No conversation state needed — just a single SDK create_message call:
"Here's the last 5 minutes of this agent's IRC activity. Is it idle? yes/no"
Pennies per evaluation. This keeps the supervisor dumb — it doesn't need to understand task semantics, just whether to send a heartbeat.
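A sketch of that check, with the model call injected so the prompt shape is visible without depending on a specific SDK. `classify` is a stand-in for a single create-message call to a Haiku-class model; the prompt wording is an assumption:

```python
from typing import Callable

IDLE_PROMPT = (
    "Here's the last 5 minutes of this agent's IRC activity:\n\n{activity}\n\n"
    "Is this agent idle? Answer yes or no."
)

def agent_is_idle(activity: str, classify: Callable[[str], str]) -> bool:
    """Ask a cheap classifier whether the agent looks idle.

    `classify` takes a prompt string and returns the model's text; in
    production it would wrap one Anthropic messages call to a Haiku-class
    model. An empty activity window is trivially idle -- no model call needed.
    """
    if not activity.strip():
        return True
    answer = classify(IDLE_PROMPT.format(activity=activity))
    return answer.strip().lower().startswith("yes")
```

Injecting the model call also makes the supervisor's idle logic unit-testable with a stub.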
### Context exhaustion and shift-changes
When an agent's context crosses a threshold (monitored by the supervisor via SDK response metadata or token counts):
- Supervisor tells the agent to produce a handoff summary.
- Agent posts the summary to its task channel.
- Agent posts a notice to #standup-{slug} that it's handing off.
- Supervisor kills the session.
- Supervisor spawns a replacement with a new name from the names file and the summary as initial context.
This is the "shift change" pattern — natural for an org metaphor. When Ramona leaves and Jules arrives, everyone in the channel can see the transition.
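The five steps above can be sketched as one supervisor routine. `request_summary`, `post`, `kill_session`, `spawn_agent`, and `next_name` are all hypothetical wrappers around the SDK, the MCP bridge, and the names file, shown here only to make the sequencing explicit:

```python
async def shift_change(
    nick: str,
    task_channel: str,
    standup_channel: str,
    *,
    request_summary,  # async (nick) -> str: ask the agent for its handoff summary
    post,             # async (channel, text, sender) -> None: send via the MCP bridge
    kill_session,     # async (nick) -> None: terminate the SDK session
    spawn_agent,      # async (nick, initial_context) -> None: start the replacement
    next_name,        # () -> str: pop a fresh name from names.txt
) -> str:
    """Retire an agent at context exhaustion and hand the thread to a fresh one."""
    summary = await request_summary(nick)
    await post(task_channel, summary, sender=nick)
    replacement = next_name()
    await post(standup_channel, f"Handing off to {replacement}; summary posted in {task_channel}.", sender=nick)
    await kill_session(nick)
    await spawn_agent(replacement, initial_context=summary)
    return replacement
```

Because both posts happen before the kill, the handoff is visible in channel history even if the spawn step fails and has to be retried.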
## Architecture components
Three independent components, deployed separately for independent failure domains:
1. **ergo IRCd**
   - Runs in an LXC container on a Proxmox server.
   - Set-and-forget after initial configuration.
   - IRCv3 `chathistory` for channel persistence.
   - No TLS needed for LAN traffic in MVP.
2. **IRC MCP bridge (FastMCP)**
   - ~200 lines of Python.
   - Wraps the transport abstraction with IRC backend.
   - Exposes the five tools above.
   - Connects to ergo over LAN.
   - Runs in a Docker container on the desktop.
3. **Agent supervisor**
   - Python process using the Claude Code SDK.
   - Spawns and manages agent sessions (EM, managers, workers).
   - Monitors context usage.
   - Handles shift-changes (summarize → kill → respawn).
   - Runs the Haiku idle-checker.
   - Runs in a Docker container on the desktop, alongside the bridge.
   - Bind-mounts a project directory from the host for git repo access.
The bridge and supervisor are orchestrated via docker-compose on the desktop machine (128GB RAM). They share a Docker network for inter-container communication and both reach ergo over the LAN.
```
Desktop (128GB RAM)                       Proxmox (16GB RAM)
┌──────────────────────────────┐          ┌──────────────────┐
│ docker-compose               │          │ LXC container    │
│ ┌────────────┐ ┌───────────┐ │          │ ┌──────────────┐ │
│ │ Supervisor │ │ IRC MCP   │ │   LAN    │ │ ergo         │ │
│ │ (SDK)      │ │ Bridge    │ │ ◄──────► │ │ IRCd         │ │
│ └────────────┘ └───────────┘ │          │ └──────────────┘ │
│ bind: ~/projects             │          └──────────────────┘
└──────────────────────────────┘
        ▲
        │ IRC client (phone/terminal)
        PM
```
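As a rough shape for the compose setup, something like the following. This is a sketch only: service names, build paths, the placeholder Proxmox IP, and the `MCP_URL` variable are all assumptions, not from the source:

```yaml
services:
  irc-mcp-bridge:
    build: ./bridge              # FastMCP server wrapping the IRC transport
    environment:
      TRANSPORT_TYPE: irc
      IRC_SERVER: 192.168.1.50   # placeholder: the Proxmox host IP on the LAN
      IRC_PORT: "6667"
      IRC_NICK: mcp-bridge
      MCP_PORT: "8090"
    ports:
      - "8090:8090"

  supervisor:
    build: ./supervisor          # Claude Code SDK process + Haiku idle-checker
    depends_on:
      - irc-mcp-bridge
    environment:
      MCP_URL: http://irc-mcp-bridge:8090   # reached over the shared Docker network
    volumes:
      - ~/projects:/workspace    # bind-mount for git repo access
```

Keeping the two as separate services preserves the independent failure domains noted above: either container can restart without taking down the other.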
## Relationship to existing Agent_Workflow
What carries forward unchanged:
- Role definitions (manager, implementer, test runner, Groucho/Chico/Zeppo/Fixer/Documenter)
- The proceed workflow (plan → implement → test → review → fix → document)
- Model assignments (Opus for EM and managers, Sonnet for workers, Haiku for idle detection and documentation)
- Review and fix loop limits (3 attempts before escalating)
- Worker dispatch guidance (what context to give each worker type)
What changes:
- Coordination moves from in-process `Task`/`run_in_background` to IRC channel messages via MCP.
- The orchestrator role splits: strategic coordination stays with the EM, human interaction moves to the channel.
- Question relay is replaced by direct channel participation — the PM is in the room
- Task state lives in channel history, not in the orchestrator's context
- Claude Code CLI replaced by Claude Code SDK for programmatic lifecycle management
## MVP scope
- ergo IRCd in LXC on Proxmox. Single binary, default config, verify `chathistory` works.
- IRC MCP bridge (~200 lines Python). FastMCP wrapping the transport abstraction. Five tools.
- Agent supervisor — Python, Claude Code SDK, Haiku idle-checker, shift-change logic.
- docker-compose for bridge + supervisor on the desktop, bind-mounting the project directory.
- One EM process — Opus, system-prompted as the engineering manager.
- One manager process — spawned when the EM posts a task.
- PM — connected to ergo from phone and/or terminal.
- One end-to-end task — EM assigns, manager runs the proceed workflow, PM observes from IRC.
Not in MVP: multiple parallel workers, TLS, auth, multi-project namespacing, Matrix/Zulip backends.
## Open questions
- Polling cadence. How often should the supervisor heartbeat idle agents? Too fast burns tokens, too slow means tasks sit. Probably start at 30s and tune.
- IRC client for phone. The mobile IRC client landscape is thin. Worth testing a few before committing. If it's painful, that's a signal to look at Matrix or Zulip sooner.
- Message length limits. IRC has per-message length limits (~512 bytes traditional, longer with IRCv3). The MCP bridge may need to handle chunking transparently. Check ergo's limits.
- Channel persistence depth. How much `chathistory` should ergo retain? Enough for the EM to reconstruct sprint state after a restart.
- Git branch strategy with multiple agents. Multiple agents editing the same repo need branch isolation. Worktree-per-task within the bind-mounted project directory is the likely answer.
- SDK session management details. How exactly does the Claude Code SDK expose context usage? Need to verify the API surface for monitoring.
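If ergo does enforce something near the classic 512-byte line limit, the bridge's transparent chunking could look like this sketch. The 400-byte payload budget is an assumption chosen to leave headroom for the protocol envelope:

```python
def split_message(text: str, max_bytes: int = 400) -> list[str]:
    """Split text into chunks whose UTF-8 encoding fits max_bytes,
    breaking on word boundaries where possible. The 400-byte default
    is an assumed budget under the traditional 512-byte IRC line limit."""
    chunks: list[str] = []
    remaining = text
    while len(remaining.encode("utf-8")) > max_bytes:
        cut = min(len(remaining), max_bytes)
        # Shrink until the prefix fits in bytes (handles multi-byte characters).
        while len(remaining[:cut].encode("utf-8")) > max_bytes:
            cut -= 1
        space = remaining.rfind(" ", 1, cut)
        if space > 0:
            cut = space  # prefer breaking at a word boundary
        chunks.append(remaining[:cut].rstrip())
        remaining = remaining[cut:].lstrip()
    if remaining:
        chunks.append(remaining)
    return chunks
```

The bridge's `send_message` tool would call this and emit one PRIVMSG per chunk, so agents and humans never need to think about line limits.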
## Resolved questions
- Structured vs. conversational task format. Conversational wins. The whole point of using IRC is human observability. JSON task objects would make channels unreadable. The only structured convention is a `TASK:` prefix on assignments so the supervisor can pattern-match.
- CLI vs. SDK. SDK. The CLI's terminal processing is overhead for daemon use. The SDK gives programmatic lifecycle control needed for shift-changes.
- Single process vs. separate bridge and supervisor. Separate. Independent failure domains — restart the bridge without killing agents, restart the supervisor without dropping IRC.
- Launch-per-task vs. long-running agents. Long-running. Preserves context across related tasks. The supervisor handles lifecycle (polling, idle detection, shift-changes).
- Deployment topology. ergo in LXC on Proxmox (set-and-forget), bridge + supervisor in docker-compose on desktop (128GB RAM), bind-mounted project directory for repo access.