Properties
category: plan
tags: [minsky, irc, mcp, bridge, implementation]
last_updated: 2026-03-17
confidence: high
IRC MCP Bridge Implementation Plan
Context
The minsky project needs an IRC MCP bridge — a FastMCP server that wraps IRC as MCP tools so that Claude Code SDK agents can communicate over IRC channels. This is the standalone communication layer from the Design/Agent_IRC_Architecture spec. The bridge has independent utility: any MCP client can use it to interact with IRC.
The project is greenfield (no code yet). We use uv for dependency management, pydle 1.1 for async IRC, and mcp[cli] 1.26 for FastMCP. SSE transport only.
Why pydle over bottom
The original MVP plan specified bottom. After evaluating both against the installed source:
- pydle auto-handles PING/PONG, NICK registration, NAMES/member tracking, and IRCv3 CAP negotiation. `on_channel_message(target, by, message)` is a clean override. `ClientPool` manages multiple connections. Less manual wiring = fewer bugs over time.
- bottom 3.0 requires manual PING handling, manual NAMES parsing, manual event-handler lifecycle management, and all `send()` calls are async with quirky kwargs. Every protocol detail is DIY.
The transport abstraction means we can swap later, but pydle is the better long-term choice for infrastructure code.
Scope
Bridge only — no supervisor, no agent lifecycle, no docker-compose. Files live under bridge/.
File Structure
bridge/
├── pyproject.toml
├── src/minsky_bridge/
│ ├── __init__.py
│ ├── transport.py # Transport Protocol + Message dataclass
│ ├── memory_transport.py # In-memory impl (testing)
│ ├── irc_transport.py # IRC impl (pydle)
│ ├── server.py # FastMCP server: 5 tools + create_app()
│ └── __main__.py # Entry point
└── tests/
├── conftest.py # --run-irc flag for integration tests
├── test_memory_transport.py
├── test_irc_transport.py # Integration tests, skip by default
└── test_server.py
Also at project root: .env.example
Steps
1. Project scaffolding
bridge/pyproject.toml — hatchling build, Python 3.12+, deps: mcp[cli]>=1.0, pydle>=1.0. Dev deps: pytest>=8.0, pytest-asyncio>=0.23. asyncio_mode = "auto". Script entry: minsky-bridge = "minsky_bridge.__main__:main".
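A sketch of what that pyproject.toml could look like under the constraints above (the project name and version are illustrative):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "minsky-bridge"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["mcp[cli]>=1.0", "pydle>=1.0"]

[project.scripts]
minsky-bridge = "minsky_bridge.__main__:main"

[dependency-groups]
dev = ["pytest>=8.0", "pytest-asyncio>=0.23"]

[tool.pytest.ini_options]
asyncio_mode = "auto"
```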
bridge/src/minsky_bridge/__init__.py — empty.
.env.example — TRANSPORT_TYPE, IRC_SERVER, IRC_PORT, IRC_NICK, MCP_PORT.
Init git repo.
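The .env.example could look like this (all values are placeholders):

```
TRANSPORT_TYPE=irc
IRC_SERVER=localhost
IRC_PORT=6667
IRC_NICK=observer
MCP_PORT=8090
```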
2. Transport Protocol + Message
bridge/src/minsky_bridge/transport.py
```python
# transport.py — Transport protocol + Message dataclass
from dataclasses import dataclass
from datetime import datetime
from typing import Protocol


@dataclass(frozen=True)
class Message:
    channel: str
    sender: str
    text: str
    timestamp: datetime


class Transport(Protocol):
    async def send(self, channel: str, message: str, sender: str) -> None: ...
    async def read(
        self, channel: str, since: datetime | None = None, limit: int = 50
    ) -> list[Message]: ...
    async def create_channel(self, name: str) -> None: ...
    async def list_channels(self) -> list[str]: ...
    async def get_members(self, channel: str) -> list[str]: ...
```
3. MemoryTransport + tests (TDD)
bridge/tests/test_memory_transport.py — write tests first:
- `test_create_channel`, `test_send_and_read`, `test_read_returns_newest_first`
- `test_read_since_filters_by_time`, `test_read_limit`, `test_read_empty_channel`
- `test_get_members`, `test_send_auto_creates_channel`
bridge/src/minsky_bridge/memory_transport.py — implement to pass tests. Dict-based storage, reversed() for newest-first.
4. FastMCP server + tests (TDD)
bridge/tests/test_server.py — write tests first using `create_app(MemoryTransport())` + `app.call_tool(name, args)`. Returns `Sequence[ContentBlock]`; check `result[0].text`.
bridge/src/minsky_bridge/server.py — create_app(transport, **kwargs) -> FastMCP. 5 tools as closures over transport:
| Tool | Params | Returns |
|---|---|---|
| `send_message` | `channel`, `text`, `sender` | Confirmation string |
| `read_messages` | `channel`, `since?`, `limit?` | `[HH:MM:SS] <nick> text` lines, newest first |
| `create_channel` | `name` | Confirmation string |
| `list_channels` | — | Bulleted channel list |
| `get_members` | `channel` | Bulleted member list |
`since` is an ISO 8601 string, parsed to a `datetime` internally. `**kwargs` is forwarded to the `FastMCP()` constructor for port, lifespan, etc.
5. IrcTransport (pydle)
bridge/src/minsky_bridge/irc_transport.py — the real IRC backend.
pydle API (verified against installed 1.1.0):
- Subclass `pydle.Client`, override `on_channel_message(self, target, by, message)`
- `on_connect(self)` — auto-join channels
- `self.channels` — built-in dict tracking joined channels + members
- `await self.join(channel)`, `await self.message(target, text)`
- `await self.connect(hostname, port, tls=False)`
- PING/PONG handled automatically
- `pydle.ClientPool` for managing observer + per-sender connections
Design:
- Observer client (subclass of `pydle.Client`): joins all channels, overrides `on_channel_message` to buffer `Message` objects into `dict[str, list[Message]]`
- Per-sender clients: lazy-created, each a plain `pydle.Client` with its own nick. Join channels on demand. Used only for `send()` so agent messages have the right nick.
- Member tracking: pydle's built-in `self.channels[channel]['users']` set — no manual NAMES query needed
- Pool management: `pydle.ClientPool` to run all clients in one event loop
- Lock: `asyncio.Lock` on `_get_sender()` and `create_channel()` for concurrency
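To make the observer design concrete, a sketch (the pydle import is deferred so the buffering helper stands alone; `buffer_message` and `make_observer_class` are illustrative names, not part of the plan):

```python
from datetime import datetime, timezone


def buffer_message(buffers: dict, channel: str, sender: str, text: str) -> None:
    # Append an incoming line to the per-channel buffer (dict shape is an assumption)
    buffers.setdefault(channel, []).append(
        {"channel": channel, "sender": sender, "text": text,
         "timestamp": datetime.now(timezone.utc)}
    )


def make_observer_class():
    import pydle  # deferred: only needed when the IRC backend is actually used

    class Observer(pydle.Client):
        def __init__(self, nick, channels_to_join, buffers, **kwargs):
            super().__init__(nick, **kwargs)
            self._to_join = channels_to_join
            self._buffers = buffers

        async def on_connect(self):
            await super().on_connect()
            for channel in self._to_join:
                await self.join(channel)

        async def on_channel_message(self, target, by, message):
            buffer_message(self._buffers, target, by, message)

    return Observer
```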
bridge/tests/conftest.py — pytest_addoption for --run-irc.
bridge/tests/test_irc_transport.py — integration tests, skipped without --run-irc flag.
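A common pytest pattern for the --run-irc gate, sketched (the file-name heuristic for selecting integration tests is an assumption; a marker would work too):

```python
# conftest.py sketch: opt-in flag for IRC integration tests
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-irc", action="store_true", default=False,
        help="run IRC integration tests (requires a local IRC server such as ergo)",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-irc"):
        return  # flag given: run everything
    skip_irc = pytest.mark.skip(reason="needs --run-irc")
    for item in items:
        # Illustrative selection: skip anything collected from test_irc_transport.py
        if "test_irc_transport" in item.nodeid:
            item.add_marker(skip_irc)
```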
6. Entry point
bridge/src/minsky_bridge/__main__.py
```python
# __main__.py — choose transport from env, run the SSE server
import os
from contextlib import asynccontextmanager

from minsky_bridge.irc_transport import IrcTransport
from minsky_bridge.memory_transport import MemoryTransport
from minsky_bridge.server import create_app


def main():
    transport_type = os.environ.get("TRANSPORT_TYPE", "irc")
    port = int(os.environ.get("MCP_PORT", "8090"))
    if transport_type == "memory":
        transport = MemoryTransport()
        app = create_app(transport, port=port)
    elif transport_type == "irc":
        transport = IrcTransport(server=..., port=..., observer_nick=...)

        @asynccontextmanager
        async def lifespan(app):
            await transport.connect()
            try:
                yield {}
            finally:
                await transport.disconnect()

        app = create_app(transport, port=port, lifespan=lifespan)
    app.run(transport="sse")
```
The lifespan pattern lets IRC connect/disconnect share FastMCP's event loop (FastMCP calls anyio.run() internally).
Event loop concern: pydle uses asyncio internally. FastMCP uses anyio (asyncio backend). These are compatible — pydle's client pool needs to run inside the same loop. The lifespan context manager handles this: connect observer + start pool inside FastMCP's loop, tear down on shutdown.
Not in scope
- Message chunking for `maxline` (add later when ergo is running)
- TLS for IRC connection
- Docker/Dockerfile
- Supervisor, agent lifecycle, prompts
- `names.txt`, `docker-compose.yml`
- stdio MCP transport
Verification
- Unit tests: `cd bridge && uv run pytest tests/test_memory_transport.py tests/test_server.py -v` — all pass, no IRC needed
- Smoke test with memory transport: `TRANSPORT_TYPE=memory MCP_PORT=8090 uv run minsky-bridge` — starts the SSE server on port 8090; verify with curl or an MCP client
- Integration test (requires ergo): `uv run pytest tests/test_irc_transport.py -v --run-irc`
