# The Wall: Structured I/O vs PTY
Every AI coding CLI spawns subagents the same way: `stdio: pipe`.
This creates a wall that limits what AI agents can do.
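A minimal Python sketch of what a piped spawn looks like from the child's side (illustrative only, not any tool's actual code):

```python
import subprocess

# How subagents are spawned today: stdin/stdout are pipes, not a terminal.
child = subprocess.Popen(
    ["python3", "-c", "import sys; print(sys.stdin.isatty())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = child.communicate()

# The child sees no TTY, so it cannot behave interactively.
print(out.strip())  # -> False
```

The child checks whether its stdin is a terminal; behind a pipe the answer is always no, which is exactly the wall described above.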
## How Subagents Work Today
When Claude Code, Codex, or Gemini CLI spawns a subagent, it creates a child process with piped stdin/stdout.

## The Workarounds

Every tool hits this wall and builds a workaround.

### Claude Code: Teams (tmux panes)
Claude Code cannot make its subagents interactive, so it spawns entirely new CLI processes in tmux or iTerm2 panes:

- Only Claude CLI can be a teammate
- Requires tmux or iTerm2 installed
- Communication is file-based (slow, no real-time)
- Single machine only
### Codex: Plugin for Claude Code
OpenAI built `codex-plugin-cc` to run Codex inside Claude Code:

- One direction only (Codex → Claude)
- Requires both tools installed
- Per-pair integration (N² problem for N tools)
- Single machine only
### claude-squad: tmux Manager
An open-source tool that manages multiple AI CLIs in tmux sessions:

- Requires tmux
- No headless mode (needs visible terminal)
- No remote access
- No daemon persistence
## The N² Problem
When AI tools want to collaborate, the current approach requires a dedicated plugin for each pair of tools: with N tools, that means on the order of N² pairwise integrations.

## PTY: The Way Through
A PTY (pseudo-terminal) is a real terminal, not a pipe. When a process runs in a PTY, it gets a screen, keyboard input, and interactive control. Any other process can then:

- Type into it (send prompts)
- Read its screen (get responses)
- Run anything inside it (any CLI, any tool)
- Control it programmatically (from another process)
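Those operations can be sketched with Python's stdlib `pty` module — a toy controller, assuming a trivial interactive child in place of a real AI CLI:

```python
import os
import pty
import subprocess

# Give the child a real terminal: a master/slave PTY pair instead of pipes.
master, slave = pty.openpty()
child = subprocess.Popen(
    ["python3", "-c", "name = input(); print('hello ' + name)"],
    stdin=slave, stdout=slave, stderr=slave,
)
os.close(slave)                # keep only the controlling (master) end

os.write(master, b"world\n")   # "type" into the child's terminal
child.wait()
screen = os.read(master, 1024).decode()  # "read its screen"
os.close(master)

# The PTY echoes what we typed, then shows the child's reply.
print(screen.splitlines()[-1])
```

Because the child sees a real TTY, it behaves exactly as it would for a human at a keyboard, yet everything it shows on screen is readable by the controlling process.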
## N² → N
With PTY-for-AI, you don’t need per-pair plugins: every tool already knows how to talk to a terminal, so N tools need only N PTY sessions instead of N² integrations.

## What This Enables
| Capability | Structured I/O | PTY-for-AI |
|---|---|---|
| Heterogeneous teams | ❌ | Claude + Gemini + Codex in one team |
| Cost routing | Model selection only | Route to cheapest capable LLM |
| Provider failover | System stops | Switch to different PTY session |
| Cross-machine teams | ❌ | Workers on different machines |
| Offline AI | ❌ | Local LLM in PTY session |
| AI controlling AI | ❌ | One AI types into another’s terminal |
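As a toy illustration of the last row, one process can drive an interactive session through a PTY the way an agent could drive another agent's CLI — here plain `bash` stands in for an AI tool, and a fixed sleep stands in for real synchronization:

```python
import os
import pty
import subprocess
import time

# "One AI types into another's terminal": drive an interactive bash
# session through a PTY. Any interactive CLI could sit here instead.
master, slave = pty.openpty()
shell = subprocess.Popen(
    ["bash", "--norc", "-i"],
    stdin=slave, stdout=slave, stderr=slave,
    start_new_session=True,  # give bash its own session for the PTY
)
os.close(slave)

os.write(master, b"echo $((6 * 7))\n")  # "type" a command
time.sleep(0.5)                          # toy sync; real code would poll
screen = os.read(master, 4096).decode(errors="replace")

os.write(master, b"exit\n")              # end the session cleanly
shell.wait()

assert "42" in screen  # the typed command ran; its output is on screen
```

The same loop — write keystrokes, wait, read the screen — is all a supervising agent needs, regardless of which tool is running inside the terminal.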