Your machine right now:
claude → deep reasoning, architecture
codex → fast code generation, review
gemini → broad analysis, large context
aider → lightweight edits, git-native
opencode → open-source, customizable
ollama → local, private, free
Six AI tools. Six isolated terminals. Six separate conversations.
They can’t see each other. They can’t help each other.
You copy-paste between them. You are the integration layer.
Not one AI doing everything. Many AIs, each with a role.
1 agent: Claude does research + design + code + review + docs = one perspective. Blind spots everywhere.
6 agents: Each AI does what it's best at. Gemini researches (breadth). Claude designs (depth). Codex implements (speed). OpenCode reviews (independence). Aider documents (git-native). Ollama analyzes (local, private). = six perspectives. Blind spots caught.
Not everything on your laptop. The right machine for the right workload.
Your laptop: Claude (coordination) + Aider (docs) → fast network, your dev environment
GPU server: Ollama/llama-70b (heavy inference) → 4x A100, can't run on laptop
CI server: Codex (implementation) + OpenCode (review) → build tools, test infrastructure
Data server: Ollama/llama3 (data analysis) → production database, data can't leave
Four machines. Four different AI models. Each placed where its resources are. Data never leaves the data server. Heavy inference runs on GPUs. Tests run on CI. You coordinate from your laptop.
connector.json doesn’t care which AI you use. If it runs in a terminal, it’s a valid worker.
Currently tested:
✅ Claude Code (claude)
✅ Codex CLI (codex)
✅ Gemini CLI (gemini)
✅ Aider (aider)
✅ OpenCode (opencode)
✅ Ollama (ollama run llama3)
Works by design (any terminal CLI):
✅ Amp
✅ Cline
✅ Goose
✅ Continue
✅ Roo Code
✅ Any future AI CLI
Why? Because connector.json uses PTY-for-AI — real terminal sessions, not API pipes. The daemon spawns a terminal, types the AI command, reads the output. It doesn’t matter what’s running inside.
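The PTY-for-AI idea can be sketched in a few lines of Python. This is not niia's actual implementation — it is a minimal illustration, assuming a Unix system, using a plain `sh` shell as a stand-in for an AI CLI: allocate a pseudo-terminal, "type" a command into it, and read back whatever the program printed, exactly as a human at a terminal would see it.

```python
import os
import pty
import subprocess

def run_in_pty(command: str) -> str:
    """Spawn a real terminal session (PTY), type a command, read the output.

    The child process sees a genuine terminal, so interactive CLIs behave
    as if a human ran them. Unix-only (the pty module).
    """
    master, slave = pty.openpty()            # allocate a pseudo-terminal pair
    proc = subprocess.Popen(
        ["sh"],                              # stand-in for an AI CLI session
        stdin=slave, stdout=slave, stderr=slave,
        close_fds=True,
    )
    os.close(slave)                          # parent keeps only the master side
    # "Type" the command, then exit so the shell terminates.
    os.write(master, (command + "\nexit\n").encode())
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:                      # Linux raises EIO at terminal EOF
            break
        if not data:
            break
        chunks.append(data.decode(errors="replace"))
    proc.wait()
    os.close(master)
    return "".join(chunks)

output = run_in_pty("echo hello-from-a-pty")
print("hello-from-a-pty" in output)          # the command ran inside a real terminal
```

Because the daemon only reads terminal output, nothing here depends on which program is "inside" — swap `sh` for any AI CLI and the mechanism is unchanged.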
niia write --session S1 $'claude\r' → Claude is a worker
niia write --session S2 $'codex\r' → Codex is a worker
niia write --session S3 $'gemini\r' → Gemini is a worker
niia write --session S4 $'aider\r' → Aider is a worker
niia write --session S5 $'ollama run llama3\r' → Ollama is a worker
Same interface. Same commands. Same scratchpad. Different AI.
Without connector.json, multi-AI collaboration looks like this:
You: "Claude, research the auth module"
[wait 2 minutes] [copy Claude's output]
"Codex, review this code: [paste]"
[wait 1 minute] [copy Codex's output]
"Gemini, does this match the RFC? [paste]"
[wait 1 minute] ...
You are the integration layer.
You are the bottleneck.
You copy-paste between AI silos.
With connector.json:
$ niia run auth-refactor.connector.json
[Phase 1: Research]  Gemini + Ollama → 30 seconds (parallel)
[Phase 2: Design]    Claude → 45 seconds
[Phase 3: Implement] Codex → 60 seconds
[Phase 4: Review]    OpenCode + Claude → 30 seconds (parallel)
[Phase 5: Document]  Aider → 15 seconds
Total: ~3 minutes. No copy-paste. Six AI perspectives.
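The scheduling pattern in that run — phases execute in order, workers within a phase execute in parallel — can be sketched with stand-in workers. This is a hypothetical illustration, not niia's code: the worker functions and timings below are placeholders for real AI sessions.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_worker(name: str, seconds: float) -> str:
    """Placeholder for an AI worker session; just sleeps and reports."""
    time.sleep(seconds)
    return f"{name} done"

# Phase plan mirroring the auth-refactor run above (timings scaled down).
phases = [
    ("Research",  [("gemini", 0.03), ("ollama", 0.03)]),    # parallel pair
    ("Design",    [("claude", 0.04)]),
    ("Implement", [("codex", 0.06)]),
    ("Review",    [("opencode", 0.03), ("claude", 0.03)]),  # parallel pair
    ("Document",  [("aider", 0.01)]),
]

results = {}
with ThreadPoolExecutor() as pool:
    for phase, workers in phases:                 # phases run sequentially
        futures = [pool.submit(fake_worker, n, s) for n, s in workers]
        results[phase] = [f.result() for f in futures]  # wait for the whole phase

print(results["Review"])  # ['opencode done', 'claude done']
```

The key property is the barrier between phases: Design cannot start until both Research workers have returned, while the two Research workers overlap — which is why the parallel phases above cost one worker's time, not two.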