Introducing connector.json

One JSON file. Multiple AI agents. Zero custom integration.
niia run review.connector.json
This starts two AI agents in parallel — Claude reviewing for security, Codex reviewing for performance — each in an isolated git worktree with OS-level sandboxing, sharing findings through a common scratchpad. When both finish, a third agent synthesizes the results. No plugins to install. No APIs to wire up. One JSON file.

The Problem

In 2026, every developer has multiple AI coding tools. Claude Code for deep reasoning. Codex for speed. Gemini for breadth. Aider for lightweight edits. But they can’t work together.
Today:
  Claude Code ──── isolated ──── can't talk to Codex
  Codex CLI   ──── isolated ──── can't talk to Gemini
  Gemini CLI  ──── isolated ──── can't talk to Claude

  N tools that can't collaborate = N silos
The only bridge is codex-plugin-cc — one plugin, one direction, one pair. For N tools to collaborate, you need N² custom integrations. That doesn’t scale.

The Solution

connector.json is a declarative spec for AI-to-AI orchestration. Where MCP defines how AI connects to tools (databases, APIs, browsers), connector.json defines how AI coordinates with other AI.
MCP:             AI → tool      (passive target — responds to queries)
connector.json:  AI ↔ AI ↔ AI  (active participants — reason and act)

How It Works

1. Define your workflow

{
  "connector": "2.0",
  "name": "dual-review",

  "models": {
    "security": "claude",
    "performance": "codex"
  },

  "pipeline": {
    "phases": [
      {
        "name": "review",
        "parallel": true,
        "workers": [
          { "model": "security",    "prompt": "Review for security vulnerabilities." },
          { "model": "performance", "prompt": "Review for performance issues." }
        ],
        "session": { "sandbox": true, "worktree": "review-{worker}" }
      },
      {
        "name": "synthesize",
        "model": "security",
        "prompt": "Read both reviews from scratchpad. Produce final report."
      }
    ],
    "scratchpad": true
  }
}

2. Run it

niia run dual-review.connector.json

3. What happens

┌─────────────────────────────────────────────────────────────┐
│  niia daemon reads connector.json                           │
│                                                             │
│  Phase 1: "review" (parallel)                               │
│  ┌─────────────────────┐  ┌─────────────────────┐           │
│  │ PTY Session 1       │  │ PTY Session 2       │           │
│  │ Claude Code         │  │ Codex CLI           │           │
│  │ worktree: review-1  │  │ worktree: review-2  │           │
│  │ sandbox: ON         │  │ sandbox: ON         │           │
│  │ "Review security"   │  │ "Review performance"│           │
│  └────────┬────────────┘  └────────┬────────────┘           │
│           │                        │                        │
│           └──── scratchpad/ ───────┘                        │
│                                                             │
│  Phase 2: "synthesize"                                      │
│  ┌─────────────────────┐                                    │
│  │ PTY Session 3       │                                    │
│  │ Claude Code         │                                    │
│  │ reads scratchpad/   │                                    │
│  │ "Synthesize report" │                                    │
│  └─────────────────────┘                                    │
│                                                             │
│  Output: final-review.md                                    │
└─────────────────────────────────────────────────────────────┘
Each worker runs in a real terminal (PTY), not a JSON pipe. This means any CLI that runs in a terminal is a valid worker — Claude, Codex, Gemini, Aider, or a local LLM.
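The flow above can be sketched as a tiny interpreter. This is a hypothetical illustration, not the NIIA runtime: `run_worker` here just formats a string where the real daemon would launch a CLI in a PTY session, and only the subset of fields used by the dual-review spec is handled.

```python
import json
from concurrent.futures import ThreadPoolExecutor

# The dual-review spec from step 1, embedded for a self-contained sketch.
SPEC = json.loads("""
{
  "connector": "2.0",
  "name": "dual-review",
  "models": {"security": "claude", "performance": "codex"},
  "pipeline": {
    "phases": [
      {"name": "review", "parallel": true,
       "workers": [
         {"model": "security", "prompt": "Review for security vulnerabilities."},
         {"model": "performance", "prompt": "Review for performance issues."}
       ]},
      {"name": "synthesize", "model": "security",
       "prompt": "Read both reviews from scratchpad. Produce final report."}
    ],
    "scratchpad": true
  }
}
""")

def run_worker(model_alias, prompt, models):
    # Stand-in for spawning the mapped CLI in an isolated PTY session.
    return f"[{models[model_alias]}] {prompt}"

def run_pipeline(spec):
    models = spec["models"]
    results = []
    for phase in spec["pipeline"]["phases"]:
        # A phase either lists workers explicitly or is itself one worker.
        workers = phase.get("workers") or [
            {"model": phase["model"], "prompt": phase["prompt"]}
        ]
        if phase.get("parallel"):
            with ThreadPoolExecutor() as pool:
                out = list(pool.map(
                    lambda w: run_worker(w["model"], w["prompt"], models),
                    workers))
        else:
            out = [run_worker(w["model"], w["prompt"], models) for w in workers]
        results.append((phase["name"], out))
    return results

results = run_pipeline(SPEC)
```

Phase 1 fans out to two workers in parallel; phase 2 runs a single synthesizer, exactly mirroring the diagram.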

What connector.json Can Describe

Cost-Aware Routing

Use cheap models for research, expensive models for implementation.
{
  "models": {
    "cheap": "haiku",
    "expensive": "opus",
    "medium": "sonnet"
  },
  "pipeline": {
    "phases": [
      { "name": "research",   "workers": 5, "model": "cheap" },
      { "name": "implement",  "workers": 1, "model": "expensive" },
      { "name": "verify",     "workers": 3, "model": "medium" }
    ]
  }
}
5 Haiku workers research in parallel ($). 1 Opus worker implements ($$$). 3 Sonnet workers verify ($$). Total cost is a fraction of running Opus for everything.
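The arithmetic behind that claim, using illustrative per-run prices (made up for the example, not real API pricing):

```python
# Hypothetical cost per worker run, in dollars (illustrative only).
COST = {"haiku": 0.05, "sonnet": 0.30, "opus": 1.50}

phases = [
    {"name": "research",  "workers": 5, "model": "haiku"},
    {"name": "implement", "workers": 1, "model": "opus"},
    {"name": "verify",    "workers": 3, "model": "sonnet"},
]

# Cost-aware routing: each phase uses its assigned model tier.
routed = sum(p["workers"] * COST[p["model"]] for p in phases)

# Naive alternative: run every worker on the expensive model.
all_opus = sum(p["workers"] * COST["opus"] for p in phases)

print(routed, all_opus)  # -> 2.65 13.5
```

Under these assumed prices, routing cuts the bill by roughly 80% versus running all nine workers on Opus.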

Provider Failover

If Anthropic’s API goes down, work continues on Gemini. If that fails, fall back to a local model.
{
  "models": {
    "primary": "claude",
    "fallback": "gemini",
    "emergency": "ollama/llama3"
  },
  "pipeline": {
    "phases": [
      {
        "name": "work",
        "model": "primary",
        "failover": ["fallback", "emergency"]
      }
    ]
  }
}
Zero downtime. The work continues regardless of which provider is available.
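A failover loop of this shape could look like the following sketch. `call_model` is a hypothetical stand-in for launching a provider's CLI; here `claude` is simulated as unavailable so the work lands on the first fallback.

```python
class ProviderDown(Exception):
    """Raised when a provider cannot accept work."""

def call_model(provider, prompt):
    # Stand-in for running the provider's CLI; simulate claude being down.
    if provider == "claude":
        raise ProviderDown(provider)
    return f"{provider}: done"

def run_with_failover(prompt, primary, failover):
    # Try the primary first, then each fallback in declared order.
    for provider in [primary, *failover]:
        try:
            return call_model(provider, prompt)
        except ProviderDown:
            continue
    raise RuntimeError("all providers unavailable")

result = run_with_failover("do work", "claude", ["gemini", "ollama/llama3"])
print(result)  # -> gemini: done
```

The ordering matches the spec: `model` is tried first, then each entry of `failover` until one succeeds.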

Cross-Machine Teams

Workers on your laptop and a build server, coordinated by one file.
{
  "machines": {
    "laptop": "MY-LAPTOP.local",
    "server": "BUILD-SERVER.local"
  },
  "pipeline": {
    "phases": [
      {
        "name": "build",
        "workers": [
          { "machine": "laptop", "model": "claude", "prompt": "Implement frontend." },
          { "machine": "server", "model": "claude", "prompt": "Implement backend." }
        ]
      }
    ]
  }
}
Frontend work on laptop, backend work on server. Both in isolated worktrees. Results merged automatically.

Heterogeneous Review Pipeline

Each AI brings its strengths to a multi-stage review.
{
  "models": {
    "architect": "claude",
    "speedster": "codex",
    "breadth": "gemini"
  },
  "pipeline": {
    "phases": [
      {
        "name": "multi-review",
        "parallel": true,
        "workers": [
          { "model": "architect", "prompt": "Review architecture and design patterns." },
          { "model": "speedster", "prompt": "Review for performance and optimization." },
          { "model": "breadth",   "prompt": "Review for edge cases and compatibility." }
        ],
        "session": { "sandbox": true }
      },
      {
        "name": "verdict",
        "model": "architect",
        "prompt": "Read all three reviews. Final verdict: ship or fix?"
      }
    ],
    "scratchpad": true
  }
}
Three different AI models review simultaneously. Each in a sandboxed, isolated session. One synthesizes the final verdict.

Nightly Memory Consolidation

A scheduled pipeline that maintains AI’s own memory.
{
  "connector": "2.0",
  "name": "nightly-dream",
  "schedule": "0 3 * * *",
  "models": { "primary": "haiku" },
  "pipeline": {
    "phases": [
      {
        "name": "consolidate",
        "model": "primary",
        "prompt": "Read session history from the last 24 hours. Extract key decisions and patterns. Save to memory."
      }
    ]
  }
}
Runs at 3 AM. Cheap model reads session history. Updates long-term memory. AI maintains its own knowledge base — automatically, overnight, for pennies.
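For illustration, here is a minimal matcher for a five-field cron expression like the `schedule` value above. This is a hypothetical helper, not part of any runtime, and it handles only `*` and plain numbers:

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Check a 5-field cron expression (minute hour day month weekday)
    against a datetime. Supports only '*' and bare integers."""
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

# "0 3 * * *" fires at minute 0 of hour 3, every day.
print(cron_matches("0 3 * * *", datetime(2026, 1, 15, 3, 0)))  # -> True
print(cron_matches("0 3 * * *", datetime(2026, 1, 15, 4, 0)))  # -> False
```

(Real cron weekday numbering starts at Sunday=0, unlike Python's `weekday()`; the simplification is harmless here because the field is `*`.)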

The Execution Layer: PTY-for-AI

connector.json is a spec. It needs a runtime. That runtime is PTY-for-AI — a pattern where AI sessions run in real terminal instances (PTY), not JSON pipes. A persistent daemon manages sessions. A headless server bridges them to the network.
Why PTY instead of pipes?

Pipe (how everyone else does it):
  Parent AI → spawn child → JSON in → JSON out → parse
  = Only the same AI can be a child
  = No interactive terminal control

PTY (what connector.json uses):
  Daemon → terminal session → any CLI runs here
  = Claude, Codex, Gemini, local LLM — anything
  = Interactive control, real terminal, full capability
This is why connector.json can orchestrate any AI CLI. It doesn’t need a plugin for each one. If the AI runs in a terminal, it’s a valid worker.
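The difference is easy to demonstrate with Python's stdlib pty module. This is a sketch of the pattern, not NIIA's daemon: a child process attached to a PTY sees a real terminal, while one attached to a plain pipe does not.

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair; the child end behaves like a real TTY.
parent_fd, child_fd = pty.openpty()

# Any CLI could run here -- claude, codex, gemini, a local LLM wrapper.
# This stand-in just reports whether it sees a real terminal.
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print('tty:', sys.stdin.isatty())"],
    stdin=child_fd, stdout=child_fd, stderr=child_fd, close_fds=True,
)
os.close(child_fd)   # parent keeps only its end of the PTY
output = os.read(parent_fd, 1024).decode()
proc.wait()
os.close(parent_fd)

print(output.strip())  # -> tty: True (a pipe-spawned child would say False)
```

Because the child cannot tell the PTY apart from a human-operated terminal, interactive CLIs run with their full capabilities: prompts, cursor control, progress UIs, everything.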

connector.json vs MCP

They’re complementary layers.
connector.json:  AI ↔ AI      Orchestration (who does what, when, where)
MCP:             AI → tool    Access (connect to service, call function)
MCP servers are available inside connector.json sessions. A worker can use GitHub MCP, Postgres MCP, Chrome DevTools MCP — while participating in a connector pipeline.
{
  "tools": {
    "mcp": ["github", "postgres"]
  },
  "pipeline": {
    "phases": [
      {
        "prompt": "Use GitHub MCP to check PR status. Use Postgres to verify migration."
      }
    ]
  }
}

Beyond Pipeline

The examples above show pipeline topology — sequential phases with parallel workers. connector.json also supports:
  • Dialogue — two agents in multi-round conversation, debate, peer review
  • Meeting — all participants hear every voice, agenda-driven
  • Mesh — any agent talks to any agent, no leader required
  • Recursive Teams — agents spawn their own sub-teams, unlimited depth
  • Cross-Machine — agents on different physical machines, one spec
These topologies compose. A meeting can feed into a pipeline. A pipeline phase can contain a dialogue. A mesh agent can spawn recursive sub-teams on remote machines. As networks grow beyond human cognitive capacity, AI becomes the orchestration layer — not just a worker, but the manager of other AI. See Dimensional Growth.

Status

Component                            Status
connector.json spec v2.0             Draft — this document
PTY daemon                           Production (launchd service)
Headless server                      Production (gateway + P2P)
Session plugins (worktree, sandbox)  Implemented
niia run connector.json              In development
Cross-machine orchestration          Implemented (niia remote)
Provider failover                    Designed
Scratchpad + Mailbox                 Designed

Try It

# Install NIIA
openclis install niia

# Login
niia login

# Start a headless session with worktree isolation
niia serve --worktree my-feature --sandbox

# Or run a connector.json pipeline (coming soon)
niia run my-workflow.connector.json

connector.json is an open spec by OpenCLIs. Runtime implementation by NIIA (Monolex). Spec: connectorjson.org