Infinite Agent Chains

connector.json has no limit on pipeline depth. Each phase’s output feeds the next. Agents chain indefinitely.

Linear Chain

{
  "connector": "2.0",
  "name": "deep-chain",
  "pipeline": {
    "phases": [
      { "name": "research",    "model": "haiku",  "prompt": "Find all auth-related files. Report paths." },
      { "name": "analyze",     "model": "sonnet", "prompt": "Read research from scratchpad. Identify vulnerability patterns." },
      { "name": "plan",        "model": "opus",   "prompt": "Read analysis. Design fix for each vulnerability." },
      { "name": "implement",   "model": "opus",   "prompt": "Read plan. Implement all fixes. Commit." },
      { "name": "test",        "model": "sonnet", "prompt": "Run tests. Report failures." },
      { "name": "fix-tests",   "model": "opus",   "prompt": "Read test failures. Fix and re-run." },
      { "name": "review",      "model": "codex",  "prompt": "Independent review of all changes." },
      { "name": "docs",        "model": "haiku",  "prompt": "Update documentation for changed code." },
      { "name": "changelog",   "model": "haiku",  "prompt": "Write changelog entry." },
      { "name": "pr",          "model": "sonnet", "prompt": "Create pull request with review + changelog." }
    ],
    "scratchpad": true
  }
}
10 phases. 5 different models. Each phase reads the previous phase’s output from scratchpad. Research ($) → Analysis ($$) → Plan ($$$) → Implement ($$$) → Test ($$) → Fix ($$$) → Review ($$) → Docs ($) → Changelog ($) → PR ($$). Cost-optimized: expensive models only where reasoning depth matters.
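The linear chain above can be sketched as a simple runner loop. This is a hypothetical illustration, not the real connector.json implementation: `run_agent` stands in for spawning an actual model session, and the scratchpad is modeled as a directory of markdown files that each phase writes to and the next phase reads from.

```python
import tempfile
from pathlib import Path

# Hypothetical stand-in for a real agent session: it reads the current
# scratchpad contents as context and returns a synthetic output string.
def run_agent(model: str, prompt: str, scratchpad: Path) -> str:
    context = "\n".join(f.read_text() for f in sorted(scratchpad.glob("*.md")))
    return f"[{model}] output for: {prompt} (context: {len(context)} chars)"

# Run phases strictly in order; each phase's output lands in the scratchpad
# so the next phase can pick it up from disk rather than from in-context state.
def run_pipeline(config: dict, scratchpad: Path) -> list[str]:
    results = []
    for phase in config["pipeline"]["phases"]:
        output = run_agent(phase["model"], phase["prompt"], scratchpad)
        (scratchpad / f"{phase['name']}.md").write_text(output)
        results.append(output)
    return results

config = {"pipeline": {"phases": [
    {"name": "research", "model": "haiku", "prompt": "Find auth files."},
    {"name": "analyze", "model": "sonnet", "prompt": "Identify patterns."},
]}}
with tempfile.TemporaryDirectory() as d:
    outputs = run_pipeline(config, Path(d))
```

Note that the first phase sees an empty scratchpad while the second sees the research output on disk — the chain's handoff is entirely file-based.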

Fan-Out / Fan-In

Parallel workers that converge into a single synthesis.
Phase 1: Fan-out (5 workers in parallel)
  ┌─── Haiku: "Research auth module"
  ├─── Haiku: "Research session handling"
  ├─── Haiku: "Research token validation"
  ├─── Haiku: "Research password hashing"
  └─── Haiku: "Research OAuth flow"


Phase 2: Fan-in (1 worker reads all)
  └─── Opus: "Synthesize all 5 research threads into attack surface map"


Phase 3: Fan-out again (3 workers)
  ┌─── Opus: "Fix critical vulnerabilities"
  ├─── Sonnet: "Fix medium vulnerabilities"
  └─── Haiku: "Fix low-risk issues"


Phase 4: Fan-in (1 worker)
  └─── Codex: "Review all fixes independently"
{
  "pipeline": {
    "phases": [
      {
        "name": "research",
        "parallel": true,
        "workers": [
          { "model": "haiku", "prompt": "Research auth module." },
          { "model": "haiku", "prompt": "Research session handling." },
          { "model": "haiku", "prompt": "Research token validation." },
          { "model": "haiku", "prompt": "Research password hashing." },
          { "model": "haiku", "prompt": "Research OAuth flow." }
        ]
      },
      {
        "name": "synthesize",
        "model": "opus",
        "prompt": "Read all 5 research threads. Map the full attack surface."
      },
      {
        "name": "fix",
        "parallel": true,
        "workers": [
          { "model": "opus",   "prompt": "Fix critical vulnerabilities." },
          { "model": "sonnet", "prompt": "Fix medium vulnerabilities." },
          { "model": "haiku",  "prompt": "Fix low-risk issues." }
        ]
      },
      {
        "name": "review",
        "model": "codex",
        "prompt": "Review all fixes independently."
      }
    ],
    "scratchpad": true
  }
}
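The fan-out/fan-in shape maps naturally onto a thread pool: submit all workers at once, collect their results, then hand the combined output to a single synthesis step. A minimal sketch, assuming `run_worker` stands in for a real agent session:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one agent session.
def run_worker(model: str, prompt: str) -> str:
    return f"[{model}] {prompt}"

# Fan-out: run all workers concurrently and gather their outputs in order.
def fan_out(workers: list[dict]) -> list[str]:
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        futures = [pool.submit(run_worker, w["model"], w["prompt"]) for w in workers]
        return [f.result() for f in futures]

# Fan-in: one worker reads everything the fan-out produced.
def fan_in(model: str, prompt: str, threads: list[str]) -> str:
    combined = "\n".join(threads)
    return f"[{model}] {prompt} ({len(threads)} threads, {len(combined)} chars)"

research = fan_out([
    {"model": "haiku", "prompt": "Research auth module."},
    {"model": "haiku", "prompt": "Research session handling."},
    {"model": "haiku", "prompt": "Research token validation."},
])
synthesis = fan_in("opus", "Map the full attack surface.", research)
```

The same two primitives compose into the four-phase diagram above: fan-out, fan-in, fan-out again, fan-in.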

Recursive Refinement

A chain that loops until quality criteria are met.
{
  "pipeline": {
    "phases": [
      {
        "name": "draft",
        "model": "opus",
        "prompt": "Implement the feature. Write tests."
      },
      {
        "name": "critique",
        "model": "codex",
        "prompt": "Review the implementation. List every issue. Score 1-10."
      },
      {
        "name": "refine",
        "model": "opus",
        "prompt": "Read critique. Fix every issue listed. Re-run tests.",
        "repeat_until": {
          "phase": "critique",
          "condition": "score >= 8",
          "max_iterations": 5
        }
      }
    ]
  }
}
Opus drafts → Codex critiques → Opus refines → Codex critiques again → repeat until score ≥ 8 or 5 iterations. Two different AI models in an improvement loop. Each brings a different perspective. Neither is subordinate — they challenge each other.
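The `repeat_until` semantics reduce to a bounded loop: critique produces a score, and refinement repeats until the score clears the threshold or the iteration cap is hit. A hypothetical sketch with a simulated critique score (a real critique phase would produce the score itself):

```python
# Simulated critique: quality improves by one point per refinement pass.
# In the real pipeline this score would come from the critique phase's output.
def critique(draft: str, iteration: int) -> int:
    return 5 + iteration

# Loop until score >= threshold or max_iterations refinements have run,
# mirroring the "repeat_until" block in the config above.
def refine_loop(threshold: int = 8, max_iterations: int = 5) -> tuple[int, int]:
    draft = "initial implementation"
    score = critique(draft, 0)
    iterations = 0
    while score < threshold and iterations < max_iterations:
        iterations += 1
        draft = f"refined v{iterations}"
        score = critique(draft, iterations)
    return score, iterations

score, iterations = refine_loop()  # starts at 5, climbs to 8 in 3 passes
```

The cap matters: without `max_iterations`, a critique that never scores high enough would loop forever.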

Multi-LLM Assembly Line

Different AI for each stage, like a factory assembly line.
Gemini (breadth)     → finds all relevant code across 1000 files

Claude Haiku (speed) → classifies each file by priority

Claude Opus (depth)  → deep analysis of high-priority files

Codex (code)         → generates implementation

Claude Sonnet (balance) → reviews and tests

Gemini (breadth)     → checks for ripple effects across codebase

Claude Haiku (speed) → writes documentation
{
  "models": {
    "breadth": "gemini",
    "speed": "haiku",
    "depth": "opus",
    "code": "codex",
    "balance": "sonnet"
  },
  "pipeline": {
    "phases": [
      { "name": "discover",  "model": "breadth", "workers": 3 },
      { "name": "classify",  "model": "speed",   "workers": 1 },
      { "name": "analyze",   "model": "depth",   "workers": 1 },
      { "name": "implement", "model": "code",    "workers": 1 },
      { "name": "verify",    "model": "balance", "workers": 2 },
      { "name": "ripple",    "model": "breadth", "workers": 3 },
      { "name": "document",  "model": "speed",   "workers": 1 }
    ],
    "scratchpad": true
  }
}
7 phases. 5 different AI models. Each chosen for what it does best. No single model could do this alone. The chain is stronger than any individual.
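The `models` block is an alias table: phases name a role ("breadth", "depth"), and the runner resolves each role to a concrete model before dispatch. A hypothetical sketch of that resolution step:

```python
# Resolve each phase's role alias ("breadth", "speed", ...) to a concrete
# model name using the config's "models" table. Phases without a "workers"
# count are assumed to default to 1.
def resolve_phases(config: dict) -> list[tuple[str, str, int]]:
    aliases = config["models"]
    return [
        (p["name"], aliases[p["model"]], p.get("workers", 1))
        for p in config["pipeline"]["phases"]
    ]

config = {
    "models": {"breadth": "gemini", "speed": "haiku", "depth": "opus"},
    "pipeline": {"phases": [
        {"name": "discover", "model": "breadth", "workers": 3},
        {"name": "classify", "model": "speed"},
        {"name": "analyze", "model": "depth"},
    ]},
}
resolved = resolve_phases(config)
```

Indirection through roles means swapping a model (say, a new "breadth" provider) touches one line of config rather than every phase that uses it.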

Why No Depth Limit?

Every other multi-agent system has practical limits:
  • Claude Teams: In-memory context. Dies when terminal closes.
  • Codex Plugin: Single subprocess call. One shot.
  • claude-squad: tmux sessions. Manual management.
connector.json has no limit because:
  1. Each phase is a fresh PTY session — no context accumulation
  2. Scratchpad is the memory — files on disk, not in-context tokens
  3. Daemon persists — phases can run for hours, days
  4. Sessions are independent — phase 7 doesn’t carry phase 1’s context weight
The scratchpad is the key. Instead of stuffing everything into one AI’s context window, each phase reads only what it needs from shared files. The chain can be 3 phases or 30 — the cost per phase stays constant.
Traditional: Context grows with chain length → hits limit
connector.json: Each phase reads from scratchpad → constant cost

Phase 1: write to scratchpad (context: small)
Phase 2: read scratchpad + work (context: small)
Phase 3: read scratchpad + work (context: small)
...
Phase N: read scratchpad + work (context: still small)
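The constant-cost claim can be demonstrated concretely. In this hypothetical sketch, each phase reads only its immediate predecessor's scratchpad file rather than the full history, so context size stays flat no matter how long the chain runs:

```python
import tempfile
from pathlib import Path

# Each phase reads only the previous phase's output file from the scratchpad,
# not the accumulated history — this is what keeps per-phase context constant.
def phase_context(scratchpad: Path, phase_index: int) -> str:
    if phase_index == 0:
        return ""
    prev = scratchpad / f"phase_{phase_index - 1}.md"
    return prev.read_text()

with tempfile.TemporaryDirectory() as d:
    scratchpad = Path(d)
    sizes = []
    for i in range(10):
        ctx = phase_context(scratchpad, i)
        sizes.append(len(ctx))
        # Simulate a fixed-size phase output written to the scratchpad.
        (scratchpad / f"phase_{i}.md").write_text("x" * 100)
```

After 10 phases, the context read by phase 10 is the same size as the context read by phase 2 — the growth curve is flat, unlike an in-context chain where phase N carries phases 1 through N-1.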