Examples: Real-World connector.json

These examples show pipeline topology. For dialogue, meeting, mesh, and recursive team examples, see Use Cases, Bidirectional, Meeting Protocol, and N-to-N Topology.

Example 1: Sandboxed Research Session

Single session, sandboxed, no network. Safe exploration of untrusted code.
{
  "connector": "2.0",
  "name": "safe-research",
  "session": {
    "sandbox": true,
    "worktree": "research"
  },
  "models": {
    "primary": "claude"
  }
}
niia run safe-research.json
# Creates worktree, applies sandbox, launches Claude in isolated PTY

Example 2: Claude + Codex Code Review

Two LLMs review the same diff independently. Compare results.
{
  "connector": "2.0",
  "name": "dual-review",
  "pipeline": {
    "phases": [
      {
        "name": "review",
        "workers": [
          { "model": "claude", "prompt": "Review the diff on main. Focus on security." },
          { "model": "codex", "prompt": "Review the diff on main. Focus on performance." }
        ],
        "session": {
          "sandbox": true,
          "worktree": "review-{worker}"
        }
      },
      {
        "name": "synthesize",
        "workers": 1,
        "model": "claude",
        "prompt": "Read both reviews from scratchpad. Synthesize into final report."
      }
    ],
    "scratchpad": true
  }
}
Two workers run in parallel — Claude for security, Codex for performance. Each in its own worktree + sandbox. Results collected in scratchpad. Final synthesis by Claude.
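Assuming the config above is saved as dual-review.json, the run follows the same pattern as Example 1:

```shell
niia run dual-review.json
# Spawns both reviewers in parallel, each in its own worktree + sandbox,
# then runs the synthesize phase once both reviews land in the scratchpad
```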

Example 3: Cost-Optimized Feature Build

Research with cheap models. Implement with expensive ones. Verify with medium.
{
  "connector": "2.0",
  "name": "build-feature",
  "models": {
    "cheap": "haiku",
    "expensive": "opus",
    "medium": "sonnet",
    "fallback": "gemini"
  },
  "session": {
    "worktree": "feat-auth",
    "scratchpad": true
  },
  "pipeline": {
    "phases": [
      {
        "name": "research",
        "workers": 5,
        "model": "cheap",
        "prompt": "Investigate auth module. Report file paths, types, patterns. Do not modify files.",
        "session": { "sandbox": true }
      },
      {
        "name": "plan",
        "workers": 1,
        "model": "expensive",
        "prompt": "Read research from scratchpad. Design implementation plan."
      },
      {
        "name": "implement",
        "workers": 1,
        "model": "expensive",
        "prompt": "Implement the plan. Commit changes."
      },
      {
        "name": "verify",
        "workers": 3,
        "model": "medium",
        "prompt": "Run tests. Verify implementation works. Report issues.",
        "session": { "sandbox": true }
      }
    ]
  }
}
Five Haiku workers research in parallel (cheap). One Opus worker plans and implements (expensive, but only two phases). Three Sonnet workers verify (medium). If Opus is unavailable, the pipeline falls back to Gemini automatically.
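To trial this pipeline without paying Opus rates, the named model roles can be swapped at run time using the override flag shown under Running Examples:

```shell
# Hypothetical trial run: substitute Sonnet for the "expensive" role
niia run build-feature.json --model-override expensive=sonnet
```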

Example 4: Cross-Machine Distributed Team

Workers on laptop and server, coordinated by a single connector.json.
{
  "connector": "2.0",
  "name": "distributed-build",
  "machines": {
    "laptop": "MY-LAPTOP.local",
    "server": "BUILD-SERVER.local"
  },
  "pipeline": {
    "phases": [
      {
        "name": "research",
        "workers": [
          { "machine": "laptop", "model": "claude", "prompt": "Research frontend auth flow." },
          { "machine": "server", "model": "gemini", "prompt": "Research backend auth middleware." }
        ]
      },
      {
        "name": "implement",
        "workers": [
          { "machine": "laptop", "model": "claude", "prompt": "Implement frontend changes." },
          { "machine": "server", "model": "claude", "prompt": "Implement backend changes." }
        ],
        "session": { "worktree": "impl-{machine}" }
      }
    ],
    "scratchpad": true,
    "mailbox": true
  }
}
Frontend work runs on the laptop, backend work on the server, both coordinated by one connector.json. Scratchpad and mailbox state sync across machines via the gateway.
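For a distributed pipeline like this, it can help to exercise one phase at a time with the --phase flag from Running Examples, e.g. to confirm both machines are reachable before implementation begins:

```shell
# Run only the research phase across both machines
niia run distributed-build.json --phase research
```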

Example 5: Provider Failover

Claude is primary. If the Anthropic API is down, switch to Gemini; if that also fails, fall back to a local LLM.
{
  "connector": "2.0",
  "name": "resilient-task",
  "models": {
    "primary": "claude",
    "fallback": "gemini",
    "emergency": "ollama/llama3"
  },
  "session": {
    "worktree": "task"
  },
  "pipeline": {
    "phases": [
      {
        "name": "work",
        "workers": 1,
        "model": "primary",
        "failover": ["fallback", "emergency"]
      }
    ]
  }
}
Tries Claude first. If unavailable, tries Gemini. Last resort: local Ollama. Work continues regardless of provider status.
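The failover chain can be previewed without spending any tokens using the dry-run flag from Running Examples:

```shell
# Show what would happen, without launching any workers
niia run resilient-task.json --dry-run
```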

Example 6: Nightly Dream (Auto Memory Consolidation)

Runs nightly at 3 AM via the daemon scheduler. Reads session history, consolidates memory.
{
  "connector": "2.0",
  "name": "nightly-dream",
  "schedule": "0 3 * * *",
  "models": {
    "primary": "haiku"
  },
  "pipeline": {
    "phases": [
      {
        "name": "consolidate",
        "workers": 1,
        "model": "primary",
        "prompt": "Read the last 24 hours of session transcripts. Extract key decisions, discoveries, and patterns. Save to memory."
      }
    ]
  }
}
Cheap model (Haiku) runs at 3 AM, reads session history, and updates memory. The AI maintains its own knowledge base automatically.
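Assuming the schedule field follows standard five-field cron syntax, "0 3 * * *" breaks down as:

```
0 3 * * *
│ │ │ │ └─ day of week (every day)
│ │ │ └─── month (every month)
│ │ └───── day of month (every day)
│ └─────── hour (03)
└───────── minute (00)
```

Changing it to "0 3 * * 0" would run the consolidation weekly, on Sundays, instead.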

Running Examples

# Run any connector.json
niia run connector.json

# Run with override
niia run connector.json --model-override primary=sonnet

# Dry run (show what would happen)
niia run connector.json --dry-run

# Run specific phase only
niia run connector.json --phase research