Machine-Level Expansion

A single machine runs AI sessions; multiple machines form a network of them. connector.json treats machines as deployment targets: the same spec at any scale.

One Machine, One Daemon

Machine A:
  niia daemon (launchd, always running)
    ├── PTY Session 1: Claude
    ├── PTY Session 2: Codex
    └── PTY Session 3: Gemini
The daemon manages the sessions, and the headless server bridges them to the network. This is the atomic unit.
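As a sketch, the single-machine case is just a connector.json whose phases all target the same machine. The field names below are taken from the multi-machine examples later on this page; the hostname and prompts are placeholders, and any single-machine shorthand the spec may offer is not shown:

```json
{
  "machines": {
    "local": "MACHINE-A.local"
  },
  "pipeline": {
    "phases": [
      { "machine": "local", "model": "claude", "prompt": "Design the feature." },
      { "machine": "local", "model": "codex",  "prompt": "Implement it." },
      { "machine": "local", "model": "gemini", "prompt": "Review the result." }
    ]
  }
}
```

Scaling out is then a matter of adding entries to "machines" and pointing phases at them; the pipeline shape stays the same.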

Two Machines, One Pipeline

Machine A (laptop):                Machine B (server):
  niia daemon                        niia daemon
    ├── Session: Claude                ├── Session: Codex
    └── Session: Haiku                 └── Session: ollama/llama3

  ←───── gateway relay (WSS) ─────→
{
  "machines": {
    "laptop": "MY-LAPTOP.local",
    "server": "BUILD-SERVER.local"
  },
  "pipeline": {
    "phases": [
      { "machine": "laptop", "model": "claude", "prompt": "Design the feature." },
      { "machine": "server", "model": "codex",  "prompt": "Implement it." },
      { "machine": "laptop", "model": "haiku",  "prompt": "Write tests." },
      { "machine": "server", "model": "ollama/llama3", "prompt": "Run on production data." }
    ]
  }
}
Same connector.json. Phases execute on different machines. The pipeline doesn’t know or care about machine boundaries.

N Machines, Full Mesh

Machine A ──── Machine B
    │    \    /    │
    │     \  /     │
    │      \/      │
    │      /\      │
    │     /  \     │
    │    /    \    │
Machine C ──── Machine D
Every machine talks to every machine. Every AI session on any machine can communicate with any AI session on any other machine.
{
  "machines": {
    "seoul":  "SEOUL.local",
    "tokyo":  "TOKYO.local",
    "sf":     "SF.local",
    "london": "LONDON.local"
  },
  "agents": [
    { "id": "kr-dev",  "machine": "seoul",  "model": "claude" },
    { "id": "jp-test", "machine": "tokyo",  "model": "gemini" },
    { "id": "us-ops",  "machine": "sf",     "model": "codex" },
    { "id": "uk-sec",  "machine": "london", "model": "ollama/llama3" }
  ],
  "communication": {
    "topology": "mesh"
  }
}
4 countries. 4 machines. 4 different AI models. Full mesh communication. One connector.json.

What Each Machine Provides

Machines aren’t interchangeable. Each has unique resources.
Machine        Has                               Best for
───────        ───                               ────────
Laptop         Fast network, developer context   Coordination, light tasks
GPU server     A100/H100, local models           Heavy inference, fine-tuned models
Data server    Production database, logs         Data analysis (data doesn’t move)
CI server      Build tools, test infrastructure  Testing, deployment
Air-gapped     Sensitive data, no internet       Regulated data processing
connector.json places AI where the resources are:
{
  "phases": [
    { "machine": "laptop",      "model": "haiku",            "prompt": "Coordinate." },
    { "machine": "gpu-server",  "model": "ollama/llama-70b", "prompt": "Heavy analysis." },
    { "machine": "data-server", "model": "ollama/llama3",    "prompt": "Query prod data." },
    { "machine": "ci-server",   "model": "codex",            "prompt": "Run full test suite." }
  ]
}

Machine Lifecycle

Machines come and go. The daemon handles it.
# See what's online
niia remote status list
  SEOUL.local     [online]  3 sessions
  TOKYO.local     [online]  1 session
  SF.local        [offline]
  LONDON.local    [online]  2 sessions

# Start headless on a machine
niia remote start SF.local

# Upgrade niia on a remote machine
niia remote upgrade TOKYO.local

# A machine goes offline mid-pipeline
#   → failover to another machine with same capability
#   → or pause and resume when machine comes back
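The offline-handling behavior above could be expressed as a per-pipeline policy. This is a hypothetical sketch, not documented connector.json syntax: the "on_offline" key and the "backup" machine are assumptions, and the // annotations are for illustration only (plain JSON does not allow comments):

```json
{
  "machines": {
    "sf":     "SF.local",
    "backup": "BACKUP.local"
  },
  "pipeline": {
    "phases": [
      { "machine": "sf", "model": "codex", "prompt": "Run full test suite." }
    ],
    "on_offline": "failover"   // hypothetical field: "failover" or "pause"
  }
}
```

Under this reading, "failover" would reroute the phase to another machine with the same capability, while "pause" would hold the pipeline until the machine returns.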

Machine as Deployment Target

In traditional infrastructure:
  Kubernetes: "run this container on a node with 4 GPUs"
In connector.json:
  "run this AI on a machine with a local LLM and production data access"
The machine is a deployment target. connector.json is the manifest. niia daemon is the runtime.
Kubernetes                    connector.json
─────────────                 ──────────────
Node                          Machine
Pod                           PTY Session
Container                     AI CLI (claude, codex, gemini)
YAML manifest                 connector.json
kubelet                       niia daemon
kubectl                       niia remote
Service mesh                  gateway relay + P2P
Same pattern. Different domain. Kubernetes orchestrates containers. connector.json orchestrates AI.