N-to-N Topology

Most multi-agent systems are hub-and-spoke: one leader, many workers. connector.json also supports full mesh, where any agent can talk to any other agent directly.

Topologies

Hub-and-Spoke (Claude Teams, Codex Plugin):
       Leader
      / | | \
     W  W  W  W
  Workers only talk to leader. Never to each other.

Star (with mailbox):
       Leader
      / | | \
     W──W──W──W
  Workers can message each other, but leader manages.

Full Mesh (connector.json N2N):
     A ─── B
     | \ / |
     |  X  |
     | / \ |
     C ─── D
  Every agent talks to every agent. No hierarchy required.

Why Full Mesh Matters

In a hub-and-spoke system:
  Worker A finds a bug → tells Leader → Leader tells Worker B to fix it
  = 2 hops. The leader is the bottleneck, and the leader must understand everything.

In full mesh:
  Worker A finds a bug → tells Worker B directly → B fixes it
  = 1 hop. No bottleneck. Peer-to-peer collaboration.
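The hop difference can be sketched with a toy hop counter. Everything here is illustrative, not part of connector.json:

```python
# Toy comparison of message hops: hub-and-spoke vs. full mesh.
# Agent names are hypothetical.

def hub_and_spoke_hops(sender, receiver, leader="leader"):
    """Every worker-to-worker message must relay through the leader."""
    if sender == leader or receiver == leader:
        return 1
    return 2  # sender -> leader, then leader -> receiver

def mesh_hops(sender, receiver):
    """Any agent reaches any other agent directly."""
    return 1

print(hub_and_spoke_hops("worker_a", "worker_b"))  # 2
print(mesh_hops("worker_a", "worker_b"))           # 1
```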

The Spec

{
  "connector": "2.0",
  "name": "mesh-team",
  "type": "mesh",

  "agents": [
    { "id": "frontend", "model": "claude",  "prompt": "You own frontend code. Implement and maintain." },
    { "id": "backend",  "model": "codex",   "prompt": "You own backend API. Implement and maintain." },
    { "id": "devops",   "model": "gemini",  "prompt": "You own infrastructure. Deploy and monitor." },
    { "id": "qa",       "model": "sonnet",  "prompt": "You own quality. Test everything." }
  ],

  "communication": {
    "topology": "mesh",
    "any_to_any": true,
    "broadcast_channel": true
  },

  "task": "Build and deploy user authentication feature."
}
Four agents. No leader. Each owns a domain. They coordinate directly.
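Before launching a mesh, a spec like the one above can be sanity-checked. A minimal sketch in Python; the field names mirror the example spec, but the validator itself is hypothetical, not part of any shipped tooling:

```python
import json

def validate_mesh_spec(text):
    """Illustrative checks for a mesh-type connector.json."""
    spec = json.loads(text)
    assert spec.get("type") == "mesh", "expected type: mesh"
    agents = spec.get("agents", [])
    assert len(agents) >= 2, "a mesh needs at least two agents"
    ids = [a["id"] for a in agents]
    assert len(ids) == len(set(ids)), "agent ids must be unique"
    comm = spec.get("communication", {})
    assert comm.get("topology") == "mesh", "expected topology: mesh"
    return spec

spec = validate_mesh_spec("""{
  "connector": "2.0",
  "name": "mesh-team",
  "type": "mesh",
  "agents": [
    {"id": "frontend", "model": "claude", "prompt": "You own frontend code."},
    {"id": "backend",  "model": "codex",  "prompt": "You own backend API."}
  ],
  "communication": {"topology": "mesh", "any_to_any": true, "broadcast_channel": true},
  "task": "Build and deploy user authentication feature."
}""")
print(sorted(a["id"] for a in spec["agents"]))  # ['backend', 'frontend']
```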

How N2N Works

Frontend needs a new API endpoint:
  frontend → [mailbox] → backend: "I need POST /auth/refresh"
  backend → [mailbox] → frontend: "Done. Returns { token, expires_at }"
  frontend → [mailbox] → qa: "New endpoint ready for testing"
  qa → [mailbox] → backend: "POST /auth/refresh returns 500 with expired token"
  backend → [mailbox] → qa: "Fixed. Retry."
  qa → [broadcast]: "All auth endpoints passing. Ready to deploy."
  devops → [mailbox] → qa: "Deployed to staging. URL: ..."
  qa → [broadcast]: "Staging verified. Ship it."
No orchestrator needed for routine communication. Agents self-organize around the task.
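The exchange above can be modeled as per-agent mailboxes where any agent writes into any other agent's queue. A minimal sketch, assuming messages are plain strings (class and method names are hypothetical):

```python
from collections import defaultdict

class Mesh:
    """Toy mailbox transport: any agent can send to any other agent."""
    def __init__(self, agent_ids):
        self.agents = set(agent_ids)
        self.mailboxes = defaultdict(list)  # agent id -> pending (sender, body)

    def send(self, sender, receiver, body):
        assert sender in self.agents and receiver in self.agents
        self.mailboxes[receiver].append((sender, body))

    def read(self, agent):
        """Drain and return the agent's pending messages."""
        messages, self.mailboxes[agent] = self.mailboxes[agent], []
        return messages

mesh = Mesh(["frontend", "backend", "devops", "qa"])
mesh.send("frontend", "backend", "I need POST /auth/refresh")
mesh.send("backend", "frontend", "Done. Returns { token, expires_at }")
print(mesh.read("backend"))  # [('frontend', 'I need POST /auth/refresh')]
```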

Broadcast Channel

Any agent can send to all:
{
  "communication": {
    "broadcast_channel": true
  }
}
qa → [broadcast]: "Tests passing. Ready to deploy."
  → frontend sees it
  → backend sees it
  → devops sees it → starts deploy
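The fan-out above amounts to one write into every other agent's mailbox. A self-contained sketch, with illustrative names:

```python
# Toy broadcast: one message lands in every other agent's mailbox.

def broadcast(mailboxes, sender, body):
    for agent in mailboxes:
        if agent != sender:
            mailboxes[agent].append((sender, body))

mailboxes = {a: [] for a in ["frontend", "backend", "devops", "qa"]}
broadcast(mailboxes, "qa", "Tests passing. Ready to deploy.")
print(mailboxes["devops"])  # [('qa', 'Tests passing. Ready to deploy.')]
print(mailboxes["qa"])      # []  (sender does not receive its own broadcast)
```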

Observer Mode

The main session (you) can observe the entire mesh without participating:
# Watch all communication
niia mesh --observe

  [frontend → backend] "Need POST /auth/refresh"
  [backend → frontend] "Done. Returns { token, expires_at }"
  [frontend → qa] "New endpoint ready for testing"
  [qa → backend] "POST /auth/refresh returns 500"
  ...

# Intervene when needed
niia mesh --broadcast "Stop. Change of plans. Use OAuth instead of custom tokens."
  → all agents receive the message
  → all agents adjust their work
You see everything. You can inject commands to any agent or broadcast to all. But you don’t have to — the mesh self-organizes.
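Conceptually, observer mode is a tap on the transport: every send is logged before delivery, and the observer itself is never addressable. A hypothetical sketch, not the niia CLI itself:

```python
# Observer sketch: wrap the send function so every message is logged
# before it is delivered. Names are illustrative.

def make_observed_send(send, log):
    def observed(sender, receiver, body):
        log.append(f"[{sender} -> {receiver}] {body}")
        send(sender, receiver, body)
    return observed

log = []
deliveries = []
send = make_observed_send(lambda s, r, b: deliveries.append((r, b)), log)
send("frontend", "backend", "Need POST /auth/refresh")
print(log)  # ['[frontend -> backend] Need POST /auth/refresh']
```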

Mesh + Hierarchy

Full mesh doesn’t mean no structure. Agents can have specializations and natural leadership:
{
  "agents": [
    { "id": "lead",     "model": "opus",   "prompt": "Technical lead. Make final decisions when agents disagree." },
    { "id": "frontend", "model": "sonnet", "prompt": "Frontend specialist. Consult lead for architecture decisions." },
    { "id": "backend",  "model": "codex",  "prompt": "Backend specialist. Consult lead for architecture decisions." },
    { "id": "qa",       "model": "haiku",  "prompt": "QA. Report to all. Block deploy if tests fail." }
  ],
  "communication": {
    "topology": "mesh",
    "escalation": {
      "disagreement": "lead"
    }
  }
}
Everyone talks to everyone. But when there’s disagreement, it escalates to the lead. Natural hierarchy emerges from rules, not from wiring.
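The escalation rule can be read as a routing override: ordinary messages go where they are addressed, but a message flagged as a disagreement is rerouted to the configured target. A sketch mirroring the `"escalation": {"disagreement": "lead"}` setting above; the routing code itself is illustrative:

```python
# Escalation routing sketch. The mapping mirrors the spec's
# escalation block; the route function is hypothetical.

ESCALATION = {"disagreement": "lead"}

def route(message):
    """Return the actual recipient, applying escalation overrides."""
    return ESCALATION.get(message.get("kind"), message["to"])

print(route({"from": "frontend", "to": "backend", "kind": "info"}))          # backend
print(route({"from": "frontend", "to": "backend", "kind": "disagreement"}))  # lead
```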

N2N + Cross-Machine

Full mesh across multiple machines:
{
  "agents": [
    { "id": "frontend", "model": "claude",  "machine": "laptop" },
    { "id": "backend",  "model": "codex",   "machine": "server" },
    { "id": "devops",   "model": "gemini",  "machine": "server" },
    { "id": "qa",       "model": "sonnet",  "machine": "laptop" }
  ],
  "communication": {
    "topology": "mesh",
    "transport": "gateway"
  }
}
Frontend and QA on laptop. Backend and DevOps on server. Full mesh communication across machines via gateway relay. Every agent can message every other agent regardless of location.
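The routing decision reduces to comparing machine assignments: same machine, deliver directly; different machines, relay through the gateway. A sketch using the assignments from the spec above (the transport function is illustrative):

```python
# Cross-machine routing sketch: same-machine messages go direct,
# cross-machine messages relay through the gateway.

MACHINE = {"frontend": "laptop", "backend": "server",
           "devops": "server", "qa": "laptop"}

def transport(sender, receiver):
    if MACHINE[sender] == MACHINE[receiver]:
        return "local"
    return "gateway"

print(transport("frontend", "qa"))       # local   (both on laptop)
print(transport("frontend", "backend"))  # gateway (laptop -> server)
```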

The Topology Spectrum

connector.json supports all topologies:
Pipeline:    A → B → C → D            phases in connector.json
Dialogue:    A ↔ B                    type: "dialogue"
Meeting:     A ↔ B ↔ C                type: "meeting"
Hub-spoke:   L → [W, W, W]            pipeline with parallel workers
Star:        L ↔ [W ↔ W ↔ W]          pipeline + mailbox
Mesh:        A ↔ B ↔ C ↔ D            type: "mesh"
Tree:        A → [B → [C, D], E]      recursive teams

Choose the topology that fits the task.
One spec. All patterns.