Week 6 AIPass update – answering the top questions from last post (file conflicts, remote models, scale)

Follow-up to the last post, answering the top questions from the comments. Appreciate everyone who jumped in.

The most common one by a mile was "what happens when two agents write to the same file at the same time?" Fair question; it's the first thing everyone asks about a shared-filesystem setup. Honest answer: it almost never happens, because the framework makes it hard to happen.

Four things keep it clean:

  1. Planning first. Every multi-agent task runs through a flow plan template before any file gets touched. The plan assigns files and phases so agents don't collide by default. Templates here if you're curious: github.com/AIOSAI/AIPass/tree/main/src/aipass/flow/templates
  2. Dispatch blockers. An agent can't exist in two places at once. If five senders email the same agent about the same thing, it queues them; it doesn't spawn five copies. No "5 agents fixing the same bug" nightmares.
  3. Git flow. Agents don't merge their own work. They build features on main locally, submit a PR, and only the orchestrator merges. When an agent is writing a PR, it sets a repo-wide git block until it's done.
  4. JSON over Markdown for state files. Markdown let agents drift into their own formats over time; JSON holds structure. You can run `cat .trinity/local.json` and see exactly what an agent thinks at any time.

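Point 2 above can be sketched as a tiny per-agent queue. To be clear, this is a hypothetical illustration of the idea, not AIPass's actual code; the class and method names are made up:

```python
from collections import deque

class AgentDispatcher:
    """Hypothetical sketch of dispatch blocking: one live task per
    agent, and any extra requests queue up instead of spawning copies."""

    def __init__(self):
        self.busy = {}    # agent name -> task currently running
        self.queues = {}  # agent name -> deque of pending tasks

    def dispatch(self, agent, task):
        if agent in self.busy:
            # Agent already has a live instance: queue, don't spawn a copy.
            self.queues.setdefault(agent, deque()).append(task)
            return "queued"
        self.busy[agent] = task
        return "started"

    def finish(self, agent):
        # On completion, pull the next queued task (if any) for this agent.
        queue = self.queues.get(agent)
        if queue:
            self.busy[agent] = queue.popleft()
            return self.busy[agent]
        self.busy.pop(agent, None)
        return None

d = AgentDispatcher()
print(d.dispatch("fixer", "bug-1"))  # started
print(d.dispatch("fixer", "bug-2"))  # queued: no second copy spawned
print(d.finish("fixer"))             # bug-2 becomes the live task
```

Same guarantee as described above: the second request about the same thing waits its turn instead of becoming a duplicate agent.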
Second common question: "doesn't a local framework with a remote model defeat the point?" Local means the orchestration is local - agents, memory, files, messaging all on your machine. The model is the brain you plug in. And you don't need API keys - AIPass runs on your existing Claude Pro/Max, Codex, or Gemini CLI subscription by invoking each CLI as an official subprocess. No token extraction, no proxying, nothing sketchy. Or point it at a local model. Or mix all of them. You're not locked to one vendor and you're not paying for API credits on top of a sub you already have.

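The subprocess bit is just the standard library at work. Here's a minimal sketch of the pattern, with `echo` standing in for a real vendor CLI, since each tool takes its own flags and I'm not going to guess at them here:

```python
import shutil
import subprocess

def run_model_cli(cli, args, prompt):
    """Invoke a model CLI as an ordinary subprocess and capture its reply.
    Generic sketch of the pattern only - real vendor CLIs need their own
    flags, which aren't shown here."""
    path = shutil.which(cli)
    if path is None:
        raise FileNotFoundError(f"{cli} is not on PATH")
    result = subprocess.run(
        [path, *args, prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Stand-in demo with `echo` so the sketch runs anywhere:
print(run_model_cli("echo", [], "hello from a subprocess"))
```

No proxy, no token handling: the CLI keeps its own auth, and the framework just reads stdout.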
On scale: I've run 30 agents at once without a crash, and 3 agents each with 40 sub-agents at around 80% CPU with occasional spikes. Compute is the bottleneck, not the framework. I'd love to test 1000, but my machine would cry before I got there. If someone wants to try it, please tell me what broke.

Shipped this week: a new watchdog module (5 handlers, 100+ tests) for event automation, a fix for a git PR lock file that was leaking into commits, and a bunch of quality-checker fixes.

About 6 weeks in. Solo dev, every PR is human+AI collab.

`pip install aipass`

https://github.com/AIOSAI/AIPass

Keep the questions coming, that's what got this post written.

submitted by /u/Input-X