I replaced my entire team with 19 Claude-powered agents. Here’s the architecture.

I run a local business audit platform with zero employees. The entire operation is handled by 19 AI agents built on Anthropic's Claude models, deployed on Railway with staggered boot schedules. Here's how the system actually works.

The Audit Pipeline (6 parallel agents)

When a business submits its name, a resolver hits the Google Places API to pull business data. Then 5 agents run in parallel:

  • SEO analyst (Haiku) - scores search presence against vertical benchmarks
  • Review analyst (Sonnet 4.6) - analyzes review sentiment, generates response templates
  • Website speed analyst (Haiku) - evaluates Core Web Vitals and mobile performance
  • AI visibility analyst (Haiku) - checks how the business appears in ChatGPT/Perplexity/Bing AI
  • Citation analyst (Haiku) - audits directory listings across major platforms

A competitor analyst (Sonnet 4.6) runs next, dependent on the SEO results for SERP competitor names. Finally, an executive summary agent (Sonnet 4.6) synthesizes everything into an overall score and findings.
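The fan-out/fan-in shape of the pipeline can be sketched like this. The agent functions are stand-ins (a real version would call the Anthropic API inside each one), and the scores are dummy values for illustration:

```typescript
// Stage 1: five independent analysts fan out in parallel.
// Stage 2: competitor analyst waits on the SEO agent's output.
// Stage 3: executive summary synthesizes everything into one score.

type AgentResult = { agent: string; score: number };

async function runAgent(name: string, score: number): Promise<AgentResult> {
  // Placeholder for a model call (Haiku or Sonnet, depending on the agent).
  return { agent: name, score };
}

async function runAudit(): Promise<{ results: AgentResult[]; overall: number }> {
  const parallel = await Promise.all([
    runAgent("seo", 72),
    runAgent("reviews", 85),
    runAgent("speed", 64),
    runAgent("ai-visibility", 58),
    runAgent("citations", 77),
  ]);

  // Competitor analysis depends on SERP competitor names from the SEO result.
  const seo = parallel.find((r) => r.agent === "seo")!;
  const competitors = await runAgent("competitors", seo.score - 10);

  const results = [...parallel, competitors];
  const overall = Math.round(
    results.reduce((sum, r) => sum + r.score, 0) / results.length
  );
  return { results, overall };
}
```

The key design point is that only the competitor analyst has a data dependency, so everything else shares one wall-clock round trip.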

Why Sonnet vs Haiku?

Anything customer-facing uses Sonnet 4.6. The executive summary, competitor analysis, and review response templates need to read like a human consultant wrote them. The structured scoring agents (SEO, citations, website speed, AI visibility) use Haiku because they output JSON scores, not prose. Quality doesn't matter for a number. Cost does.

Total cost per audit: roughly $0.08-0.12 in API calls.
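The routing rule reduces to a one-line lookup. Agent names and the "sonnet"/"haiku" labels here are illustrative, not the actual model IDs:

```typescript
// Customer-facing prose -> Sonnet; structured JSON scoring -> Haiku.
const PROSE_AGENTS = new Set(["executive-summary", "competitors", "review-responses"]);

function pickModel(agent: string): "sonnet" | "haiku" {
  return PROSE_AGENTS.has(agent) ? "sonnet" : "haiku";
}
```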

The Operations Layer (13 agents on cron schedules)

Beyond the audit pipeline, 13 agents run on scheduled intervals:

  • Pipeline monitor (every 30 min) - catches failed jobs, alerts me
  • Sales closer (every 2 hours) - scores leads by revenue-at-risk, drafts personalized follow-ups
  • Outreach manager (daily 8am) - pulls prospects, enriches with Perplexity research, drafts cold emails
  • Self-improvement reviewer (weekly Sunday 6am) - this is the meta agent. It reviews system logs, error rates, conversion data, and writes a report on what to fix. It's basically a weekly operations consultant that costs $0.02 to run.
  • 3 conversion agents - abandoned audit closer, email reply monitor, checkout escalator. These chase the funnel leaks.
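The schedules above can be sketched as a simple due-check, assuming each agent records its last run time. Intervals mirror the list; the real deployment presumably uses Railway cron rather than a loop like this:

```typescript
// Interval per agent, in milliseconds.
const SCHEDULES: Record<string, number> = {
  "pipeline-monitor": 30 * 60 * 1000,          // every 30 min
  "sales-closer": 2 * 60 * 60 * 1000,          // every 2 hours
  "outreach-manager": 24 * 60 * 60 * 1000,     // daily
  "self-improvement": 7 * 24 * 60 * 60 * 1000, // weekly
};

function isDue(agent: string, lastRunMs: number, nowMs: number): boolean {
  const interval = SCHEDULES[agent];
  if (interval === undefined) return false; // suspended/unknown agents never fire
  return nowMs - lastRunMs >= interval;
}
```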

The other 7 agents (client success, content strategist, review responder, competitor tracker, etc.) are built but suspended. They serve paying subscribers and I only have one right now, so they'd be querying empty tables.

Prompt Injection Defense

Since the pipeline ingests untrusted external content (Google Business Profile descriptions, SERP data, competitor websites, review text), every piece of third-party data runs through a sanitizeExternalContent() function before it gets interpolated into any prompt. This strips common injection patterns. Without this, a competitor could theoretically put prompt injection text in their Google Business Profile description and corrupt the audit output.
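A hedged sketch of what a function like sanitizeExternalContent() might do: strip common injection phrasing and prompt-delimiter tokens from untrusted text before interpolation. The patterns below are illustrative, not the author's actual list:

```typescript
// Common prompt-injection phrasings and role-delimiter tokens.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/gi,
  /you are now\b/gi,
  /\bsystem prompt\b/gi,
  /<\/?(system|assistant|user)>/gi,
];

function sanitizeExternalContent(text: string): string {
  let clean = text;
  for (const pattern of INJECTION_PATTERNS) {
    clean = clean.replace(pattern, "[removed]");
  }
  // Cap length so a hostile profile can't blow the prompt budget.
  return clean.slice(0, 4000);
}
```

Pattern-stripping is a mitigation, not a guarantee; it raises the bar against the casual Google-Business-Profile attack described above, but a defense-in-depth setup would also constrain what the downstream agent's output can do.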

Self-Improvement Loop

The self-improvement reviewer deserves its own callout. Every Sunday it:

  1. Pulls the week's audit completion rate, error rate, and conversion metrics
  2. Compares against the previous week
  3. Analyzes which agents failed and why
  4. Writes a prioritized recommendation list
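Step 2 is just a metric diff fed into the model's prompt. A minimal sketch, with illustrative metric names:

```typescript
// Week-over-week deltas for the metrics the reviewer pulls.
type Metrics = { completionRate: number; errorRate: number; conversionRate: number };

function weekOverWeek(current: Metrics, previous: Metrics): Record<string, number> {
  const delta: Record<string, number> = {};
  for (const key of Object.keys(current) as (keyof Metrics)[]) {
    // toFixed + unary plus avoids floating-point noise in the report.
    delta[key] = +(current[key] - previous[key]).toFixed(4);
  }
  return delta;
}
```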

I review the list Monday morning and implement the top items. It caught 5 bugs in its first run that I'd missed during manual testing.

Infrastructure costs: ~$350/month total (Railway hosting + Vercel + Resend email). API spend adds roughly $200/month, depending on audit volume. The entire 19-agent operation costs less than a car payment.

The system runs 24/7 without me. I spend my time on distribution now, not operations.

Happy to answer questions about the architecture, agent communication patterns, or model selection tradeoffs.

submitted by /u/TheShortestWayIsThru
