Four Claude Agents and Me

I automated my affiliate site’s entire publishing pipeline. I still hit publish manually.

Photo by Bernd 📷 Dittrich on Unsplash

The first time the script finished and I opened WordPress to find a draft already waiting, title set, meta description filled, Pexels photo attached and credited, Rank Math showing green, I felt something I didn’t expect.

Not pride. Not excitement. Something closer to unease.

I had written four lines of configuration. The rest had run while I was making coffee. The post was technically ready to publish.

I waited twenty minutes before touching it.

The Problem Wasn’t the Writing

When people say they want to automate their affiliate site, they usually mean they want AI to write the articles. That part is easy, honestly. Ask any large language model to produce “best running shoes Indonesia 2026” and you get something readable in thirty seconds.

The problem is that readable isn’t the bar. Consistent is.

I run outfitcheck.id, a fashion affiliate site in Indonesian. Each article needs a focus keyword that real people actually search for, not marketing language, just what a 22-year-old in Jakarta types into Google when they want a jacket recommendation. It needs a meta description under 155 characters. It needs Shopee affiliate link buttons formatted precisely. It needs a featured image from Pexels with the right credit. It needs to not read like software wrote it, because Indonesian readers notice the tells, and so do the editors on the platforms I want to publish on.

Raw AI output fails all of those requirements, sometimes in the same paragraph.

What Four Agents Actually Do

I run the entire pipeline through Claude Code (Anthropic’s CLI tool) using my monthly subscription, not an API key. No per-token billing. The pipeline runs from a terminal, triggered by a slash command.

Four agents run sequentially, each handing off to the next.

The full pipeline — from slash command to WordPress draft. The last two steps are always mine.

Agent 1: The Keyword Researcher
Give it a topic like “tas kulit pria” (men’s leather bag), and its first job is to decide the angle. Should this be an “outfit” article featuring three complementary products (bag, belt, and shoes), or a deep-dive single-item review? It selects a focus keyword based on actual Indonesian search behavior — what a real person types into a search bar when they’re ready to buy, not what sounds “authoritative” or corporate.

Agent 2: The Writer
This agent’s target is 620–680 words of natural-sounding Indonesian. The rules are strict: the focus keyword must appear in the first sentence, at least one heading, the meta description, and the URL slug.

To keep it human, I’ve blacklisted “AI-isms” — those dead giveaways like “tentunya,” “pastinya,” or “tidak dapat dipungkiri.” No real blogger talks like that. It’s also forbidden from using repetitive sentence lengths, opening with rhetorical questions, or mentioning specific prices and materials. At this stage, Shopee links remain as placeholders.
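To make the placement rules concrete, here is a minimal checker in the spirit of what Agent 2 has to satisfy. The `article` dict, its keys, and the heading regex are my illustration, not the pipeline's actual code:

```python
import re

def check_keyword_placement(article: dict, keyword: str) -> dict:
    """Verify the four required placements of the focus keyword:
    first sentence, at least one heading, meta description, URL slug."""
    kw = keyword.lower()
    body = article["body"]

    # First sentence = everything up to the first sentence-ending punctuation.
    first_sentence = re.split(r"(?<=[.!?])\s", body.strip(), maxsplit=1)[0]
    # Markdown ## / ### headings.
    headings = re.findall(r"^#{2,3}\s+(.*)$", body, flags=re.MULTILINE)

    return {
        "first_sentence": kw in first_sentence.lower(),
        "heading": any(kw in h.lower() for h in headings),
        "meta_description": kw in article["meta_description"].lower()
                            and len(article["meta_description"]) <= 155,
        "slug": kw.replace(" ", "-") in article["slug"],
    }
```

A draft that returns `False` anywhere here would fail the gate before SEO scoring even starts.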

Agent 3: The Quality Gate
This is the filter. It reviews the draft against 22 specific criteria covering SEO mechanics, readability, and AI-detection tells.

  • Score below 70: Agent 2 has to rewrite the entire thing from scratch.
  • Score between 70 and 84: specific sections are flagged for targeted revisions.
  • Score 85 or above: the draft moves on to Agent 4.

I’ve seen this agent kill the pipeline mid-run and send the work back more times than I can count. That’s exactly why it’s there.
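The thresholds imply a simple routing function. A minimal sketch, assuming (my inference, since the article states only the two lower bands explicitly) that 85 and above passes the draft through:

```python
REWRITE_THRESHOLD = 70   # below this: full rewrite
PASS_THRESHOLD = 85      # 70-84: targeted revisions; 85+: pass

def route_draft(score: int) -> str:
    """Map a quality-gate score to the pipeline's next action."""
    if score < REWRITE_THRESHOLD:
        return "rewrite"   # Agent 2 starts over from scratch
    if score < PASS_THRESHOLD:
        return "revise"    # flagged sections go back to Agent 2
    return "publish"       # hand off to Agent 4
```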

Agent 4: The Publisher
The final agent handles the WordPress heavy lifting via the REST API. It creates the post as a draft, pushes SEO metadata to Rank Math, and calls the Pexels API to find a relevant photo. It uploads the image, sets it as the featured image, and replaces every IMAGE_PLACEHOLDER in the body text with the actual URL. By the time I log in, the post is sitting there, polished and ready.
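For context, creating a draft through the core WordPress REST API looks roughly like this. The `/wp/v2/posts` endpoint and the `status`, `slug`, and `featured_media` fields are standard WordPress; the credentials and function names are placeholders, and the Rank Math and Pexels calls are omitted:

```python
import base64
import json
import urllib.request

API = "https://outfitcheck.id/wp-json/wp/v2"  # core WordPress REST namespace

def build_draft_payload(title, html_body, slug, media_id=None):
    """Assemble the JSON body for POST /wp/v2/posts."""
    payload = {
        "title": title,
        "content": html_body,
        "slug": slug,
        "status": "draft",  # never "publish": the final click stays manual
    }
    if media_id is not None:
        payload["featured_media"] = media_id  # ID of the uploaded Pexels image
    return payload

def create_draft(payload, user, app_password):
    """Create the draft with Basic auth (WordPress Application Passwords)
    and return the new post's ID."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = urllib.request.Request(
        f"{API}/posts",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["id"]
```

Keeping `status` hardcoded to `"draft"` is what guarantees the pipeline can never skip the human review step.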

The draft appears in WordPress. Metadata complete. Image attached.

136 published, 28 scheduled. Every SEO field filled automatically — none of it touched by hand.

What the Pipeline Cannot Do

Here is what the automation does not handle, and what I have stopped trying to make it handle.

Choose the right Shopee product. I open every article and manually paste the real affiliate links. Partly legal — I need to verify the product exists and matches what the article describes. Partly strategic — the wrong product at the wrong price point breaks conversion regardless of how good the writing is.

Publish automatically. Every article sits as a draft until I read it and decide it belongs on the site. After 133 articles, the pipeline produces mostly clean work. But the site is mine. The editorial judgment is mine. I am not delegating that final decision to a script.

Know when a keyword is not worth pursuing. Sometimes the category is too competitive for a site that is still indexing. Sometimes there are no good products on Shopee for the term. The agents complete their task regardless of strategic fit. That assessment is still mine to make.

The pipeline does not share my risk. That is the line I kept coming back to while building it.

What the Quality Gate Is Really For

The anti-AI detection layer in Agent 3 is not primarily about Google’s content policies. It exists because Indonesian readers are observant, and because the platforms I want to publish on, IDN Times being one, have human editors who review submissions before anything goes live.

AI-generated Indonesian text has recognizable patterns. Certain transitional phrases that no real blogger uses. Paragraphs where every sentence is approximately the same length. No casual interjections. A smoothness that reads as uncanny once you learn to look for it.

The gate checks for these specifically. It counts blacklisted phrase occurrences. It measures sentence length variation across the article. It flags zero informal language as a failure condition. An article that scores poorly on the anti-AI criteria fails regardless of how clean its SEO looks.
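Those three checks can be sketched in a few lines. The blacklist comes from the article itself; the informal-marker list and the metric names are my hypothetical stand-ins for whatever the gate actually uses:

```python
import re
import statistics

BLACKLIST = ["tentunya", "pastinya", "tidak dapat dipungkiri"]  # from Agent 2's rules
INFORMAL = ["nih", "banget", "kok", "sih", "deh"]  # hypothetical marker list

def gate_checks(text: str) -> dict:
    """Three of the gate's heuristics: blacklisted-phrase count,
    sentence-length variation, and presence of casual interjections."""
    lower = text.lower()
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return {
        "blacklist_hits": sum(lower.count(p) for p in BLACKLIST),
        "length_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "has_informal": any(re.search(rf"\b{w}\b", lower) for w in INFORMAL),
    }
```

A near-zero `length_stdev` or a `has_informal` of `False` is exactly the uncanny smoothness the gate treats as a failure condition.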

This layer took the most time to calibrate. Writing the articles was fast. Teaching the system what sounds human in Indonesian — what someone in their twenties would actually write when recommending a jacket to a friend — required reading a lot of published content and understanding what editorial rejection patterns actually looked like. [1]

Photo by Parker Byrd on Unsplash

After Enough Articles

Most writing about AI automation focuses on what gets removed from the process. The manual labor. The repetitive formatting. The hours spent sourcing images and filling in metadata fields.

That framing is accurate but incomplete. Because what actually changes is where your attention goes.

Before the pipeline, I spent most of my time on the parts I was bad at: forgetting meta descriptions, inconsistent keyword placement, uploading photos without proper credit. The parts where being human mostly meant being inconsistent.

After the pipeline, I spend my time on the parts that still require a person: deciding which keywords are worth pursuing, choosing the right product, reading each article before it goes live.

The output. Each article written, optimized, and published by the pipeline — I just chose the keywords. See it live at outfitcheck.id

The automation did not remove me from the process. It removed me from the wrong parts of the process.

I am still reading every draft before it publishes. Not to catch errors, though I still catch them. But because the decision of what belongs on the site is mine, and I want to make it consciously rather than by default.

That part did not get automated.

I’m not sure it should.


References

[1] Rank Tracker (2026): Why Quality Content in 2026 Is About Human Connection, Not Just Keywords.


Four Claude Agents and Me was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
