How We Built a Marketing Agent That Markets Itself
When we started building Reach, we faced the same problem every early-stage product faces: no marketing budget, no marketing team, and a product that needed to find its audience before it had the revenue to fund a proper go-to-market.
The obvious answer — hire an agency — was both too expensive and too ironic. We were building an AI marketing agent to replace agency costs. Hiring an agency to market it would have made the point for us, just in the wrong direction.
So we did the thing that seemed obvious in retrospect but took us a while to commit to: we ran Reach on itself.
This article is an honest account of what happened, what worked, what broke, and what it taught us about building a system that can genuinely automate the content loop without human involvement at every step.
---
Why Eating Your Own Cooking Matters
There's a version of this story that's just a marketing angle: "we use our own product, isn't that authentic." We're not making that argument.
We ran Reach on Xis10Z because it was the only way to find out if it actually worked.
Lab testing content tools only gets you so far. You can generate articles, assess their structure, and run them through readability scores and SEO analysis tools. But you can't know whether they rank, convert, or build authority without publishing them on a real domain and waiting.
Running Reach on ourselves gave us:
1. A real environment to test in. Xis10Z.ai is a real domain, in a real competitive market (B2B SaaS, marketing tools, AI content). We're not testing on a dummy site — we're competing in the actual search landscape where our customers compete.
2. Dogfood feedback that forces product improvement. When you're both the builder and the user, every friction point is immediately apparent. Features that seemed fine in spec became problems in daily use. The agent's brief quality, its tone consistency, its handling of the editorial review step — we experienced all of this as users before we could rationalise it as builders.
3. Proof that's demonstrable to customers. The most compelling thing we can show a prospective customer is our own Search Console data. Not a case study. Not a testimonial. The actual performance of the product running on a real competitive domain.
---
The System We Built (And How It Works)
Reach isn't a writing tool. It's a pipeline agent — the distinction matters.
A writing tool takes a prompt and produces content. It's a faster typewriter. The human still has to manage the strategy, the brief, the editing, the publishing, the distribution, and the reporting.
A pipeline agent owns the loop. Here's what the loop looks like in practice:
1. Strategy and cluster mapping. The agent reads our positioning documents, ICP definitions, and competitive context. From these, it identifies keyword clusters — groups of 10–20 related search terms that map to specific audience problems our product addresses. It surfaces these clusters with search data and competitive assessment for human review. We approve the clusters and set the priority order.
2. Content calendar population. The approved clusters become the content calendar. The agent generates a prioritised list of article titles, each mapped to a specific keyword, with a recommended publish sequence that builds from pillar content to supporting spokes.
3. Briefing. For each article in the calendar, the agent generates a brief: target keyword, reader intent, proposed H2 structure, key claims to make, internal link targets, and CTA direction. Briefs go into a review queue. A human approves, requests changes, or rejects. Approved briefs proceed to draft.
4. Draft production. Approved briefs become full article drafts — typically 1,200–2,000 words, structured for on-page SEO, with the voice calibrated to our brand guidelines. Drafts go into a review queue. A human reviews for accuracy, tone, and editorial quality. Approved drafts proceed to publishing.
5. Cross-channel asset generation. Each published article automatically generates LinkedIn copy, email subject lines and preview text, and social posts. These are cut from the article — not generic social content, but posts that carry the specific angle and evidence from the piece.
6. Performance monitoring. The agent monitors Search Console and analytics data, surfaces weekly summaries to the human reviewer, and flags content that's ranking well (and should be linked to more) or underperforming (and should be refreshed or expanded).
The human in this system is a reviewer, not an executor. They make judgment calls, approve output, and handle the things that require current context. They don't manage the pipeline — the agent does.
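To make the shape of that loop concrete, here is a minimal sketch of how the stages and review gates could be modelled. It's an illustration, not Reach's actual internals; the stage names and the WorkItem fields are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    CLUSTER_MAP = auto()   # keyword clusters awaiting human approval
    CALENDAR = auto()      # approved clusters expanded into titles
    BRIEF = auto()         # brief awaiting human review
    DRAFT = auto()         # draft awaiting human review
    PUBLISHED = auto()     # live; channel assets cut from the article
    MONITORING = auto()    # performance tracked, refresh flags raised


# Stages that cannot advance without an explicit human approval.
GATED = {Stage.CLUSTER_MAP, Stage.BRIEF, Stage.DRAFT}


@dataclass
class WorkItem:
    title: str
    target_keyword: str
    stage: Stage = Stage.CLUSTER_MAP
    approved: bool = False  # set by the human reviewer, never by the agent


def advance(item: WorkItem) -> WorkItem:
    """Move an item one stage forward; gated stages require approval."""
    if item.stage in GATED and not item.approved:
        raise PermissionError(f"{item.title!r} needs human review at {item.stage.name}")
    order = list(Stage)
    item.stage = order[min(order.index(item.stage) + 1, len(order) - 1)]
    item.approved = False  # each gate needs a fresh approval
    return item
```

The property that matters is in `advance`: the agent can move work between stages, but it cannot move anything past a gated stage unless a human has set `approved`.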
---
What Broke First
Running this on ourselves exposed problems we didn't anticipate. Here are the honest failures.
Tone drift. Early versions of the agent produced content that was technically correct but tonally inconsistent. Individual articles read well in isolation but the corpus, read together, had a slightly mechanical quality — polished but generic. The fix was more precise brand voice documentation and a voice calibration step in the brief generation. Not fixed by a single prompt change; fixed by better foundational positioning documents.
Fact confidence without fact accuracy. The agent would state specifics — statistics, named examples, technical details — with complete confidence, and some of those claims turned out to be wrong. This is the single biggest risk in AI content production, and we addressed it with an explicit factual verification step in the review workflow. Every claim that can be verified must be verified before publication. This is non-negotiable; we treat it as a hard requirement, not a nice-to-have.
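To show what that hard requirement looks like as a gate rather than a guideline, here is a hypothetical sketch (the Claim structure is invented for illustration): publishing refuses outright while any verifiable claim remains unverified.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    text: str               # e.g. a statistic or named example in the draft
    verifiable: bool        # can this claim be checked against a source?
    verified: bool = False  # has a human confirmed it against a source?


def publish_gate(claims: list[Claim]) -> None:
    """Hard requirement: every verifiable claim is verified, or nothing ships."""
    unverified = [c for c in claims if c.verifiable and not c.verified]
    if unverified:
        raise ValueError(
            f"blocked: {len(unverified)} unverified claim(s), "
            f"starting with {unverified[0].text!r}"
        )
```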
Internal linking consistency. The agent would generate internal link suggestions for new articles but wouldn't consistently flag when existing articles needed to be updated to link to new content. This created clusters where the structure existed on paper but the internal link density was incomplete. We built a dedicated internal linking pass into the workflow to address this.
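The pass itself is conceptually simple. A sketch of the idea, with invented field names: when a new article goes live, scan the existing corpus for cluster-mates that don't yet link to it, and queue an update for each one.

```python
from dataclasses import dataclass, field


@dataclass
class Article:
    url: str
    cluster: str  # the keyword cluster this article belongs to
    outbound_links: set[str] = field(default_factory=set)


def missing_backlinks(corpus: list[Article], new: Article) -> list[Article]:
    """Existing cluster-mates that should be updated to link to `new`."""
    return [
        a for a in corpus
        if a.cluster == new.cluster
        and a.url != new.url
        and new.url not in a.outbound_links
    ]
```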
Brief approval lag. In the early version, briefs sat in review queues waiting for approval, which meant drafts were only produced in batches when someone cleared the queue. This broke the publishing cadence. The fix was a daily brief review habit — 15 minutes each morning — rather than a weekly batch review. Small operational change, large cadence impact.
Overfitting to our own positioning. Because the agent's context documents are all about Reach and Xis10Z, early drafts were subtly self-referential in ways that didn't serve the reader. We were writing about content strategy through what was too clearly a sales lens. The editorial review step is where this gets caught, but it requires the reviewer to be actively critical of positioning overreach, not just to approve output that looks good.
---
What the Data Shows
We're not going to put precise numbers in this article because they'll be out of date before the article ranks. But directionally:
Indexing is fast. New articles are consistently indexed within 2–5 days. We haven't experienced any crawling or indexing issues attributable to AI content production.
Impressions precede clicks. The typical pattern: an article publishes, earns impressions at positions 15–40 for the target keyword, then improves position as related cluster content builds and internal link density increases. This matches the expected pattern for cluster authority building.
Long-tail performance is strong. We rank for many terms we never explicitly targeted. This is the signature of content that genuinely covers a topic with depth — semantic coverage of related terms beyond the primary keyword.
Distribution multiplies reach. The articles that get the most organic sessions aren't always the ones that rank highest. They're the ones that got properly distributed via LinkedIn and email — the channel assets we generate automatically from each piece. Organic search and owned distribution compound together.
No penalties. No manual actions in Search Console. No traffic cliff drops attributable to algorithm updates targeting AI content. Our quality gate — human review and factual verification on every piece — appears to be sufficient.
---
What "Self-Serve" Actually Means for a Marketing Agent
The phrase "AI marketing agent" is used to describe a wide range of things. At one end: a chatbot that can write a social post if you give it a brief. At the other: a system that can plan, produce, distribute, and report on a content programme with a human in an oversight role rather than an execution role.
Reach is the second kind. The practical test for whether a marketing agent is genuinely self-serve or just a fancy prompt UI is: can it run the programme for a week without the human having to initiate anything?
In our case: yes. The agent's pipeline is always populated. Briefs are generated from the approved cluster map. Drafts are generated from approved briefs. Channel assets are generated from published articles. Performance reports are surfaced on a weekly cadence. The human doesn't have to remember to do things — the things come to them for approval.
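One way to picture "always populated": a daily agent-side tick that tops up each review queue from the stage upstream of it. The sketch below is simplified and the threshold is invented; the point is that work arrives in the queue without anyone asking for it.

```python
from collections import deque

BRIEF_QUEUE_TARGET = 5  # keep at least this many briefs awaiting review


def daily_tick(calendar_backlog: deque[str], brief_queue: deque[str]) -> None:
    """Agent-side tick: top up the brief review queue from the calendar.

    The human never initiates anything; they open the queue each morning
    and review whatever the tick has put there.
    """
    while len(brief_queue) < BRIEF_QUEUE_TARGET and calendar_backlog:
        title = calendar_backlog.popleft()  # next approved calendar entry
        brief_queue.append(title)           # brief generated, awaiting review
```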
This is a qualitative difference from tools that are faster typewriters. A typewriter still requires you to sit down and start typing. A pipeline agent requires you to sit down and read, review, and approve.
The shift from execution to oversight is what makes one person able to do the work of a team. Not because they're doing more — because the agent is doing most of the execution and they're doing the judgment.
---
What We'd Tell Someone Starting This
If you're building a product and you want to run your own AI marketing agent on it, a few things we'd tell you from experience:
Invest in the positioning documents first. The quality of every piece of content the agent produces is bounded by the quality of the context documents it has access to. Weak positioning documents produce tonally confused content. Invest the time to make the ICP, value proposition, brand voice, and competitive context documentation genuinely good before you run the agent.
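In practice this is a small, stable set of documents the agent loads as context on every run. One plausible shape for that bundle, with illustrative file names:

```python
# Illustrative context bundle; the file names are an example convention.
CONTEXT_DOCS = {
    "icp": "context/icp.md",                  # who the reader is, precisely
    "value_prop": "context/value-prop.md",    # the claim the product makes
    "brand_voice": "context/brand-voice.md",  # tone, vocabulary, banned phrases
    "competitive": "context/competitive.md",  # who else ranks, and why
}
```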
The review step is a feature, not a bottleneck. Our early thinking framed the human review step as something to minimise. That was wrong. The review step is where quality control, brand accuracy, and editorial judgment happen. It should be fast — minutes per article — but it should never be optional.
Eat your own cooking in the actual competitive environment. Don't test on a sandbox domain. Run the agent on the domain that matters. The learning from a real competitive environment is worth an order of magnitude more than controlled testing.
Measure the right things. Vanity metrics — total articles published, total words generated — are not useful. The metrics that matter are impressions growth by cluster, keyword coverage expansion, and content-sourced pipeline. Build the measurement cadence before you need it.
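As an example of a non-vanity metric, impressions growth by cluster falls out of Search Console data once each published URL is tagged with its cluster. A sketch using the Search Console API via google-api-python-client; the credentials object and the page_to_cluster mapping are assumptions, and the dates are an illustrative 28-day window.

```python
from collections import defaultdict

from googleapiclient.discovery import build

# Assumptions: `creds` is an authorised Google credential, and
# `page_to_cluster` maps each published URL to its keyword cluster.
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://xis10z.ai/",
    body={
        "startDate": "2024-05-01",
        "endDate": "2024-05-28",
        "dimensions": ["page"],
    },
).execute()

impressions_by_cluster: defaultdict[str, float] = defaultdict(float)
for row in response.get("rows", []):
    page = row["keys"][0]
    impressions_by_cluster[page_to_cluster.get(page, "unmapped")] += row["impressions"]
```

Run the same query for two adjacent windows and the per-cluster delta is the growth figure worth watching.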
---
The Meta-Point
The fact that we use Reach to market Reach is not incidental to the product's value proposition. It is the value proposition, demonstrated.
Every article on this site was planned by the agent, briefed by the agent, drafted by the agent, and distributed by the agent. Every piece went through human review. The pipeline runs without requiring a marketing hire, an agency relationship, or heroic effort from the founders.
That's the thing we're selling: the pipeline. Not the writing. Not the research. The system that keeps the pipeline moving so one person can run a content programme that previously required a team.
If it works well enough that we use it on ourselves, it works well enough for you.