Your pages keep shipping with broken titles and duplicate links even though you reviewed them. The problem isn't your process. It's that one review pass can't catch everything.

Wire catches structural SEO problems at three points: before Claude writes, when the file saves, and when you run an audit. Most teams assume one of those is enough. It isn't. Claude follows style rules about 80-85% of the time. That means roughly 1 in 6 pages leaves the prompt with a pipe in the title or a duplicate internal link. The sanitizer catches most of those on save. Audit catches what neither could see: duplicate titles across pages, orphan pages, keyword overlap between two vendor profiles.


The styleguide ships with every Claude call. It covers title length, description format, heading structure, internal linking rules, and citation requirements. Claude reads it before writing anything. That prevents most issues from being created at all. But "most" is doing real work in that sentence. Claude still produces a pipe separator or a bracketed title roughly 15-20% of the time, even with explicit rules in the prompt. You can tighten the rules by placing your own `docs/_styleguide.md` in the docs directory. Wire picks it up automatically.


Every save runs `_sanitize_content()`: nine fixes applied before the file hits disk. Pipes become dashes. Brackets are stripped. A missing H1 is inserted. A mismatched H1 is aligned to the title. Duplicate internal links become plain text, keeping only the first mention. Broken slugs are corrected via slug matching. These fixes are silent, deterministic, and cost nothing. No API call. No human review. The corrected file is what you see. But the sanitizer only sees one page at a time. It cannot know that two pages share the same title, or that a vendor profile has no inbound links from anywhere else on the site.

The audit command scans all pages and reports what Layers 1 and 2 cannot see: duplicate titles, thin content, orphan pages with zero inbound links, keyword cannibalization between pages, broken internal links, and source concentration where one domain provides more than 40% of your citations. It also tells you exactly what to run to fix each problem. The `ACTION` section only appears when problems exist, and each entry includes the command. What audit does not do is fix anything itself. It reports. You run the commands. If you're looking at an audit report and the numbers feel wrong, the evidence section explains why each threshold exists.

Wire's thresholds aren't arbitrary. The 2024 Google API leak confirmed signals Google publicly denied using for years, including click and dwell data via NavBoost. That changes which rules matter. Readability scores, keyword density, image alt text for web search, Core Web Vitals as a growth lever: Wire deliberately skips all of them. The evidence doesn't support them for a content pipeline. What the evidence does support is H1 alignment, orphan page linking, title format, and source diversity. Each auto-fix rule traces to a specific study, experiment, or leak-confirmed signal. If a threshold feels wrong, the evidence table shows what it's based on.

The three content layers work on markdown. Build verification works on what search engines actually crawl. After Wire renders markdown to HTML, 44 automated checks run against the finished site. These catch problems that only appear in the rendered output: meta tags a template should have inserted but didn't, broken links between pages that looked valid in markdown, invalid structured data, sitemap entries that contradict the actual page set. A page can pass all three content layers and still fail build verification. The content pipeline and the rendered output are two different surfaces. Both need checking.

No single layer covers everything. Prompts miss about 15-20% of structural issues. The sanitizer catches most of those but only sees one page at a time. Audit catches cross-page problems but only reports them. Build verification catches rendering failures that none of the above can see. The layers are additive, not redundant. Each one catches a category of problem the previous one structurally cannot. Together they cover close to 100% of structural SEO issues: prevented, fixed, or flagged before anything reaches production.

Wire enforces content quality in three content layers, backed by a fourth build-verification pass. Each layer catches what the previous one missed. Together they make it nearly impossible for a page to ship with structural SEO problems.

To see this in action, follow a single page through all three layers. CompareStack runs `wire.content create vendors/acme-ocr`. Claude writes a vendor profile, Wire saves it, then audit checks the result.

Layer 1: Prevention Through Prompts

Every Claude call includes your site's styleguide. The styleguide teaches Claude the rules before it writes anything. This prevents most issues from being created in the first place.

The default styleguide (`wire/prompts/_style.md`) covers:

  • Title rules. 51-55 characters, dashes not pipes, no brackets, must match H1.
  • Description rules. Under 160 characters, starts with a verb or noun.
  • Internal linking. First-mention only, 2-5 word descriptive anchor text, never "click here" or "learn more."
  • External citations. At least one per page. Sources are append-only. Claude cannot remove existing citations.
  • Heading structure. One H1 per page, no numbered headings, no skipped levels.

You override the styleguide by placing `docs/_styleguide.md` in your docs directory. Wire picks it up automatically. See Prompt Engineering for the full system.
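The lookup order is simple enough to sketch: a site-level override wins over the bundled default. A minimal illustration using the two paths named above; the function name `resolve_styleguide` is hypothetical, not Wire's actual API:

```python
from pathlib import Path

def resolve_styleguide(docs_dir: Path,
                       default: Path = Path("wire/prompts/_style.md")) -> Path:
    """Prefer the site's docs/_styleguide.md; fall back to the default."""
    override = docs_dir / "_styleguide.md"
    return override if override.exists() else default
```

Whichever file wins is what ships with every Claude call.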

What Prevention Catches

Claude writes this title for the Acme OCR vendor page:

title: Acme OCR | Enterprise Document Processing Platform [2026 Review]

Three violations in one line: pipe separator, brackets, 64 characters (over the 51-55 limit). Because the styleguide says "51-55 characters, dashes not pipes, no brackets," Claude typically produces:

title: Acme OCR - Enterprise Document Processing

Dash separator, no brackets, 23 characters shorter. Prevention caught 3 of 3 issues before they existed.

But Claude is not perfect. About 15-20% of the time, it still produces a pipe or bracket despite the rules. That is what Layer 2 is for.
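For illustration, the three title rules can be expressed as a small checker. This is a sketch, not Wire's code: `title_violations` is a hypothetical name, and the strict 51-55 band is taken literally from the styleguide text above.

```python
import re

def title_violations(title: str) -> list[str]:
    """Report which of the three title rules a title breaks."""
    issues = []
    if "|" in title:
        issues.append("pipe separator")
    if re.search(r"\[[^\]]*\]", title):
        issues.append("brackets")
    if not 51 <= len(title) <= 55:
        issues.append(f"length {len(title)} outside 51-55")
    return issues
```

Run against the raw title above, it reports all three violations at once.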

Layer 2: Auto-Fix on Save

Every time Wire saves a page, it runs `_sanitize_content()`: nine automatic fixes applied before the file hits disk.

The Nine Auto-Fixes

| # | Fix | Before | After |
|---|-----|--------|-------|
| 1 | Pipe to dash | Acme OCR \| Document Processing | Acme OCR - Document Processing |
| 2 | Strip brackets | Acme OCR [2026 Review] | Acme OCR |
| 3 | Insert H1 | (body has no H1) | # Acme OCR - Document Processing inserted |
| 4 | Align H1 | Title: Acme OCR, H1: About Acme | H1 changed to # Acme OCR |
| 5 | Downgrade H1 | Two # H1 headings in body | Second becomes ## H2 |
| 6 | Dedup internal links | [Rossum](/vendors/rossum/) appears 4 times | First kept, others become plain text |
| 7 | Restore sources | Claude dropped 2 external citations during rewrite | Citations restored from previous version |
| 8 | Dedup external links | Same URL cited 5 times | First kept, rest become plain text |
| 9 | Fix broken links | [Rossum](/vendors/rosum/) (typo) | Fixed to /vendors/rossum/ via slug match |

These fixes are silent, deterministic, and free (no API call). The operator sees the corrected result in the saved file.
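A minimal sketch of how fixes 1, 2, and 4 could look as string and regex passes. These helpers are illustrative only; Wire's real `_sanitize_content()` applies all nine and its internals are not shown here.

```python
import re

def sanitize_title(title: str) -> str:
    """Fix 1: pipe to dash. Fix 2: strip bracketed segments."""
    title = title.replace(" | ", " - ").replace("|", "-")
    title = re.sub(r"\s*\[[^\]]*\]", "", title)
    return title.strip()

def align_h1(body: str, title: str) -> str:
    """Fix 4: force the first H1 to match the (sanitized) title."""
    return re.sub(r"^# .*$", f"# {title}", body, count=1, flags=re.M)
```

Both passes are pure functions of the page text, which is what makes them deterministic and free to run on every save.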

Auto-Fix in Practice

Claude writes a page for CompareStack's Acme OCR profile. The raw output contains:

---
title: Acme OCR | Enterprise Document Processing [Review]
description: Acme OCR offers best-in-class document processing.
---

# About Acme OCR Platform

Acme OCR processes documents with high accuracy. Similar to
[Rossum](/vendors/rossum/), it supports multiple formats.
[Rossum](/vendors/rossum/) also offers cloud deployment.
[ABBYY](/vendors/abbyy/) competes in the same market.
Read more about [ABBYY](/vendors/abbyy/) here.
See the [Rosum comparison](/vendors/rosum/) for details.

After `_sanitize_content()`:

---
title: Acme OCR - Enterprise Document Processing
description: Acme OCR offers best-in-class document processing.
---

# Acme OCR - Enterprise Document Processing

Acme OCR processes documents with high accuracy. Similar to
[Rossum](/vendors/rossum/), it supports multiple formats.
Rossum also offers cloud deployment.
[ABBYY](/vendors/abbyy/) competes in the same market.
Read more about ABBYY here.
See the [Rossum](/vendors/rossum/) comparison for details.

What changed:

  • Fix 1: pipe | replaced with dash -
  • Fix 2: [Review] brackets stripped
  • Fix 4: H1 aligned to match the corrected title
  • Fix 6: duplicate internal links to Rossum and ABBYY became plain text (first mention kept)
  • Fix 9: broken /vendors/rosum/ fixed to /vendors/rossum/ via slug matching

Five fixes, zero cost, zero human review. The description still says "best-in-class". That is a content problem, not a structural one: Layer 1's banned-word list should have caught it, and if it did not, the site's styleguide needs a banned-word rule.
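Fix 6 (first-mention-only links) is the least obvious of the five. Here is one way to implement it, assuming internal links are root-relative markdown links; `dedup_internal_links` is a hypothetical name, not Wire's:

```python
import re

LINK = re.compile(r"\[([^\]]+)\]\((/[^)]+)\)")

def dedup_internal_links(body: str) -> str:
    """Keep the first link to each internal URL; demote repeats to plain text."""
    seen = set()
    def repl(m):
        text, url = m.group(1), m.group(2)
        if url in seen:
            return text          # repeat: drop the link, keep the anchor text
        seen.add(url)
        return m.group(0)        # first mention: keep the link intact
    return LINK.sub(repl, body)
```

A single left-to-right pass guarantees that the kept link is always the earliest mention, which is the one that carries the most weight.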

Why Auto-Fix Beats Manual Review

Manual content QA at scale is a known failure mode. At 500 pages with 3.2 structural issues per 100 pages, that is 16 issues per audit cycle, each requiring someone to find, understand, and fix. Wire fixes them on write, before they reach production.

Merge Guard

When merging two pages, Wire checks that the output preserves at least 80% of the keeper's body. If the merge would lose too much content, the result goes to `index.md.preview` only. The donor is not archived, and no content is lost.
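A sketch of that guard, assuming the 80% threshold is measured on body length; the docs do not specify the exact metric, so both the measure and the names here are assumptions:

```python
KEEP_RATIO = 0.80  # threshold stated in the docs; constant name is assumed

def merge_is_safe(keeper_body: str, merged_body: str) -> bool:
    """True if the merged body retains at least 80% of the keeper's body."""
    return len(merged_body) >= KEEP_RATIO * len(keeper_body)
```

When the check fails, the merge is written as a preview for human review rather than applied.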

Layer 3: Detection Through Audit

The audit command scans all pages and reports problems that exist despite Layers 1 and 2. This catches issues in pages that were not created by Wire, or issues that accumulated over time.

What Audit Detects

| Check | Threshold | Example |
|-------|-----------|---------|
| Duplicate titles | Exact match across pages | Two pages titled "Acme OCR - Document Processing" |
| Long titles | Over 60 characters | Google truncates in search results |
| Long descriptions | Over 160 characters | Google truncates in snippets |
| Missing citations | Zero external links | Page has no third-party evidence |
| H1 issues | Missing, multiple, or mismatched | H1 says "About Us" but title says "Acme OCR" |
| Thin content | Under 200 words | Stub pages that may hurt rankings |
| Heading hierarchy | Skipped levels | H1 followed by H3 without H2 |
| Source concentration | One domain provides >40% of links | Over-reliance on a single source |
| Orphan pages | Zero inbound internal links | Invisible to navigation and crawlers |
| Underlinked pages | Fewer than 3 inbound links | Weak internal link equity |
| Broken internal links | Target page does not exist | Wasted crawl budget |
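Several of these checks reduce to counting inbound links over the site's internal link graph. An illustrative sketch, assuming a simple `{page: [link targets]}` map rather than Wire's real data structures:

```python
from collections import Counter

def inbound_counts(links_by_page: dict[str, list[str]]) -> Counter:
    """Count inbound internal links per page (each source counted once)."""
    counts = Counter({page: 0 for page in links_by_page})
    for source, targets in links_by_page.items():
        for target in set(targets):             # dedup repeated links per source
            if target in counts and target != source:
                counts[target] += 1
    return counts

def orphans(links_by_page: dict[str, list[str]]) -> list[str]:
    """Pages with zero inbound links."""
    return [p for p, n in inbound_counts(links_by_page).items() if n == 0]
```

The same counts drive the underlinked check: any page below three inbound links gets flagged.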

Reading an Audit Report

Here is what CompareStack's audit output looks like after running `python -m wire.chief audit vendors`:

HEALTH: vendors (142 pages)
  + GSC data loaded (8,530 keywords)
  + No dead pages
  - Cannibalization: 23 overlap pairs (3+ shared keywords)
  - Duplicate titles: 2 page(s)
  + Descriptions OK
  + News: all current
  + Refinement: none pending
  + No orphan pages
  - Broken links: 4 source page(s)
  + Source diversity OK
  - 3 underlinked page(s) (<3 inbound links)

ACTION: vendors
  Merge (hard overlap, ratio > 0.4):
    acme + acme-ocr — 4 shared kw, ratio 0.80, acme gets 85%
    → python -m wire.chief deduplicate vendors

  Differentiate (soft overlap):
    betacorp + echocorp — 3 shared kw, ratio 0.50, split 52/48
    → python -m wire.chief deduplicate vendors

  Broken links (4 source pages):
    vendors/acme — 2 broken (slug fix available: rosum → rossum)
    vendors/betacorp — 1 broken (stripped: /vendors/old-page/)
    → python -m wire.chief sanitize vendors

  Duplicate titles:
    "Document Processing Platform" — vendors/acme, vendors/deltacorp

  Underlinked (< 3 inbound):
    vendors/newvendor (0 inbound)
    vendors/acme-ocr (1 inbound)
    vendors/smallcorp (2 inbound)
    → python -m wire.chief crosslink vendors

Each section tells you what is wrong and how to fix it. The +/- indicators show pass/fail at a glance. The ACTION section only appears when problems exist.
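One plausible definition of the overlap ratio shown in the ACTION section is shared keywords divided by the smaller page's keyword set. The exact formula is an assumption, but it matches the shape of the numbers above:

```python
def overlap_ratio(kw_a: set[str], kw_b: set[str]) -> float:
    """Shared keywords over the smaller page's keyword set (assumed formula)."""
    if not kw_a or not kw_b:
        return 0.0
    return len(kw_a & kw_b) / min(len(kw_a), len(kw_b))
```

Under that reading, "4 shared kw, ratio 0.80" implies the smaller page ranks for five keywords, four of which also belong to the other page, which is why the pair is flagged for a hard merge.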

The Four Audit Sections

| Section | Purpose |
|---------|---------|
| HEALTH | Pass/fail checklist: is data loaded, are there overlaps, broken links, etc. |
| ACTION | Only if problems exist: specific pages to fix with commands to run |
| SEO | Reword opportunities ranked by impression volume |
| INFO | Untracked pages, archived count, summary statistics |

Why Three Layers?

Prompts alone miss approximately 15-20% of issues. Claude follows rules well but not perfectly. It occasionally produces pipes in titles, duplicate links, or broken slugs despite clear instructions.

The sanitizer catches structural problems Claude introduces. But it only sees the current page. It cannot detect cross-page issues like duplicate titles, orphan pages, or keyword cannibalization.

Audit catches cross-page issues that neither prompts nor sanitizer can see. But audit only reports. It does not fix. The operator runs the suggested commands to resolve issues.

| Layer | Scope | Cost | Coverage |
|-------|-------|------|----------|
| Prevention (prompts) | Per-call | Included in Claude API cost | ~80-85% of structural issues |
| Auto-fix (sanitizer) | Per-save | Free | ~10-12% (what prompts missed) |
| Detection (audit) | Cross-page | Free | ~3-5% (cross-page issues) |

Together: close to 100% of structural SEO issues are either prevented, fixed, or flagged.

Layer 4: Build Verification

After rendering markdown to HTML, Wire runs 44 automated checks against the finished site. This catches problems that only appear in the rendered output: missing meta tags that templates should have inserted, broken links between pages, invalid structured data, sitemap contradictions.

The content quality layers above work on markdown. Build verification works on what search engines actually see. Together, the four layers cover the full path from writing to production.
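One of those rendered-output checks can be sketched with the standard library: does each page carry the meta description its template should have inserted? The check list and names here are illustrative, not Wire's actual verifier.

```python
from html.parser import HTMLParser

class MetaDescriptionCheck(HTMLParser):
    """Flag whether a <meta name="description"> tag appears in the HTML."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and dict(attrs).get("name") == "description":
            self.found = True

def has_meta_description(html: str) -> bool:
    parser = MetaDescriptionCheck()
    parser.feed(html)
    return parser.found
```

The point is that this check runs against the HTML search engines crawl, so it catches a template that silently dropped the tag even though the markdown frontmatter was correct.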

Evidence-Based Decisions

Wire's quality rules are backed by independent research, not Google's public statements. The 2024 Google API leak proved Google uses signals it publicly denied: click data via NavBoost, domain authority, Chrome data.

Evidence Hierarchy

  1. Leak-confirmed. Signals found in the 2024 Google API documentation leak (NavBoost click/dwell data, entity extraction, titleMatchScore, badBacklinks). Ground truth.
  2. A/B tested. SearchPilot runs controlled experiments on live production sites. H1 alignment (+28% traffic), orphan page linking (+significant), schema markup (no change).
  3. Correlation studies. Zyppy analyzed 81K title tags and 23M links. Backlinko studied 15K keywords. Shows what correlates with rankings, not what causes them.
  4. Case studies. HubSpot pruned 3,000 posts (+106% traffic). 201Creative deleted thin ecommerce pages (+867% traffic, +291% sales).
  5. "Google says". Noted but never sufficient. Google publicly denied using click data for years while NavBoost was their strongest ranking signal.

Each Auto-Fix Rule Traced to Evidence

| Fix | Evidence source | Finding |
|-----|-----------------|---------|
| Pipe to dash | Zyppy 81K title study | Pipes correlate with higher Google title rewrite rates |
| Strip brackets | Zyppy 81K title study | Bracketed titles rewritten 61.6% of the time |
| Insert/align H1 | SearchPilot A/B test | 28% traffic increase from keyword-aligned H1 |
| Downgrade extra H1 | Google API leak 2024 | Entity extraction uses heading structure; multiple H1s dilute topic signal |
| Dedup internal links | Google (John Mueller) | First link to a URL carries the most weight |
| Restore sources | Reboot Online experiment | Outbound links to authoritative sources improve rankings |
| Dedup external links | Source diversity principle | Concentration in one domain signals lazy research |
| Fix broken links | Google API leak 2024 | badBacklinks is a negative ranking signal |

What Wire Does Not Check

Some popular SEO factors are deliberately excluded:

| Factor | Why Wire skips it |
|--------|-------------------|
| Readability score | Portent 750K study: zero correlation with rankings |
| Keyword density | NavBoost shifted ranking power to user behavior signals |
| Image alt text | Moz study: ranking factor for Google Images only, not web search |
| Core Web Vitals | Perficient study: weak correlation; a gate, not a growth lever |
| Schema markup | SearchPilot A/B: no direct ranking change |
| Canonical tags | Server configuration, not content pipeline scope |

These factors may matter for other tools. They do not matter for a content pipeline focused on what moves rankings.