Wire's batch commands follow a specific sequence. Each step produces data or changes that the next step depends on. Running them out of order wastes API calls or produces suboptimal results. How often you run them depends on your site.

Working with teammates on the same repo? Read Team Workflow first. This guide assumes a single operator. Multi-user sites need a git pull --rebase envelope around every Wire command and one human per topic; otherwise you will lose work to merge conflicts.

The Full Sequence

# 1. Pull fresh search data (after restructure: migrate-gsc first)
python -m wire.chief migrate-gsc  # Only after page moves — rekeys GSC data
python -m wire.chief data

# 2. Read-only analysis (all topics, or narrow to one)
python -m wire.chief audit
python -m wire.chief audit products

# 3. Resolve keyword cannibalization
python -m wire.chief deduplicate

# 4. Gather industry news
python -m wire.chief news products

# 5. Integrate news into pages
python -m wire.chief refine products

# 6. SEO rewrite based on opportunity score
python -m wire.chief reword products

# 7. Combined analysis + improvement
python -m wire.chief enrich products

# 8. Add internal links to underlinked pages
python -m wire.chief crosslink

# 9. Fix broken internal links
python -m wire.chief sanitize

# 10. Analyze GSC coverage gaps, generate redirects
python -m wire.chief redirects

# 11. Build the site
python -m wire.build --site .

# 12. Fix broken links for free (no AI, re-save through sanitize pipeline)
python -m wire.chief sanitize

# 13. Fix ALL remaining lint issues (redirect links free + AI content via subscription)
python -m wire.chief lint-fix

# 14. Rebuild to verify
python -m wire.build --site .

The topic name (e.g. products) is optional for most commands. Without it, Wire scans all topics. With it, Wire focuses on one subdirectory. Use topics to run faster on large sites or to work through one section at a time.

Files Excluded from Build

Wire automatically excludes these patterns from rendering. They exist in docs/ but are not pages:

Pattern                 What it matches
YYYY-MM-DD.md           Pending news files (integrate with refine)
_*.md                   Underscore-prefixed files (_styleguide.md, _style.md)
*/comparisons/*         Auto-generated comparison pages
*.steps.md / steps.md   Discovery step definition files

These files serve other purposes in Wire's pipeline. They are not rendered as HTML pages.
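As a rough illustration, the exclusion rules above could be expressed as a filter like this. This is a sketch, not Wire's actual code; the is_excluded name and the date regex are assumptions.

```python
import fnmatch
import re

# Illustrative restatement of the documented exclusion patterns.
DATE_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}\.md$")   # YYYY-MM-DD.md news files
GLOB_PATTERNS = ["_*.md", "*.steps.md", "steps.md"]

def is_excluded(relpath: str) -> bool:
    """Return True if a docs/ file should be skipped by the renderer."""
    name = relpath.rsplit("/", 1)[-1]
    if DATE_NAME.match(name):            # pending news files
        return True
    if "/comparisons/" in relpath:       # auto-generated comparison pages
        return True
    return any(fnmatch.fnmatch(name, p) for p in GLOB_PATTERNS)

print(is_excluded("products/2025-01-15.md"))   # True
print(is_excluded("products/index.md"))        # False
```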

What lint-fix does

lint-fix runs two steps automatically:

Step 1: Redirect link rewriting (free, no AI). When you restructure pages and add redirects in wire.yml (/old-path/ → /new-path/), your markdown files still have links pointing to /old-path/. The redirect fix scans all markdown, finds these stale links, and rewrites them to the redirect target. For 410 (Gone) redirects, it strips the link to plain text. This is mechanical find-and-replace, zero cost.

Before: See [AI Agents](/build/ai-agents/) for details.
After:  See [AI Agents](/en/build/ai-agents/) for details.
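A minimal sketch of this mechanical rewrite, assuming a redirect map where a None target stands for 410 Gone. The map shape and the rewrite_links helper are illustrative, not Wire's implementation.

```python
import re

# Hypothetical redirect map, as it might be declared in wire.yml.
REDIRECTS = {
    "/old-path/": "/new-path/",
    "/retired/": None,   # 410 Gone: strip the link, keep the text
}

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def rewrite_links(markdown: str) -> str:
    def fix(m):
        text, target = m.group(1), m.group(2)
        if target not in REDIRECTS:
            return m.group(0)        # link is fine, leave it alone
        new = REDIRECTS[target]
        if new is None:              # 410: demote to plain text
            return text
        return f"[{text}]({new})"
    return LINK.sub(fix, markdown)

print(rewrite_links("See [AI Agents](/old-path/) and [Old](/retired/)."))
# → See [AI Agents](/new-path/) and Old.
```

Because the pass is pure find-and-replace over the redirect map, it costs no tokens.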

Step 2: AI content fixing (minimal tokens via Haiku). Builds the site, lints the HTML, groups all issues by page, then sends each affected page to Claude Haiku with its specific lint issues. The AI rewrites only what is necessary. Sequential, one page at a time. At startup, prints a summary with rule breakdown, model, and estimated cost.
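The group-by-page step can be sketched like this. The issue dictionary shape (page, rule, message) is a hypothetical stand-in, not Wire's internal format.

```python
from collections import defaultdict

# Flat lint output, one record per issue (illustrative shape).
issues = [
    {"page": "products/index.html", "rule": "RULE-12", "message": "heading too long"},
    {"page": "products/index.html", "rule": "RULE-53", "message": "stale step id"},
    {"page": "about/index.html",    "rule": "RULE-12", "message": "heading too long"},
]

by_page = defaultdict(list)
for issue in issues:
    by_page[issue["page"]].append(issue)

# One AI call per affected page, carrying only that page's issues.
for page, page_issues in sorted(by_page.items()):
    rules = ", ".join(i["rule"] for i in page_issues)
    print(f"{page}: {len(page_issues)} issue(s) [{rules}]")
```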

You do not run every step every day. The typical daily workflow is steps 1-2 (data and audit) to see what needs attention. The other steps run as needed.

Why Order Matters

Data before audit. Audit reads from the local database. Without fresh data, audit reports stale information or reports no search data at all.

Deduplicate before restructure. If you plan to reorganize pages into topic directories, resolve all merge and differentiation pairs first. Wire can merge across topics, but deduplicating while pages are flat preserves GSC data and avoids redirect chains. See URL Management for the full workflow.

Deduplicate before news. Deduplication merges or differentiates overlapping pages. If you gather news for a page that is about to be merged, that news is wasted effort.

News before refine. Refine integrates pending news files. If there are no news files, refine has nothing to do.

Refine before reword. Refine adds new content from news. Reword optimizes the page's SEO including that new content. If you reword first, the news integration may undo the SEO improvements.

Discovery auto-regenerates. When refine, reword, or enrich changes a page's H2 structure, Wire automatically regenerates its discovery steps. No separate step needed. RULE-53 in the build linter catches any stale step IDs that slip through.

Reword before enrich. Reword handles the top 50% of pages by opportunity score. Enrich handles the remaining actionable pages with a different strategy. Running reword first means enrich does not duplicate effort on high-priority pages.

Workflow Gates (Cascade Blocking)

Wire enforces these gates automatically. Each step blocks downstream steps for the same page:

MERGE -> DIFFERENTIATE -> NEWS -> REFINE -> REWORD -> ENRICH

The blocking is asymmetric:

  • NEWS is only blocked by MERGE (a page about to be merged should not gather news)
  • REFINE is blocked by both MERGE and DIFFERENTIATE (if a page will be reworded for differentiation, integrating news first would conflict)
  • REWORD is blocked by MERGE, DIFFERENTIATE, and pending REFINE

Pages blocked at any stage show (blocked: merge) or (blocked: differentiate) in the audit output. Resolve upstream blocks first. The cascade clears automatically.
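The asymmetric rules above can be encoded as a small lookup. This is a sketch; the BLOCKED_BY entry for ENRICH is an assumption the guide does not spell out.

```python
# Only the rules stated in this guide; ENRICH's entry is an assumption.
BLOCKED_BY = {
    "NEWS":   {"MERGE"},
    "REFINE": {"MERGE", "DIFFERENTIATE"},
    "REWORD": {"MERGE", "DIFFERENTIATE", "REFINE"},
    "ENRICH": {"MERGE", "DIFFERENTIATE", "REFINE", "REWORD"},  # assumed
}

def blockers(step: str, pending: set) -> set:
    """Pending upstream work that blocks this step for a page."""
    return BLOCKED_BY.get(step, set()) & pending

# A page with a pending merge blocks news gathering:
print(blockers("NEWS", {"MERGE"}))           # {'MERGE'}
# A page pending only differentiation may still gather news:
print(blockers("NEWS", {"DIFFERENTIATE"}))   # set()
```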

Cadence by Site Size

How often you run the pipeline depends on how fast your content landscape changes, not on a fixed schedule.

Small sites (20-50 pages)

A local business, a niche authority site, a personal brand. Content changes slowly. Search data shifts over weeks, not days.

Every 2 weeks: data + audit. Review the output. Most runs will show all + lines and nothing to do.

Monthly: news + refine for topics in fast-moving industries. Skip for evergreen content.

Quarterly: reword pass to capture new keyword opportunities. Enrich pass to strengthen thin pages.

After creating new pages: sanitize + crosslink + build. New pages need internal links immediately.

Medium sites (50-200 pages)

A SaaS comparison site, a consulting firm's knowledge base, a regional service provider.

Weekly: data + audit. At this size, cannibalization and stale content appear regularly.

Every 2 weeks: deduplicate (if overlaps detected), news + refine per topic.

Monthly: reword + enrich + newsweek. Build and deploy.

Large sites (200+ pages)

A multi-topic content operation. At scale, compounding problems (cannibalization, orphan pages, stale content) appear faster than you can manually track them.

Weekly, in order:

# Monday: assess
python -m wire.chief data
python -m wire.chief audit

# Tuesday: resolve conflicts
python -m wire.chief deduplicate products

# Wednesday-Thursday: update content
python -m wire.chief news products
python -m wire.chief refine products

# Friday: optimize + report
python -m wire.chief reword products
python -m wire.chief newsweek
python -m wire.build --site .

Work through one topic per week on large sites. A 500-page site with 5 topics cycles through each topic roughly monthly.

Cost Tracking

Each step has a different cost profile:

Command       AI calls                Token usage
data          0 (search API only)     None
audit         0                       None
deduplicate   1-2 per overlap pair    Minimal per pair
news          10-20 per page          Minimal per page
refine        1 per page              Minimal per page
reword        1 per page              Minimal per page
enrich        1 per page              Minimal per page
crosslink     1 per page              Minimal per page
sanitize      0                       None
redirects     0                       None
newsweek      20-25 total             Moderate per report
build         0                       None

The most expensive operations are news gathering (many web searches + evaluations) and newsweek reports. The cheapest are data, audit, sanitize, redirects, and build, all free.

Which Model Runs Each Command

Wire splits Claude into two tiers in wire.yml:

quality_model: claude-sonnet-4-6         # Writing, rewriting, long-form synthesis
simple_model:  claude-haiku-4-5-20251001 # Extraction, classification, analysis

You see the active tiers printed at startup: quality model: claude-sonnet-4-6 and simple model: claude-haiku-4-5-20251001. Override both in wire.yml if you need to benchmark a different model, but Wire's defaults are tuned for Wire's prompts. Cheaper writing models produce measurably worse content.

Command        Tier               Notes
init           quality            Generates styleguide and starter content
data           none               GSC API only, no Claude calls
audit          none               Local analysis against the GSC database
deduplicate    quality            Merges or differentiates cannibalized pages
news gather    simple             Junior pass: fetch, evaluate, shortlist
news combine   quality            Senior pass: write the final news file
refine         quality            Integrates pending news into the page body
reword         quality            Full and light rewrite paths both use quality
enrich         quality            Local brief build, then a single quality rewrite
consolidate    quality            Hub page creation from comparisons
crosslink      quality            Adds internal links with prose context
sanitize       none               Deterministic Python, no Claude
lint-fix       hardcoded haiku    Pins claude-haiku-4-5-20251001 regardless of simple_model; mechanical pattern fixes across many pages
translate      hardcoded sonnet   Pins claude-sonnet-4-6 regardless of quality_model; translation quality collapses on smaller models
newsweek       quality            Three phases (extract, synthesize, review), all quality
images         none (Claude)      Calls BFL / FLUX via BFL_API_KEY, not Claude
build          none               Static site render, no Claude

Two commands pin their model and ignore the wire.yml tiers: lint-fix always runs Haiku (cheap mechanical fixes across many pages), and translate always runs Sonnet (translation quality collapses on smaller models). Everything else flows through the quality_model / simple_model dials.

Dry Run Everything First

Every content command supports --dry-run. Use it to preview changes before committing.

python -m wire.chief reword products --dry-run

In dry-run mode, Wire writes to index.md.preview files and shows a diff. No pages are saved, no news is archived, no timestamps are updated. This is the safest way to verify Wire's output before accepting changes.
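The dry-run contract can be illustrated with a short sketch: write a sibling .preview file and produce a unified diff, leaving the page itself untouched. The write_preview helper is hypothetical, not Wire's code.

```python
import difflib
import tempfile
from pathlib import Path

def write_preview(page: Path, new_text: str) -> str:
    """Write page.name + '.preview' next to the page and return a diff."""
    old_text = page.read_text()
    preview = page.parent / (page.name + ".preview")
    preview.write_text(new_text)     # index.md.preview, never index.md
    return "".join(difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=str(page), tofile=str(preview),
    ))

# demo against a throwaway page
with tempfile.TemporaryDirectory() as d:
    page = Path(d) / "index.md"
    page.write_text("Old intro.\n")
    diff = write_preview(page, "New intro.\n")
    untouched = page.read_text()
print(diff)
print("original unchanged:", untouched == "Old intro.\n")
```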

Resuming Interrupted Runs

Long batch operations track progress in .wire/progress-{command}-{topic}.json. If a run is interrupted (network error, process killed, rate limit), use --resume to continue from where it stopped.

python -m wire.chief news products --resume

Failed items are not marked as complete. They retry on resume. Progress files clean up automatically when a batch finishes successfully.
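The resume bookkeeping might look roughly like this. The progress file's internal schema ({"done": [...]}) is an assumption; the path pattern, retry-on-failure, and cleanup-on-success behavior come from this guide.

```python
import json
import tempfile
from pathlib import Path

def run_batch(pages, process, progress_file: Path):
    done = set()
    if progress_file.exists():                  # resuming an interrupted run
        done = set(json.loads(progress_file.read_text())["done"])
    for page in pages:
        if page in done:
            continue                            # completed earlier, skip
        if process(page):                       # only successes are recorded,
            done.add(page)                      # so failures retry on resume
            progress_file.write_text(json.dumps({"done": sorted(done)}))
    if done >= set(pages):                      # clean up on full success
        progress_file.unlink(missing_ok=True)

# demo: "b" fails on the first pass, succeeds on the resumed pass
with tempfile.TemporaryDirectory() as d:
    pf = Path(d) / "progress-news-products.json"
    attempts = []
    def flaky(page):
        attempts.append(page)
        return not (page == "b" and attempts.count("b") == 1)
    run_batch(["a", "b", "c"], flaky, pf)
    survived = pf.exists()                      # True: batch incomplete
    run_batch(["a", "b", "c"], flaky, pf)       # resume: only b is retried
    cleaned = not pf.exists()
print(survived, attempts, cleaned)              # True ['a', 'b', 'c', 'b'] True
```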

Timeouts and Monitoring

Wire commands that call the AI can take minutes per page. Set timeouts based on site size.

Expected Durations

Command    Per page   100 pages   500 pages
data       N/A        10-30s      10-30s
audit      instant    2-5s        5-15s
refine     5-15s      10-25 min   45-90 min
reword     5-15s      10-25 min   45-90 min
news       3-10s      5-15 min    25-60 min
lint-fix   2-5s       3-8 min     15-40 min
build      0.1-0.3s   5-15s       30-90s

Model Selection

Wire picks the cheapest model that produces good output for each command. The per-command tier assignment is fixed; the models behind the quality and simple tiers come from wire.yml.

Command                                      Model                                              Why
Content ops (refine, reword, enrich, etc.)   quality_model (default claude-sonnet-4-6)          Quality writing needs Sonnet
News evaluation (junior)                     simple_model (default claude-haiku-4-5-20251001)   Structured extraction, Haiku is sufficient
News combine (senior)                        quality_model                                      Editorial synthesis needs Sonnet
lint-fix                                     claude-haiku-4-5-20251001 (hardcoded)              Mechanical text replacement
translate                                    claude-sonnet-4-6 (hardcoded)                      Translation quality, not configurable via wire.yml
Images                                       flux-pro-1.1 (BFL)                                 Not a Claude model, separate API
data, audit, sanitize, build                 No AI                                              Zero tokens, free

Wire checks for the Claude CLI at startup (shutil.which("claude")). If installed, all commands use the CLI. If not, all commands use the Anthropic API with the model shown above. There is no fallback between modes.
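The startup check amounts to a one-liner (illustrative only; the mode variable is a stand-in for whatever Wire does internally):

```python
import shutil

# CLI if the claude binary is on PATH, otherwise the Anthropic API.
# There is no runtime fallback between the two modes.
mode = "cli" if shutil.which("claude") else "api"
print("transport:", mode)
```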

Detecting Hangs

If a command produces no new output for 3+ minutes, it likely hung:

  • API timeout: Claude CLI or API call exceeded its limit on a large prompt
  • Rate limit: API returned 429
  • Stuck process: kill and re-run with --resume (picks up where it left off)

Wire writes progress to .wire/progress-*.json. If the progress file stops updating, the command is stuck. Stale progress files from crashes are harmless and cleaned up on next successful run.

For Automation (bots, CI, cron)

  1. Always set a timeout on the subprocess
  2. Monitor output file size or .wire/progress-*.json modification time
  3. If no growth for 3 minutes, kill and log the failure
  4. Use --resume on the next run to continue from where it stopped
  5. Never run two Wire commands on the same site simultaneously (file conflicts)
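A watchdog implementing points 1 through 4 might look like this. It is a sketch: the thresholds, poll interval, and command are illustrative, and a real cron job would log the kill and schedule the --resume run.

```python
import subprocess
import sys
import time
from pathlib import Path

def run_with_watchdog(cmd, progress_glob=".wire/progress-*.json",
                      stall_after=180, hard_timeout=3600, poll_every=5.0):
    """Run cmd; kill it if progress files stop updating or time runs out."""
    proc = subprocess.Popen(cmd)
    start = time.monotonic()
    last_change = start
    last_mtime = 0.0
    while proc.poll() is None:
        time.sleep(poll_every)
        mtimes = [p.stat().st_mtime for p in Path(".").glob(progress_glob)]
        newest = max(mtimes, default=0.0)
        if newest > last_mtime:              # progress file still updating
            last_mtime, last_change = newest, time.monotonic()
        now = time.monotonic()
        if now - last_change > stall_after or now - start > hard_timeout:
            proc.kill()                      # re-run with --resume later
            proc.wait()
            return "killed"
    return "finished" if proc.returncode == 0 else "failed"

# self-check with a trivial child process (would be a Wire command in cron)
status = run_with_watchdog([sys.executable, "-c", "pass"], poll_every=0.2)
print(status)   # finished
```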

The Economics of Automated Content Operations

The workflow sequence is designed around a cost asymmetry that most content teams ignore: analysis is free, generation is expensive. Wire exploits this asymmetry systematically.

The data and audit steps cost nothing. They pull search data and run local analysis: database queries, SQL self-joins, statistical comparisons. The output is a complete picture of your site's search performance: which pages cannibalize each other, which pages are dead, where content gaps exist, and which keywords represent untapped demand.

Manual SEO audits produce the same information at dramatically higher cost. An agency charges $2,000-$15,000 per month for ongoing SEO management (WebFX pricing data). An independent SEO consultant charges $150-$300 per page for a detailed audit and rewrite recommendation. Wire produces the audit in seconds at zero cost, then executes the recommendations using minimal AI tokens covered by your AI subscription.

The compound effect over time is significant. HubSpot documented their pruning journey: first pass removed 3,000 posts (72% of audited content), producing 106% more organic views. Continued pruning drove a 458% traffic increase. The lesson is not "delete content." The lesson is that systematic content lifecycle management (identify weak pages, merge duplicates, strengthen survivors) produces compounding returns. Wire automates this lifecycle on a weekly cadence.

Monthly cost comparison for a 500-page site:

Approach                   Monthly cost           What you get
SEO agency retainer        $5,000-15,000          Monthly audit, recommendations, partial execution
Freelance SEO consultant   $3,000-7,500           Audit + rewrite for 20-50 pages per month
Wire automated workflow    Your AI subscription   Full audit, all pages enriched, news integrated, weekly reports

The orders-of-magnitude cost advantage is not marketing. It is arithmetic. The analysis phase (audit, overlap detection, keyword routing) costs zero because it runs locally. The generation phase uses minimal AI tokens because the AI receives precise instructions from the analysis, not open-ended prompts. Precise prompts produce shorter, more accurate output, which uses fewer tokens.

When to Skip Steps

Not every step runs every week. The workflow is a maximum sequence, not a mandatory checklist.

Skip deduplicate when the audit shows no overlap pairs above the minimum threshold. This is common after the first few weeks of running Wire. Once overlaps are resolved, new ones appear slowly.

Skip news when all topics are within their freshness intervals. If products were updated 10 days ago and the interval is 21 days, there is nothing to gather yet.

Skip reword when the audit shows no keywords scoring above the opportunity threshold. This happens on well-optimized sites where most pages already target the right keywords.

Never skip data and audit. These are free. They take seconds. Skipping them means operating on stale information.

See the Guides overview for all Wire documentation. For step-by-step setup, start with Adding a Site. For detailed component reference, see Components.