Wire's prompt system determines what Claude sees for every content operation. It uses three layers: a site-wide styleguide, topic-specific instructions, and built-in fallback templates. Understanding this system is essential for customizing how Wire writes. See Writing for Wire for practical editing guidance, Writing Quality for how these layers affect output, and the Architecture overview for where the prompt system fits in the broader module map.
Resolution Order
When Wire calls `load_prompt(topic, action)`, it resolves the prompt in this order:
1. Styleguide: `docs/_styleguide.md` → `prompts/_style.md`
2. Topic prompt: `docs/{topic}/_{action}.md` → `prompts/{action}.md`
3. Assembly: styleguide + topic prompt = final prompt
The styleguide is always prepended to the topic prompt. If the topic prompt contains a `{styleguide}` placeholder, the styleguide is injected there instead.
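The assembly step can be sketched in a few lines of Python. This is a minimal sketch of the rule described above; the function name and example strings are illustrative, not Wire's actual implementation:

```python
def assemble_prompt(styleguide: str, topic_prompt: str) -> str:
    # If the topic prompt declares a {styleguide} placeholder,
    # inject the styleguide there; otherwise prepend it.
    if "{styleguide}" in topic_prompt:
        return topic_prompt.replace("{styleguide}", styleguide)
    return styleguide + "\n\n" + topic_prompt

# No placeholder: the styleguide is prepended.
assemble_prompt("RULES", "Write a page.")       # "RULES\n\nWrite a page."
# Placeholder present: the styleguide lands exactly there.
assemble_prompt("RULES", "X\n{styleguide}\nY")  # "X\nRULES\nY"
```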
Variable Labeling Rule
Every `{variable}` that injects a block of content must have a `##` heading before it. Without the heading, the receiving model sees an unlabeled blob of text and cannot tell what the content *is*.
```markdown
## styleguide
{styleguide}

## current page content
{current_page}
```
Inline variables (site name, topic, slug) use `>>markers<<` instead: `SITE: >>{site}<<`. See `translate.md` for a clean example of both patterns.
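To illustrate, a hypothetical topic prompt using both patterns might look like the fragment below (the exact labels and variables are examples, not one of Wire's built-in files):

```markdown
SITE: >>{site}<<
TOPIC: >>{topic}<<

## styleguide
{styleguide}

## current page content
{current_page}
```

Block content gets a `##` heading so the model knows what it is; short inline values use `>>markers<<` so they stand out mid-sentence.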
The 18 Built-in Prompts
| Prompt | Used by | Key variables | Purpose |
|---|---|---|---|
| `_style.md` | All prompts | (none) | Shared editorial rules (always prepended) |
| `content_create.md` | `create()` | research, example | Generate new page from web research |
| `content_update.md` | `refine()` | news, search_terms, current, source_diversity | Integrate news into existing page |
| `content_expand.md` | `expand()` | research, expansion_topic | Add depth on specific aspect |
| `content_compare.md` | `compare()` | page_a, page_b | Side-by-side comparison |
| `content_consolidate.md` | `consolidate()` | comparisons | Hub page from comparisons |
| `content_seo.md` | `seo()` | target_keywords, search_terms, current | Full SEO rewrite for keywords |
| `content_seo_light.md` | `seo_light()` | target_keywords, search_terms, current | Title and description only |
| `content_crosslink.md` | `crosslink()` | current_page, targets, site_directory | Add internal links |
| `content_merge.md` | `merge()` | keeper_page, donor_page, donor_title, shared_keywords | Merge donor into keeper |
| `content_differentiate.md` | `differentiate()` | current_page, competing_page, competing_title, shared_keywords | Reduce keyword overlap |
| `content_improve.md` | `improve()` | amendment_brief, current_page, research, news, search_terms, site_directory | Combined improvement from brief |
| `news_evaluate.md` | `analyze_article()` | article, source_type, from_date, to_date | Junior: evaluate one article |
| `news_combine.md` | `combine()` | submissions | Senior: synthesize reports |
| `news_search.md` | News search | hints, source_gaps | Build search context |
| `newsweek_extract.md` | Newsweek P1 | batch_label, batch_content, from_date, to_date | Extract and rate news items |
| `newsweek_synthesize.md` | Newsweek P2 | styleguide, trending_keywords, site_directory, previous_report, extracts | Thematic market report |
| `newsweek_review.md` | Newsweek P3 | site_directory, draft_report | Editorial review |
Variable Injection
Wire auto-injects two variables into every prompt:
- `{site}`: Site object (name, url, description)
- `{topic}`: Topic object (title, description, directory)

Never pass these manually in `load_prompt()`. Doing so causes a `TypeError: got multiple values`.
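A stripped-down sketch shows why the collision happens. The template and function body here are hypothetical stand-ins for Wire's internals; the failure mode is plain Python keyword-argument semantics:

```python
TEMPLATE = "SITE: >>{site}<<\n## research\n{research}"

def load_prompt_sketch(topic, action, **variables):
    # Hypothetical core of load_prompt(): Wire supplies site (and topic)
    # itself, so a caller-supplied 'site' reaches format() twice.
    return TEMPLATE.format(site="Example Site", **variables)

load_prompt_sketch("products", "create", research="findings")  # fine

try:
    load_prompt_sketch("products", "create", site="X", research="findings")
except TypeError as err:
    print(err)  # ... got multiple values for keyword argument 'site'
```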
Additional variables (the "Key variables" column above) are passed as keyword arguments:
```python
prompt = load_prompt("products", "create",
                     research=web_results,
                     example=reference_page)
```
Here is what Wire's own `content.py` does when calling `refine()`. Notice how `site` and `topic` are absent (auto-injected), while `news`, `search_terms`, `current`, and `source_diversity` are passed explicitly:
```python
prompt = load_prompt(topic_name, "content_update",
                     news=news_content,
                     search_terms=seo_data,
                     current=current_page,
                     source_diversity=diversity_warning)
```
Prompt Patterns
All prompts follow consistent patterns documented in `prompts/_prompts.md`:
- Role line: `## Your Role: {Specific Role} for {site.title}`
- Data sections: `## LABEL [Context: {variable}]` headers with full content in code fences.
- Output format: set by the caller, not by the prompt. Three modes: `markdown_file` (frontmatter + body), `decision` (`WHY:` prefix lines), `none` (raw text).
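As an illustration of what each mode implies for the caller, a hypothetical post-processing routine might look like this. The parsing rules below are assumptions inferred from the mode descriptions above, not Wire's actual code:

```python
def parse_output(raw: str, mode: str):
    # Hypothetical caller-side handling of the three output modes.
    if mode == "markdown_file":
        # Split YAML frontmatter (between --- fences) from the body.
        _, frontmatter, body = raw.split("---", 2)
        return frontmatter.strip(), body.strip()
    if mode == "decision":
        # Keep only the WHY:-prefixed rationale lines.
        return [line for line in raw.splitlines() if line.startswith("WHY:")]
    return raw  # mode "none": raw text, untouched
```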
Overriding Prompts
To customize how Claude writes for a specific topic, create a prompt file in the topic directory:
```text
docs/
  products/
    _create.md    # Overrides prompts/content_create.md for products
```
Wire uses the topic prompt if it exists, otherwise falls back to the built-in. The styleguide is always prepended regardless.
You can also override the styleguide itself by placing `_styleguide.md` in your docs root. This replaces the built-in `prompts/_style.md` for all prompts on the site.
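The fallback rule can be sketched with `pathlib`. The function name, signature, and file naming below follow the resolution order documented above and are illustrative, not Wire's actual source:

```python
from pathlib import Path

def prompt_path(topic: str, action: str,
                docs: Path = Path("docs"),
                prompts: Path = Path("prompts")) -> Path:
    # Topic override wins; otherwise fall back to the built-in template.
    override = docs / topic / f"_{action}.md"
    return override if override.exists() else prompts / f"{action}.md"
```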
Meta-Prompt Guide
Wire includes `prompts/_prompts.md`, a guide for writing new prompts. It covers the compliance checklist, variable naming conventions, and common patterns. Read this before creating custom prompts.
Key rules from the guide:
- Use `{item.title}` and `{item.summary}` for Content objects (the news pipeline passes Content objects, not strings)
- Keep role descriptions under 3 lines
- Test with `--dry-run` before deploying
- Avoid conflicting rules between the styleguide and topic prompt
Why Prompts Are Architecture, Not Configuration
Most AI content tools treat prompts as user-facing text boxes. Wire treats them as code. The distinction matters because prompt quality determines output quality, and output quality at scale determines whether a 1,000-page site ranks or stagnates.
The LfM-Band 60 study observed 235 journalists and coded 21,145 research actions. They found that verification accounts for only 7.9% of research time, and source checking accounts for 0.9% of all research actions. Professional content is already poorly verified. AI content without structured constraints is worse.
Wire's prompt system addresses this through architecture, not instructions. The styleguide is prepended to every prompt, not appended, not optionally included. This means verification rules (cite external sources, preserve existing citations, classify source types) execute before task instructions. Claude processes the constraints before it processes the request.
The three-layer resolution makes this enforceable across teams. A site operator who writes a custom topic prompt cannot accidentally bypass the styleguide. The styleguide always runs first. A developer who adds a new content operation gets styleguide enforcement automatically through load_prompt(). No opt-in required.
This is why Wire uses 18 built-in prompts instead of a single configurable template. Each prompt encodes domain-specific knowledge about its operation. The `content_merge.md` prompt knows about merge guards (output must preserve 80% of the keeper body). The `news_evaluate.md` prompt knows about source classification (vendor-origin vs. third-party). The `content_improve.md` prompt knows about amendment briefs (structured keyword routing data). A single generic prompt cannot encode this knowledge without becoming unmanageably complex.
The cost of prompt engineering is amortized. Writing a good prompt takes hours. Running it 10,000 times costs the same as running a bad prompt 10,000 times. Wire front-loads the prompt investment so every Claude call benefits from accumulated editorial decisions.
Common Pitfalls
- Auto-streaming: prompts larger than 50KB auto-stream in API mode.
- Explicit mode dispatch: `claude_text()` checks `shutil.which("claude")` once at startup. CLI installed = CLI mode; no CLI = API mode. No fallback between them.
- Escaping braces: Python `.format()` treats `{` and `}` as variable delimiters. Use `{{#slug}}` for literal HTML anchor IDs like `{#slug}`, and `{{Brand}}` for literal braces in output examples.
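The brace-escaping rule is plain `str.format()` behavior, so it is easy to check in isolation (the template text here is made up for the demonstration):

```python
# str.format() reads {...} as a placeholder; doubled braces emit a literal.
template = "Anchor {{#slug}} stays literal, but {site} is substituted."
rendered = template.format(site="Wire")
print(rendered)  # Anchor {#slug} stays literal, but Wire is substituted.
```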
Prompt Testing Without Cost
Every prompt can be tested with `--dry-run`. Wire assembles the full prompt, calls Claude, and writes the output to `.preview` files without saving. This makes prompt iteration safe: you see exactly what Claude produces before committing to it.
For newsweek prompts, `--resynth` loads cached Phase 1 extracts and re-runs only the synthesis and review phases. This saves roughly $2.50 per iteration when tuning the `newsweek_synthesize.md` prompt.