You're spending real money on competitive intelligence, or you're flying blind. Either way, the information is stale before you act on it.

Wire runs a news pipeline that searches the web, evaluates each article for relevance, and integrates findings into your existing pages. The cost for a full week across 100 pages is roughly $15-20. A human analyst doing equivalent work costs $1,500-$3,000 for the same week. But the cost comparison only matters if the output is actually good. The architecture behind the evaluation is the part most people don't expect: it deliberately mimics how newsrooms assign work between junior and senior staff. That choice has real consequences for accuracy.

Wire doesn't batch 20 articles into one Claude call. It sends each article through a separate evaluation first, then combines only the relevant ones for synthesis. The reason matters: when Claude sees 20 articles at once, facts bleed across sources. A funding round from one company ends up attributed to another. Isolated evaluation prevents this. There's also a cost angle. Typical relevance rates run 30-40%, so 60-70% of articles get filtered before the expensive synthesis step. But the filtering only works if the evaluation is asking the right question about each article.

Not every page needs the same update frequency. A vendor comparison page in a fast-moving market goes stale in weeks. A reference guide might hold for four months. Wire reads freshness intervals from your config and flags pages that have exceeded their window. The default intervals are more aggressive than what most teams run manually: 21 days for vendor pages, 60 for capabilities, 120 for guides. The reason they're aggressive is that the marginal cost of checking is near zero when the pipeline is automated. The question is whether your config reflects how fast your specific topics actually move.

When a search finds nothing relevant, Wire saves a tracker file instead of an empty result. The tracker records that a search happened, so Wire skips it on the next run rather than burning tokens on a repeat. When refine runs, it detects tracker-only pages and skips the Claude call entirely. No cost for empty cycles. This matters because a site with 100 pages across five topics will have a lot of empty cycles on any given week. The tracker system is what keeps the pipeline economical at scale. The complication: tracker files also mean you can have pages that haven't been touched in months without Wire flagging them as stale.

Refine reads pending news files, feeds them to Claude alongside the current page content and search console data, and produces an updated version. The prompt includes rules that prevent Claude from removing existing content, breaking internal links, or pulling from too few external domains. After integration, news files archive to a dated directory. Wire never re-processes archived news. The part that surprises most operators: refine is where source concentration can get worse, not better. LLMs tend to cite sources that appeared frequently in their training data. Without an explicit constraint in the prompt, refine would recreate the same sourcing problem the news pipeline was designed to fix.

The newsweek command runs a three-phase pipeline across all accumulated news files for a date range. Phase 1 rates every article for newsworthiness and filters out anything below three stars. Phase 2 combines the curated extracts with trending keyword data and your full site directory into a single synthesis call. Phase 3 is an optional editorial pass. Total cost for a full report: roughly $3. The output is a thematic market report organized by strategic significance, not by date or source. The part worth knowing before you run it: if you're iterating on the synthesis prompt, re-running Phase 1 each time costs money. There's a resynth flag that skips extraction and replays only Phases 2 and 3 from cached results.

Wire replaces the gathering-and-synthesis phase of competitive intelligence, not the judgment phase. A dedicated market analyst costs $60,000-$90,000 annually. Wire's full news pipeline runs $15-20 per week for a 100-page site. The framing that clarifies the comparison: analysts spend roughly 80% of their time finding, reading, and summarizing articles. Wire handles that 80%. The analyst, or the operator, still reviews the output and makes strategic calls. This is the junior-senior pattern applied to the entire workflow. The complication for teams evaluating this: Wire's output quality depends on how well your site's editorial rules are configured. The pipeline encodes methodology, but the methodology has to match your domain.

Wire monitors industry developments for every topic in your site. It searches the web, evaluates each article for relevance, synthesizes findings into executive summaries, and integrates updates into existing pages. The entire pipeline runs with one command per step.

How News Gathering Works

python -m wire.chief news products

For each page in the topic, Wire runs a web search for recent developments. Each result goes through a junior-senior evaluation pattern, a deliberate architectural choice borrowed from how newsrooms actually work.

Why Junior-Senior, Not Single-Pass

A single Claude call processing 20 articles at once produces worse results than individual evaluation followed by synthesis. Three reasons:

  1. Context contamination. When Claude sees 20 articles simultaneously, it blends facts across sources. A funding round from Company A leaks into Company B's summary. Junior-senior prevents this because each junior evaluation is isolated.
  2. Source classification accuracy. Each article needs classification as vendor-origin (press release, company blog) or third-party (analyst report, news outlet). This classification affects how much weight the information carries. A single-pass system cannot maintain source discipline across 20 articles.
  3. Cost efficiency through early filtering. Junior evaluations that return nothing (irrelevant articles) prevent those articles from consuming tokens in the synthesis pass. With typical relevance rates of 30-40%, this saves 60-70% of synthesis tokens.

Junior evaluation. Claude reads each article and decides: is this relevant to the page? If yes, it extracts key facts, classifies the source (vendor-origin vs third-party), and scores relevance. If no, it returns nothing. Each article is one Claude call.

Senior synthesis. All relevant junior reports for a page are combined into a single executive summary. The senior sees all evaluations together and produces a coherent news update that captures the full picture, not just individual articles.
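The two phases can be sketched in a few lines. This is illustrative, not Wire's actual code: JuniorReport, evaluate_article, and run_pipeline are hypothetical names, and a keyword match stands in for the real per-article Claude call.

```python
from dataclasses import dataclass

@dataclass
class JuniorReport:
    relevant: bool
    facts: list[str]
    source_type: str   # "vendor-origin" or "third-party"
    relevance: float   # 0.0-1.0

def evaluate_article(article: str, page_topic: str) -> JuniorReport:
    # One isolated Claude call per article would go here. For the sketch,
    # any article mentioning the topic counts as relevant.
    relevant = page_topic.lower() in article.lower()
    return JuniorReport(relevant, [article[:60]] if relevant else [],
                        "third-party", 0.8 if relevant else 0.0)

def run_pipeline(articles: list[str], page_topic: str) -> list[JuniorReport]:
    # Junior phase: one evaluation per article, no shared context,
    # so facts from one source cannot bleed into another.
    reports = [evaluate_article(a, page_topic) for a in articles]
    # Early filtering: irrelevant articles never reach senior synthesis.
    return [r for r in reports if r.relevant]
```

The point of the structure is that each evaluation runs with no visibility into the other articles; the senior pass receives only the surviving reports.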

The output is saved as a pending news file alongside the page (docs/products/acme/2026-03-10.md). It waits there until you run refine.

News Freshness by Topic

Different topics need different update frequencies. A vendor page in a fast-moving market needs checks every few weeks. A reference guide needs updates every four months.

Wire reads freshness intervals from your site config:

# wire.yml
extra:
  wire:
    refresh_days:
      products: 21      # Check every 3 weeks
      capabilities: 60  # Every 2 months
      guides: 120       # Every 4 months
      comparisons: 0    # Never (auto-generated)

The audit command flags pages that have not been updated within their freshness window. The news command respects these intervals and skips topics set to 0.
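The staleness check reduces to a date comparison against the configured interval. A minimal sketch, assuming a refresh_days mapping like the config above (is_stale is a hypothetical name, not Wire's API):

```python
from datetime import date, timedelta

# Mirrors the refresh_days block in wire.yml.
REFRESH_DAYS = {"products": 21, "capabilities": 60, "guides": 120, "comparisons": 0}

def is_stale(topic: str, last_updated: date, today: date) -> bool:
    interval = REFRESH_DAYS.get(topic, 0)
    if interval == 0:
        # 0 means "never check" (auto-generated pages).
        return False
    return today - last_updated > timedelta(days=interval)
```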

Tracker Files

When a news search finds nothing relevant, Wire saves a tracker file instead of an empty result. The tracker records that a search was performed, so Wire does not repeat it unnecessarily.

When refine runs, it detects tracker-only pages and skips the Claude API call. No cost for empty news cycles. The tracker still archives so Wire knows when the last search happened.
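A minimal sketch of how tracker detection might work. The JSON format, the tracker.json filename, and pending news stored as dated .md files are all assumptions; Wire's real file layout may differ.

```python
import json
from datetime import date
from pathlib import Path

def save_tracker(page_dir: Path, searched_on: date) -> None:
    # Record that a search ran and found nothing relevant.
    tracker = {"searched_on": searched_on.isoformat(), "relevant_results": 0}
    (page_dir / "tracker.json").write_text(json.dumps(tracker))

def has_only_tracker(page_dir: Path) -> bool:
    # Refine skips the Claude call when a page has a tracker
    # but no pending news files.
    pending = list(page_dir.glob("*.md"))
    return (page_dir / "tracker.json").exists() and not pending
```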

Integrating News into Pages

python -m wire.chief refine products

Refine reads all pending news files for each page, feeds them to Claude alongside the current content and GSC data, and produces an updated version. Claude sees editorial rules that prevent it from removing existing content, breaking internal links, or creating source concentration.

After integration, news files move to news/YYYY-MM-DD.md in the page directory. Wire never re-processes archived news.
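The archive step amounts to moving each dated file into a news/ subdirectory so later runs ignore it. A sketch under that assumption; archive_news is a hypothetical name.

```python
import shutil
from pathlib import Path

def archive_news(page_dir: Path, news_file: Path) -> Path:
    # Move an integrated news file to news/YYYY-MM-DD.md in the page
    # directory; the dated filename is preserved.
    archive_dir = page_dir / "news"
    archive_dir.mkdir(exist_ok=True)
    dest = archive_dir / news_file.name
    shutil.move(str(news_file), str(dest))
    return dest
```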

Weekly Market Intelligence Reports

Wire produces Gartner-quality market reports from accumulated news across all topics. This is a three-phase map-reduce pipeline.

python -m wire.chief newsweek --from 2026-03-03 --to 2026-03-10

Phase 1: Extract and Rate. Wire batches all news files from the date range and sends them to Claude in groups. Each article gets a newsworthiness rating (1-5 stars). Only three-star and above items pass to the next phase. This costs roughly $2.50 for 20 batches.

Phase 2: Synthesize. All curated extracts, GSC trending keywords, and the full site directory go into a single Claude call. The output is a thematic market report organized by strategic significance, not by date or source. Cost: roughly $0.50.

Phase 3: Review. An optional editorial pass verifies internal links, tightens prose, and ensures the report matches the site's editorial voice. Skippable with --skip-review.

Resynth mode. If you want to iterate on the synthesis prompt without re-running extraction, use --resynth. It loads cached Phase 1 extracts and re-runs only Phases 2-3. Saves time and money during prompt development.

python -m wire.chief newsweek --resynth --skip-review

Total cost for a full report: roughly $3. Output goes to docs/news/YYYY-MM-DD-news.md.
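The caching behind resynth can be pictured as a load-or-compute wrapper around Phase 1: if cached extracts exist, skip the expensive extraction entirely. The cache path, JSON format, and function name here are assumptions for illustration.

```python
import json
from pathlib import Path

def load_or_extract(cache: Path, extract_fn, news_files: list, resynth: bool):
    if resynth and cache.exists():
        # Replay Phases 2-3 from cached Phase 1 results: no new cost.
        return json.loads(cache.read_text())
    extracts = extract_fn(news_files)   # expensive Phase 1 Claude calls
    cache.write_text(json.dumps(extracts))
    return extracts
```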

Source Diversity

Content that cites only one or two external sources looks thin to both readers and search engines. Reboot Online's controlled experiment demonstrated that pages with outbound links to authoritative sources rank measurably higher than identical pages without them. But linking to the same domain repeatedly is worse than not linking at all. It signals lazy research or paid placement.

Wire tracks which external domains your pages cite. The detection uses a dual threshold to avoid false positives:

  • Volume + proportion: More than 3 links AND more than 20% share from one domain. This catches real concentration without flagging pages that cite a primary source twice alongside 15 other sources.
  • Dominance: More than 40% share from one domain with at least 5 total links. This catches cases where one domain dominates even with moderate link counts.
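The two rules translate directly into code. This sketch uses exactly the thresholds stated above; only the function name is hypothetical.

```python
from collections import Counter

def concentrated_domains(domains: list[str]) -> set[str]:
    # `domains` is one entry per outbound link on a page.
    total = len(domains)
    flagged = set()
    for domain, n in Counter(domains).items():
        share = n / total
        if n > 3 and share > 0.20:
            flagged.add(domain)          # volume + proportion rule
        elif share > 0.40 and total >= 5:
            flagged.add(domain)          # dominance rule
    return flagged
```

A page citing one source twice among 17 total links trips neither rule, which is the false positive the dual threshold is designed to avoid.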

The news pipeline uses source gaps to guide its search. If a page has concentrated sources, Wire specifically looks for articles from different domains. The refine prompt warns Claude against creating new concentration from the news it integrates. This is a real problem, since LLMs tend to cite whatever sources appear in their training data most frequently.

The audit system reports source concentration in its HEALTH section and lists affected pages in ACTION.

Content Freshness and NavBoost

Google's leaked API revealed that content freshness feeds into NavBoost through bylineDate and syntacticDate signals. Pages with stale dates get lower freshness scores, which compound with click-through rate signals. When a user sees "Updated: 2024" on a result about 2026 developments, they skip it, and that skip feeds back into NavBoost as a negative signal.

Wire's news pipeline directly addresses this feedback loop. Regular news integration updates the page's date signals. The refine step preserves the page's existing content while adding recent developments, so the page stays current without losing the depth that earned its ranking in the first place.

Siege Media's research found that the average page-one result was updated every 2 years. Wire's default intervals (21 days for vendors, 60 for capabilities, 120 for guides) are more aggressive because the pipeline is automated; the marginal cost of checking for news is near zero.

Cost Comparison: Wire vs. Human Analysts

Competitive intelligence is traditionally expensive. A dedicated market analyst costs $60,000-$90,000 annually. An outsourced competitive intelligence service runs $2,000-$5,000 per month. Industry analyst reports from Gartner, Forrester, or IDC cost $2,000-$10,000 per report.

Wire replaces the gathering-and-synthesis phase of competitive intelligence at a fraction of the cost:

Component                        Human analyst            Wire
News gathering (per topic)       4-8 hours/week           $0.50-1.00 per run
Article evaluation               30-60 min per article    $0.02-0.05 per article
Weekly synthesis report          8-16 hours               $3 per report
Source diversity monitoring      Manual, inconsistent     Automated, every run
Keyword-aligned prioritization   Not available            Built-in from GSC data

The total cost for Wire's full news pipeline (gathering news across all topics, evaluating relevance, synthesizing weekly reports) is roughly $15-20 per week for a site with 100+ pages across 5 topics. A human analyst producing equivalent output would cost $1,500-$3,000 per week.
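A back-of-envelope check of that weekly figure, using the per-article evaluation cost from the table. The search yield per page is an assumption for illustration, not a Wire parameter.

```python
pages = 100
articles_per_page = 4       # assumed weekly search yield per page
eval_cost = 0.035           # midpoint of the $0.02-0.05 per-article range

evaluations = pages * articles_per_page * eval_cost   # junior passes
weekly_report = 3.00                                  # newsweek synthesis

total = evaluations + weekly_report
print(f"~${total:.2f}/week")   # → ~$17.00/week
```

At these assumptions the estimate lands inside the stated $15-20 range; the sensitivity is almost entirely in how many articles each page's search returns.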

Wire does not replace the analyst's judgment on strategic implications. It replaces the 80% of their time spent finding, reading, and summarizing articles. The analyst (or the site operator) reviews Wire's output and makes strategic decisions. This is the junior-senior pattern applied to the entire workflow: Wire does the junior work, the human does the senior work.

The Journalism Research Behind Wire

Wire's news pipeline encodes findings from three German journalism research publications. These are not theoretical. They are based on observed behavior of working journalists.

The Verification Gap

The LfM-Band 60 study is Germany's largest observational study of journalistic research behavior: 235 journalists observed, 30,057 action steps coded, 1,952 hours of observation, plus a survey of 601 journalists and a search experiment with 48 journalists.

The study found that journalists spend 43% of their workday on research, split into topic discovery (40.8%), expansion research (51.3%), and verification (7.9%). Source checking (verifying who said something and whether they are credible) accounts for just 0.9% of all research actions.

Wire's junior-senior pattern addresses this verification gap structurally. Each junior evaluation is isolated: one article, one Claude call. The junior must classify the source (vendor-origin vs. third-party) and assess relevance before any synthesis happens. This forces a verification step that the LfM study found journalists almost never perform.

Depth Beats Breadth

The LfM study included a search experiment with 48 journalists. The most successful journalists used a "depth" strategy: fewer, better-targeted queries with fewer pages visited. The least successful used a "breadth" strategy: many queries, many pages clicked, but unfocused.

Wire's analyze_article() function implements the depth approach. Rather than skimming headlines from 50 sources, it reads and evaluates each article fully. The junior evaluation extracts specific facts, classifies the source, and scores relevance, producing structured intelligence from fewer, deeper reads.

The PR Problem

Thomas Leif's Trainingshandbuch Recherche states that two-thirds of journalistic material comes from PR sources or interest groups. Wire's vendor-origin vs. third-party classification in ArticleEvaluation directly implements this distinction. When the junior evaluator identifies an article as vendor-origin (company blog, press release), the senior synthesis weights it differently than independent coverage.

The Trainingshandbuch also emphasizes source transparency: "Es genügt nicht, die Quellen befragt zu haben. Soweit möglich, sollte man sie auch offen legen und nennen." (It is not enough to have consulted sources. As far as possible, one should also disclose and name them.) This principle is encoded in Wire's styleguide as the external citation requirement: every page must cite its sources with inline links.

Googleisierung and Source Diversity

The LfM study documented that Google's 90.4% market share among journalists creates homogeneous sourcing. All researchers find the same sources through the same search engine. The study calls this "Selbstreferentialität" (self-referentiality): journalism referencing journalism rather than original sources.

Wire's source diversity detection counters this by tracking which external domains your pages cite and directing new searches toward underrepresented sources when concentration is detected. The source gap information flows into the news_search.md prompt, telling Claude to find diverse sources for flagged topics.

Methodology as Code

The Trainingshandbuch argues that research methodology is "Handwerk und Haltung" (craft and attitude), both learnable, neither requiring talent. Wire takes this literally by encoding methodology into prompts and pipelines. The junior-senior pattern, source classification, diversity detection, and citation requirements are not features. They are the methodology itself, running as code instead of relying on individual discipline.