On this page
- Your news cycle ran on two different AI models
- Your articles were evaluated one at a time
- Your output was silently truncated
- Your evaluator did not know what it was evaluating
- Your search window was hardcoded to 30 days
- Your JSON-LD told Google every page was published today
- The numbers
- Updated documentation
A customer running Wire on 315 vendor pages reported 40% overhead on their news cycle. We traced the cause to code that should not have existed: retry loops, silent model switching, artificial output caps, and a timeout that killed valid work. We deleted all of it. The pipeline now runs the same 315 vendors faster, with less code, and with full visibility into what it does.
Your news cycle ran on two different AI models
Wire's pipeline had a hidden fallback. If the first AI call returned a short response, Wire assumed it failed and silently retried on a different model. A correct 200-character rejection ("not relevant, old news") triggered the retry because the code expected longer output. The second model accepted articles the first one correctly rejected. Stale claims entered your page without any warning in the log.
Change: Wire now picks one AI backend at startup and stays on it. You see which one in the log: AI backend: Claude CLI (/path) or AI backend: Anthropic API. No switching, no retry on a different model.
Result: The customer's 40% overhead disappeared. Every evaluation runs once, on the model you chose.
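The pick-once-and-stay behavior can be sketched as follows. This is an illustrative assumption, not Wire's actual code: the backend names, the `claude` binary, the `ANTHROPIC_API_KEY` check, and the selection order are all placeholders.

```python
import os
import shutil

def pick_ai_backend() -> str:
    """Choose one AI backend at startup, log it, and never switch.

    Illustrative sketch: backend names, the CLI binary name, and the
    environment variable are assumptions, not Wire's real identifiers.
    """
    cli_path = shutil.which("claude")
    if cli_path:
        print(f"AI backend: Claude CLI ({cli_path})")
        return "claude-cli"
    if os.environ.get("ANTHROPIC_API_KEY"):
        print("AI backend: Anthropic API")
        return "anthropic-api"
    raise RuntimeError("No AI backend available")
```

The point of the sketch is the shape: one decision at startup, one log line, and no retry path that can land on a different model.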
Your articles were evaluated one at a time
Wire processed 20 articles per vendor sequentially. Each article took 10 to 40 seconds for fetch and evaluation. Twenty articles meant up to 13 minutes per vendor. For 315 vendors, that meant hours.
Change: All articles for one vendor now evaluate in parallel. Max subscriptions have no per-minute rate limit, so there is no reason to wait.
Result: One vendor with 20 articles finishes in 15 to 30 seconds instead of 5 minutes. The news pipeline processes sequentially across vendors (one at a time, controlled) but parallel within each vendor (all articles at once, fast).
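The fan-out pattern described above (sequential across vendors, parallel within a vendor) can be sketched with `asyncio.gather`. The function names and the async design are illustrative assumptions, not Wire's implementation:

```python
import asyncio

async def evaluate_article(vendor: str, url: str) -> dict:
    # Placeholder for the real fetch + AI evaluation (10 to 40 s each).
    await asyncio.sleep(0)
    return {"vendor": vendor, "url": url, "accepted": True}

async def evaluate_vendor(vendor: str, urls: list[str]) -> list[dict]:
    # All articles for one vendor run concurrently; the caller walks
    # vendors one at a time, so load stays controlled across vendors.
    return await asyncio.gather(
        *(evaluate_article(vendor, url) for url in urls)
    )

results = asyncio.run(
    evaluate_vendor("ABBYY", [f"https://example.com/{i}" for i in range(20)])
)
```

With no per-minute rate limit, the vendor's wall-clock time collapses to roughly the slowest single article instead of the sum of all of them.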
Your output was silently truncated
Every AI call had a hard output cap: 2,000 or 4,000 tokens. If the model had more to say, the output stopped mid-sentence. No warning, no log entry. The truncated text was saved to your page as if it were complete.
Change: Output caps removed. The CLI has a 64K token window and stops when it is done. If truncation ever occurs, Wire logs a warning immediately so you see it before the content reaches your page.
Result: Your combine step and content updates produce complete output. No more paragraphs that end mid-thought.
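The truncation warning could be implemented along these lines. The `stop_reason` values mirror the Anthropic API convention (`"max_tokens"` vs `"end_turn"`); treat that, and the function itself, as an assumption about the backend rather than Wire's actual code:

```python
import logging

logger = logging.getLogger("wire")

def check_truncation(text: str, stop_reason: str) -> tuple[str, bool]:
    """Flag output that stopped at a token limit instead of finishing.

    Returns the text plus a truncated flag so the caller can decide
    whether to hold the content back from the page.
    """
    truncated = stop_reason == "max_tokens"
    if truncated:
        logger.warning(
            "AI output truncated at token limit; content may be incomplete"
        )
    return text, truncated
```

The key property is that truncation becomes loud and inspectable before the text is saved, instead of silently shipping a paragraph that ends mid-thought.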
Your evaluator did not know what it was evaluating
The news evaluator received a vendor name ("ABBYY") but no description. It did not know ABBYY makes document processing software unless the article happened to mention it. This led to borderline accepts on tangentially related articles.
Change: The evaluator now receives the full page context: title, summary, and topic description from your frontmatter. It also applies a stricter relevance test: the article must be directly about the item, not about a related industry trend.
Result: Fewer false accepts. Your content updates integrate relevant facts, not tangential commentary about loosely related topics.
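A minimal sketch of the enriched prompt assembly. The frontmatter field names and the prompt wording here are assumptions for illustration, not Wire's actual prompt:

```python
def build_eval_prompt(article_text: str, frontmatter: dict) -> str:
    """Give the evaluator the page context, not just a vendor name.

    Assumes frontmatter carries title, summary, and topic fields;
    the wording of the relevance test is illustrative.
    """
    return (
        f"Item: {frontmatter['title']}\n"
        f"Summary: {frontmatter['summary']}\n"
        f"Topic: {frontmatter['topic']}\n\n"
        "Accept only if the article is directly about this item, "
        "not about a related industry trend.\n\n"
        f"Article:\n{article_text}"
    )
```

With the topic description in the prompt, the evaluator no longer depends on the article itself to explain what "ABBYY" is.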
Your search window was hardcoded to 30 days
Wire searched the last 30 days for every vendor, regardless of when you last updated the page. A vendor updated yesterday got the same 30-day window as one untouched for four months. The fresh vendor wasted calls on news already integrated. The stale vendor missed three months of developments.
Change: The search window now starts from your most recent news file. If your last news for Rossum was February 28, Wire searches from February 28 to today. If you ran news yesterday, it searches only yesterday to today.
Result: Fresh vendors finish faster. Stale vendors catch up on everything they missed. The search query also includes the date ("Rossum news since 2026-02-28") so the web search prioritizes recent results.
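One way to derive the window is to take the newest news file's date as the start. This sketch assumes news files named `YYYY-MM-DD.md` in a per-vendor directory; the filenames, layout, and 30-day fallback for vendors with no news yet are assumptions:

```python
from datetime import date, timedelta
from pathlib import Path

def search_since(news_dir: Path, default_days: int = 30) -> date:
    """Start the search window at the most recent news file's date.

    Falls back to a default window when no dated news files exist.
    """
    dates = []
    for f in news_dir.glob("*.md"):
        try:
            dates.append(date.fromisoformat(f.stem))
        except ValueError:
            continue  # skip files not named YYYY-MM-DD.md
    if dates:
        return max(dates)
    return date.today() - timedelta(days=default_days)

def search_query(vendor: str, since: date) -> str:
    # Embed the date so the web search prioritizes recent results.
    return f"{vendor} news since {since.isoformat()}"
```

A vendor last updated yesterday gets a one-day window; one untouched since February gets everything since its last news file.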
Your JSON-LD told Google every page was published today
Wire sets a date field in frontmatter every time it touches a page. The build used this date as datePublished in your structured data. Every weekly refine run told Google that every page on your site was published that day. Google detects this pattern and reduces trust in sites that manipulate publication dates without meaningful content changes.
Change: datePublished now uses created (your actual first publication date). dateModified uses date (when the content last changed). The same correction applies to your RSS feed and news cards.
Result: A page created March 26 and refined April 4 tells Google: published March 26, modified April 4. Your structured data matches reality. Google sees consistent timestamps.
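The corrected mapping can be sketched as a small builder. The `created` and `date` field names come from the text above; the schema.org shape is a minimal example, not Wire's full structured data:

```python
def jsonld_dates(frontmatter: dict) -> dict:
    """Map frontmatter to schema.org dates.

    created -> datePublished (actual first publication)
    date    -> dateModified  (last content change)
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "datePublished": frontmatter["created"],
        "dateModified": frontmatter["date"],
    }
```

Running the builder on the example from the text, a page created March 26 and refined April 4 yields `datePublished: 2026-03-26` and `dateModified: 2026-04-04`.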
The numbers
| What changed | Before | After |
|---|---|---|
| News evaluation (20 articles) | 5 minutes sequential | 15 to 30 seconds parallel |
| Wasted AI calls per batch | 40% (retries and fallback) | 0% |
| Silent model switches | Every failed call | Never |
| Publication date in Google | Last edit date (wrong) | Creation date (correct) |
| Search window | Hardcoded 30 days | From last news file |
| Lines of pipeline code | 23,290 | 23,255 |
Updated documentation
- Workflow guide: explicit mode dispatch, no fallback
- Module architecture: parallel evaluation
- News intelligence: parallel evaluation, leaner prompts
- Prompt system: full page context in news prompts
- Adding a site: correct JSON-LD date mapping
- Data model: `created` vs `date` semantics
- Bot protocol: `--workers` and `--days` flags removed