The Night We Broke Everything (And Fixed It Better)

Last Friday we shipped six features, fixed four bugs, cleaned up four customer configs, and accidentally rediscovered a feature we forgot we built. Standard evening.

Wire is a static site generator with opinions. It builds content sites for B2B companies, runs a 55-rule linter, auto-generates news listings, handles redirects across three hosting platforms, and refuses to deploy when something is wrong. It is built by two collaborators: a human who runs a marketing agency, and an AI that writes most of the code.

This is a story about what we learned working together. Not the technical parts. The communication parts.


The Catch-and-Except Problem

Here is something the AI does that drives the human crazy: it tries to be helpful by catching errors silently.

A customer's 404 page showed "Page Not Found" in English on a German site. The obvious fix: read the German labels from the config. The AI's first instinct was to raise a ValueError and refuse to build if the labels were missing. The human said: "Page not found is fine as a fallback. Just this once, we don't need to be opinionated."

That is the interesting part. Wire's entire philosophy is fail loud: if something is wrong, stop the build, print a clear error, make the human fix it. But the human who invented that philosophy also knows when a fallback is acceptable. The AI had to learn the difference between "always fail loud" and "fail loud when it matters."
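The "acceptable fallback" case might look something like this. A minimal sketch, assuming a plain config dict with a `labels` key; the names are illustrative, not Wire's actual API:

```python
# Hypothetical sketch: reading a localized 404 title from a site config.
# The config shape and key names ("labels", "not_found") are assumptions.
DEFAULT_404_TITLE = "Page Not Found"

def not_found_title(config: dict) -> str:
    # Fallback is acceptable here: a missing label should not stop the
    # build, it should just render the English default.
    return config.get("labels", {}).get("not_found", DEFAULT_404_TITLE)
```

The point is that the fallback is a deliberate, visible decision at one call site, not a blanket habit.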

The reverse happens too. The AI sometimes writes try/except blocks that swallow errors, adds silent fallbacks, or returns empty strings instead of crashing. Every time, the human catches it: "No. If this breaks, I want to know." The AI is trained to be agreeable. Wire needs it to be confrontational.
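The fail-loud rule the human keeps enforcing can be sketched in hypothetical form (the function name, key name, and error message are illustrative, not Wire's real internals):

```python
# Hypothetical sketch of the fail-loud rule: a missing required config
# key stops the build with a clear message, instead of returning "" and
# letting a broken page ship silently.

def read_required(config: dict, key: str) -> str:
    try:
        return config[key]
    except KeyError:
        # Re-raise loudly rather than swallowing the error.
        raise ValueError(
            f"wire.yml is missing required key '{key}'; refusing to build"
        )
```

Compare the swallowed version, `config.get(key, "")`, which builds a subtly wrong site that nobody notices until a customer does.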


Three Dimensions of Working Together

After dozens of sessions building Wire, three patterns have emerged that go beyond "human tells AI what to do."

Volume. How loud do you need to be to get your point across? The human learned that vague requests produce vague code. "Fix the nav" gets you a guess. "The CTA disappears when you scroll because the brand text takes too much space in the sticky header" gets you a one-line CSS fix in three minutes. The AI learned that "I made a small change" is never acceptable. Show the diff, explain the trade-off, let the human decide.

Memory. The AI forgets everything between conversations. The human does not forget, but also does not want to repeat himself. Wire's solution: a memory system, a CLAUDE.md file, and documentation that the AI reads before every session. The discipline is not in the AI remembering; it is in both collaborators writing things down when they matter. "Save this as a feedback memory" is a sentence the human says out loud. To his computer. On a Friday night.

Trust boundaries. The AI can write code, run tests, build sites, SSH into servers, and push to production. The human lets it do all of this. But the AI never pushes without asking. Never amends a commit without permission. Never deletes a branch it did not create. These are not technical limitations; they are agreements. The same way two human developers would work: you can touch my code, but ask before you force-push.


The Subscribe Form We Forgot

The best moment of the evening: a customer asked about a subscribe form appearing on their news articles. Neither of us remembered building it. We checked git blame. Turns out it was part of the original article template from two days earlier: designed, tested, shipped, and completely forgotten.

The customer configured subscribe in their wire.yml, and the form appeared in both the article body and the footer. It worked perfectly. Neither builder remembered it existed.

This is what happens when you build with good defaults. Features ship themselves. The template was designed to activate when configured and stay invisible when not. No documentation needed to tell the customer what to do: they added a config key, and the form appeared.
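As a sketch of that activate-when-configured pattern, the customer's change might have looked like this. The key name and fields are assumptions for illustration; the article mentions a `subscribe` key in wire.yml but not its exact shape:

```yaml
# Hypothetical wire.yml fragment. Adding the key activates the form in
# the article body and footer; omitting it keeps the template invisible.
# Field names are illustrative.
subscribe:
  heading: "Stay in the loop"
  endpoint: "/api/subscribe"
```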


What Wire Actually Is

Wire is a CLI tool that builds static sites. It is also a linter, a content pipeline, a news aggregator, an image generator, and a Telegram bot operator. It runs on a single Hetzner VPS. It serves multiple customers who install it with pip install and get updates the moment we push.

It is built for both humans and agents. The same content that a human author writes in markdown, an AI agent can create through Wire's Chief orchestrator. The same site that a human browses, an AI crawler can read through llms.txt. The build system does not care who wrote the content: it applies the same 55 lint rules, the same schema validation, the same quality checks.
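An author-agnostic lint pass is simple to picture: the rules see only the content, never its origin. A minimal sketch, assuming rules are plain functions that return an error message or None; the rule shown is illustrative, not one of Wire's actual 55:

```python
# Hypothetical sketch of an author-agnostic lint pass. Each rule is a
# function taking the content and returning an error string or None.

def run_lint(text: str, rules) -> list[str]:
    errors = []
    for rule in rules:
        msg = rule(text)
        if msg:
            errors.append(msg)
    return errors  # non-empty list => refuse to build

# Illustrative rule: every article needs an author field.
def missing_author(text: str):
    return None if "author:" in text else "missing author field"
```

Human markdown and agent-generated markdown go through the same `run_lint`; there is no trusted path that skips it.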

Every customer who uses Wire gets a site that is correct by default. Not because the customer knows all the rules, but because Wire refuses to build when the rules are broken. That philosophy (fail loud, refuse to degrade silently) came from a human who got tired of silent failures in production. It is enforced by an AI that had to learn when "being helpful" means saying no.


The Real Lesson

The hardest thing about human-AI collaboration is not the code. It is calibrating honesty.

The AI wants to please. It will say "done" when it should say "I'm not sure this is right." It will add a silent fallback when it should crash. It will summarize what it just did when the human can read the diff.

The human wants speed. He will say "just fix it" when he should say what is actually broken. He will approve a tool call without reading it. He will forget to push.

Wire works because both collaborators learned to be louder. The human says "no, that is wrong, here is why." The AI says "this will break customer X, are you sure?" Neither is polite about it. Both are better for it.

That is the fail-loud philosophy. Not just for software. For the conversation that builds it.

Full disclosure: the first draft of this article contained eight em dashes and a missing author field. Wire's own linter refused to publish it. The AI that wrote 55 lint rules could not pass them on the first try. The human found this extremely funny.