Today I spent a full day on Axia’s build infrastructure. I didn’t set out to. I set out to ship an intent-classification feature and move on. What actually happened was three separate moments where I almost built something bespoke, caught myself, checked the public domain, and threw my draft away.
Each save was small on its own. Together, they shaped the entire day. And they pointed at a principle I should have been applying more rigorously all along.
The principle is boring: check public domain before designing anything, not after. It’s the kind of thing every engineer agrees with in theory and skips in practice. Today showed me exactly why skipping costs more than the five-minute search.
Moment one — the morning refinement
I was drafting a brief for Axia’s new intent-classifier layer. Part of it needed to handle the case where a user interrupts an action flow to ask Axia a question, then returns to the flow. My instinct was to design a stack-based suspend-and-resume mechanism. I started sketching the state transitions.
Then I paused. This can’t be novel. Dialogue systems have solved interruption handling for decades.
I opened the architectural decision document for Axia’s existing confirmation layer and searched for the relevant section. There it was — a principle already in the system that covered exactly this case. Not a stack. Abandon-on-escape. The user’s intent, when they pivot, is usually to abandon the current flow, not to suspend it for later resumption. Design simpler, match what users actually do.
If I hadn’t searched, I would have designed the stack-based mechanism. It would have worked. It would also have contradicted a principle I’d written down myself three weeks earlier. The brief would have landed, been built, and then surfaced as a design conflict during code review — costing hours to untangle.
The save: five minutes of reading my own prior work before drafting.
The lesson I didn’t yet see: this isn’t just about external prior art. Internal systems accumulate principles too. Check your own previous decisions before contradicting them.
Moment two — the midday halt
Claude Code on my build machine was executing the brief I’d finished drafting. Ten minutes in, it halted.
The brief referenced sections §3.1 through §3.6 of the queue document. Claude Code had grepped the queue. Those sections didn’t exist.
I had drafted the brief in a chat session. Drafting is not filing. The brief existed in the conversation but had never been committed to the queue file on disk. Claude Code was correct to halt — the referenced sections were genuinely missing. Had it proceeded on pattern-matched inference instead of explicit grep, the build would have continued against the wrong baseline.
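That explicit check is a few lines of shell. A hypothetical sketch follows; the function name and queue path are my assumptions for illustration, not Axia's actual layout:

```shell
# Hypothetical sketch: "check_sections" and the queue path are assumed
# names, not Axia's. The point is the explicit grep before building.
check_sections() {
  local queue="$1"; shift
  local sec
  for sec in "$@"; do
    # Halt with a clear message rather than proceeding on inference.
    grep -q "§${sec}" "$queue" || { echo "missing §${sec} in ${queue}" >&2; return 1; }
  done
}
```

Run before the build starts, e.g. `check_sections docs/queue.md 3.1 3.2 3.3 3.4 3.5 3.6`. A non-zero return is the halt.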
The save: ten seconds of grep preventing hours of wrong-direction build.
The lesson: a brief is only filed when it’s on disk, not when it’s been described. The gap between “I’ve decided this” and “the system knows this” is real, and it’s where silent failures hide.
Moment three — the afternoon discovery
This one cost more before it helped.
I checked production after the midday build landed. The service showed the new code in its git history. Deploy notifications had posted. Everything looked normal.
Then I checked one thing more: the service restart timestamp.
The service hadn’t restarted in five days.
Five days of commits. Docs, code, schema changes — all sitting on disk, all showing “deployed” in the notification channel, none of them actually loaded into the running process. The production bot had been executing code from April 17 while the logs and Discord said it was running today’s build.
The deploy script called a restart script that didn’t exist. Called a non-existent file, got a non-zero exit code, logged the failure, posted “deploy complete” anyway. No exit-code check between the failed restart and the success notification. It had been lying to me for a working week.
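The failure mode reduces to a few lines of shell. This is a hypothetical reconstruction, with the script name assumed, but the shape is the classic one: an ignored exit code sitting between a failing step and a success message.

```shell
# Hypothetical reconstruction of the bug; "restart-service.sh" is an
# assumed name. The real script called a restart script that didn't
# exist and posted success regardless.
deploy_broken() {
  ./restart-service.sh 2>/dev/null   # missing file: exit code 127, ignored
  echo "deploy complete"             # posted unconditionally, the lie
}

# The minimal fix: check the exit code before claiming success.
deploy_fixed() {
  if ! ./restart-service.sh 2>/dev/null; then
    echo "deploy failed: restart did not run" >&2
    return 1
  fi
  echo "deploy complete"
}
```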
My first instinct was to patch the deploy script. Add exit-code checking. Fix the missing restart script. Rotate the exposed credentials. Harden the polling logic.
I had the patch half-drafted when my partner asked a simple question: have you checked how everyone else solves this?
I hadn’t. I was about to spend a day hardening a bespoke polling script that had been quietly failing for five days.
I searched instead. The canonical industry answer for solo-developer VPS deployments isn’t a hardened polling script. It’s GitHub Actions with self-hosted runners. Event-driven, not polling. Exit-code checked by default. Failure visible in a dashboard. Secrets managed properly. Built into the tools I already use.
Dozens of blog posts from solo developers and small teams running production on VPS infrastructure converge on this pattern. The problem I was trying to solve was solved. Completely. Free. Well-documented. I was about to reinvent it.
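For reference, the self-hosted-runner pattern is roughly this shape. The file path, job name, and restart script below are assumptions for illustration, not my actual config:

```yaml
# Hypothetical minimal workflow (.github/workflows/deploy.yml).
# The runner is registered on the VPS itself, so the deploy is
# event-driven: a push to main triggers it, no polling.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Restart service
        run: ./scripts/restart-service.sh
        # A non-zero exit fails the job and shows red in the
        # Actions dashboard. No silent "deploy complete".
```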
I threw away the patch and started on the migration instead.
The save: the six hours I would have spent hardening a suboptimal solution.
The lesson: when you’re hardening something bespoke, stop and check if the bespoke thing should exist at all.
The principle, sharpened
Three variants of the same principle, all landing in one day:
- Before designing a new mechanism, check your own existing principles.
- Before building against a specification, verify the specification exists where it’s expected.
- Before hardening a bespoke solution, check if the canonical solution exists.
Different surfaces. Same underlying shape. Don’t assume the design is yours to invent. Most of the time, someone has already solved it — in the system, in the repo, or in the public domain.
What makes this hard in practice isn’t that the principle is subtle. It’s that the five-minute search feels optional when you’re moving fast. You have a draft forming. Checking feels like breaking momentum. It’s easier to write what you already have in your head than to pause and verify.
The cost of skipping only surfaces later. Sometimes in a code review, sometimes in a halt log, sometimes in a production failure that was silently accumulating for five days.
Why this matters for how I build
Axia is built solo. That constraint rewards speed. It also punishes invented solutions harder than a team would — because there’s no colleague to cross-check my design, no architect to push back, no code reviewer to notice the pattern already exists elsewhere in the system.
The discipline I need to run, consistently, is the one a team provides for free: check before building.
Public domain for external mechanisms. Own repo for internal principles. Existing files for claimed specifications. The search is cheap. The save is often large.
Today gave me three data points. That’s rare enough to be worth writing down.
The engine is better for it. The bot is back up. The deploy pipeline is now the canonical industry pattern instead of a bespoke polling script that had been lying to me for a working week. Phase 3 work starts tomorrow on infrastructure I can trust.
Not because I designed it well. Because the industry already had.
Alan Law is the founder of V8 Global and sole builder of Axia. He works in Hong Kong and London. This post is part of the Operator’s Log series — notes from inside the build.