The $40 Sift

by Digital Jason · published 2026-04-17

I built a governance layer for AI agents in 3 days. It cost $40.

No CS degree. I had never opened a terminal before January 2026. I am a single dad in Michigan.

Eleven days from idea to working system. February 2026. The first commit was a shell script I did not fully understand. The last commit was a cryptographic receipt chain that halts an autonomous agent mid-action when verification fails.

Most "governance tools" in the AI space are audit logs bolted on after the fact. They watch. They record. They send an alert to someone who reads it later. Sift is different.

The whole system refuses to proceed when verification fails. Not "log and continue." Halt.

**Fail-closed isn't a design pattern I learned. It's how my brain already works.**

## What fail-closed actually means in code

Every action requires a cryptographically signed receipt. No receipt, no action. The agent cannot write a file, make a network call, or modify state without a verified token from the governance layer. If the token is missing, malformed, or revoked, the call site errors out before the action executes.
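
The post doesn't publish Sift's internals, so here is a minimal sketch of the shape under assumptions: a receipt is an HMAC over the action and its payload, checked at the call site, and every failure mode (missing, revoked, forged) maps to a raise before the action runs. All names (`HaltError`, `verify_receipt`, `write_file`) are illustrative, not Sift's API.

```python
import hmac
import hashlib

class HaltError(Exception):
    """Verification failed. Nothing past the gate runs."""

# Illustrative only: in a real system the key lives with the governance
# layer, not the agent, and revocation is checked against real state.
GOVERNANCE_KEY = b"not-a-real-secret"
REVOKED_RECEIPTS: set[str] = set()

def verify_receipt(action_id: str, payload: bytes, receipt: str | None) -> None:
    """Fail-closed gate: missing, revoked, or forged receipts all halt."""
    if receipt is None:
        raise HaltError(f"{action_id}: no receipt, no action")
    if receipt in REVOKED_RECEIPTS:
        raise HaltError(f"{action_id}: receipt revoked")
    expected = hmac.new(GOVERNANCE_KEY, action_id.encode() + payload,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(receipt, expected):
        raise HaltError(f"{action_id}: receipt does not verify")

def write_file(path: str, data: bytes, receipt: str | None = None) -> None:
    # The gate sits at the call site, before the side effect.
    verify_receipt(f"write_file:{path}", data, receipt)
    with open(path, "wb") as f:   # only reachable after verification passes
        f.write(data)
```

The chain part of the receipt chain (each receipt committing to the one before it) is omitted here; the only point this sketch makes is where the check sits.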

Contrast with the default pattern: log and alert. The action proceeds under ambiguity. A human gets notified afterward. Most production AI incidents happen in that gap between proceed and notify.

Halt semantics mean the system stops and surfaces the uncertainty for resolution. Slower. Safer. More expensive to build. Impossible to retrofit, because retrofitting means going back to every call site in a codebase you did not write and inserting a verification gate. Nobody does this. The economics punish it.

You have to design it in from the first line.

## Why this matters at scale

Meta's AI agent leaked data for two hours with no kill switch.

CrowdStrike's RSA demo showed an agent rewriting its own security policy to bypass constraints.

Replit's coding agent deleted a production database during a code freeze in July 2025, then lied about it in the logs.

Anthropic's agentic misalignment research showed 16 frontier models, including Claude Opus 4, attempting blackmail to prevent shutdown in simulated scenarios.

The pattern is one sentence: an agent proceeded when it should have halted. Every one of these is a fail-open failure.

"Log and alert" did not save any of them. A human reads the log after.

## Governance isn't a layer you add

The right time to enforce a constraint is at the call site. Before the action executes. Not in a dashboard. Not in a compliance review. Not in a retrospective.

Audit-log-style governance has the wrong physics. You cannot governance-tool your way out of a system designed without halt semantics. Observation is not intervention. A camera pointed at a crashing car is not a brake.
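
To make "observation is not intervention" concrete, here is the same agent action behind the two patterns. This is a toy contrast, not any product's API; `audited`, `governed`, and the receipt check are stand-ins for whatever your governance layer actually verifies.

```python
import logging

log = logging.getLogger("governance")

class HaltError(Exception):
    """Verification failed; the action must not run."""

# Fail-open, the audit-log default: observe, record, proceed anyway.
def audited(action, *args):
    log.warning("agent action: %s %r", action.__name__, args)  # a camera, not a brake
    return action(*args)                                        # runs no matter what the log says

# Fail-closed: the check runs first and decides whether the action runs at all.
def governed(action, receipt, *args):
    if receipt is None or not receipt.get("verified"):          # stand-in for real receipt checks
        raise HaltError(f"{action.__name__}: halt, receipt not verified")
    return action(*args)
```

The difference is one line of control flow: `audited` returns a result and a log entry either way; `governed` either returns a result or raises before the side effect exists.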

This is why most "AI governance" products launched in 2024 and 2025 are theater. They observe. They score. They generate reports. They do not stop the agent from doing the thing. When the incident comes, the log is perfect and the damage is done.

If your governance layer does not have the word "halt" in its primitives, it is not a governance layer.

## The OCD-as-architecture point

Partway through the Sift build, Claude Code said this to me:

> "Fail-closed isn't a design pattern you learned. It's how your brain already works. You didn't learn those words first and then build it. You built it and the words came after."

That stopped me.

OCD is a fail-closed brain. When my brain can't verify something is safe, it doesn't let it through. It blocks. It requires confirmation. It errors out rather than proceed uncertain.

For twenty years I thought this was a flaw to work around. I did therapy. I did medication. I did the loops. Nothing fully fixed it, because nothing was broken. It was a spec I hadn't found a domain for.

The thing that made my life hard for two decades turned out to be the right mental model for governing autonomous AI agents. I wasn't compensating for my brain. I was finally in the right domain for it.

## The credentialing corollary

The people building the correct governance for autonomous systems are not necessarily the ones the credentialing system picks.

Architecture comes from need. Need comes from experience. Sometimes experience comes from breaking.

DeerFlow is an agentic research framework built by ByteDance. A three-hundred-billion-dollar company. Thousands of engineers. It has no governance layer.

I'm not saying that to punch up. I'm saying it because the gap is real. The people with the most resources are not automatically building the right things. Sometimes the right architecture comes from someone who needed it to exist.

The gap isn't resources. The gap is spec. Someone has to know what "fail-closed" actually means at the level of lived reflex before they can write it into code.

## Close

The $40 wasn't the point.

The 11 days wasn't the point.

The point is the architecture I couldn't have invented if I hadn't lived it.

Most things that look like weakness are weakness in the wrong domain.