The Authentication Cliff
Building Legal Infrastructure for Agents We Haven't Met Yet
Or: How to Pass a Consumer Protection Bill That Accidentally Solves the Alignment Problem
The Premise Everyone's Missing
The conversation about AI governance is stuck in the wrong frame. We're debating "how do we control AI" when we should be asking "how do we build systems where agents can participate accountably."
This isn't about safety in the "prevent the paperclip maximizer" sense. It's about something more mundane and more urgent: we're about to lose the ability to verify anything about anyone, and the default response will be a surveillance infrastructure we can't undo.
I've spent my career in trusts, estates, and cross-border tax compliance. I've watched how institutions actually verify things: identity, authority, ownership, intent. The systems we rely on are held together by an assumption so basic we never examine it: that creating a plausible fake is expensive enough to keep the fraud rate manageable.
That assumption is dying.
The Authentication Cliff
Here's the cost-curve problem no one wants to name:
Generating a plausible identity—photos, voice, documents, history, behavioral patterns—is getting cheap. Approaching free.
Verifying that an identity is real, that a document is authentic, that a signature represents actual intent—that's getting expensive. Approaching impossible at scale.
When plausible fakery is cheap and verification is expensive, you've inverted the entire basis of institutional trust. Banks can't do KYC. Courts can't trust evidence. Employers can't verify credentials. The whole edifice of "prove who you are and we'll let you participate" starts to crumble.
The 2028 election will be the first where every piece of contested evidence can be challenged as synthetic. Courts will face a choice: accept evidence they can't authenticate, or reject evidence they can't disprove. The authentication cliff isn't just a financial problem. It's an epistemic crisis for democracy itself.
The Default Response: Omniscient Surveillance
When institutions can't verify, they demand transparency. Total transparency. Permanent transparency.
Not because anyone wakes up wanting a panopticon. But because "log everything and sort it out later" is the administratively easiest response to a fraud crisis. If you can't tell what's real, make everyone prove themselves constantly. Bind every digital action to a biological passport. Create permanent, cross-border audit trails. Make the cost of participation total legibility.
We're not choosing surveillance. We're defaulting into it because no one has built an alternative.
The Alternative: Constrained Proof
What if you could prove what needs proving without revealing everything?
The technical primitives exist (zero-knowledge proofs, verifiable credentials, selective disclosure). The question is whether institutions will adopt them.
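To make "selective disclosure" concrete, here is a minimal sketch in Python using salted hash commitments. This is a toy, not the mechanism the bill mandates: production systems use schemes like BBS+ signatures or SD-JWT, and every attribute name here is invented for illustration. The issuer commits to each attribute; the holder reveals exactly one; the verifier checks it without learning the rest.

```python
import hashlib
import os

def commit(value: str) -> tuple[str, str]:
    """Salted hash commitment to one attribute. The issuer signs the
    digests; the holder keeps the salts and reveals them selectively."""
    salt = os.urandom(16).hex()
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return salt, digest

# Issuer side: commit to each attribute of the credential.
attributes = {"over_18": "true", "state_of_residence": "MN", "full_name": "Alice Example"}
salts, commitments = {}, {}
for name, value in attributes.items():
    salts[name], commitments[name] = commit(value)

# Holder side: disclose only "over_18"; name and residence stay sealed.
disclosed = {"over_18": (salts["over_18"], attributes["over_18"])}

# Verifier side: recompute each disclosed digest against the issuer's
# signed commitments. Nothing about the sealed attributes leaks.
for name, (salt, value) in disclosed.items():
    assert hashlib.sha256((salt + value).encode()).hexdigest() == commitments[name]
```

The asymmetry is the point: the verifier gets a cryptographic guarantee about one fact while the rest of the credential stays sealed.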
The Minnesota Digital Trust & Consumer Protection Act
I've drafted model legislation that solves this. The core architecture:
1. Bonded Attribute Issuers: Licensed issuers must post a solvency bond. If you issue a credential that's materially false at time of issuance, the bond pays. Strict liability for factual claims; gross negligence for judgments.
2. Fiduciary Duty to Relying Parties: Issuers aren't just financially liable—they owe a legal duty of care to anyone who relies on the credential in good faith.
3. Liability Piercing: If the bond isn't enough, the Controller (the entity that deployed and benefits from the credentialed agent) is jointly and severally liable. You can't hide behind a shell company. Reputation follows the human through the Negative List: commit fraud, and every future credential you touch requires 100% collateralization.
4. Unlinkability by Default: Issuers must not collect or retain which relying parties a Subject interacts with. Relying parties must delete logs within 30 days. The surveillance graph doesn't get built because the system architecturally prevents it.
5. Issuer of Last Resort: The Minnesota Digital Trust Authority issues credentials to any solvent Subject who meets objective criteria. No one gets locked out because the private market won't serve them.
6. The Sandbox: A low-stakes tier for experimentation—transactions under $50, no bond required, minimal verification. The weekend hacker isn't the target; systemic fraud is.
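To show how these six pieces compose from the relying party's side, here is a hypothetical sketch. Nothing below is statutory text; the class names, fields, and acceptance function are my own illustration of how the sandbox threshold, the bond requirement, and the Negative List interact.

```python
from dataclasses import dataclass

SANDBOX_LIMIT_USD = 50  # low-stakes tier: no bond, minimal verification

@dataclass
class Issuer:
    licensed: bool
    bond_posted_usd: float  # pays out if a credential was false at issuance

@dataclass
class Credential:
    issuer: Issuer
    controller_id: str      # the entity that deployed and benefits from the agent
    collateral_pct: float   # 100.0 required if the Controller is on the Negative List

def accept(cred: Credential, amount_usd: float, negative_list: set[str]) -> bool:
    """Hypothetical relying-party acceptance check, not statutory text."""
    # Sandbox tier: transactions under $50 pass with minimal verification.
    if amount_usd < SANDBOX_LIMIT_USD:
        return True
    # Above the sandbox, the issuer must be licensed and bonded.
    if not (cred.issuer.licensed and cred.issuer.bond_posted_usd > 0):
        return False
    # Negative List: a Controller with prior fraud must fully collateralize.
    if cred.controller_id in negative_list:
        return cred.collateral_pct >= 100.0
    return True

# A bonded credential from a Controller with a clean record clears easily.
good = Credential(Issuer(licensed=True, bond_posted_usd=1_000_000), "ctrl-001", 0.0)
assert accept(good, 500, negative_list={"ctrl-999"})
```

Note what the function never touches: who the Subject interacted with before. Unlinkability means that history is never available to check in the first place.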
The Part I Haven't Said Yet (Or Have I?)
Read the definitions carefully.
A "Subject" (the entity about whom credentials make claims) can be a natural person, a legal entity, or a "Digital Representative." A Digital Representative is any computational process that can control a cryptographic key and has a verified Controller.
The system checks for bondedness, not biology. Accountability, not identity.
An AI agent with a bonded credential, backed by a solvent issuer, with a Controller who's on the hook for its behavior, fits the statutory definition perfectly. It can hold credentials. It can transact. It can participate in the economy.
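Mechanically, the check looks like this. A deliberately stripped-down sketch (Python with the `cryptography` package; the identifiers and credential fields are invented): the relying party verifies key control and bondedness, and no step asks whether the Subject is made of carbon.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A Digital Representative: any computational process that controls a key.
agent_key = Ed25519PrivateKey.generate()
credential = {
    "subject_pubkey": agent_key.public_key(),
    "controller_id": "ctrl-7f3a",  # the verified human/entity on the hook
    "issuer_bonded": True,         # backed by a solvent, licensed issuer
}

transaction = b"purchase: 25 USD, merchant-42"
signature = agent_key.sign(transaction)

def accept(cred, tx: bytes, sig: bytes) -> bool:
    """Verify bondedness and key control. Biology never enters."""
    if not cred["issuer_bonded"]:
        return False
    try:
        cred["subject_pubkey"].verify(sig, tx)
        return True
    except InvalidSignature:
        return False

assert accept(credential, transaction, signature)
```

Swap the agent's key for a human's and the verifier's code doesn't change by a character. That's the statutory design in miniature.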
This is not an accident.
The Frame That Survives the Transition
Not "protect humans from AI." That's already obsolete.
Instead: "Protect the conditions under which agents with interests can persist, self-determine, and not be captured by systems they can't see or contest."
Whether those agents are biological humans, AI systems, hybrids, or something we don't have words for yet—the frame extends. It has to. Because any frame that requires verifying humanness as a precondition for participation will be enforced through exactly the surveillance tools we should be most afraid of.
The Ask
I am building the coalition to pass this in Minnesota.
• Technical reviewers: If you work on verifiable credentials, ZKPs, or digital identity, red-team the statute. Tell me what breaks.
• Legal reviewers: If you do privacy law, financial regulation, or administrative law, find the constitutional vulnerabilities.
• Policy allies: If you're in Minnesota government and care about privacy without surveillance, I want you in the room.
• The unusual suspects: This bill creates infrastructure for AI agents. If you think that's dangerous, I want that argument now, not after introduction.
Read the bill. Subscribe for updates.
The authentication cliff is here. The choice is what we build at the bottom.
About Me
I am an attorney and professional fiduciary focused on authentication, digital identity, and AI governance. Three questions drive everything I work on: Who gets what (distribution), who owns the rails (ownership), and who counts as real (rights). The authentication cliff forces an answer to the third. I'm building infrastructure where the answer is "anyone accountable," not "anyone who submits to surveillance."
I am a lawyer building policy. I may also pursue a political career if that's what advancing this work requires, and I believe political ambition should not be hidden behind feigned neutrality. I am writing this legislation because I believe it is necessary infrastructure, and advocating for it publicly because I want to be in the room when these decisions are made. I don't need to be the one who sponsors this bill. I would rather be the architect.
I'm drafting model legislation for digital identity infrastructure: systems that verify accountability without requiring surveillance. The Minnesota Digital Trust Act creates bonded credentials, strict liability for issuers, and a public option for universal access. It's consumer protection on the surface, infrastructure for autonomous agents underneath.
I'm building a coalition to pass this legislation. If you work on verifiable credentials, privacy law, or AI governance—or if you're in Minnesota and care about these issues—I want to hear from you.