The Right to Ask Why

I build systems that businesses trust to handle their customers, their payments, and their data. Three hundred thousand lines of x86-64 assembly across two project trees, all of it in production. I say this not to flex but to locate where I'm standing when I write the rest of this post.

What I'm going to say is: we are about to live under systems that decide things about us, and we won't be able to ask why.

This Is Already Happening

Right now, today, AI systems are deciding whether you get a loan, what your insurance pays out, how your job application gets ranked, what your kids see on their screens, and which neighborhoods get extra police attention. Tomorrow they will decide more. None of these decisions come with a real explanation. You get a number. You get a yes or no. You don't get an answer.

"The model said so" is not an answer. "The model produced output sequence X" is not an answer. These are descriptions of what happened. They are not explanations of why.

What's Actually Inside

An AI model is a giant pile of numbers — for a serious one, hundreds of billions of them. Nobody wrote the numbers. Nobody designed them. They were learned from training data, and almost nobody — including the people who built the model — knows what most of them actually do.

When the model decides you're a high-risk applicant, somewhere inside that pile of numbers, a pattern fired. We could find that pattern. We could read it. There is a young field of research called mechanistic interpretability that has shown, repeatedly, in real published work, that the internal mechanisms of these models are legible if someone bothers to look. Researchers have identified specific patterns that correspond to specific concepts — "the user is being flattering," "this code has a bug," "the speaker is uncertain." They've mapped out small circuits inside models that perform specific jobs. They've shown you can identify why a model said what it said and intervene on it, in real time, without retraining.
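To make that concrete, here is a toy sketch of one published technique, the difference-of-means "steering vector": collect hidden activations from prompts that do and don't express a concept, take the direction separating the two groups, and you can both read how strongly that concept is firing on a new input and subtract it out at inference time. Everything below is illustrative; the activations are random stand-ins for hidden states you would capture from a real model with a forward hook.

```python
# Minimal sketch of reading and intervening on a "feature direction".
# The activation matrices are random placeholders; in practice they
# would be hidden states recorded from a real model on contrastive
# prompts (e.g. ones that do / don't express "high-risk applicant").
import numpy as np

rng = np.random.default_rng(0)
d_model = 512  # hidden width of the hypothetical model

acts_with_concept = rng.normal(size=(100, d_model)) + 0.5
acts_without_concept = rng.normal(size=(100, d_model))

# 1. Read: the concept direction is the difference of mean activations.
direction = acts_with_concept.mean(axis=0) - acts_without_concept.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(h: np.ndarray) -> float:
    """How strongly the concept is firing in this activation."""
    return float(h @ direction)

# 2. Intervene: project the concept out of a live activation,
#    in real time, with no retraining.
def ablate_concept(h: np.ndarray) -> np.ndarray:
    return h - concept_score(h) * direction

h = acts_with_concept[0]
print(f"concept firing before: {concept_score(h):+.3f}")
print(f"concept firing after:  {concept_score(ablate_concept(h)):+.3f}")
```

The point is not this particular method. The point is that "why did that pattern fire" is a question with a computable answer.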

The science exists. The tools to make it accessible to anyone outside a research lab do not.

What an Answer Would Look Like

Imagine you get denied for a mortgage. Today, you get a one-line rejection. Maybe a phone number that goes to a call center where nobody can tell you why.

What you should get: "The model identified patterns in your file matching its 'high-risk applicant' feature. The strongest contributing inputs were these three lines from your credit report. Here is how the same model would have responded with a different value on line two. Here are the seven other applicants in the last month with similar patterns and what happened to them."

That's not science fiction. The methods to produce that explanation exist in research papers right now. What's missing is the engineering — somebody has to build the tools that turn these methods into something a normal person can use, demand, and rely on.
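As a sketch of the shape of that engineering, here is a toy version of the two pieces the mortgage explanation needs: per-input attributions and a counterfactual. Everything here is invented for illustration; the feature names, weights, and threshold stand in for a real lender's model, which would be larger and nonlinear, with attributions computed by interpretability methods like the ones above rather than read off coefficients.

```python
# Toy "adverse action" explainer: attribution plus counterfactual.
# All names and numbers are invented stand-ins for a real scorer.
import numpy as np

features = ["credit_utilization", "late_payments_24mo", "years_of_history"]
weights = np.array([2.0, 1.5, -0.8])  # stand-in learned coefficients
bias = -1.0
threshold = 0.5                        # deny if P(high risk) >= 0.5

def risk(x: np.ndarray) -> float:
    """P(high risk) under the toy logistic scorer."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

def decision(x: np.ndarray) -> str:
    return "deny" if risk(x) >= threshold else "approve"

# The file: 90% utilization, 2 late payments, 1.5 years of history.
applicant = np.array([0.9, 2.0, 1.5])

# 1. Attribution: which inputs pushed the score toward "deny"?
for name, c in sorted(zip(features, weights * applicant),
                      key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.2f}")

# 2. Counterfactual: "how would the same model have responded
#    with a different value on line two?"
alternative = applicant.copy()
alternative[1] = 0.0  # zero late payments instead of two
print(f"as filed:       risk={risk(applicant):.2f} -> {decision(applicant)}")
print(f"counterfactual: risk={risk(alternative):.2f} -> {decision(alternative)}")
```

Swap the toy scorer for a real model plus a real attribution method and you have the skeleton of an adverse action notice worth the name. That's the engineering nobody has shipped.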

This Is a Civil Rights Question

The European Union's GDPR Article 22 already says, in principle, that you have the right not to be subject to a decision based solely on automated processing when it produces legal or similarly significant effects on you. The United States has the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and their requirements for adverse action notices. The legal idea that you should know why a system decided against you is older than the systems themselves.

The technical reality is that those rights are mostly fictional. Banks send adverse action notices that say "credit score" and nothing else. Hiring platforms decline you with form emails. Insurance algorithms quote you a number with no breakdown. The right to ask why exists on paper. In practice it doesn't, because nobody has built the tooling that would make it enforceable.

The Real Stakes

An auditable system is one you can live with even when it goes wrong, because going wrong is debuggable. An unauditable system is one you have to trust — and "trust" is an emotional word for what happens when you have no other option.

The difference between living under a god and living under a system you can audit is the entire history of how human institutions get better. Courts publish opinions. Doctors write notes. Engineers leave logs. The premise of an accountable society is that decisions about you are readable, by someone, somewhere, eventually.

If the next layer of decision-making is AI systems whose insides are unreadable to the public, then we are choosing — quietly, by inattention — to walk away from that premise. We are accepting verdicts we cannot question, from systems we cannot inspect, run by people who will tell us with a straight face that they don't fully understand them either.

I don't accept that. Not because I'm worried about AI in some abstract science-fiction way. Because I build systems for a living and I know the difference between a system that can be inspected and a system that can't, and I know which one a free society needs.

The work in front of us — the people building software, the people writing law, the people demanding answers — is to make sure that when an AI tells you no, you can ask why, and get a real answer.

It's the difference between living under a god and living under a system you can audit. We get to choose which.
