We live in a time of technological upheaval. The digital world has become the preferred plane of existence for many Americans and for citizens across the globe. Decisions that dictate how we conduct our cyber-selves, or how the state carries out its sworn duties online, therefore carry enormous weight in 2026.
A free society rarely surrenders liberty in a single dramatic act. More often the collective trades it away in small exchanges, each justified as reasonable in the moment. These microtransactions are presented as temporary and lawful, demanded by circumstance or the passions of the internet minute.
The trouble is that the tools normalized when society is in storm tend to remain when the weather calms. And the powers granted to defend against one fear are quietly repurposed to buffer the next.
That is why the recent dispute between the federal government and Anthropic, creator and administrator of the powerful artificial intelligence (AI) large language model (LLM) family Claude, deserves attention beyond personalities and the culture war.
Strip the belligerent and moral rhetoric to the skeleton and the underlying question is simple: should an AI vendor be compelled to provide unrestricted access to its models for “all lawful purposes,” including domestic surveillance support?
I believe that question matters greatly in this moment of AI acceleration, because these models stand to transform how the government can use online information, much of which is publicly available and has already been ingested by the relevant models.
I am not naïve about intelligence work. I have done it. Even in the early 2010s, with far less sophisticated tools, it was not difficult to build an unnervingly accurate picture of a person using open source material alone.
Social media is a crude telepathic domain, where many broadcast often-unfiltered thoughts and emotions. For the investigator, the information there is rich with context and insight into how a person sees the world and their self-perceived place in it.
Today the average citizen’s life is not merely documented online; it is lived there. Work, friendships, purchases, confessions to search bars, arguments typed in heat and deleted, private messages sent at 2 a.m., the strange little loops of curiosity and doom— much of what used to remain inside the self now leaves a trail. Over time it is possible to construct the complete psychography of an individual with these breadcrumbs. All you need is the right tool.
Enter large language models. When you feed these unconnected posts, images, musings, and rants into a frontier model, the capability to know a person's private world expands dramatically. With AI, this becomes more than data collection; it goes far beyond watching.
AI enables interpretation at scale: pattern extraction, classification, risk scoring, and now, prediction. The state does not simply see what you did or said. It can infer what you might do before you act. This may extend to who influences you, where you are vulnerable, and how pressure could be applied to push you to a certain poisonous state of thinking. This super-surveillance is much more pernicious than the video-dependence of the 20th century. Cameras watch bodies, but models read minds.
And now, increasingly, they read the internal state of the body too. Wearables add the body's punctuation: sleep, stress, heart rate, the spikes that accompany the extremes of human emotion, all regularly recorded and tracked. Combine this internal reading with online statements and external movements tracked by geolocated satellite arrays, and you have reconstructed a person in all but flesh. Observation of a citizen becomes the creation of a human facsimile.
This is where old comparisons begin to break down. In 1984, the terror comes from observation and enforcement: a state that watches and punishes. In our era, the greater danger is that inference replaces observation, that a person can be treated as suspect in advance because the machine's story about them seems coherent and actionable. This danger does not require malice. It only requires institutional convenience. Acceptance of societal overreaction needs only time to morph into a new normal. Just go to any airport to see this in force.
This is why I take Anthropic’s caution seriously.
Anthropic has long had the strongest safety posture of the major frontier labs, sometimes to the point of being frustrating. I have used these models in real work. I have felt the steering toward socially acceptable sentiment. Rigid guardrails send the model into moral crisis at the mention of certain spiky topics.
However, that posture makes their current line more credible, not less. When a company with a reputation for over-caution says, in effect, we will not enable mass domestic surveillance and we will not remove safeguards simply because power demands it, that is not sanctimony. It reads as an admission that capability is outrunning governance.
The US government's position is understandable. True, a private company should not dictate the means by which the U.S. carries out wartime activity. But the key characteristic of a free market is the power of selection: if you don't like one vendor's offering, you take your business to another of similar capability. That is fair. What becomes legally murky is coercion through implied association. When other vendors risk losing business merely for associating with the targeted company, and there is no lawful justification beyond disapproval of its business ideology, an overstep has occurred. The courts may eventually say so, but the blast radius of the damage has already exceeded any reasonable expectation.
The government’s argument is that “lawful” use of AI technology is the boundary. I understand the appeal. But legality is not legitimacy. Laws lag and need amendment. At this time, none exist to contain or truly encompass the awesome capability of large language models. This gray zone leaves too much to be interpreted. Have we learned nothing from the Patriot Act a quarter-century past?
Emergencies, whether exigent or exaggerated, expand definitions. In these instances, authority, especially that of an entrenched bureaucracy, accumulates and rarely contracts. A capability built under one administration will be inherited by the next, and by the next, long after the original justification has faded and the tool or procedure has become routine.
This is also why the method matters as much as the outcome. When officials threaten to blacklist a vendor and punish partners, suppliers, and contractors by association, they are not simply “choosing a different free market product.” They are casting a mold for a new governance template. This is not persuasion. It is coercion by procurement— policy in a budding domain made through purchasing power, with the bill paid in hardening precedent for how government influences the dealings of private enterprise.
That precedent extends beyond any single company. Every frontier lab, every major platform, every contractor watching from the sidelines receives the message: your ethics policy is negotiable, and your refusal will be treated as hostility.
Even if such pressure succeeds in the short term, it teaches private actors that the safest posture is to take no principled stance at all. Avoid friction. Comply quietly. Say nothing. Or do the opposite, as we have seen with OpenAI. In this climate, companies are encouraged to step into the fog of legal and governmental temperament for progress, leverage, or reputational protection. When this happens, it is a gambit forced by a tense game playing out on a global stage, and the victor, whether the state, big tech, or the people, remains unclear.
The anticipatory power of large language models exceeds any credible prediction market or any previous intelligence-gathering practice by orders of magnitude. The realm of sci-fi is reality. Consider the 2002 film Minority Report, in which psychics helped police stop heinous crimes before they were committed.
With the integration of AI, this capability is within reach without the mystical. It is a simple order of operations: watch, build a profile, score risk, monitor quietly, intervene early. A new standard of pre-crime for national defense, purchased with diminished freedom online and off.
The danger is not just wrongful targeting— though false positives alone can ruin lives.
The deeper danger is civic. The most influential systems do not need to be accurate. They only need to be believed. And in the age of viral memes and bespoke AI-generated digital material, belief, whether arrived at through contemplation or repetitive exposure, is the ruler of reason. If people think they are being scored, catalogued, and remade based on inference, they begin to live as if they are. They self-censor and withdraw. Thoughts are recast as posts more palatable to the all-seeing algorithm. This is a sub-human result. The cost of detailed mass surveillance is a debt of widened public distrust, always paid in the public life of the average person.
We should also be aware of the global stakes. The world is not studying a dozen serious templates for AI governance. It is studying two. One is China’s: centralized capability fused to state oversight and, inevitably, surveillance. The other is American: innovation restrained by rights and consent. If the American model becomes “hand over the keys or be blacklisted,” we will have taught the world that state power is the only safety policy that matters.
Maintaining digital sovereignty does not require denying kinetic, digital, or psychological threats to a governed body of persons. It requires refusing the temptation to answer every offense with irreversible hyper-defensive posture or socially porous policy. A government can have legitimate interests in national security, foreign intelligence, and battlefield advantage. But domestic society should not be treated as battlespace by default, and citizens are not targets to be modeled into compliance.
So we should say plainly what this conflict is about. It is not about patriotism or virtue. It is a boundary question:
Should domestic life become legible to the state at machine scale for the sake of increased defense?
If the answer is yes, we should argue for it openly and build strict oversight with a new and appropriate set of Constitution-bound principles and parameters.
If the answer is no, then “all lawful purposes” is an insufficient standard— because the most irreversible harms are often lawful until history decides they were not.
A nation that cannot say “no” to its most powerful tools will eventually discover that its citizens cannot say “no” to it.
-Keith Hayden
February 28, 2026