Who's Watching You? AI, Data Brokers, and the Fight Over America's Surveillance Future


Something significant is happening at the intersection of artificial intelligence, national security, and your personal privacy — and most people haven't heard about it. Over the past several weeks, a very public battle between AI companies, the Pentagon, and Congress has pulled back the curtain on a surveillance infrastructure that touches every American. Understanding it matters, because the decisions being made right now will define the boundaries of government power for a generation.

Let's start with your phone.


The Data Broker Loophole


Every day, the apps on your smartphone are quietly collecting information — where you are, where you've been, what you search for, what you buy. That data gets sold to a sprawling industry of data brokers, whose primary business is packaging and reselling it to advertisers. That part most people know, or at least suspect.


What fewer people know is that the same data brokers also sell to the federal government — including to agencies like ICE, the FBI, and the Department of Defense — and they do it without a warrant.


Here's the legal sleight of hand: after the Edward Snowden revelations in 2013, Congress passed the USA Freedom Act in 2015, which barred federal agencies from collecting bulk data on Americans directly. What Congress didn't anticipate was that agencies would simply go around that restriction by purchasing the same data from commercial brokers instead.

Location records from brokers are typically not linked to a device owner's name. But as privacy researchers have documented, sophisticated tools exist that can use that data to track exactly where a device has been, where it spends every night, and where it goes during the workday. The anonymization, in practice, is often thin.
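To see why pseudonymous location data is so easy to re-identify, consider a minimal sketch of the idea privacy researchers describe. This is illustrative only, not any actual broker or government tool: given timestamped pings for a single advertising ID, the location where the device most often sits overnight is almost always its owner's home, and the most common workday location is almost always their workplace. The function name, grid precision, and hour cutoffs below are all assumptions chosen for clarity.

```python
# Illustrative sketch (hypothetical, not a real broker product): inferring
# a device owner's likely home and workplace from "anonymous" location pings.
from collections import Counter
from datetime import datetime

def likely_home_and_work(pings, precision=3):
    """pings: list of (iso_timestamp, lat, lon) tuples for one ad-ID.

    Rounding to 3 decimal places snaps coordinates to a ~100 m grid,
    so repeated visits to the same building land in the same cell.
    """
    night, day = Counter(), Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        cell = (round(lat, precision), round(lon, precision))
        if hour >= 22 or hour < 6:
            night[cell] += 1   # overnight dwell -> probable home
        elif 9 <= hour < 17:
            day[cell] += 1     # workday dwell -> probable workplace
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

# Synthetic pings for one pseudonymous device over five days.
pings = (
    [(f"2026-03-0{d}T23:15:00", 38.8977, -77.0362) for d in range(1, 6)] +
    [(f"2026-03-0{d}T10:30:00", 38.8896, -77.0353) for d in range(1, 6)]
)
home, work = likely_home_and_work(pings)
print("probable home:", home)
print("probable work:", work)
```

A few dozen lines of standard-library code recover the two locations that, cross-referenced against public property and business records, typically identify a person by name. That is the sense in which the anonymization is "thin."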


The legal status of this practice remains genuinely unresolved. Courts have not directly weighed in on whether the government purchasing bulk data from brokers violates the Fourth Amendment's protections against unreasonable search and seizure. Privacy advocates point to Carpenter v. United States, the 2018 Supreme Court ruling that held law enforcement needs a warrant to obtain a person's historic cell phone location data from carriers, and argue the same logic should apply here. The government, so far, has not been forced to test that theory.


As one legal expert put it: imagine if police told you, "We're going to search your house — we don't have a warrant, but we paid your landlord for a spare key." Most people would recognize that as a violation. The data broker loophole operates on the same principle, just at a scale most people can't fully picture.


AI Changes Everything


This would be a significant privacy story on its own. But add artificial intelligence to the equation, and the stakes climb dramatically.


Anthropic CEO Dario Amodei warned in a public statement last month that records the government can legally purchase from data brokers, when combined with modern AI tools, can be used to automatically assemble a comprehensive picture of any person's life — their movements, associations, habits, and relationships — at massive scale and with minimal human involvement. What once required teams of analysts working for weeks can now potentially be done in minutes.


That warning from Amodei wasn't abstract. It was the direct backdrop to a confrontation that has since ended up in federal court.


The Anthropic-Pentagon Standoff


For several months, Anthropic — the AI safety company behind the Claude AI model — had been negotiating a contract with the Pentagon worth up to $200 million. Claude had already become the first AI model deployed across classified military networks. The talks broke down at the end of February 2026 over two specific issues: Anthropic's refusal to allow its technology to be used for mass domestic surveillance of Americans, and its refusal to allow Claude to independently direct fully autonomous weapons systems.


Anthropic's position was that current AI models are simply not reliable enough for fully autonomous lethal decisions, and that mass surveillance of American citizens represents a fundamental violation of rights. The Pentagon's position was equally clear: private companies don't get to dictate the terms of how the military uses technology, and the government should be allowed to use AI for "any lawful purpose."


When negotiations collapsed, the response from Washington was swift and striking. President Trump ordered all federal agencies to immediately cease using Anthropic's products. Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" — a label that had previously been reserved exclusively for companies connected to foreign adversaries like China or Russia, never before applied to an American company. That designation effectively banned Pentagon contractors, including Amazon, Microsoft, and Palantir, from doing business with Anthropic.


Anthropic sued. The case is now being heard in federal court in San Francisco, where U.S. District Judge Rita Lin has already signaled deep skepticism about the government's actions. At a hearing this week, the judge described the Pentagon's moves as "troubling," noting they appeared less tailored to any genuine national security concern and more like an attempt to punish Anthropic for its public positions. "It looks like an attempt to cripple Anthropic," Lin said from the bench.


Meanwhile, court filings revealed an internal contradiction in the government's case: on March 4 — the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic — a senior Pentagon official emailed Anthropic CEO Dario Amodei to say the two sides were "very close" on the very issues cited as justification for the designation.


Enter OpenAI


Hours after the Trump administration announced it was cutting ties with Anthropic, OpenAI announced it had struck its own deal with the Pentagon to deploy its models in classified military environments. The timing was jarring, and the backlash from users was immediate — Anthropic's Claude shot to number one in Apple's App Store, overtaking ChatGPT, as many users made a visible statement by switching.


OpenAI CEO Sam Altman said his company shared the same fundamental "red lines" as Anthropic — no mass domestic surveillance, no fully autonomous weapons — but had managed to encode those principles in its contract in a way that the Pentagon accepted. He later admitted the deal had been "rushed" and that "the optics don't look good."


The company subsequently revised its contract language to make the limits more explicit, including a provision stating the AI system would not be used "for domestic surveillance of U.S. persons and nationals." It also said the NSA would be excluded from the agreement.

Critics, however, remain skeptical. Legal experts who examined the contract language pointed to words like "deliberate," "unconstrained," and "as consistent with these authorities" as classic legal hedges that give the government significant flexibility in interpretation. The full contract has not been publicly released, meaning the only real basis for evaluating the protections is the company's word.


One prominent critic, writing for the Electronic Frontier Foundation, argued bluntly that the contract's language would not, in practice, stop AI-powered surveillance — that the government has consistently interpreted surveillance authorities as broadly as possible, and vague contractual promises are no substitute for hard legal limits.


OpenAI's own head of national security partnerships further muddied the waters when she publicly stated the Pentagon has no legal authority to purchase commercially available data at scale — a claim that is, according to available reporting, factually incorrect.


Caitlin Kalinowski, who was leading robotics and consumer hardware at OpenAI, resigned shortly after the Pentagon deal was announced, citing ethical concerns about surveillance of Americans without judicial oversight and lethal autonomy without human authorization.


The Constitutional Minefield


To understand why the current moment matters so much legally, it helps to trace how we got here — and how each generation of surveillance law has tested the limits of the Constitution.


The Fourth Amendment to the U.S. Constitution is direct: the government cannot conduct searches or seizures of persons, houses, papers, or effects without a warrant supported by probable cause. For most of American history, this was understood to mean that law enforcement had to go before a judge, make a case, and get permission before prying into a person's private life.


That framework started bending significantly after September 11, 2001. Just six weeks after the attacks, Congress passed the USA PATRIOT Act, a sweeping revision of the nation's surveillance laws that critics argue has never rested on solid constitutional footing. Passed under enormous political pressure with almost no public debate or committee hearings, the Patriot Act vastly expanded government authority to spy on its own citizens while simultaneously weakening the oversight mechanisms designed to check that power.

Some of its most contested provisions:


Section 215 — nicknamed the "library provision" — allowed the FBI to compel the production of "any tangible thing," including business records, books, tax documents, and communications metadata, for terrorism investigations. Critically, the FBI did not have to show reasonable suspicion that the records were related to criminal activity. The government needed only to assert that the request was related to an ongoing terrorism or foreign intelligence investigation, and the judge had no authority to reject the application.


National Security Letters (NSLs) allowed the FBI to demand records from third parties — phone companies, internet providers, banks — without any judicial approval and with an automatic gag order prohibiting the recipient from ever disclosing the demand. Surveillance orders could be based in part on a person's First Amendment activities, such as the books they read, the websites they visit, or a letter to the editor they had written.


"Sneak and peek" warrants allowed the government to search a home or business without notifying the owner — sometimes for extended, indefinite periods.


Each of these provisions drew constitutional challenges. The ACLU argued that NSLs violated both the First and Fourth Amendments. A federal judge struck down sneak-and-peek provisions in 2007 as violations of the Fourth Amendment's protections against unreasonable search and seizure. Courts found that Section 215 did not, in fact, authorize the bulk collection of phone metadata — a ruling that came in 2015, only after Edward Snowden's leaked documents revealed the NSA had been doing exactly that for years.

That Snowden revelation led directly to the USA Freedom Act of 2015, which was supposed to end bulk collection. Privacy advocates argue that purchasing data from brokers circumvents both the Fourth Amendment and that 2015 law: the government found its workaround by simply buying commercially what it was barred from collecting directly.


The "Third-Party Doctrine" Problem


At the heart of the constitutional debate is a legal principle called the "third-party doctrine." Under decades of Supreme Court precedent, information you voluntarily share with a third party — your bank, your phone company, your internet provider — loses Fourth Amendment protection. The government can obtain it without a warrant simply by asking.

The digital age has strained this doctrine nearly to breaking. When you use an app, browse the web, or carry a smartphone, you are constantly sharing information with third parties — not through any meaningful act of choice, but simply by existing in modern society. The third-party doctrine, developed in a pre-internet era, was never designed to accommodate this reality.


The Supreme Court took a significant step in recognizing this problem in Carpenter v. United States (2018), ruling that law enforcement needs a warrant to obtain historic cell phone location data from carriers. The Court acknowledged that the sheer volume and intimacy of digital location data made the old third-party doctrine inadequate. Chief Justice John Roberts wrote that allowing the government to chronicle a person's past movements through the record of their cell phone signals would "give police access to a category of information otherwise unknowable."


Privacy advocates argue Carpenter should logically extend to data broker purchases: if the Fourth Amendment requires a warrant to demand a person's bulk location records from a carrier, buying the same records from a broker should be constitutionally indistinguishable.


But there is a counter-argument, and it is not a frivolous one. Some legal scholars argue that a government purchase of data is not a "search" at all under the Constitution, because the Fourth Amendment regulates coercive government action — not the government acting as a buyer in a marketplace. Courts have never directly addressed whether a purchase can constitute a "search," and long-standing precedent on "private searches" suggests it cannot. Under this theory, no warrant is required because no constitutional violation is occurring — the government is simply doing what any private company could do.

That is precisely the legal gray area that Congress has the power to resolve — and hasn't.


First Amendment Dimensions


The constitutional issues don't stop at the Fourth Amendment. The Patriot Act era also raised serious First Amendment concerns that remain relevant today.


When the government can track which websites you visit, which political organizations you donate to, which protests you attend, and who you communicate with — all without a warrant — the chilling effect on free speech and association is real and documented.


The Anthropic case has added a striking new dimension to this concern. Anthropic has argued that the Pentagon's decision to designate it a supply-chain risk was unconstitutional retaliation for the company's speech — its public statements about AI safety.


The ACLU has weighed in on the company's side, with its deputy director of the National Security Project stating that "AI-powered surveillance poses immense dangers to our democracy" and that "Anthropic's public advocacy for AI guardrails is laudable and protected by the First Amendment — not something the Pentagon should be punishing."

The federal judge overseeing the case appeared sympathetic, questioning whether the government was punishing a company for criticizing its contracting position — a First Amendment concern as much as a procedural one.


The Congressional Window


Against this backdrop, privacy advocates are watching a ticking clock on Capitol Hill.

Section 702 of the Foreign Intelligence Surveillance Act — a key surveillance law — is set to expire on April 20. The upcoming reauthorization debate represents what advocates describe as potentially the only legislative opportunity this year to close the data broker loophole. More than 130 civil society organizations have signed a letter urging Congress to include that reform in any reauthorization bill.


A bipartisan coalition — including Rep. Warren Davidson (R-Ohio), Sen. Mike Lee (R-Utah), Rep. Zoe Lofgren (D-Calif.), and Sen. Ron Wyden (D-Ore.) — has put forward legislation that would end the loophole alongside several other reforms. Davidson has been direct about the stakes: the government, he says, is "buying its way around the Fourth Amendment," and AI makes that far more dangerous than it has ever been.


The most direct proposed fix is the Fourth Amendment Is Not For Sale Act, which would prohibit law enforcement and intelligence agencies from purchasing sensitive information from data brokers — including geolocation data, communications records, and data obtained through unauthorized scraping. Agencies could still obtain the same information through a proper warrant, court order, or subpoena. The bill passed the House in 2024 but stalled in the Senate, and advocates are now pushing to attach it to the FISA reauthorization.

Separately, the SAFE Act, co-sponsored by Senators Mike Lee and Dick Durbin, would partially close the loophole as it applies to intelligence agencies, and would formally end Section 215 of the Patriot Act — which, though expired since 2020, remains on the books and could theoretically be revived.


The obstacle is significant. The White House and House Speaker Mike Johnson are both pushing for a clean reauthorization with no reforms attached, and some Democrats have indicated they may support that approach to prevent the law from lapsing entirely. A House vote has already been delayed to mid-April.


What This All Means


Taken together, these developments tell a coherent story about a moment of real consequence.


The federal government has built a growing infrastructure for purchasing data on Americans that sidesteps the warrant requirements the Constitution was designed to impose. That infrastructure is now being turbocharged by AI, which can process and analyze that data at a scale no human team could match. Two of the most powerful AI companies in the world are negotiating — with very different approaches and very different outcomes — over what limits should exist on that capability. And Congress has a narrow window to impose legal guardrails, with enormous political headwinds pushing against meaningful reform.


None of this is simple. There are legitimate national security interests at stake. The military does need effective tools. AI genuinely does offer capabilities that can save lives and protect the country. The debate isn't between surveillance hawks who don't care about rights and civil libertarians who don't care about security. Most people on all sides of this conversation acknowledge both values.


The harder question — and the one that isn't being answered clearly — is who gets to draw the line, and where. Is it private AI companies, negotiating contract terms behind closed doors? Is it Congress, passing laws that may or may not keep pace with the technology? Is it the courts, ruling case by case as disputes work their way through the system? Or is it the executive branch and the military, operating under a framework of classified authorities that most Americans will never see?


Right now, the answer is: all of the above, simultaneously, with no clear coordination and no clear rules.


The decisions made in the next few weeks — in a San Francisco courtroom, in committee rooms on Capitol Hill, and in the quiet negotiations between AI companies and government officials — will matter for a long time. It's worth paying attention.


Sources: NPR, Al Jazeera, CBS News, CNBC, TechCrunch, The Intercept, Electronic Frontier Foundation, Built In, Axios

© 2025 by The Book Wh0r3 Universe.