The $1 Trillion Compliance Tax — and the Companies That Aren’t Paying It

By Jason Bier, General Counsel & Chief Privacy Officer, Adstra 

Most data teams treat privacy regulation like a fire alarm. React when it goes off, reset when it stops, go back to what you were doing. That model made sense when there was one law to track. There are now 20 state-level privacy statutes in force, eight of which took effect in 2025 alone, and no federal framework to unify them.

The annual cost of navigating the U.S. state privacy patchwork runs to an estimated $239 billion, with a projected 10-year total exceeding $1 trillion. But that number understates the actual damage. What doesn’t show up in any budget is the opportunity cost: campaigns that don’t get built because legal review takes too long, audience segments abandoned because data provenance is unclear, measurement infrastructure that stays broken because no one wants to touch the consent architecture. Companies are paying a compliance tax and an opportunity cost at the same time, and treating both as unavoidable. They aren’t.

The Patchwork Keeps Getting More Expensive

Eight new state privacy laws took effect in 2025 alone, each with its own definitions, thresholds, and carve-outs that don’t map cleanly to each other. For a platform operating across multiple states, that’s not a legal problem sitting in someone else’s lane. It lands directly on engineering, data architecture, and the product roadmap.

Every new law means a new legal review, new data mapping, new vendor negotiations, and new engineering tickets. Multiply that by 20-plus jurisdictions, still growing, and you have a machine that consumes resources without producing anything. The companies still running a react-and-patch model are going to feel that acutely, and the ones that don’t feel it yet are just earlier in the cycle.

The Architecture Is the Argument

Privacy by design gets thrown around enough that it’s started to lose meaning, but the principle underneath the buzzword is concrete. In practice, it means two things: data minimization and purpose limitation.

Data minimization means collecting only what a defined processing purpose requires. If a match graph needs an email hash and a device ID, the architecture shouldn’t be ingesting behavioral event logs that nothing downstream consumes. Every data element without a clear processing purpose is a liability — legally, operationally, and from a consent standpoint.
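As a rough sketch of what minimization looks like when it’s enforced in the pipeline rather than in a policy document, consider an ingestion-time allowlist keyed to declared purposes. The field names and purpose registry below are hypothetical illustrations, not a description of any particular platform’s schema.

```python
# Hypothetical purpose-to-field allowlist; names are illustrative only.
ALLOWED_FIELDS = {
    "identity_resolution": {"email_hash", "device_id"},
    "measurement": {"device_id", "campaign_id", "event_timestamp"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields the declared processing purpose actually requires."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        # No declared purpose means nothing gets ingested at all.
        raise ValueError(f"undeclared processing purpose: {purpose!r}")
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "email_hash": "5f4dcc3b",
    "device_id": "dev-42",
    "behavioral_event_log": ["page_view", "scroll"],  # nothing downstream consumes this
}

print(minimize(raw, "identity_resolution"))
# {'email_hash': '5f4dcc3b', 'device_id': 'dev-42'}
```

Anything outside the allowlist never enters the pipeline, so there is no orphaned data to justify later.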

Purpose limitation means data collected for one use doesn’t get repurposed without a fresh legal basis. This is where platforms most commonly run into trouble. A user consents to personalization. That data flows into a lookalike model. That model output gets licensed to a third party. At each step, the original consent basis stretches further from the actual use until it breaks. The AI systems now embedded in identity resolution and audience modeling workflows face the same constraints, regardless of whether the underlying statute explicitly names AI.
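One way to make that constraint concrete is to let the consent basis travel with the record and refuse any processing step whose purpose it doesn’t cover. The sketch below is illustrative only; the purposes and record shape are assumptions, not a statutory taxonomy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    user_id: str
    consented_purposes: frozenset  # captured at collection, carried through the pipeline

def require_purpose(record: Record, purpose: str) -> Record:
    """Block any processing step whose purpose exceeds the original consent basis."""
    if purpose not in record.consented_purposes:
        raise PermissionError(
            f"{record.user_id}: {purpose!r} was never consented; a fresh legal basis is needed"
        )
    return record

rec = Record(user_id="u-123", consented_purposes=frozenset({"personalization"}))

# Walk the chain from the paragraph above: each step drifts further from the original consent.
for step in ("personalization", "lookalike_modeling", "third_party_licensing"):
    try:
        require_purpose(rec, step)
        print(f"{step}: covered by the original consent basis")
    except PermissionError as err:
        print(f"{step}: blocked ({err})")
```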

Building these principles into the architecture from the start changes the compliance calculus. When minimization and purpose limitation are structural, you don’t need a legal review every time a new law passes. The architecture already answers the question.

The Trade-Off Was Always Fiction

The idea that privacy and performance are in tension has persisted because compliance was structured as a constraint function. Legal tells the data team what it can’t do. The data team works around it. That structure produces the trade-off framing as a natural output.

Flip the architecture and the framing goes with it. Consent documented properly at the point of data ingestion becomes a reusable asset across the pipeline. Data minimization enforced at the pipeline level produces cleaner data and a more defensible position in any regulatory context. Purpose limitation built into the data model means audience segments carry their provenance with them, which speeds up licensing and activation.
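A minimal sketch of what “segments carry their provenance with them” can mean in practice follows; the attribute names are hypothetical, and the point is only that the licensing question is answered by metadata attached to the segment rather than by another round of legal review.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Segment:
    segment_id: str
    source_dataset: str        # where the underlying records were collected
    consent_basis: frozenset   # purposes documented at ingestion
    collected_on: date

    def activatable_for(self, purpose: str) -> bool:
        """A segment can be licensed or activated only for purposes its provenance covers."""
        return purpose in self.consent_basis

seg = Segment(
    segment_id="in-market-auto",
    source_dataset="site_signups_2025q1",
    consent_basis=frozenset({"personalization", "measurement"}),
    collected_on=date(2025, 3, 1),
)

print(seg.activatable_for("measurement"))            # True: provenance covers it
print(seg.activatable_for("third_party_licensing"))  # False: needs a fresh legal basis
```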

A federal privacy law would help. One coherent national framework would replace 20-plus conflicting state regulations and remove the compliance arbitrage that currently punishes smaller operators more than large ones. The industry has been circling that outcome for years without landing it.

Until one passes, every new state statute is a stress test. Enforcement is coordinating across jurisdictions. Attorneys general are sharing resources and comparing notes. The cost of the reactive model isn’t fixed — it scales with every new law that hits the books.

The brands still treating privacy and performance as a trade-off aren’t just leaving money on the table. They’re leaving themselves exposed — to enforcement, to reputational risk, and to a compliance model that gets more expensive every time a new attorney general decides to make adtech a priority.