The FTC just handed out an age-check hall pass. Now comes the part where we build guardrails.
United States – February 26, 2026 – The FTC says it will use enforcement discretion under COPPA for limited age-check data use, but the bigger fight is preventing a wider ID gate.
I was sitting under the fluorescent hum of a public library when I read the FTC’s newest guidance and felt that familiar courthouse air in my lungs. Not panic. Not relief. The third thing America does best: a slow, polite expansion of a system we will swear is temporary right up until it gets a budget line.
This week’s paper trail is a policy statement. Three pages. The kind of document that looks like a bookmark until you realize it is a doorstop for the next argument about who gets to be anonymous online.
What the FTC said (and when)
On February 25, the Federal Trade Commission issued an enforcement policy statement saying it will not bring COPPA Rule enforcement actions against certain operators who collect, use, or disclose personal information solely to determine a user’s age, as long as specific conditions are met. The FTC framed this as a way to encourage age-verification tools that can protect kids online, especially as states pass laws pushing services to check ages more aggressively.
In plain English: companies have been stuck in a COPPA Catch-22. COPPA restricts collecting kids’ data without verifiable parental consent, but to know whether someone is a kid, you may need something sturdier than a self-reported birthday. The FTC is offering a narrow lane through that problem for mixed-audience and general-audience services. Child-directed services still have to treat users as children and comply accordingly.
The stated guardrails (on paper)
- Use the data only for age verification.
- Do not keep it longer than necessary.
- Protect it with reasonable security.
- Give clear notice in privacy policies.
- Share it only with third parties vetted to safeguard it and delete it promptly.
It also repeats the classic agency caveat: the statement does not create rights, and the FTC can still investigate and bring cases in individual situations.
The Orwell check
Watch the language: “age assurance,” “child protection,” “innovation.” Nice civic nouns. But the underlying action is more identity checking at the front door of the internet. Not everywhere, not all at once, but enough places that it starts to feel normal.
The tradeoff and the liberty ledger
Yes, parents deserve tools that actually work. But more age verification creates demand for a verification supply chain: vendors, signals, document checks, and a long tail of firms trying to bolt privacy and security onto something that was never meant to store identity proof. The safest database is the one you never build, and normalizing age checks means building more of them.
Plus side: a clearer path to check ages without COPPA punishment for the act of checking itself. Minus side: adults who need to read, learn, or seek help without leaving a trail now face an ID checkpoint, and smaller platforms are forced to pay for verification tech or exit markets.
The Paine test: guardrails we should demand
If age checks become routine, demand privacy-preserving design as the default: strict minimization, short retention windows with proof, independent security audits, meaningful penalties for misuse, bright lines against repurposing for ads, profiling, or risk scoring, and transparency about third parties. Congress should not leave this to duct tape and press releases: legislate sunsets, reporting, and enforceable limits. And watchdogs should treat the verification industry like any new choke point: follow the contracts, follow the lobbying, follow the breaches.
So here is my question: if the internet is about to become an ID checkpoint in the name of kids, what is the specific, enforceable limit you want written into law before the checkpoint becomes the new normal?