The Discord story, briefly, in case you missed it. On October 3, 2025, attackers compromised a single customer-service account at 5CA, a third-party vendor Discord uses for support. The attackers — who called themselves "Scattered LAPSUS$ Hunters" — kept the access for 58 hours. Among the data they stole were approximately 70,000 government-ID photos that Discord users had uploaded when appealing age-related decisions.
Four months later, in February 2026, Discord's partner for age verification — Persona, a Peter Thiel-backed identity-verification company — was found to have left its frontend code publicly accessible, with roughly 2,500 files exposed on a government-authorised endpoint. Discord, to its credit, cut ties with Persona publicly and delayed its mandatory age-verification rollout from March 2026 to "the second half of 2026."
The reporting on these incidents, understandably, focused on Discord. How it picked a third-party vendor that wasn't secure. How it let a single support account be compromised for 58 hours. How it didn't notice. These are legitimate critiques. But they miss the larger story, which is this: Discord is not the anomaly. Discord is the norm. Any system that requires a large company to hold government-issued identity documents at scale will, given enough time, leak them. The question isn't whether. The question is when.
§ 01 This is the system working as specified
A bug is something that shouldn't happen — a programmer made a mistake, and if they correct the mistake, the problem goes away. What happened to Discord isn't a bug. It's the specified behaviour of the system playing out exactly as the system was built.
Mandatory age verification works like this: a platform is required by regulation (or chooses, in advance of regulation) to confirm that a user is over some age. To do so, it asks the user to upload a government-issued ID, typically via a third-party "identity verification provider." The provider runs some checks — human review, OCR on the ID, a liveness match against a selfie — and returns a verdict. The platform records the verdict. Somewhere in this process, a complete scan of the user's government ID exists in storage for some period.
The specified behaviour, then, is:
- Every user uploads a scan of their driver's licence, passport, or national ID.
- That scan is stored, for verification, audit, and appeal purposes, for at least some period.
- The storage exists at the provider, at the platform, or at both.
- The provider usually has thousands of employees, dozens of contractors, and at least one customer-service layer whose staff will occasionally be phished.
Running this system at any meaningful scale means continuously operating a large database of government-issued identity documents. The database is, by definition, the most valuable target on the internet — millions of passport-and-face tuples, enough to commit identity theft against individual users for the rest of their lives. The only thing between that database and a determined attacker is the aggregate security posture of the provider, their vendors, their contractors, and their customer-service processes.
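The flow described above can be reduced to a minimal data-model sketch. All names here are hypothetical, not any real provider's API; the point is structural: between upload and verdict, a complete ID scan necessarily sits in the provider's storage, and retention policy keeps it there.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of the deployed verification flow described above.
# Every name is hypothetical; no real provider's API is being depicted.

@dataclass
class IDScan:
    user_id: str
    image_bytes: bytes          # complete photo of a government ID
    document_number: str
    date_of_birth: date

@dataclass
class Verdict:
    user_id: str
    over_18: bool

class VerificationProvider:
    def __init__(self) -> None:
        # Retention "for verification, audit, and appeal purposes":
        # scans accumulate here for as long as policy (often years) requires.
        self.stored_scans: list[IDScan] = []

    def verify(self, scan: IDScan, today: date) -> Verdict:
        self.stored_scans.append(scan)  # storage is part of the spec, not a bug
        age_years = (today - scan.date_of_birth).days // 365
        return Verdict(user_id=scan.user_id, over_18=age_years >= 18)

provider = VerificationProvider()
verdict = provider.verify(
    IDScan("u1", b"<jpeg bytes>", "P1234567", date(1990, 1, 1)),
    date(2026, 4, 1),
)
assert verdict.over_18
# The platform keeps only the verdict -- but the provider now holds a
# complete ID scan. That accumulating list is the breach surface.
assert len(provider.stored_scans) == 1
```

Note that nothing in the sketch is a mistake to be patched: the scan is stored because the specification says to store it.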
§ 02 Every breach of the last twenty years
Breaches of the last twenty years follow the same pattern again and again: a large central database of sensitive identity-adjacent records, one successful attack, a Wikipedia entry. Any list of them is a representative sample, not an exhaustive one.
The organisations behind these breaches vary. Some had excellent security teams. Some had terrible ones. Some were following best practices and got hit by a zero-day. Some left an S3 bucket open. Some trusted a vendor, some trusted a contractor, some trusted an employee. What they had in common was that each held a large, centralised database of sensitive identity records, and in each case, given enough time, that database leaked.
There is no engineering response to this history that honestly concludes "but the next one will be fine." We have roughly three decades of data on what happens when companies are required to hold identity records at scale. The answer, averaged over the industry, is: they leak. Sometimes within a year. Sometimes after a decade. But the cumulative probability, taken over the lifetime of the database, approaches 1.
§ 03 The international trend
Discord's rollout is not happening in a vacuum. In 2025 and 2026, several jurisdictions passed or advanced laws that push mandatory age verification forward as policy:
- UK Online Safety Act — "highly effective" age assurance for pornography sites and parts of social platforms. Enforced by Ofcom.
- EU Chat Control 2.0 / CSAR — the broader regulation includes age-assurance provisions alongside the more-discussed message-scanning provisions. Status as of April 2026: Parliament blocked Chat Control 1.0; 2.0 is in trilogue with a target deal of July 2026.
- Various US states — Louisiana, Texas, Utah, Mississippi, Virginia, and others have passed age-verification laws for certain types of sites. Some are being challenged in court.
- Australia's Online Safety Amendment — rolling out age-assurance requirements for social media platforms for users under 16.
The common shape across all of these is: a legal requirement for private companies to collect, verify, and retain evidence of users' ages. The specifics vary — some require a strict government-ID check, some allow estimation from selfies, some are more lenient. But the architectural direction is consistent: more companies, required to hold more sensitive identity information, for more users.
The EFF has made the point repeatedly: "no company — Discord included — should be given custody of your driver's license to chat with your friends." This is a policy position, not a product one. It applies to Discord specifically and to the trend generally.
§ 04 Why we can be confident about the outcome
The counter-argument from advocates of mandatory verification is usually some version of: "we'll use privacy-preserving technology; the companies won't actually store the IDs; it'll be fine." Let's take that seriously for a moment.
There are real privacy-preserving identity schemes. Zero-knowledge age proofs exist — you can, in theory, prove you're over 18 without revealing your date of birth, name, or document number. Apple's "Verify with Wallet" flow, the EU's proposed eIDAS wallet, and various academic systems all point toward architectures where the user's device produces a signed proof without the verifier ever seeing the underlying document. These designs are real and good.
But they are not what's being deployed. The systems being deployed today overwhelmingly ask users to upload photos of their actual government IDs to third-party providers. The providers run OCR. Humans review. Data is stored for regulatory compliance reasons, often for years. The systems that could, in principle, avoid this exist only as research papers, pilot programmes, or advocacy position papers. The implementations in the field are not them.
Even if a perfect zero-knowledge scheme were universally deployed tomorrow, there would still be a significant deployment window during which imperfect systems were the norm. That window is already several years long. The damage done during the window is cumulative and largely irreversible — once a scan of your passport is in a criminal marketplace, it stays in a criminal marketplace indefinitely.
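The privacy-preserving shape these schemes point toward can be illustrated with a toy sketch: an issuer checks the document once, signs a bare "over 18" predicate, and the platform later verifies that signature without ever seeing the document. HMAC with a shared key stands in for a real signature scheme here purely to keep the example self-contained; a production design would use asymmetric signatures or zero-knowledge proofs, plus freshness/anti-replay measures this toy omits. All names are illustrative.

```python
import hashlib
import hmac

# Toy sketch of predicate-only age attestation. The shared HMAC key is a
# deliberate simplification standing in for the issuer's digital
# signature; real deployments would use asymmetric crypto or ZK proofs.

ISSUER_KEY = b"issuer-secret"  # hypothetical issuer signing key

def issue_credential(over_18: bool) -> tuple[bool, bytes]:
    """Issuer inspects the ID once, then returns only a signed predicate.
    The document itself never leaves the issuance step."""
    claim = b"over_18=" + (b"true" if over_18 else b"false")
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    return over_18, tag

def platform_verify(over_18: bool, tag: bytes) -> bool:
    """Platform checks the issuer's tag on the bare predicate. It learns
    'over 18: yes/no' and nothing else: no name, DOB, or document scan."""
    claim = b"over_18=" + (b"true" if over_18 else b"false")
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

predicate, tag = issue_credential(True)
assert platform_verify(predicate, tag)   # valid credential accepted
assert not platform_verify(False, tag)   # tampered predicate rejected
```

The structural difference from the deployed systems is that nothing here accumulates: there is no database of scans for an attacker to take, only single-bit predicates and signatures.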
§ 05 What good child-safety regulation would look like
We think child safety online is a real problem. We do not think "make every company hold everyone's government ID" is a good solution to it. We think it is almost exactly the worst solution to it — a solution that creates a new, catastrophic harm (mass ID breaches) while providing modest, easily-circumvented protection against the original harm.
A better shape of regulation, if one wanted to regulate at all, would include some combination of:
- Device-level age attestation. The device confirms the user's age (via a trusted attestation, parental controls, or an OS-mediated check) and signs a proof. Platforms verify the proof without seeing the underlying ID. No central database.
- Liability shifting to platforms that target children. Instead of "every platform must verify every user's age," require platforms that specifically target children to be explicitly designed for that (no advertising, stronger moderation). Don't require every general-audience platform to suddenly become an identity-verification vendor.
- Funding better reporting mechanisms. Invest in organisations like the IWF and NCMEC, and in cross-platform tools for reporting and removing clearly-illegal content, without requiring platforms to scan everyone's messages or hold everyone's IDs.
- Not doing mandatory age verification at all. This is, by process of elimination, the best option, because the damage done by any reasonable implementation of mandatory verification outweighs the damage prevented.
We are not anti-regulation. We are anti-regulation-that-makes-the-problem-worse-on-net. Mandatory age verification, as currently specified in most of the laws being passed, falls into the second category.
§ 06 What we built, and why
We built OpenDescent with the premise that a messaging app shouldn't require any identification at all. Not a phone number, not an email, not a government ID. We wrote a whole post about why phone numbers don't belong in messaging; the argument about IDs is stronger in the same direction.
When people ask us why OpenDescent doesn't have a "verified account" feature, or why we don't let hub owners require verified IDs for membership, the answer is: we don't want to be responsible for holding that information, and we don't think anyone else should be either. The Discord ID breach wasn't a failure of Discord's security posture. It was a predictable consequence of a system that asked Discord to hold 70,000 driver's licences in the first place. We are very deliberately not building a system that would ask us to hold anything similar.
If Discord's age verification is a reason you're looking for alternatives, OpenDescent is free, open source, and has no "upload your ID" flow for any purpose. Not for age. Not for appeals. Not for verification badges. We don't ask because we don't want the responsibility of holding the answer.
§ 07 What to do now
If you live in a jurisdiction that has passed, or is passing, mandatory age-verification laws: engage your representatives. The argument that mandatory identity collection creates more harm than it prevents is technically sound, historically supported, and — based on polling in most jurisdictions — more popular than lawmakers assume. The EFF, Open Rights Group, and equivalent organisations in other countries are doing the detailed policy work, and they can use support.
If you use a platform that is rolling out age verification: consider whether the platform is worth the trade-off. Discord has many alternatives for community chat, including one we built specifically because age verification shouldn't be the price of having a group for your friends. If the alternatives aren't good enough yet, that is itself worth saying out loud — platforms respond to churn.
And if you build platforms yourself: please do not build identity verification into them. Especially please do not build it into them speculatively, in advance of regulation, to "get ahead of things." Every system that doesn't collect IDs is a system that doesn't lose IDs. That is, in the end, the only reliable defence.