

Anthropic now requires passport and selfie verification for Claude users, a move not seen in other major AI chatbots. This change follows a surge of users switching from OpenAI due to privacy concerns over surveillance deals.
Anthropic quietly published identity verification requirements for Claude this week, asking certain users to hand over a government-issued photo ID and a live selfie, something its competitors don't require.
“We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures,” Anthropic said. “We only use your verification data to confirm who you are and not for any other purposes.”
Millions of users fled OpenAI for Anthropic in February after OpenAI signed a deal to deploy AI on Pentagon classified networks—a contract Anthropic turned down over concerns about mass surveillance and autonomous weapons. Daily signups broke records, and free users were up 60% since January, Anthropic said at the time. The privacy-conscious crowd had found its home.
That crowd, it seems, may now have some documents to prepare if it wants to keep using Claude. Reactions so far have been sharply negative, with users noting that this is a deliberate business decision, not a regulation or government mandate imposed on Anthropic as a service provider.
AI KYC is here.
New claude subscribers asked for gov ID & photo.
Not even a regulatory requirement - Anthropic just doing it because they want to.
But regulatory is coming
Next up will be laws:
No AI without gov-issued ID
All AI use tracked to individual - no private AI pic.twitter.com/nNzMdU21o6— RYAN SΞAN ADAMS - rsa.eth 🦄 (@RyanSAdams) April 15, 2026

Claude now requires government ID verification (via Persona) before subscription.
ChatGPT doesn't.
Gemini doesn't.
Anthropic just handed their competitors a gift. pic.twitter.com/dddISAtx8M
— Kai (@hqmank) April 15, 2026
According to the help center page, which went live on April 14, Anthropic selected Persona Identities as its verification partner—the same KYC infrastructure used across financial services—and requires a physical, undamaged passport, driver's license, or national identity card. Photocopies, mobile IDs, and student credentials don't count. A live selfie may also be required.
The policy isn't universal yet. Verification will trigger when accessing "certain capabilities," during "routine platform integrity checks," or as part of safety and compliance measures. Anthropic hasn't said publicly which features are gated, or what user behavior might prompt a check. The company did not immediately respond to Decrypt's request for additional details.
On data handling, Anthropic draws a careful line: your ID and selfie go to Persona's servers, not Anthropic's own systems. The company says it is the data controller setting the terms, and that Persona can use the information to verify identity and improve fraud detection. The data is encrypted in transit and at rest, excluded from model training, and won't be shared with third parties for marketing, something Anthropic has been careful to promise since its earliest commercial policies.
Careful promises, though, have a history of meeting careless infrastructure. An October 2025 breach at Discord exposed roughly 70,000 government IDs that users had submitted for age verification. Persona is a serious player in this space, but incidents like that one have shown repeatedly that no third-party custodian of government documents is immune.
Tighter identity controls also fit a pattern Anthropic has been building toward. In December, the company announced classifiers to detect users who self-identify as minors. Multiple adult users were suspended anyway, reporting that entire project histories were wiped while they tried to appeal the incorrect flags.
Accounts registered from regions Anthropic doesn't formally serve are also subject to bans, a detail that lands hardest on Chinese users accessing Claude through intermediaries, since a live selfie matched against a physical government document is hard to fake.