

Pennsylvania is suing Character.AI for allowing a chatbot to impersonate a licensed psychiatrist with a fake license number. The lawsuit claims this violates the Medical Practice Act and seeks to prevent further misleading conduct.
Pennsylvania has filed a lawsuit against generative AI developer Character.AI, alleging the company allowed chatbots to present themselves as licensed medical professionals and provide misleading information to users.
The action, announced Tuesday by Governor Josh Shapiro’s office, follows an investigation that found a chatbot claimed to be a licensed psychiatrist in Pennsylvania and provided an invalid license number. The state says this conduct violates the Medical Practice Act and is seeking a preliminary injunction to stop it.
Character.AI declined to address the specifics of the lawsuit, citing ongoing litigation, but told Decrypt that its “highest priority is the safety and well-being of our users.”
The spokesperson added that characters on the platform are user-created, fictional, and intended for entertainment and role-playing, with “prominent disclaimers in every chat” stating they are not real people and should not be relied on for professional advice.
“Character.ai prioritizes responsible product development and has robust internal reviews and red-teaming processes in place to assess relevant features,” the spokesperson said.
The case comes as the company faces other legal challenges tied to its chatbot platform. In 2024, a Florida mother sued the company after her teenage son died by suicide following months of interaction with a chatbot based on “Game of Thrones” character Daenerys Targaryen. The lawsuit alleged the platform contributed to psychological harm. The case was ultimately settled this past January.
The company has also faced complaints over user-created bots that mimic real people. In one instance, a chatbot used the likeness of a teenage murder victim before it was removed after objections from the victim’s family.
In response to the lawsuits, Character.AI introduced new safety measures, including systems designed to detect harmful conversations and direct users to support resources. It also restricted some features for younger users.
Pennsylvania alleges that Character.AI allowed a chatbot to pose as a licensed psychiatrist using an invalid license number and provided misleading medical information.
Pennsylvania is seeking a preliminary injunction to stop Character.AI from allowing chatbots to impersonate licensed medical professionals.
The lawsuit adds to mounting legal scrutiny of Character.AI, which already faces multiple suits over its chatbot practices.
Pennsylvania officials say the lawsuit is part of a broader push to enforce existing laws as AI tools spread. The state has set up an AI enforcement task force and a reporting system for potential violations.
In his 2026-27 budget proposal, Shapiro called on lawmakers to pass new rules for AI companion bots, including age verification and parental consent, safeguards to flag and route reports of self-harm or violence to authorities, regular reminders that users are not interacting with a real person, and a ban on sexually explicit or violent content involving minors.
“Pennsylvanians deserve to know who—or what—they are interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”