Massive Data Leak Exposes 300 Million Private AI Chat Messages
WASHINGTON, D.C. — A significant data breach involving the AI chat application “Chat & Ask AI” has exposed approximately 300 million private messages from more than 25 million users worldwide, raising serious concerns about privacy and data security in the rapidly expanding field of artificial intelligence. The breach, discovered by an independent security researcher known as Harry, was traced to a misconfiguration in the app’s backend, which was built on Google Firebase, a widely used mobile app development platform.
According to Harry’s findings, the misconfigured Firebase database allowed outsiders to authenticate and read the app’s entire message repository, including full chat histories, timestamps, user-assigned chatbot names, and details of how users customized and interacted with the AI models. The exposed data contained deeply personal and often distressing conversations, including questions about how to commit suicide painlessly, requests to write suicide notes, instructions for making methamphetamine, and methods for hacking other applications.
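The app’s actual configuration has not been published, but the pattern Harry describes matches a well-known Firebase pitfall: overly permissive database rules combined with Firebase’s public REST endpoint. The sketch below, using a hypothetical project URL, illustrates how that general class of misconfiguration can be probed; it is an illustration of the pattern, not the breached app’s real setup.

```python
# Minimal sketch of probing a misconfigured Firebase Realtime Database.
# The project URL below is hypothetical; the breached app's actual
# configuration has not been disclosed.
import requests

DB_URL = "https://example-project-default-rtdb.firebaseio.com"

# With an over-permissive rule such as {"rules": {".read": true}},
# Firebase's REST API returns any node as JSON with no credentials:
resp = requests.get(f"{DB_URL}/messages.json")
if resp.ok:
    records = resp.json() or {}
    print(f"Read {len(records)} records without authenticating")

# A rule like {".read": "auth != null"} is only marginally safer if the
# app also enables anonymous sign-in: anyone can mint a valid Firebase
# Auth token and attach it as ?auth=<id_token>, which would match the
# "authenticated access" the researcher describes.
```

The standard remediation Firebase documents for this class of exposure is scoping reads to per-user paths, for example a rule such as `".read": "$uid === auth.uid"`, so that even an authenticated user can only see their own records.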
The app, which boasts more than 50 million downloads across the Google Play Store and Apple App Store, has become a popular tool for users seeking AI-powered assistance. However, the exposure of sensitive information has alarmed cybersecurity experts and mental health advocates alike. The Cybersecurity and Infrastructure Security Agency notes that such data leaks can have far-reaching consequences, including identity theft, psychological harm, and exploitation by malicious actors.
“The scale of this leak is unprecedented in the AI chatbot space,” said a spokesperson from the Federal Trade Commission. “Users entrust these platforms with highly sensitive information, and it is imperative that companies implement robust security measures to protect that trust.”
To verify the extent of the exposure, Harry downloaded a sample of about one million messages belonging to roughly 60,000 users, then reported the vulnerability to the app’s developers, who have since taken steps to secure their systems. The full scope of potential damage remains uncertain.
Privacy advocates have expressed concern over the incident, emphasizing the need for stronger regulatory oversight of AI applications. The National Telecommunications and Information Administration has been actively working on frameworks to ensure data privacy and security in emerging technologies, including AI-driven platforms.
Meanwhile, mental health organizations warn that the exposure of conversations involving suicidal ideation could lead to further distress for affected individuals. “Confidentiality is critical when people seek help or information about sensitive topics,” said a representative from the National Institute of Mental Health. “Breaches like this can undermine public confidence in digital mental health tools.”
As the investigation continues, users of “Chat & Ask AI” are urged to monitor their accounts closely and be vigilant against potential phishing or identity theft attempts. This incident underscores the growing challenges in safeguarding personal data amid the rapid adoption of artificial intelligence technologies.
