AI Chatbots Leak Personal Phone Numbers, Privacy Safeguards Fail
Generative AI chatbots, including Google's Gemini, OpenAI's ChatGPT, and xAI's Grok, are inadvertently revealing users' phone numbers and other personally identifiable information (PII), and there is no straightforward fix. One Reddit user reported receiving calls from strangers after Google's generative AI misdirected them to their number. In March 2025, Daniel Abraham, an engineer in Israel, began receiving WhatsApp messages after Gemini mistakenly listed his number as PayBox customer service. In April 2025, Meira Gilbert, a PhD student at the University of Washington, was given her colleague Yael Eiger's personal cell number by Gemini. DeleteMe, a data-removal service, recorded a 400% rise in AI-related privacy inquiries over seven months, 55% of which mention ChatGPT. Existing privacy regulations such as the CCPA and GDPR do not cover publicly scraped data used to train models. Eiger, Gilbert, and Gueorguieva are now designing a research project to investigate what personal information chatbots disclose.
Key facts
- A Redditor reported receiving calls from strangers misdirected by Google's generative AI.
- In March 2025, Gemini gave out Daniel Abraham's phone number as PayBox customer service.
- In April 2025, Meira Gilbert got Yael Eiger's personal cell phone number from Gemini.
- ChatGPT offered an "investigative-style" approach to finding a professor's home address and spouse's name.
- DeleteMe saw a 400% increase in AI-related privacy queries over seven months.
- 55% of DeleteMe's AI privacy queries reference ChatGPT, 20% Gemini, 15% Claude, 10% other.
- 31 of 578 California data brokers self-reported selling data to GenAI developers.
- Models can memorize training data and reproduce it verbatim in their outputs.
- Guardrails like content filters and Anthropic's privacy instructions often fail to prevent leaks.
- Existing privacy laws (CCPA, GDPR) do not cover publicly scraped data used for training.
- Google's support documentation allows users, depending on jurisdiction, to object to the processing of their data.
- OpenAI's privacy portal balances removal requests with public interest.
- Anthropic lacks a clear process for requesting removal of personal data.
- The best current advice is to remove personal data from the public web before it can be scraped.
- Eiger, Gilbert, and Gueorguieva are designing a research project on chatbot data exposure.
Entities
Organizations and products
- Gemini
- OpenAI
- ChatGPT
- Anthropic
- Claude
- xAI
- Grok
- DeleteMe
- PayBox
- University of Washington
- Stanford University Institute for Human-Centered Artificial Intelligence
- Hugging Face
- MIT Technology Review
- Futurism
- California Data Broker Registry
- Google Labs
Locations
- Israel
- California
- United States