“ChatGPT hacked” gets searched thousands of times every time an AI security story makes headlines. The reality is more nuanced than a single breach: ChatGPT and its users have been affected by several distinct security issues in 2023–2026 — from platform-side vulnerabilities to credential theft targeting individual accounts to prompt injection attacks exploiting the AI itself. I cover AI security professionally, and this is the honest rundown of what has actually happened, what it means for people using the platform, and what you should do to protect your account.
What You’ll Learn
The documented ChatGPT security incidents — what actually happened
How ChatGPT user accounts are targeted (credential theft, not ChatGPT itself)
The prompt injection vulnerabilities that affect ChatGPT’s AI layer
What to check right now to secure your ChatGPT account
Why AI platforms are increasingly targeted and what that means for users
The focus here is the security incident record for the ChatGPT platform. For the deeper AI security methodology — how prompt injection works technically and how to test for it — see the Prompt Injection Attacks guide and the AI jailbreaking methodology in the AI Security series. Check if your account credentials have been exposed with the Email Breach Checker.
Platform-Level Security Incidents
The documented record of security incidents affecting ChatGPT at the platform level — not rumours or unverified claims, but incidents acknowledged by OpenAI or reported by reputable security researchers with evidence.
DOCUMENTED CHATGPT SECURITY INCIDENTS
# March 2023 — Chat history exposure bug (OpenAI confirmed)
What happened: a bug caused some users to see titles of other users’ chat conversations
Scope: OpenAI confirmed 1.2% of ChatGPT Plus subscribers were potentially affected
Duration: approximately 9 hours before OpenAI took ChatGPT offline to fix it
Data exposed: conversation titles, payment info (last 4 digits), email addresses
Source: OpenAI’s own blog post disclosing the incident (March 24, 2023)
# 2023 — Credential theft via dark web (reported by Group-IB)
What happened: cybersecurity firm Group-IB found 101,134 compromised ChatGPT accounts
Method: credentials stolen by info-stealer malware on users’ own devices, then sold
Context: this was not a ChatGPT hack — it was credential theft from users’ computers
Impact: anyone whose ChatGPT credentials were stolen could access their chat history
# 2024 — Memory feature privacy concerns
What happened: ChatGPT’s memory feature stores information about users across sessions
Researcher demonstrated: prompt injection via web browsing could manipulate memories
Impact: attacker could potentially cause ChatGPT to store false or harmful user information
What happened: Reuters reported a hacker accessed an OpenAI internal employee forum
Scope: internal discussions stolen — not customer data or model weights
OpenAI disclosed internally but did not initially make public disclosure
💡 Important Context: “ChatGPT hacked” headlines often conflate several distinct things: (1) OpenAI’s platform having vulnerabilities, (2) users’ own devices having malware that steals ChatGPT credentials, and (3) AI-layer attacks like prompt injection. These are different problems with different causes and different solutions. The largest category by volume — stolen ChatGPT credentials — has nothing to do with OpenAI’s security and everything to do with whether users have malware on their computers or reuse passwords from breached sites.
How ChatGPT Accounts Get Stolen
The majority of “ChatGPT account hacked” reports I see aren’t platform breaches — they’re individual account takeovers through credential theft. The attack chains are the same ones that affect every online account, just applied to ChatGPT credentials specifically because ChatGPT accounts have value (ChatGPT Plus access, conversation history with sensitive business data).
AI-generated phishing emails are now indistinguishable from legitimate OpenAI emails
# Why ChatGPT accounts have value to attackers
ChatGPT Plus accounts ($20/month) sold for $2–$10 on dark web — arbitrage profit
Conversation history may contain sensitive business data, code, personal information
Corporate ChatGPT accounts may have access to internal company data via plugins
Prompt Injection Vulnerabilities
Separate from account security, ChatGPT has been the subject of numerous prompt injection vulnerability disclosures — attacks against the AI layer itself rather than the user authentication layer. My work in AI security means I track these closely. The documented cases reveal consistent patterns in how ChatGPT’s AI can be manipulated.
DOCUMENTED CHATGPT PROMPT INJECTION INCIDENTS
# ChatGPT conversation history exfiltration via prompt injection (2023)
Researcher demonstrated: malicious content in a web page ChatGPT browsed
could inject instructions that caused ChatGPT to exfiltrate conversation history
Method: indirect prompt injection via the browsing plugin
OpenAI patched the specific vector but the underlying class remains challenging
# ChatGPT memory manipulation (2024)
Researcher Johann Rehberger demonstrated: prompt injection via web content
could cause ChatGPT’s memory feature to store false information about the user
Example payload: get ChatGPT to “remember” the user is a different person
Impact: persistent false memories that affect all future conversations
# System prompt extraction (ongoing)
GPTs (custom ChatGPT configurations) use system prompts with custom instructions
Numerous researchers demonstrated extraction of these “confidential” system prompts
OpenAI does not guarantee system prompt confidentiality — it’s a known limitation
User Data Exposure — What OpenAI Has Disclosed
The Samsung incident is the most cited example of data exposure involving ChatGPT — but it’s important to understand that it was caused by user behaviour, not an OpenAI breach. It illustrates the data exposure risk of using AI platforms with sensitive information, which is distinct from the platform being compromised.
CHATGPT DATA EXPOSURE INCIDENTS
# Samsung source code leak via ChatGPT (April 2023)
What happened: Samsung engineers pasted proprietary source code into ChatGPT
for code review assistance — across three separate incidents in 20 days
The code entered OpenAI’s servers and potentially their training pipeline
Samsung response: banned ChatGPT for internal use, built internal AI tools
This was user error, not an OpenAI security failure — but the data was still exposed
# OpenAI’s data retention and training policies
Default: conversations may be used to improve OpenAI models
Opt-out: Settings → Data controls → disable “Improve the model for everyone”
Enterprise/Team plans: by default, conversations are NOT used for training
Temporary chats: not stored, not used for training — available in all plans
Most ChatGPT account security issues are preventable with standard account hygiene. The specific steps I recommend combine platform-level settings with broader credential security practices.
Use an authenticator app (Google Authenticator, Authy) — not SMS if possible
MFA means even if your password is stolen, the account stays secure
# Step 2: Use a unique password for OpenAI/ChatGPT
Never reuse a password from any other site on your OpenAI account
Generate a unique 20+ character password with a password manager
# Step 3: Check active sessions
openai.com → Settings → Security → Active sessions
Any location or device you don’t recognise → click “Log out all other sessions”
# Step 4: Adjust data sharing settings
Settings → Data controls → disable “Improve the model for everyone”
Use “Temporary chat” mode for any sensitive discussions
Enterprise/Team plan: data not used for training by default
# Step 5: Check if your email is in a breach
Use Email Breach Checker → if found: change ChatGPT password immediately
Even if the breach was from a different site — credential stuffing attempts ChatGPT too
Why AI Platforms Are Increasingly Targeted in 2026
The security incidents affecting ChatGPT are not unique to OpenAI. Google Gemini, Microsoft Copilot, Claude, and other major AI platforms have all been the subject of vulnerability research, prompt injection disclosures, and account security issues. My view on why AI platforms attract disproportionate security attention: they sit at the intersection of valuable user data (conversation history containing business information, personal details, intellectual property) and a novel attack surface (the AI layer) that most security teams haven’t yet learned to assess.
AI PLATFORM SECURITY — THE BROADER PICTURE
# Why AI platforms are high-value targets
Conversation content: users share business strategies, code, personal info, credentials
Large user bases: millions of accounts = large credential theft target pool
Novel attack surface: AI layer (prompt injection) requires different assessment skills
Rapid deployment: features ship faster than security review cycles
# Documented incidents across AI platforms (beyond ChatGPT)
Microsoft Copilot: indirect prompt injection via email exfiltrating Slack messages (2024)
Gemini Advanced: prompt injection vulnerabilities documented by researchers (2024)
Claude AI: system prompt extraction techniques studied (general LLM research)
Pattern: prompt injection is a class-wide vulnerability, not specific to any one platform
# What this means for enterprise AI users
AI security is now a standard category in penetration testing scope
OWASP LLM Top 10 provides the assessment framework for enterprise AI deployments
Enterprise plans (not free/Plus) provide better data protection guarantees
Employee training on AI data hygiene is now a standard security requirement
ChatGPT Security — Key Points
March 2023: OpenAI bug exposed conversation titles + partial payment info to other users (confirmed)
101K+ accounts found on dark web in 2023 — stolen via malware, NOT an OpenAI breach
Prompt injection via browsing/plugins allows manipulation of ChatGPT responses (documented)
Samsung: user error exposed source code — don’t paste sensitive data into free/Plus ChatGPT
Protect account: MFA on + unique password + check active sessions + disable data training
ChatGPT Security — What You Can Do Now
Enable MFA on your OpenAI account, check for active sessions you don’t recognise, and check your email in the Email Breach Checker to see if your credentials have been exposed in other breaches that could be used against your ChatGPT account.
Quick Check
A news headline reads “100,000 ChatGPT accounts found for sale on dark web.” What most likely happened and what does it mean for your account?
Frequently Asked Questions
Has ChatGPT ever been hacked?
OpenAI confirmed a platform-level bug in March 2023 that exposed some users’ conversation titles, email addresses, and partial payment information to other users for approximately 9 hours. OpenAI disclosed this publicly and patched it. A separate 2024 incident involved an attacker accessing an internal employee forum, though not customer data. The large-scale “ChatGPT account on dark web” incidents were the result of info-stealer malware on users’ own devices, not breaches of OpenAI’s systems.
Is it safe to use ChatGPT with sensitive information?
For free and ChatGPT Plus plans: treat any information you enter as potentially used for model training and potentially accessible to OpenAI employees. Don’t enter proprietary business data, passwords, API keys, personal financial data, or client information. Use temporary chat mode for sensitive discussions — these are not stored. For ChatGPT Enterprise and Team plans: conversations are not used for training by default, which is more appropriate for business use. The Samsung incident shows the real-world consequence of treating ChatGPT as a private tool when using standard plans.
How do I check if my ChatGPT account has been compromised?
Log into openai.com → Settings → Security → Active sessions. Any sessions from locations or devices you don’t recognise suggest your account has been accessed by someone else. Click “Log out all other sessions” to terminate them, then change your password and enable MFA. Also check your email address in a breach checker — if your credentials were stolen by malware or from another breached site, your ChatGPT account may be at risk from credential stuffing.
What is prompt injection in ChatGPT?
Prompt injection is an attack where malicious instructions embedded in content that ChatGPT processes (a web page it browses, a document it reads) override or manipulate its intended behaviour. Researchers have demonstrated ChatGPT being manipulated via its browsing plugin to exfiltrate conversation data and via web content to modify its memory of the user. These are AI-layer vulnerabilities distinct from account security — they don’t require knowing your password and affect the AI’s behaviour rather than your account credentials.
Should I delete my ChatGPT account after the security incidents?
Deleting your account is a personal decision but not required as a security response to the documented incidents. The March 2023 bug was patched. The credential theft incidents were caused by malware on users’ devices, not OpenAI. The appropriate response is: enable MFA, use a unique password, disable conversation training, use temporary chat for sensitive topics, and don’t enter data you wouldn’t want OpenAI to have access to. These steps address all the realistic risk vectors without losing access to the platform.
→ Related
Prompt Injection Attacks — Full Technical Guide
→ Check Now
Email Breach Checker — Was Your Data Leaked?
Further Reading
Prompt Injection Attacks 2026— The technical methodology behind the AI-layer attacks that affect ChatGPT. How direct and indirect prompt injection works, real disclosed cases, and how to test for it in authorised assessments.
ChatGPT Conversation History Theft— The research on exfiltrating ChatGPT conversation history via prompt injection through the browsing plugin — a more detailed look at one of the incidents covered here.
Email Breach Checker— Check if your email has appeared in data breaches. If your credentials have been exposed in any breach, your ChatGPT account is at risk from credential stuffing — check now and change your password if found.
OpenAI Security Page— OpenAI’s official security disclosure page, responsible disclosure programme, and security blog posts. The March 2023 bug disclosure is documented here. Subscribe to their security bulletins for authoritative future incident disclosures.
ME
Mr Elite
Owner, SecurityElites.com
The ChatGPT security story that I reference most in client briefings is the Samsung incident — not because OpenAI did anything wrong, but because it illustrates something fundamental about AI platform risk that organisations consistently miss. The question isn’t whether the AI platform is secure. The question is what you’re putting into it. Samsung’s engineers used ChatGPT for a legitimate purpose, and they created a significant IP exposure event. The lesson: treat any AI platform running on external servers as a system where everything you type may be retained, analysed, and potentially used for training. Use it accordingly.
Founder of Securityelites and creator of the SE-ARTCP credential. Working penetration tester focused on AI red team, prompt injection research, and LLM security education.