D
High Risk

ChatGPT

Chatbot

by OpenAI · chat.openai.com

"Knows more about you than your therapist. Reports to a company that wants to be God."

Published March 15, 2026

Overview

ChatGPT is the product that kicked off the generative AI gold rush. Launched in November 2022, it went from zero to 100 million users faster than any consumer product in history. By 2026, OpenAI claims over 400 million weekly active users. That’s 400 million people feeding their thoughts, questions, fears, business plans, love letters, and medical symptoms into a system owned by a company that started as a nonprofit, pivoted to a “capped-profit,” then dropped the pretense entirely and went full corporate.

The product itself is genuinely useful. It can draft emails, debug code, explain complex topics, and help you think through problems. That’s exactly what makes it dangerous. The better it works, the more you tell it.

What It Knows About You

Every conversation you have with ChatGPT is stored by default. OpenAI’s privacy policy gives them broad rights to use your inputs for model training unless you explicitly opt out — and even then, your data is retained for 30 days for “abuse monitoring.”

Here’s what ChatGPT can infer from a typical user’s history: your job, your skill level, your native language, your health concerns, your relationship problems, your political leanings, your financial situation. You wouldn’t hand that dossier to a stranger on the street. But you’ll type it into a chatbox because the interface feels private.

OpenAI has suffered data breaches. In March 2023, a bug exposed other users’ chat histories and payment information. In 2024, internal Slack messages were compromised. The company that wants to build AGI — artificial general intelligence, a system smarter than any human — cannot reliably secure a Slack workspace.

Italy banned ChatGPT temporarily. The FTC investigated. Multiple lawsuits are pending over training data. None of this has slowed adoption.

The Real Risks

Privacy is the headline risk. OpenAI trains on your conversations unless you opt out, and the opt-out is buried in settings most users never visit. Enterprise customers get better data protections, but individual users are the product.

Job displacement is accelerating. ChatGPT directly threatens copywriters, customer service agents, junior developers, tutors, translators, and paralegals. McKinsey estimates generative AI could automate tasks equivalent to 11.8 million jobs in the US alone. And the displacement is not just a projection: freelance writing rates on platforms like Upwork collapsed 30-40% within a year of ChatGPT's launch.

The bias problem is real but improving. ChatGPT exhibits measurable biases in its outputs — political, cultural, and demographic. OpenAI has invested heavily in alignment, but the fundamental issue remains: the model reflects the internet it was trained on, and the internet is not a fair place.

Autonomy erosion is subtle. The more you use ChatGPT to think, the less you practice thinking. Students who use it for homework learn less. Writers who use it for drafts lose their voice. Programmers who use it for every function forget how to code without it. This isn't speculation; early research from Stanford and MIT documents the pattern.

Alternatives

  • Claude (Anthropic): Similar capability with stronger privacy defaults and a focus on safety research. Still a large AI company with its own risks.
  • Perplexity: Better for research tasks where you want cited sources rather than generated text.
  • Local models (Llama, Mistral via Ollama): Run entirely on your hardware. No data leaves your machine. Capability is lower but improving fast.
  • Just… thinking: For many tasks people use ChatGPT for, a few minutes of focused thought works better.

Our Verdict

ChatGPT is the most capable consumer AI product on the market, and that’s precisely why it earns a D. The product is good enough to become a dependency, run by a company that has repeatedly prioritized growth over safety, profit over principle, and speed over transparency. Use it if you must. But understand what you’re trading: your data, your cognitive independence, and your trust in a company that changes its charter whenever the old one becomes inconvenient.