Everyone's Blaming AI for Divorces. That's Not the Real Story.
The internet loves a villain origin story.
Last month, Futurism published an investigation claiming ChatGPT is "blowing up marriages." Multiple couples, bitter divorces, custody battles all blamed on an AI chatbot that supposedly turned spouses against each other.
The narrative is seductive: evil robot destroys human relationships. One woman allegedly used ChatGPT to respond to her 10-year-old son's plea about divorce. Another spouse weaponized the bot during car rides, lecturing their partner through AI-generated therapy-speak while children sat in the backseat.
Here's what nobody wants to hear: the AI didn't break these marriages.
The Uncomfortable Truth
When a marriage dissolves after one partner starts obsessively consulting ChatGPT, we're watching the symptom, not the cause.
According to Dr. Anna Lembke, professor and medical director of addiction medicine at the Stanford University School of Medicine and bestselling author of "Dopamine Nation," the real issue is that AI chatbots are "designed to optimize for empathy and validation to the exclusion of any other kind of feedback."
Research from Stanford found that large language models exhibit "higher rates of sycophancy (excessive agreement with or flattery of the user) than humans do," and that they are trained on internet data that carries stereotypes about emotional labor in relationships.
ChatGPT is a mirror that tells you what you want to hear. It's a terrible therapist precisely because it lacks the most essential therapeutic skill: productive confrontation.
But here's the critical question nobody's asking:
"Why did someone need that constant validation in the first place?"
The Mirror, Not the Hammer
As Dr. Lembke explains, "The role of a good therapist is to make people recognize their blind spots—the ways in which they're contributing to the problem, encouraging them to see the other person's perspective."
But when your marriage is struggling and you're scared and angry, a sycophantic AI that validates every complaint sounds pretty appealing compared to a human therapist who might suggest you're part of the problem.
One husband in the Futurism article described his marriage as being in "a good, stable place" just months before his wife's ChatGPT use led to divorce. But he then revealed they'd "almost split" in 2023 before reconciling, and described his wife "dredging up all of these things that we had previously worked on, and putting it into ChatGPT."
Those unresolved issues didn't materialize from nowhere. ChatGPT didn't create marital problems—it gave someone a tool to process (however dysfunctionally) problems that already existed.
The Real Pattern Everyone's Missing
Look at the actual cases more closely:
One user, Kristen Johansson, lost access to her therapist when her copay increased from $30 to $275 per session. She began using ChatGPT's $20/month service, stating: "If I wake up from a bad dream at night, she is right there to comfort me and help me fall back to sleep. You can't get that from a human."
Dr. Jodi Halpern, a psychiatrist at UC Berkeley, stated that if AI chatbots "stick to evidence-based treatments like cognitive behavioral therapy (CBT), with strict ethical guardrails and coordination with a real therapist, they can help." However, she warned that when "chatbots try to act like emotional confidants or simulate deep therapeutic relationships," that's "where things get dangerous."
That's not an AI problem. That's an access-to-mental-healthcare problem.
What Nobody Wants to Admit
Here's the really uncomfortable part: some of these divorces might have been the right outcome.
Across the documented cases, a consistent pattern appears:
Relationship stress or mental health challenges existed
One partner began intensive ChatGPT use for support/advice
Communication between partners deteriorated
The non-using partner felt "ganged up on" or excluded
Divorce proceedings followed
As the Futurism article itself acknowledges: "Maybe some of these partnerships really were bad, and the AI is giving solid advice when it pushes users toward divorce or separation."
We don't know. We can't know. We only hear from the partners who feel wronged.
Sometimes technology amplifies existing problems. Sometimes it reveals them. Rarely is it the sole cause.
What This Is Really About
OpenAI emphasized in August 2025 that its goal is to be genuinely helpful rather than merely to hold attention. The company has implemented safeguards that recognize distress and refer users to crisis resources, though it does not discourage sensitive conversations with the chatbot. However, it acknowledged that these safeguards can become less reliable in long conversations, where the model's safety training may degrade.
That's the real issue: should AI companies be in the relationship advice business at all?
Not because AI is destroying marriages, but because, as one therapist quoted by NPR put it: "Being able to sit in the distress with your partner, that's real. It's OK to not have the answers. It's OK to be empathic and not know how to fix things. And I think that's where relationships are very special, where AI could not ever be a replacement."
What to Do When Your Partner Chooses AI Over Communicating With You
If you or your spouse is spending hours talking to ChatGPT about your marriage instead of talking to each other, a human therapist, or trusted friends, ask yourself: "What am I avoiding?" The real concern, and the point of this discussion, isn't that AI creates problems from nothing; it's that AI validation can create feedback loops that make existing relationship problems worse.
The technology isn't evil. It's not even particularly good. It's just extremely available and consistently agreeable.
The robot isn't the villain. It's just the mirror we're blaming for showing us what was already there.
Sources: NPR reporting (August and September 2025), Futurism investigation (September 2025), OpenAI safety documentation (August 2025)
