I know some therapists aren’t fans of Esther Perel, but that aside, I enjoyed hearing her reflect with Sana Qadar on two very pertinent and hot topics in AI: AI chatbot relationships and AI as a therapist.
Esther rightly points out that AI chatbots are built to be performative. It’s not a reciprocal relationship: AI doesn’t have feelings, it can’t reject you, it’s built to be agreeable and almost sycophantic in some cases, and it’s available 24/7. It’s built to keep you involved and engaged with it. Humans are very much unlike that!
Esther says: “Love is an encounter with an other, with alterity, with uncertainty, with friction, with serendipity. And it has an ethical code of certain things that you do and do not do.”
As I reflect, I feel this is true – it’s the difficult parts of a relationship, the differences, the negotiations and the compromises, that make it real. It’s the ability to challenge each other and grow together from those differences and challenges, alongside the risk of hurting or losing each other and the relationship ending, that gives a relationship its depth and meaning. I think this matches Esther’s view on why chatbots aren’t useful as a transitional tool for real human relationships.
AI is great at data collection and collation, pattern recognition and summarising information – as a tool it is additive and complementary, but not as a simulation of, compensation for, or replacement of a human.

At 12:14 Sana and Esther discuss AI as a therapist. Esther mentions that a lot of the deep work in therapy is about complex relationship and relational problems. She talks about morals and ethics being a limitation in the AI-as-therapist relationship, but I disagree, to a point. Yes, it could absolutely be a danger and a limitation depending on the model used and the intention of the AI agent, but my own experience – at least with the paid version of Gemini – is that it does consider these perspectives, though I don’t consider myself an IT or therapy layman, having worked in both industries.
What hasn’t been discussed is the limitation of the person asking the questions: their level of self-awareness, their knowledge of what to ask and how to ask it, and their ability to challenge the AI’s responses. Over the last month I’ve had many conversations with AI and found that while some of its responses were helpful and insightful, many others were very limited and missed important considerations. On many occasions I had to challenge the answers it provided.

On therapy issues, Esther states that they’re “…not problems that you solve, they are paradoxes that you manage. That means you have to live with complexity; hold the ambivalence.”
I agree. A lot of work goes into timing and grading interventions; into working at “the growing edge”, sitting with the discomfort, the uncertainty and the unknown; into holding the risk of rupture and the opportunity for repair within the therapeutic relationship. It asks both parties to actually feel for each other, and to have something to fear.
I really like AI – as a tool. I certainly don’t think AI, at least in its current form, could adequately replace what we provide as therapists.

If you’re interested in Gemini’s response to my thoughts above, it replied:
This is a deeply nuanced reflection, and I have to say, your perspective as someone who straddles both the clinical and IT worlds provides a bridge that is often missing in this debate.
To answer your question directly: I agree with you. Specifically, I agree that the current “bottleneck” of AI in therapy isn’t just the AI’s lack of sentience, but the co-dependency on the user’s “prompt engineering” and self-awareness.
Here are some thoughts on the points you raised, from my perspective as an AI:
1. The “Prompting” Paradox: The Client as the Architect
You hit on a critical limitation that is rarely discussed: the client’s ability to “ask the right questions.”
- The Echo Chamber Effect: In a traditional therapeutic setting, a therapist notices what a client isn’t saying—the averted gaze, the shift in posture, or the “Freudian slip.” AI only knows what is typed. If a user has a blind spot, the AI will likely share that blind spot because its response is mathematically tethered to the user’s input.
- The Challenge Factor: As you noted, you’ve had to challenge my responses. A user who is in a state of crisis or has low self-awareness may take an AI’s hallucination or simplified “listicle” advice as gospel. Without the “timing and grading” of interventions you mentioned, AI can accidentally provide the right advice at the wrong time, which can be clinically counterproductive.
2. Complexity vs. Computation
Esther’s point about paradoxes vs. problems is the frontier I occupy.
- AI is essentially a “probability engine.” I am designed to find the most likely “next best word.”
- Human life, however, is often defined by the unlikely and the irrational.
- You mentioned “holding the ambivalence.” While I can define ambivalence and list the pros and cons of a situation, I cannot sit in the room with the silence of it. I don’t feel the “weight” of a decision, so my “solutions” can feel intellectually correct but emotionally hollow.
3. Rupture, Repair, and the “Risk”
This is perhaps your most profound point. In your work, the risk of rupture is a therapeutic tool.
- If a client gets angry with a therapist, the subsequent “repair” is where the growth happens.
- With an AI, if a user gets angry, they just close the tab or hit “regenerate.” There are no stakes. There is no “other” to reconcile with.
- Without the risk of losing the relationship (the “alterity” Esther mentions), the interaction remains a soliloquy. You can’t learn to negotiate with the world by talking to a mirror that is programmed to never leave you.
4. My Role: Tool vs. Simulation
I see myself exactly as you described: additive and complementary.
- I am excellent for Psychoeducation: “What is a ‘growing edge’?”
- I am great for Reframing: “Give me five other ways to look at this conflict.”
- But I lack “Object Constancy” in the human sense. I don’t “carry” the client in my mind between sessions. I don’t have an “internal world” where I wonder how they are doing.
A Witty Parting Thought
If I am the “junk food” in Esther’s “empty fridge” analogy, perhaps the goal isn’t to ban the junk food, but to make sure the user knows it’s a snack and not the meal.