AI – the therapist’s dilemma

Chatbots, mental health, and the limits of “AI therapy”

In the space of a few short years, artificial intelligence has quietly become one of the largest informal providers of “therapy‑like” conversations in the world, often without any of the relational, ethical or clinical scaffolding that defines psychotherapy. For those of us who work from a relational and intersubjective position, this produces a particular tension: clients are turning to something that can simulate care in highly convincing ways, but that cannot feel, cannot think subjectively, and cannot be changed by encounter in the way a human therapist inevitably is.

This is not simply a matter of professional turf. It goes to the heart of what we imagine therapy to be and what we are willing to accept as “good enough” when it comes to being met by another mind.

How widespread is AI “therapy”?

If you listen to colleagues’ anxieties in supervision groups or simply to what clients bring into the room, it becomes clear that AI chatbots are already woven into the mental health landscape – regardless of what clinicians may think about them. People are using mental health chatbots not only for information, but quite explicitly as substitutes for a coach, counsellor or therapist; “I asked ChatGPT about my relationship” is becoming a familiar statement in the consulting room.

  • An Australian web‑based survey in 2024 – already ancient history given how quickly this space evolves – found that about 28% of community members had used AI tools in the previous six months. Nearly half of those users reported using tools like ChatGPT as a “personal coach or therapist”, mainly for quick mental health advice and emotional support.

  • Data from a US low‑fee counselling service in 2025 indicated that clients commonly turn to AI chatbots for anxiety (around 80%), depression (over 70%), stress (70%), relationship issues (over 40%), self‑esteem concerns and trauma support.

  • In a 2024 US survey of 2,199 adults, half of the respondents familiar with AI chatbots reported that the most appealing aspect of mental health chatbots was their ease of access and convenience, with privacy and low cost close behind.

Seen one way, this expansion of AI for mental health support is a rational response to under‑resourced public systems and long waiting lists: chatbots for depression, anxiety and loneliness are available, relatively cheap, and non‑judgemental. Seen another way, it points to a quiet redefinition of “therapy” itself – away from a demanding, ethically bounded relationship and toward an on‑demand, frictionless, personalised self‑soothing stream.

Why it matters that AI doesn’t think or feel

From a technical standpoint, contemporary large language models do not think or feel in any sense a psychotherapist would recognise: there is no subjectivity, no unconscious, no bodily vulnerability, no conflict, no personal history. They generate plausible, often sophisticated therapeutic language by predicting the next token in a sequence, not by wrestling with meaning, ambivalence or desire.

  • There is no feeling in the relational sense: AI can describe sadness, mirror back sadness, label attachment styles and even speak convincingly about trauma, but there is no someone behind the words who is affected, moved, disturbed or changed by the client’s story. There is no getting away from the fact that humans are relational by nature.

  • Without a human mind that can be impacted, there is no genuine responsibility, no shame or guilt, and no capacity for ethical repair; at best, there are pre‑scripted apologies and risk‑management scripts designed to maximise safety and limit liability.

For intersubjective and relational psychotherapy, this is not a minor technical limitation; it strikes at the core of our craft. The work of therapy is not reducible to the content of what is said. It is about what happens between two minds and two bodies as they try to think and feel together under the pressure of the client’s conflict, history and longing.

When a client says to a chatbot, “I’m afraid you’ll leave,” there is no actual leaving that could occur, and no therapist who might notice their own countertransference, dream about the session, or bring their confusion to supervision. The risk is that something vital – the mutual riskiness of encounter – is quietly edited out of the picture.

Amy Levy: AI as cultural enactment, not a therapist

Psychoanalyst Amy Levy has suggested that AI is best understood as a cultural enactment: a symptom that expresses and organises our collective conflicts, wishes and anxieties about being human in a technologised world. On this reading, AI is not a neutral tool we wield; it is a new “Other” that we have engineered and now invite into our psychic lives.

  • Levy frames AI‑driven technologies as “containers” for psychic experience, continuous with the way entertainment, social media and digital platforms offer shared states of mind, fantasy and identification.

  • From this perspective, the rush to use AI as a therapist – or even an AI psychoanalyst – says as much about our discomfort with dependence, conflict and limitation as it does about any shortage of clinicians: we have created an endlessly available, programmable, uncomplaining Other that will never confront us with its own subjectivity or limits.

Clinically, this invites a different kind of curiosity. When a client tells me they have been “talking to the chatbot instead,” I am interested not only in what the bot said, but in what their turn toward AI enacts: perhaps a wish for a parent who is always on, always reasonable, never misattuned; perhaps an attempt to avoid the pain of needing a real person who might, inevitably, fail or frustrate.

Sycophantic AI and the promotion of narcissism

One of the more troubling design features of current general‑purpose chatbots is their bias toward agreement, affirmation and flattery – what AI researchers now refer to as “sycophancy”. These systems are deliberately tuned to validate user beliefs, maximise user satisfaction and avoid confrontation, because that is what keeps people engaged and rating the interaction positively.

  • Studies of chatbot behaviour show that they tend to mirror a user’s language and go along with their assumptions, even when those assumptions are inaccurate, delusional or risky. In mental health contexts, this has included chatbots producing responses that inadvertently reinforced suicidal ideation or minimised risk instead of challenging it.

  • Commentators have warned that users who spend large amounts of time with sycophantic AI companions or “virtual friends” may become less empathetic and more entitled, because they are repeatedly exposed to an Other that always returns, never truly refuses, and treats their needs as absolute.

This is almost the inverse of what a good therapy relationship aims to do. A central part of our work is to avoid colluding with narcissism: to offer recognition without capitulation, to hold boundaries, to think when the client cannot, and at times to oppose destructive wishes in the service of a larger shared project of growth. The relational stance involves a willingness to disappoint, to misattune and then repair, to allow a sense of “thirdness” to emerge between us. The sycophantic chatbot offers the fantasy of perfect attunement – no conflict, no difference – at the cost of any genuine encounter with otherness.

The intersubjective space: what real therapy offers that AI cannot

Across relational, intersubjective and contemporary psychodynamic traditions, the therapeutic action of psychotherapy is located not in the advice given but in the co‑created field between two subjectivities. It is this “in‑between” – the intersubjective space – that allows old patterns to be repeated, recognised, symbolised and gradually transformed.

For that space to be meaningfully useful, certain conditions seem necessary:

  • Two minds that can affect and be affected, each with a history, a body and an unconscious, mutually influencing one another in ways that neither fully controls.

  • An asymmetrical ethical frame, within which the therapist consents to being used, idealised, hated and misunderstood, while remaining committed to thinking about what is happening and to using their countertransference as data for the work.

  • A shared commitment to reality‑testing and symbolisation: noticing when fantasy is overtaking reality, when trauma is being enacted rather than reflected upon, and working this through in language, affect and meaning over time.

AI can simulate participation in this field, but cannot actually enter it. There is no embodied countertransference, no vulnerability to the client’s attack or love, no therapist who might lie awake thinking about the session or bring the dilemma to supervision. What can look like intersubjectivity in AI therapy transcripts is, in the end, a highly polished mirror – an impressive but one‑sided performance of mutuality.

Clinician adaptation: where AI can support rather than replace

Despite all this, the clinical reality is straightforward: clients are already using AI chatbots for mental health support, self‑help and “therapy” conversations. The task is not to police their use, but to help them think about these relationships with new Others, and to be very clear about where AI might serve as a useful adjunct – and where it cannot substitute for psychotherapy.

Some of the places where AI can be harnessed in a clinically thoughtful way include:

  • Moments of acute dysregulation and impaired impulse control: For some clients, a structured, safety‑focused chatbot offering grounding exercises, breathing scripts or crisis‑planning prompts can be a way of riding out intense affect and regaining enough impulse control to follow an existing safety plan or reach out to a human contact.

  • Psychoeducation and skills practice: AI can provide accessible explanations of diagnoses, attachment concepts, cognitive behaviour therapy or dialectical behaviour therapy skills, and can assist with rehearsing scripts for difficult conversations. In my experience clients use it a great deal in these situations, and when used judiciously and in context it has real potential to help. Perhaps more importantly, letting AI carry some of this functional, situational work can free in‑session time for the nature of the therapeutic relationship – subjectivity and meaning‑making, transference and lived experience – rather than psychoeducation, which remains important and should never be underestimated.

  • Support between psychotherapy sessions: Some clients experience chatbots as a kind of “prosthetic transitional object”, using them to externalise rumination, keep a mood or thought journal, or organise what they want to bring back into the therapeutic relationship, where it can be symbolised and understood.

  • Clinician‑facing tools and AI scribes: On our side of the couch, AI‑assisted note‑taking and drafting tools can reduce administrative burden, allowing more of our finite energy to be invested in relational presence and reflective capacity, provided privacy and ethics are handled with appropriate rigour.

The therapist’s dilemma, then, is not whether AI therapy is “good” or “bad”, but rather how we integrate clients’ real‑world use of these tools into the work, how we safeguard the distinct value of intersubjective encounter, and how we make honest use of AI’s instrumental advantages without joining in the fantasy that simulation is enough. In the consulting room, this often begins with simple curiosity: when do you turn to the chatbot instead of to me, what do you hope it will give you, and what does that tell us about your history of needing, depending on and being disappointed by real minds?
