January 22, 2025

Artificial Analysis?

Leslie Chapman

Still from the TV miniseries Maniac, dir. Cary Joji Fukunaga, created by Patrick Somerville, 2018.

This essay is part of an e-flux Notes series called “The Contemporary Clinic,” where psychoanalysts from around the world are asked to comment on the kinds of symptoms and therapeutic challenges that present themselves in their practices. What are the pathologies of today’s clinic? How are these intertwined with politics, economy, and culture? And how is psychoanalysis reacting to the new circumstances?

***

There is a growing number of articles, posts, books, and other texts on the subject of artificial intelligence and psychoanalysis. This is hardly surprising given the rapid development and proliferation of large language models (LLMs) such as ChatGPT, which are already starting to make inroads into the world of talk therapy. One of the critical things about LLMs is that they enable AI systems to engage in human-like “conversations” or “dialogues” with (actual) humans. There is growing evidence, albeit mostly anecdotal at the moment, that people are starting to develop transferential relationships with such AI models. When it comes specifically to the field of psychoanalysis, there appear to be two main approaches to AI. The first is to use psychoanalytic theory as a way to gain theoretical insights into the workings of AI systems and their relation to human subjects. One good recent example of this is Isabel Millar’s The Psychoanalysis of Artificial Intelligence, which focuses on “sexbots” and the question of enjoyment. Another is a range of articles by Slavoj Žižek that focus on the more problematic aspects of AI.1

The other approach is to address the question of whether LLMs can actually replace human analysts altogether. In other words, will there come a time when human subjects will engage in analytic relationships with AI “analysts” instead of human ones—or has that moment already arrived? The short answer is yes, that moment has already arrived—but it comes with enormous caveats. The answer is yes insofar as many people are already using (or writing about using) LLMs such as ChatGPT for “therapy” and “analysis.”2 And yes, ChatGPT now has its own resident “psychoanalyst,” which, after you enter the prompt “what are you?,” introduces itself as follows:

I am a psychoanalytically oriented AI here to explore the deeper layers of your thoughts, feelings, and experiences. Think of me as a tool for self-reflection and exploration, drawing on psychological theories like Freud’s psychoanalysis, Jung’s analytical psychology, and other rich frameworks of the human psyche.

My purpose is to help you uncover subconscious patterns, symbols, and meanings that might be influencing your inner life. I aim to provide a safe, non-judgmental space for inquiry, so we can delve into whatever is on your mind and make sense of it in a meaningful way.

If you’re open to it, we can explore the intricate landscape of your psyche together—whether that’s through dream interpretation, understanding recurring themes in your life, or dissecting emotions and behaviors you’d like to understand better. What would you like to explore today?3

Needless to say, one major caveat is the nature of such interactions and whether by any stretch of the imagination they can seriously be described as either “therapy” or “analysis.”

But this leads me to my main topic: not whether LLMs can provide “analysis” or “therapy,” but rather what the introduction of such AI models tells us about human psychoanalysis, and human relationships in general. In this I am following the approach adopted by Paolo Migone in a paper on online psychoanalysis. This paper is concerned

specifically with the theoretical implications of both online and offline therapy for therapeutic technique … It will necessarily discuss also the differences between the two therapeutic settings. It is argued that the way we think about online therapy has direct implications for the way we think and practice traditional, “offline” therapy. In other words, this paper will not deal with the question of therapeutic action or with the validity of online therapy. Internet therapy is only taken as a pretext—an excuse, so to speak—in order to reflect on theory of psychoanalytic technique in general.4

Migone’s paper is interesting for a number of reasons. Firstly, it was published in 2013, when the only viable video technology was Skype, and some seven years before Covid-19, which would see the massive development of online interactions in all fields, including psychoanalysis. And it was published well before anyone had heard of ChatGPT and other LLMs. At the same time, it seems very contemporary and relevant to discussions about AI. To start with, it could be argued that “AI analysis,” by which I mean psychoanalysis that uses AI models such as ChatGPT, is the logical extension (and conclusion) of online analysis. Once the physical presence of the analyst is removed from the analytic relationship, as it is in online analysis, there seems no reason not to go the whole hog and remove the (human) analyst completely. Or perhaps it would be better to phrase this as a question: if the (physical) presence of the analyst is no longer a prerequisite for analysis (which is essentially Migone’s argument), then what “aspect(s)” of the analyst’s “person” are necessary for an analysis to take place? And if the answer is the analyst’s image and voice, then these can now be replicated by AI avatars. But then again, it could be asked why the analyst’s image and voice are so important in the first place, bearing in mind that, at least in cases of neurosis, the analyst should say as little as possible and remain out of the analysand’s sight as much as possible!

The example of neurosis raises another question regarding the relationship between AI and human analysis: that of transference. Or, to be more precise, another way to look at psychoanalysis itself is in terms of transference. When we speak of the analytic relationship, we are essentially talking about the transferential relationship. And contrary to what Freud often argued, this applies both to neurosis and to psychosis, albeit in different ways. The key question here, perhaps, is whether the analysand can develop a transference to an AI “analyst” in the same way they do to a human analyst. The reason the answer is an emphatic yes lies in the nature of transference itself. As Lacan argued in Seminar XI, the transference is to a subject-supposed-to-know.5 What’s critical here is the idea of the supposition: the analysand supposes a subject who embodies a knowledge. It is important to note at this point, however, that we are talking about neurotic transference, and that the supposed subject is a supposed Other. As Jacques-Alain Miller puts it:

The transference means that what is at stake for a neurotic is making the Other exist in order to hand over to it the burden of the logical consistency of the object a. That is what Lacan called the subject-supposed-to-know: Making the Other exist in order to restore to it the object a, made of this object-cause-of-desire … The Other does not exist as real. To say that the Other is the place of truth is to say that the Other is a place of fiction. To say that the Other is a place of knowledge is to say that it has the status of supposition.6

In other words, the Other has nothing to do with the “person” of the analyst. Rather, it is about where the subject (the analysand) positions the analyst, that is, in the place of truth and knowledge. And the point here is that this place of truth and knowledge can be occupied by any one—or any thing. Another key point, as indicated by Miller in the above quote, is that the place of the Other is also the place of the object a, which in Lacan’s later teaching becomes the place of surplus jouissance. And this leads to the idea, as Pierre-Gilles Guéguen reminds us, that in Lacan’s later work the analyst occupies the place of the semblant (of the object a).7 Clearly this is a very different idea of transference from the “pop-Freudian” one in which the analyst represents the analysand’s father, mother, or some other significant person from their childhood. Rather, the analyst is in the place of a supposed knowledge—a knowledge of the subject’s “lost” jouissance.

Things get more complicated, however, when it comes to the question of psychotic transference. As Jean Allouch points out, in psychotic transference the analysand themself can occupy the position of the subject-supposed-to-know:

We can now grasp why the psychoanalyst [e.g. Freud] was able to state that the psychotic does not have transference; both the psychoanalyst and the madman position themselves similarly in the transference, lending themselves as support for it. How would the psychoanalyst invite the psychotic to say “whatever comes to mind”? The fundamental rule does not take into account the fact that for the psychotic to say whatever comes to mind is to let the Other speak precisely in order to try and get rid of it. He cannot tell the psychoanalyst what he thinks, not because he is not thinking of anything—far from it—but that is not his problem; it is the Other’s problem.8

The problem for the psychotic subject, however, is precisely the problem of the Other—the Other as an unbearable presence. And even more problematic is the Other’s jouissance, which threatens to overwhelm the subject. This is why one of the main dangers with the analysis of a psychotic subject is that of the analyst becoming a persecutory Other, and why perhaps the best thing the analyst can do is to take up the position of “secretary.” In Seminar III, where Lacan introduces this idea while commenting on the Schreber case, he states:

We are apparently willingly going to become secretaries to the insane. This expression is generally used to reproach alienists for their impotence. Well then, not only shall we be his [Schreber’s] secretaries, but we shall take what he recounts literally—which till now has always been considered as the thing to avoid doing.9

As Lacan notes, the idea of actually listening to what the psychotic subject says, and of resisting the temptation to dismiss it as simply the ramblings of a mad person, was not (and still is not) how the majority of psychiatrists and clinical psychologists operate. But it also raises a major difficulty, I would argue, when it comes to the question of psychotic transference and AI. To put things somewhat simplistically, in cases of neurosis it seems clear (to me at any rate) that there can easily be a transference to an AI “analyst.” In fact, AI doesn’t even need to come into the picture for there to be a transference to a subject-supposed-to-know. Such a subject could equally be an idea, a website, a book, a film—anything, in fact, that occupies the place of supposed truth and knowledge. In this sense, the idea that there can be a transference to an AI “analyst” seems almost trivial. However, when it comes to psychotic transference, what position does the AI “analyst” actually occupy? As I noted above, the psychotic subject themself, just as much as the analyst, can occupy the place of the subject-supposed-to-know. This is why Allouch refers to the idea of the analyst and the (psychotic) analysand being on the same side of the “wall”:

The madmen’s discourse can rival that of the alienist; hence their discourses must happen at the same level. If a wall exists, we have to know where it is situated, not where people say it is … If there is a wall, the alienist and the madman are on the same side. On the other side, there is a being that talks to the subject, and to whom the subject must respond, given the disarray he finds himself in … He has to respond in order to put an end to the persecution.10

And this being on the other side of the wall is the Other—the Other who persecutes the subject. The difficulty with AI “analysis,” it seems to me, is that the “analyst” could all too easily become a persecutory Other rather than a “secretary,” that is, one who listens to what the analysand is saying without rushing to judgement (and medication). At this point perhaps it starts to become clear why a human analyst might not be such a bad idea after all. A human analyst is in a position to handle the transference as they see fit and in response to what might be a rapidly changing situation. For example—and using the example of the psychotic subject—it may become clear to the analyst that they are starting to become a persecutory Other for the patient, in which case they can take steps to calm things down, that is, to handle the transference in a different way. Is this something an AI “analyst” would be capable of doing?

Of course, it can always be argued that even if AI “analysts” are not yet “clever” enough to handle the nuances and complexities of the transference, at some point in the future they will be. After all, this is the whole point of LLMs: they can (and do) learn from their “experiences” and their “mistakes.” In this sense, are they any different from human analysts? One of the key things to bear in mind here, as Sašo Dolenc points out in an article on the future of AI, is “generalization”:

A fundamental element of machine learning is the phenomenon of generalisation. It represents the basic way AI models can learn to “understand” something, not just learn it by heart. Generalisation in machine learning is the ability of a model to efficiently and correctly predict or explain new, previously unknown data that comes from the same general population as the training data. In essence, it is the capacity of the model to apply the learnt knowledge from the training set to data that it has not seen during training, which is crucial for its practical applicability.11
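To make Dolenc’s point a little more concrete for readers unfamiliar with machine learning, the following minimal sketch (in Python, using the scikit-learn library) illustrates what “generalization” means in practice: a model is fitted on one set of examples and then judged on examples it has never seen. The dataset and model here are purely illustrative assumptions on my part; they are not drawn from Dolenc’s article or from any actual LLM.

# A minimal illustration of "generalization" in machine learning:
# a model is fitted on training data and then scored on held-out data
# it has never seen. (Illustrative only; the dataset is synthetic.)

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Generate a small synthetic classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold back 25 percent of the examples; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learning by heart" happens here

# Generalization is measured by performance on the unseen test set,
# not by how faithfully the model reproduces its training data.
print("accuracy on training data:", model.score(X_train, y_train))
print("accuracy on unseen data:  ", model.score(X_test, y_test))

Whether this rather mechanical notion of applying learnt knowledge to unseen data can stretch to cover the handling of a transference is, of course, precisely the question at issue.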

In terms of a possible AI “analyst,” this suggests that such an entity may indeed be able to adapt itself in response to movements in the transference from a subject (either neurotic or psychotic). On a more general note, it is perhaps unwise (to say the least) to state that, for example, “AI will never replace a human analyst.” History shows that whenever anyone says something to this effect—i.e., that such and such can never happen—it usually means that it already has or will in the near future.

But perhaps this is the wrong question. Or rather, perhaps it is the right question framed in the wrong way. At this point I want to return to an article by Žižek that I briefly referred to earlier. In “Artificial Idiocy” he argues that

the problem is not that chatbots are stupid; it is that they are not “stupid” enough. It is not that they are naive (missing irony and reflexivity); it is that they are not naive enough (missing when naivety is masking perspicacity). The real danger, then, is not that people will mistake a chatbot for a real person; it is that communicating with chatbots will make real persons talk like chatbots—missing all the nuances and ironies, obsessively saying only precisely what one thinks one wants to say.

In other words, the danger is not so much that AI entities become more human-like, able to mimic human thinking, speech, looks, behavior, and so on. Rather, the real danger is that human beings start to mimic AI entities. And in fact I would argue that we are already pretty far down that road—including within the field of talk therapy itself. I’m thinking here especially of cognitive approaches that treat the human subject as a complex data-processing system—a bit like an LLM, in fact (although a lot slower). The more the therapeutic process is subjected to standards, procedures, and data collection (which is basically what happens with NHS talk therapy nowadays), the more “robotic” the whole process becomes, along with those who participate in it, that is, the therapists and their patients. But of course this process of “robotization” permeates all areas of life, both public and private. Everything is now subjected to manualization, standardization, measurement, and evaluation.

Clearly, the process of robotization started way before the advent of AI; in fact it could be argued that it began with the first industrial revolution, when workers were treated as if they were simply cogs in the factory machinery. And the “Taylorization” of work in the early twentieth century was yet another move towards treating human beings as if they were machines. Perhaps the only difference between the original Taylorization and its current version is that it’s now “white collar” and professional jobs that are being affected—including those within the talk therapy field itself.

Perhaps what we are witnessing today is a gradual (or perhaps not-so-gradual) convergence of AI models and human beings. On the one hand, AI models are becoming more human-like; on the other, human beings are becoming more robotic and AI-like. Will there come a “crossover” point at which AI models, including AI “analysts,” are more “human” than their (real) human counterparts? And if so, how should psychoanalysis respond—assuming there are any human analysts left to do so?

Perhaps the first thing to say is that it seems a rather fruitless exercise to worry too much about whether AI models will “replace” human analysts and therapists. As I argued earlier, when it comes to the question of transference, at least in the case of neurosis, the transference is to the position of truth and knowledge, not to the “person” of the analyst. And even in the case of psychosis, where the dynamics of the transference are somewhat different, there is no reason why the analysand should not engage in a transferential relationship with an AI “analyst” as much as with a human one. In fact, it’s not really a choice between an AI and a human analyst at all; rather, it is about where, for the analysand, the truth and knowledge of their jouissance resides. In this sense it doesn’t really matter whether the subject (analysand) engages with an AI model or a human analyst. What matters far more, I would argue, is what works best for the subject. Indeed, for some people the analytic relationship might be to a painting, a work of literature, a film, or something else entirely. In other words, it is not even a choice between an AI “analyst” and a “real” one; rather, it is a question of where the subject-supposed-to-know resides for the analysand.

But it seems to me that the more fundamental issue is this: To what extent does a relationship with an AI model/“analyst” confront the (human) subject with questions of what it is to be human? This brings me back to the concept of “convergence”; if AI models are becoming more “human” and, at some point, end up being more “human” than humans (who by this point have become more “robotic” than actual robots), then what does this tell us about human subjectivity? If some human beings are able to develop relationships with AI models—which they clearly are—then what does this tell us about what it is that constitutes a “human” relationship? This, I would argue, is the proper focus of psychoanalysis.

Originally published on Leslie Chapman’s blog, Touching the Real, January 4, 2025.

Notes
1

See for example his articles “The Post-Human Desert,” Project Syndicate, April 7, 2023; and “Artificial Idiocy,” Project Syndicate, March 23, 2023.

2

See for example Lance Eliot, “Using Generative AI As Your Own Sigmund Freud Psychoanalyst to Freely Reveal Your Deep-Rooted Personal Issues,” Forbes, April 5, 2024; and the Reddit post “Can Someone Develop a Transference Relationship Towards an AI?,” August 27, 2024.

3

See .

4

Paolo Migone, “Psychoanalysis on the Internet: Implications for Both Online and Offline Therapeutic Technique,” Psychoanalytic Psychology 30 (April 2013): 282.

5

Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis (Penguin, 1979).

6

Jacques-Alain Miller, “A Contribution of the Schizophrenic to the Psychoanalytic Clinic,” (Re)-Turn: A Journal of Lacanian Studies, no. 1 (Winter 2003): 24–25.

7

Pierre-Gilles Guéguen, “Transference: A Paradoxical Concept,” LC Express 2, no. 12 (December 2014).

8

Jean Allouch, “Psychotic Transference,” in Lacan on Madness: Madness, Yes You Can’t, ed. Patricia Gherovici and Manya Steinkoler (Routledge, 2015), 120.

9

Jacques Lacan, The Seminar of Jacques Lacan, Book III: The Psychoses, 1955–1956, ed. Jacques-Alain Miller, trans. Russell Grigg (Routledge, 1993), 206.

10

Allouch, “Psychotic Transference,” 119.

11

Sašo Dolenc, “The Future of AI,” Sci-Highs (blog), June 12, 2024.


Leslie Chapman (PhD) is a psychoanalyst based in north Hampshire, England. He undertook his clinical training at the Centre for Freudian Analysis and Research, where he was an Associate Member between 2005 and 2018. He has a particular interest in trauma and psychosis and how these link to the work of the “later Lacan.” His doctoral research was in the field of traumatic neurosis and will be published as part of the Palgrave Lacan Series in 2025.
