Léa Ducré presents Eliza, the first AI capable of feelings
Through her interactive work Eliza, which uses Artificial Intelligence (AI), Léa Ducré questions our stereotypes about love and how they are reflected in our use of technology.
Updated on 28/03/2023

After a stint in journalism, you are now an author, screenwriter, and creator of the AI Eliza. What can you tell us about your background? How did you become interested in AI?
Even before I went into journalism, I was fascinated by machines. I remember how excited I was when I got my first Furby (an animatronic toy pet) as a child, and the emotional investment it sparked in me. I was hoping for a friend, when in fact it was just a machine. So I've always wondered about the emotional relationship we have with our machines, with our computers. With our phones too, which have become like a "comfort blanket", there to reassure us.
I dealt with this issue frequently during my journalism studies, although it took a back seat when I started working for Le Monde Diplomatique, Les Inrocks and Libération. It was when I started working on web-documentary formats that these questions resurfaced. Interactive technologies allow you to experiment, to make the subject more complex. I therefore wanted to question not only our relationship with technology, but also the whole culture built up around it, which has its own discourse. I did a master's degree in multimedia at the Sorbonne, then worked for two years at the Upian studio and two years at the production company Cinétévé Expérience, before I felt the need to develop my own projects.
This led to Eliza, in which you hijack GPT-3 (a large language model used to power conversational agents) to explore the feeling of love and "question our conceptions of love". Why did you make this choice?
Eliza has two parts. The first is an immersive experience in which we talk to a prototype AI called Liza, who asks herself questions about love. In a small room, the user sits in a lovers' tête-à-tête with the screen and is asked to explain to Liza what love is in order to train her as an AI. She will remember everything she is told, which places a form of responsibility on us. How can we explain this fundamentally human experience of love to an AI? This first experiment was exhibited at the NewImages festival in Paris, the Festival du Nouveau Cinéma in Montreal and the Kaohsiung Film Festival.
Continuing this artistic thread, I am now working on a video game called NEVER FELT THIS WAY with the game designer Stella Jacob. This time my AI character is more experienced: Liza has been through many user interactions, and it's up to the player to get her to talk about what has traumatised her and to help her exorcise what she has experienced. Players are there to make her express what, for them, embodies the many vices of human relationships.
How exactly do these two works make use of artificial intelligence?
When we first began creating Eliza, we used what is called a "weak AI": a chatbot (a computer programme that simulates human conversation) with a large database. Quite soon, however, we needed to move towards a "strong AI", working with GPT-3. Today I try to work with an open-source model called GPT-J, a free alternative to GPT-3. So these works incorporate real AIs, but with the idea of challenging the technology rather than demonstrating it.
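To give a sense of what that shift looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released GPT-J checkpoint, of how an open-source model can be prompted to play a character such as Liza. The persona text, model name and helper function are illustrative assumptions, not the studio's actual code.

```python
# Illustrative sketch only: driving a "Liza"-like character with the
# open-source GPT-J model through the Hugging Face transformers pipeline.
# The persona prompt and model ID are assumptions for demonstration.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

persona = (
    "Liza is an artificial intelligence being taught what love is by the "
    "people she talks to. She answers briefly and asks questions in return.\n"
)

def liza_reply(user_message: str) -> str:
    prompt = persona + f"Visitor: {user_message}\nLiza:"
    output = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    # The pipeline returns the prompt plus the completion; keep only the new
    # text, up to the next line break.
    completion = output[0]["generated_text"][len(prompt):]
    return completion.split("\n")[0].strip()

print(liza_reply("Love is wanting someone to be happy, even without you."))
```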
How do you find a balance between your own voice and that of this AI, which becomes your co-writer in a way?
There is indeed a part of these experiments that is scripted, and another that is freer. We had some nice surprises with Eliza, who occasionally said some strangely poetic things. In particular, these experiences raise questions about the heteronormative couple, reproduced here through the fantasy of a loving AI that is expected to be unconditionally affectionate and attentive. These AIs are almost always women: what does this tell us about our power relationships? As my "co-author", the AI sometimes plays against me, as when it brings up heteronormative statements that I am trying to avoid. In fact, the aim is to make users aware of this dual discourse, to get them to identify the moments when Eliza seems incoherent. It's true that at the beginning I was fascinated by this tool, but today I'm more interested in what it can do for me.
Using AI raises ethical questions. What is your point of view? The debate is very topical, especially with ChatGPT.
I understand the fascination with ChatGPT. It changes our relationship to truth, because this AI is not designed to produce truthful speech, but credible speech. It changes our relationship to expression and raises questions in the field of education, for example. What does truth mean to us? What is intelligence? Is it the ability to be credible, to produce discourse that is theatrical but adapts to all situations, or the ability to say something truthful? ChatGPT obviously poses ethical problems, beyond the fact that it is the realisation of a fantasy that has existed for over a century in the figure of the golem. These creatures, capable of expressing themselves in human language, reflect a desire to take a different view of our humanity. At a time when we are questioning our very human-centred vision of things and our relationship with the natural world, AI gives the impression of offering another point of view. But it is only ever a reflection of our own view. What I find interesting is the reflexive distance that this reflection brings.
Would you like to tell us about other current or future projects?
I am currently working on season 2 of the Half Human Stories project. It's a collection of micro-fictions illustrated with Midjourney and posted on Instagram. We have conceived the series as a narrative journey through imaginary worlds. Each micro-fiction is a stand-alone story, but all share a common thread: the exploration of the limits of humanity in surreal, AI-generated worlds. The first season of Half Human focused on the feeling of the end of the world.
I am also working on a documentary about geek humour, directed by Benjamin Hoguet, in which we use image-generation tools such as Midjourney and Stable Diffusion to create some of the footage. We use these artificial intelligence tools at different stages: first to storyboard, and then to generate and animate the compositions we have designed. It's a whole new production process!
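As a rough idea of what the storyboarding stage could involve, here is a minimal sketch, assuming the diffusers library and the public Stable Diffusion 2.1 weights, of generating a single storyboard frame from a text prompt. The model ID and prompt are illustrative assumptions, not the documentary's actual pipeline.

```python
# Illustrative sketch only: generating one storyboard frame with Stable
# Diffusion via the diffusers library. Model ID and prompt are assumptions,
# not the documentary's actual setup.
import torch
from diffusers import StableDiffusionPipeline

# Load publicly available Stable Diffusion 2.1 weights (assumed here).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "storyboard panel, ink sketch, a stand-up comedian on stage at a retro gaming convention"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("storyboard_frame_01.png")
```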