Kate Darling, robot expert: ‘We shouldn’t laugh at people who fall in love with a machine. It’s going to be all of us’

The MIT researcher, who has spent years studying the interactions between humans and machines, analyzes the explosion of artificial intelligence

Kate Darling investigates the legal, social and ethical effects of robots at the MIT Media Lab.

Kate Darling, who studies the legal, social and ethical effects of robots at the MIT Media Lab, has spent years observing the interactions between humans and robots; she even has several at her house. Regarding the future of the artificial intelligence revolution, her answers could be considered evasive: “It’s all so speculative,” she says, “that it’s hard to figure out.” Still, the timing could not be better for her work, as we have never been so close to coexisting with robots. “It’s a very exciting time to be alive, I feel very fortunate to be experiencing all of this,” she reflects.

Darling is the author of The New Breed, in which she argues that the best way to understand robots is to compare them to animals, not humans. In this conversation with EL PAÍS, she tries to shed some light on the enormous novelty that language models like ChatGPT represent.

Question. How has the success of ChatGPT changed the way you see the future of robots?

Answer. It’s a big, big change. A lot of people did not anticipate this. If you’d asked me a few years ago whether we would have this type of sophistication in language models, I would have said no, never. As for what’s going to happen next, I don’t think anybody knows. For me, one of the big questions is: do the capabilities that we’re seeing in language learning and in generative AI translate into being able to control and program physical robots, with the same kind of intelligence and learning? Because that would be truly game-changing. But it’s not clear to me.

Q. There is no definition of what a robot is. Why is it so difficult?

A. If you look throughout history at what’s been called a robot, it’s often something that’s kind of new, a new technology that people don’t fully understand. There’s some magical element to it. And then, once it becomes more commonplace, people stop calling it a robot and they start calling it a dishwasher or a vending machine.

Q. There is a lot of debate going on right now regarding a possible extinction caused by an AI that is capable of making decisions.

A. There’s nothing we can do to predict whether or when it will happen. There’s nothing we can do to guard against it, short of stopping all AI research entirely, which is not going to happen. I’m more interested in the fact that people are going to think that AI is sentient, whether it is or not. That is something that we do need to deal with as a society.

Q. You say that to understand what a robot is, it is better to compare it with an animal than with a human. Do you still think like that, even after ChatGPT?

A. Yes. I know it’s a harder comparison to make now that we have AI that uses human language, but I think that’s all the more reason for the comparison, which is to say: it’s not as valuable or useful to create something that we already have, something we can already do. It is much more valuable to have machines that can supplement us or be partners in what we’re trying to achieve. A lot of the tasks that generative AI will be able to do are currently done by humans, but I think the real potential of the technology is to be a tool that is combined with other human skills, not just a replacement for people.

Q. You predict that robots will soon be part of our families. What will they be like?

A. In a lot of the research on human-robot interaction, we see that people treat robots like living things, even though they know that they’re just machines. People love to do this. And so I think that even though people anthropomorphize robots and we project ourselves onto them, give them crazy human qualities, emotions, people also understand that what they’re interacting with is not a person but something different. It’s also something we see in the research. Robots are going to be a new kind of social relationship; it might be like a pet, or it might be something totally different, which is why my book is called The New Breed. But I don’t think it’s necessarily going to replace human relationships. It’s going to be something different, but I definitely think it’s going to happen.

Q. You have robots at home. What are they like? What do they do?

A. I have a couple of different types. We have a baby seal, a dinosaur robot, a robot dog, and then we have other robots that are more to help around the house, like an assistant or a vacuum cleaner. They all do different things and my kids interact with them differently, depending on whether they see them as a tool or a partner.

Q. Can the companion robots be turned off, or are they always on?

A. Some of them are designed to be on. The dog, for example, is designed to be left on. And when its battery is getting low, it finds its own charging station and it lies down, like it’s going to sleep, to charge.

Q. Are these pet robots ready to enter millions of homes?

A. Even with this very primitive, very expensive technology, we’ve seen that the people who do have them develop meaningful connections. And the technology is not going to get any worse. I do think that there are going to be home robots, and I think the barrier isn’t necessarily the complexity of the robot; it’s just that people don’t yet know the value it would bring to them, socially. Once somebody gets enough traction on a home robot, I think there will be a tipping point where more people will want them.

Q. What do you mean by “value”?

A. People didn’t use to see the value of having a pet, for example. The pet had to serve a function, the dog would guard the house or the cat would catch the mice. But then, gradually, people realized that it was the relationship with the pet and the emotional connection that was the real value. And now people have pets for that reason. I think the same thing is going to happen with robots. Right now, they have a function, they have to be a home assistant, they have to vacuum your floor. But once people have interacted with enough of them, I think they will see a value in the social connection and want them for that reason, too.

Q. You have said that the movie Her, about a human who falls in love with a machine, worries you and excites you in equal measure. What ethical issues do you see?

A. Her is about this app that a company has put out. But what is the company’s business model? What are they trying to do? They’re probably trying to maximize their profit. So you have people who are in a very vulnerable position, because they have a very strong emotional connection to an app or a device or a robot or whatever it is. And this is already happening: the app Replika has millions of users, and some of them become very emotionally attached to it. I’m concerned that there are privacy and data collection issues. There are ways that you could emotionally manipulate people to buy products and services or otherwise change their behavior, not in their own best interest, but in the best interest of a company.

A still image from the movie ‘Her’ with Joaquin Phoenix. Cordon

Q. You said you can imagine a sexual app exploiting a user’s weakness in the heat of climax.

A. Yes.

Q. Isn’t that bad marketing?

A. It’s a bit more subtle. Replika has in-app purchases that people buy; there are really easy ways to manipulate people into spending money or to advertise to them. I think there are consumer protection issues there, because it’s not just really persuasive, it crosses into manipulation.

Q. Could there be a reasonable way to monetize these apps?

A. Once consumers realize that there’s a value to buying an artificial companion and they’re willing to pay enough money for it, then you could just sell the thing. Do I think that’s going to happen? No, but I think that would be the best way to have something that protects people’s privacy and doesn’t emotionally manipulate them.

Q. Many people will be surprised that someone would humanize these machines, but we are programmed to do so.

A. Yes. And it’s not going away. If something moves around us, our brains assume there’s life in it. There’s this subconscious projection: we project ourselves onto things, and not just things that move, but onto a chatbot or anything that mimics human behavior, anything that gives us cues we recognize, sounds, anything. Research shows that we do this from a very early age. It’s pretty deeply ingrained.

Q. Robots will die. Could we end up divorcing a robot, or abandoning one in a ditch because of a software update?

A. Yes, relationships can end in all different kinds of ways and I do think that we’re going to have real relationships with robots, whether they’re like human relationships, like human-pet relationships or something different. Like any relationship, it’s going to be able to end in different ways, too, whether that’s through death or through someone deciding that they’re done with the relationship. I think all sorts of things are going to happen. It’s easy to anticipate if we understand that people do develop these emotional relationships with artificial entities, but I don’t think enough people understand that yet.

Q. People don’t believe they can become attached to an AI?

A. People think “That couldn’t be me. Those people are sad and lonely, but not me.” But I think we are all susceptible to bonding with these machines, especially as they get a little bit more interesting and more readily available. We need to take it a lot more seriously than just laughing at the people who fall in love with chatbots, because it’s going to be all of us.

Q. Isn’t it surprising that the machine we fall in love with is just a screen?

A. I’m not too surprised. Even with the earliest chatbots, which were completely primitive, people would really open up. There was Eliza, a chatbot developed at MIT in the 1960s, and people would tell it things. I think that we’re just suckers for anything that gives us cues that we recognize, even if it’s just a screen. The reason I love physical robots is that the physicality adds an even more visceral layer, which I find even more compelling.

Q. But you’re not into humanoid robots.

A. No, they’re boring.

Q. You prefer an R2-D2, a “trash can with wheels.”

A. I like robots that are designed to look cute and that people relate to, but they don’t have to look human for that to happen. I think it’s much more interesting to create a new form. Sometimes it even works better, because if the robot looks too humanoid, people have specific expectations for how it should behave and what it should do, and when it doesn’t match those expectations they’re disappointed. Meanwhile, with something that looks like an animated trash can, people don’t have the same expectations.

Q. Are you more excited or worried about these developments?

A. Both.

Q. What are you most concerned about?

A. The companies, the incentive structures and the politics and the economic problems. It’s a governance problem, not a technology problem.
