
Independent Study 15

  • English Round Table, 29 Naruteo-ro 10-gil, Seocho-gu, Seoul (Yongma Electronics building)

Today is the last class in your current four-class set. We will start class with a casual conversation. Our material today is about human-robot interaction. We will finish class with a sentence diagram.

EMILY KWONG, BYLINE: You're listening to SHORT WAVE...

(SOUNDBITE OF MUSIC)

KWONG: ...From NPR.

REGINA BARBER, HOST:

Human beings are hardwired to search for social connection. We naturally think of even the most basic objects as having feelings and experiences, which makes us feel attached to them, even if they're just a vacuum.

EVE HEROLD: I mean, there's people who name their Roombas. It's very, very common for Roomba owners to give a name and ascribe a personality to their Roombas.

BARBER: Eve Herold is a science writer, and she was fascinated by this desire to connect and how it's driving the technology we build.

HEROLD: We have robots that express emotions. Of course, they don't feel the emotions at this point, but they act and look and move as though they do. And this triggers an emotional reaction in us, which is almost irresistible.

BARBER: Her curiosity about the technology is why she wrote a new book called "Robots And The People Who Love Them," about social robots or robots designed to interact with humans and other robots. They can do things like care for children, the elderly, even act as a friend. But Eve's book also explores the darker side of the field - some of the ways social robots might make us more lonely, more isolated.

HEROLD: There are men who actually have married holograms and anime figures. So they call themselves technosexuals. And they're - they have long-term relationships. They see themselves as being faithful to their techno-mates or girlfriends or wives.

BARBER: Which is alarming. But it's also kind of unsurprising because Eve says that technology is already starting to change our perceptions of reality and may even change our perception of ourselves.

(SOUNDBITE OF MUSIC)

HEROLD: They listen, they learn from us, they remember what we say and they respond in ways that are very exquisitely tailored to us and our preferences and our history with them. And over time, this effect really kind of snowballs until you get to the point where it's like a feedback loop. You know, it's very easy to lose the realization that they're not actually alive and they don't actually have an inner life. It's all us talking to ourselves.

BARBER: So today on the show, as the line between human and robot begins to blur, how do we hold on to our humanity and use the technology ethically? I'm Regina Barber, and you're listening to SHORT WAVE, the science podcast from NPR.

(SOUNDBITE OF MUSIC)

BARBER: So Eve, I think it's fair to say we're kind of in this, like, rise of robots right now. And I actually love robots, and I'm so interested in all these advances that are happening with them. But I definitely find them unsettling at times when they're too humanlike. So I love that one of the first chapters in your book is called Overcoming The Uncanny. What is the uncanny valley?

HEROLD: Sure. So the uncanny valley is a - an emotional reaction that gets stirred in us when we interface with a robot that seems too human for comfort while not actually being perfectly convincing. So there's a glitch, or there's a timing issue, or there's still a way to see that it doesn't quite make the mark of being totally human. And this is a disturbing thing for our brains because it makes you think about things like zombies and beings that were almost human, not quite human, almost alive, not quite alive, and how terrifying that is to us. And robots can really evoke that.

BARBER: It makes complete sense to me, right? Because I like cute robots, like, big eyes, big heads, you know, things that look still metal, you know? Like...

HEROLD: Yes.

BARBER: I think that that is the perfect kind of robot.

HEROLD: I do, too. And, you know, it's interesting because, in the research with people who are interacting with robots, people actually like a robot that isn't too perfect.

BARBER: Mm-hmm. Yeah.

HEROLD: It puts them at ease if the robot makes a mistake every now and then because, otherwise, their brain is confused about whether they're dealing with a living thing or a machine.

BARBER: Or we could even argue that living things and humans do make mistakes all the time, so it actually is more comforting.

HEROLD: That is a good point (laughter). I'll give you credit.

BARBER: Right. Thank you (laughter). But OK, so your book also touches on the question of like robot consciousness, right? Like, so we're talking about, like, what makes this robot more human - is it conscious? Like, consciousness is this big mystery in neuroscience. So how do we even define robot consciousness?

HEROLD: Well, here's the problem. We don't understand how consciousness works even in humans.

BARBER: Right.

HEROLD: In the case of a robot, OK, intellectually, we know that, up to this point, current robots do not have consciousness. But that's not something that we can hold in our minds for any length of time when we interact with them. And we imagine that they do have consciousness. It's just irresistible. We imagine that. But that's a little scary because nobody knows what the robot mind would be like. It's completely alien to us. So, you know, even the engineers who write their algorithms don't understand how the algorithms reach a certain decision. It's called the black box problem, and it's there with all AI. But I think there's a lot of ambivalence about whether we really want robots to be conscious because we can't really define what that consciousness would be like. Therefore, we can't predict what they might do.

BARBER: And in your book, you used a great analogy for consciousness from a neuroscientist, Christof Koch, about how we use computers to create weather simulations and make predictions. Can you expand on that analogy?

HEROLD: The analogy is you can run a computer program - you can do computer modeling of all kinds of phenomena. A thunderstorm, for example, you can model in a computer program, but nothing gets wet. That's how you know it's not real. It's just a program. So that's, you know - I mean, I think we need to be really clear on this because it's an interesting question - will robots ever be conscious? If they do ever become conscious, then we have to start thinking about robot rights...

BARBER: Mm-hmm. Yeah.

HEROLD: ...And how we use them, in addition to - how will they behave? How can we predict how they might behave?

BARBER: Wow. But how do we interact with them ethically?

HEROLD: It's a very murky understanding that we have of robots and ethics. And, you know, there really aren't any real guardrails as far as ensuring that your robot behaves ethically all the time and incorporates human values into their behavior. When you talk about ethics, you bring up questions of responsibility. I mean, if a robot - for example, a caregiving robot - somehow accidentally kills or injures a frail person that they're taking care of, who's responsible?

BARBER: That's kind of horrifying. So regardless of their potential consciousness, some people do develop these really strong, like, social connections with robots. But we recently had an episode on loneliness in the U.S., and you write that robots aren't necessarily the solution. Like, what's the connection between robots and loneliness?

HEROLD: Yeah, it's a strange effect. And the thing that I can compare it to is people who are too addicted to social media and end up becoming isolated because they're not interacting with real people in a real relationship. And it's very seductive and hard to prevent in the people that have these relationships because of our hardwiring. So we have to remember - keep a firm fix in our minds of the dividing line between what is a robot and what is a human being. If we don't have that firmly fixed in our minds, we can start to prioritize our relationship with our robots because it's so easy.

BARBER: Right.

HEROLD: They cater to our whims. They talk about what we want to talk about. They, you know, are - they're accommodating in every way, which is not something that human beings can keep up over time.

BARBER: Yeah.

HEROLD: So, you know, after a while, we could become so comfortable with these kind of, you know, virtual relationships that we'll cease to reach out to other people. And it's something that there are people in this world who are especially vulnerable to. People who are lonely and isolated and don't have enough social stimulation - they can actually lose what social skills they have because they're so accustomed to this kind of consequence-free, easy, appealing relationship with a robot.

BARBER: This is, like, making me think of human relationships and, like - you know, we don't like it when we're challenged, and we don't like it when relationships are hard. But, like, if they aren't, then it's this - what you're saying. There's no - it's one-sided.

HEROLD: If they aren't challenging, you're not growing, you know? That's kind of the bottom line. And so, you know, you're not going to grow a lot from these robot relationships.

However, there are people in the world - children with autism, for example - who have a very hard time developing social skills. And there are robots that have a special autism program to them, and they actually do teach rudimentary social skills to people on the autism spectrum. So yes, on a very rudimentary level, they can actually help. Things like turn-taking, eye contact, things like that - you know, very basic things can be taught to you by robots. But the thing is, you need to take that - those skills, once you've developed them, and transfer them to real people in order to keep growing.

BARBER: Right. And this goes back to what you're writing about - how some people even go as far as to say robot relationships are not only harmful psychologically - not these specialized situations that you're talking about, but in general - because they are emotionally one-sided but also that they're unethical.

HEROLD: It's unethical because the person is not connecting with other human beings. And the world is full of lonely people - people who are - have needs physically and emotionally. And if you're not connecting with other people, you're not part of the solution.

And I do think it can be unethical just simply by displacing existing relationships with people in your life, and that concerns me a lot. I would say, you know, more than any of this, what - that concerns me is the displacement of people and actually increasing the loneliness and the isolation that exist in the world.

BARBER: It's not a silver bullet.

HEROLD: It's not a silver bullet. It's not worthless, either.

BARBER: Right.

HEROLD: For example, an older person who has dementia - you know, a relationship with a robot can ease their loneliness and their sense of isolation...

BARBER: Right.

HEROLD: ...Which is therapeutic. So yes, there are some good uses of this technology - no doubt about it. It's just that we need to know how far to take it and when to step away from the robot and start engaging in the real world because that's where you're really going to grow those social muscles and those ethical muscles that you need.

BARBER: So after writing this book, like, how do you see robots fitting into our lives in the future?

HEROLD: Well, I think robots are going to change the culture - you know? - and for better or worse - but in some ways for the better. You have to remember that these robots that are going to be consumer household robots - they do more than just, you know, converse with you. They'll look up information for us. They'll send emails, and they'll do so many things for us that I think are wonderful and that will free us up, you know, to perhaps use our time in a more productive, more meaningful way.

BARBER: Mm-hmm. For more human interaction.

HEROLD: More human interaction, more education, more pursuing hobbies - you know, all kinds of things - which, a lot of these things, we can't currently think of because our culture hasn't gone there yet.

BARBER: Thank you so much, Eve. This was really, really wonderful.

HEROLD: Well, thank you so much for having me.

(SOUNDBITE OF MUSIC)

BARBER: This episode was produced by Rachel Carlson, edited by our showrunner Rebecca Ramirez and fact-checked by Brit Hanson. Gilly Moon was the audio engineer. Beth Donovan is our senior director, and Collin Campbell is our senior vice president of programming. I'm Regina Barber. Thank you for listening to SHORT WAVE from NPR.

(SOUNDBITE OF MUSIC)
