
BBC 6 Minute English | ARTIFICIAL INTELLIGENCE | English CC | Daily List...


source: Daily Listening    14 September 2016
Professor Stephen Hawking has said recently that efforts to create thinking machines could put the human race in danger. This is the theme of Rob and Neil’s chat in this programme. Listen to their conversation and learn some new vocabulary.

0:07 Feeling bright today, Neil?
0:08 I am feeling quite bright and clever, yes!
0:10 That’s good to hear.
0:11 Well, you’ll need all your wits about you – meaning you’ll need to think very quickly
0:15 in this programme because we’re talking about intelligence, or to be more accurate,
0:21 Artificial Intelligence.
0:23 And we’ll learn some vocabulary related to the topic so that you can have your own
0:27 discussion about it.
0:28 Now, Neil, you know who Professor Stephen Hawking is, right?
0:31 Well, of course!
0:32 Yes.
0:33 Many people say that he’s a genius – in other words, he is very, very intelligent.
0:39 Professor Hawking is one of the most famous scientists in the world and people remember
0:43 him for his brilliance and also because he communicates using a synthetic voice generated
0:48 by a computer – synthetic means it’s made from something non-natural.
0:54 Artificial is similar in meaning – we use it when something is man-made to look or behave
0:58 like something natural.
1:00 Well, Professor Hawking has said recently that efforts to create thinking machines are
1:05 a threat to our existence.
1:07 A threat means something which can put us in danger.
1:10 Now, can you imagine that, Neil?!
1:12 Well, there’s no denying that good things can come from the creation of Artificial Intelligence.
1:17 Computers which can think for themselves might be able to find solutions to problems we haven’t
1:22 been able to solve.
1:24 But technology is developing quickly and maybe we should consider the consequences.
1:28 Some of these very clever robots are already surpassing us, Rob.
1:33 To surpass means to have abilities superior to our own.
1:37 Yes.
1:38 Maybe you can remember the headlines when a supercomputer defeated the World Chess Champion
1:42 Garry Kasparov, to everybody’s astonishment.
1:45 It was in 1997.
1:47 What was the computer called, Neil?
1:49 Was it: a) Red Menace,
1:51 b) Deep Blue, or c) Silver Surfer?
1:54 I don’t know.
1:57 I think (c) is probably not right.
2:01 I think Deep Blue.
2:02 That’s (b) Deep Blue.
2:04 Okay.
2:05 You’ll know if you got it right at the end of the programme.
2:07 Well, our theme is Artificial Intelligence and when we talk about this we have to mention
2:12 the movies.
2:13 Many science fiction movies have explored the idea of bad computers who want to harm
2:18 us.
2:19 One example is 2001: A Space Odyssey.
2:22 Yes, a good film.
2:23 And another is The Terminator, a movie in which actor Arnold Schwarzenegger played an
2:28 android from the future.
2:30 An android is a robot that looks like a human.
2:32 Have you watched that one, Neil?
2:34 Yes, I have.
2:35 And the android is not very friendly.
2:37 No, it’s not.
2:38 In many movies and books about robots that think, the robots end up rebelling against
2:43 their creators.
2:45 But some experts say the risk posed by Artificial Intelligence is not that computers attack
2:50 us because they hate us.
2:52 Their problem is related to their efficiency.
2:55 What do you mean?
2:56 Well, let’s listen to what philosopher Nick Bostrom has to say.
3:00 He is the founder of the Future of Humanity Institute at Oxford University.
3:05 He uses three words when describing what’s inside the mind of a thinking computer.
3:11 This phrase means ‘to meet their objectives’.
3:13 What’s the phrase he uses?
3:16 The bulk of the risk is not in machines being evil or hating humans but rather that they
3:22 are indifferent to humans and that in pursuit of their own goals we humans would suffer
3:27 as a side effect.
3:28 Suppose you had a super intelligent AI whose only goal was to make as many paperclips as
3:33 possible.
3:34 Human bodies consist of atoms and those atoms could be used to make a lot of really nice
3:39 paperclips.
3:40 If you want paperclips it turns out that in the pursuit of this you would have instrumental
3:44 reasons to do things that would be horrible to humanity.
3:48 A world in which humans become paperclips - wow, that’s scary!
3:52 But the phrase which means ‘meet their objectives’ is to ‘pursue their goals’.
3:57 Yes, it is.
3:58 So the academic explains that if you’re a computer responsible for producing paperclips,
4:05 you will pursue your objective at any cost… and even use atoms from human bodies to
4:10 turn them into paperclips!
4:12 Now that’s a horror story, Rob.
4:14 If Stephen Hawking is worried, I think I might be too.
4:17 How can we be sure that Artificial Intelligence – be it a device or software – will
4:23 have a moral compass?
4:24 Ah, a good expression - a moral compass - in other words, an understanding of what is right
4:29 and what is wrong.
4:31 Artificial Intelligence is an interesting topic, Rob.
4:33 I hope we can chat about it again in the future.
4:36 But now I’m looking at the clock and we are running out of time, I’m afraid, and
4:39 I’d like to know if I got the answer to the quiz question right?
4:42 Well, my question was about a supercomputer which defeated the World Chess Champion Garry
4:48 Kasparov in 1997.
4:50 What was the machine’s name?
4:51 Was it: Red Menace, Deep Blue or Silver Surfer?
4:55 And I think it’s Deep Blue.
4:58 Well, it sounds like you are more intelligent than a computer because you got the answer
5:03 right.
5:04 Yes, it was Deep Blue.
5:05 The 1997 match was actually the second one between Kasparov and Deep Blue, a supercomputer
5:10 designed by the company IBM and specialised in chess-playing.
5:14 Well, I think I might challenge Deep Blue to a game, obviously.
5:18 I’m a bit of a genius myself.
5:20 Very good!
5:21 Good to hear!
5:22 Anyway, we’ve just got time to remember some of the words and expressions that we’ve
5:26 used today, Neil.
5:27 They were: you’ll need your wits about you,
5:33 artificial, genius,
5:38 synthetic, threat,
5:41 to surpass, to pursue their goals,
5:48 moral compass. Thank you.