1
00:00:00,845 --> 00:00:04,281
Freeman: We are in the midst
of a revolution so insidious,
2
00:00:04,281 --> 00:00:06,483
we can't even see it.
3
00:00:07,218 --> 00:00:11,256
Robots live and work beside us.
4
00:00:11,256 --> 00:00:15,427
And now we're designing them
to think for themselves.
5
00:00:17,529 --> 00:00:21,499
We're giving them the power
to learn to move on their own.
6
00:00:23,902 --> 00:00:26,170
Will these new life-forms evolve
7
00:00:26,170 --> 00:00:29,107
to be smarter and more capable
than us?
8
00:00:29,107 --> 00:00:32,543
Or will we choose
to merge with the machines?
9
00:00:33,410 --> 00:00:36,649
Are robots the future
of human evolution?
10
00:00:38,049 --> 00:00:41,149
Through The Wormhole
11
00:00:41,418 --> 00:00:46,091
Space, time, life itself.
12
00:00:48,326 --> 00:00:52,764
The secrets of the cosmos
lie through the wormhole.
13
00:00:52,764 --> 00:00:55,266
-- Captions by VITAC --
www.vitac.com
14
00:00:55,366 --> 00:00:57,770
Captions Paid For By
Discovery Communications
15
00:00:57,870 --> 00:01:01,870
Sync & Corrected: iscol.
16
00:01:04,942 --> 00:01:07,378
We humans
like to think of ourselves
17
00:01:07,378 --> 00:01:09,747
as the pinnacle of evolution.
18
00:01:09,747 --> 00:01:14,485
We are the smartest, most
adaptable form of life on Earth.
19
00:01:14,485 --> 00:01:18,789
We have reshaped the world
to suit our needs.
20
00:01:18,790 --> 00:01:22,760
But just as Homo sapiens
replaced Homo erectus,
21
00:01:22,760 --> 00:01:26,364
it's inevitable
something will replace us.
22
00:01:26,364 --> 00:01:30,000
What if we're building
our own successors?
23
00:01:30,000 --> 00:01:34,672
Just as we learn to move, think,
and feel for ourselves,
24
00:01:34,672 --> 00:01:38,575
we're now giving robots
those same powers.
25
00:01:38,575 --> 00:01:40,578
Where will this lead?
26
00:01:40,578 --> 00:01:44,215
Is this what humanity
will become?
27
00:01:48,653 --> 00:01:53,892
When I was a teenager, I built
a bicycle from spare parts.
28
00:01:53,892 --> 00:01:57,393
My bicycle was so well-balanced,
29
00:01:57,394 --> 00:02:00,331
I could jog alongside it
without holding onto it.
30
00:02:02,367 --> 00:02:05,770
Nobody was that impressed,
but it made me think.
31
00:02:05,770 --> 00:02:08,205
Would we one day have machines
32
00:02:08,205 --> 00:02:12,510
that truly could move
on their own?
33
00:02:12,510 --> 00:02:16,147
Would they even need us anymore?
34
00:02:23,006 --> 00:02:26,777
Daniel Wolpert
of the University of Cambridge
35
00:02:26,777 --> 00:02:28,178
believes that if robots
36
00:02:28,178 --> 00:02:30,681
are to be
the future of human evolution,
37
00:02:30,681 --> 00:02:35,752
they're going to have to learn
to move as well as we do...
38
00:02:35,752 --> 00:02:39,589
Because movement
is the supreme achievement
39
00:02:39,589 --> 00:02:41,892
of our powerful intellect.
40
00:02:41,892 --> 00:02:44,194
The most fundamental question
I think we can ever ask is,
41
00:02:44,194 --> 00:02:46,764
why and when have animals
ever evolved a brain?
42
00:02:46,764 --> 00:02:48,766
Now, when I ask my students
this question,
43
00:02:48,766 --> 00:02:49,965
they'll tell me,
44
00:02:49,966 --> 00:02:51,735
we have one to think
or to perceive the world,
45
00:02:51,735 --> 00:02:53,035
and that's completely wrong.
46
00:02:53,036 --> 00:02:55,539
We have a brain for one reason
and one reason only,
47
00:02:55,539 --> 00:02:58,309
and that's to produce
adaptable and complex movement,
48
00:02:58,309 --> 00:03:00,510
because movement
is the only way we have
49
00:03:00,510 --> 00:03:02,411
of affecting
the world around us.
50
00:03:02,412 --> 00:03:04,416
Freeman: All of our brains'
intellectual capacity
51
00:03:04,496 --> 00:03:06,398
grew from
one primal motivation --
52
00:03:06,498 --> 00:03:09,968
to learn how to move better.
53
00:03:09,968 --> 00:03:13,172
It was our ability
to walk on two legs,
54
00:03:13,172 --> 00:03:16,841
to speak and emote
with complex facial movements,
55
00:03:16,842 --> 00:03:19,778
and to manipulate
our dexterous limbs
56
00:03:19,778 --> 00:03:23,413
that put humans
on top of the food chain.
57
00:03:23,414 --> 00:03:24,850
There can be no value
58
00:03:24,850 --> 00:03:27,418
to perception or emotions
or thinking
59
00:03:27,419 --> 00:03:29,120
without the ability to act.
60
00:03:29,120 --> 00:03:30,655
All those other features,
61
00:03:30,655 --> 00:03:34,026
like memory, cognition, love,
fear play into movement,
62
00:03:34,026 --> 00:03:35,961
which is the final output
of the brain.
63
00:03:38,030 --> 00:03:40,332
Freeman: No machine could handle
64
00:03:40,332 --> 00:03:44,537
the huge variety of complex
movements we perform every day.
65
00:03:46,872 --> 00:03:48,473
Just imagine a robot
66
00:03:48,473 --> 00:03:51,977
trying to play one of England's
most famous pastimes.
67
00:03:57,182 --> 00:03:59,184
So, although
that shot looks simple
68
00:03:59,184 --> 00:04:00,752
and it felt effortless to me,
69
00:04:00,752 --> 00:04:02,421
the complexity
of what's going on in my brain
70
00:04:02,421 --> 00:04:03,621
is really quite remarkable.
71
00:04:03,622 --> 00:04:06,024
I have to follow the ball
as the bowler bowls it
72
00:04:06,024 --> 00:04:07,993
and predict
where it's going to bounce
73
00:04:07,993 --> 00:04:09,227
and how it's gonna rise
from the ground.
74
00:04:09,227 --> 00:04:10,529
I then have to make a decision
75
00:04:10,529 --> 00:04:12,397
as to what type of shot
I'm going to make.
76
00:04:12,397 --> 00:04:15,065
And finally, I have to
contract my 600 muscles
77
00:04:15,065 --> 00:04:17,702
in a particular sequence
to execute the shot.
78
00:04:17,702 --> 00:04:19,303
Now, each of those components
79
00:04:19,304 --> 00:04:21,273
has real
mathematical complexity,
80
00:04:21,273 --> 00:04:24,676
which is currently beyond the
ability of any robotic device.
81
00:04:24,676 --> 00:04:27,246
Freeman:
One of the greatest challenges
82
00:04:27,246 --> 00:04:29,046
in getting robots
to move like we do
83
00:04:29,047 --> 00:04:33,185
is teaching them
to deal with uncertainty --
84
00:04:33,185 --> 00:04:36,888
something our brains do
intrinsically.
85
00:04:36,888 --> 00:04:40,025
A ball will never come at you
the same way twice.
86
00:04:42,393 --> 00:04:46,031
You must instantly
adjust your swing each time.
87
00:04:49,034 --> 00:04:51,002
The question is...
88
00:04:51,002 --> 00:04:54,806
How does the human brain deal
with all this uncertainty?
89
00:04:54,806 --> 00:04:59,077
Daniel thinks it uses a theory
of probability estimation
90
00:04:59,077 --> 00:05:00,979
called Bayesian inference
91
00:05:00,979 --> 00:05:02,680
to figure it out.
92
00:05:02,680 --> 00:05:05,818
Wolpert: So, a critical thing
the batsman now has to do
93
00:05:05,818 --> 00:05:08,087
is decide where this ball
is going to bounce,
94
00:05:08,087 --> 00:05:10,122
so that they can prepare
the correct shot,
95
00:05:10,122 --> 00:05:12,291
and for that,
they need bayesian inference.
96
00:05:12,291 --> 00:05:14,359
What Bayesian inference
is all about
97
00:05:14,359 --> 00:05:17,296
is deciding, optimally,
the bounce location of the ball
98
00:05:17,296 --> 00:05:19,798
from two different sources
of information.
99
00:05:19,798 --> 00:05:22,267
Freeman: One source
of information is obvious.
100
00:05:22,267 --> 00:05:26,070
You look at the ball.
101
00:05:26,071 --> 00:05:28,740
Wolpert: So, you can use vision
of the trajectory of the ball
102
00:05:28,740 --> 00:05:30,008
as it comes in
103
00:05:30,008 --> 00:05:32,210
to try and estimate
where it's going to bounce.
104
00:05:32,210 --> 00:05:33,712
But vision is not perfect,
105
00:05:33,712 --> 00:05:36,782
in that we have variability
in our visual processors,
106
00:05:36,782 --> 00:05:39,049
so that leads to the distribution
shown in red here
107
00:05:39,050 --> 00:05:41,120
as the probable
bounce locations.
108
00:05:41,120 --> 00:05:45,290
But Bayes' rule says there's
another source of information.
109
00:05:45,290 --> 00:05:48,493
It's the prior knowledge
about possible bounce locations.
110
00:05:48,493 --> 00:05:50,028
If you're a good batter,
111
00:05:50,028 --> 00:05:52,297
then you can effectively
look at the bowler
112
00:05:52,297 --> 00:05:56,568
and maybe know by his particular
bowling style or small cues --
113
00:05:56,568 --> 00:05:58,469
and that's shown
by the blue shading --
114
00:05:58,470 --> 00:05:59,671
which is a different area.
115
00:05:59,671 --> 00:06:02,040
So, Bayesian inference
is a way of combining
116
00:06:02,040 --> 00:06:04,610
this red distribution
with the blue distribution,
117
00:06:04,610 --> 00:06:07,646
and you do that by multiplying
the numbers together in each
118
00:06:07,646 --> 00:06:09,681
to generate
this yellow distribution,
119
00:06:09,681 --> 00:06:11,483
which is termed the belief.
120
00:06:11,483 --> 00:06:13,051
And using that information,
121
00:06:13,051 --> 00:06:15,287
the batsman can now
prepare his shot.
122
00:06:15,287 --> 00:06:17,789
Okay, I should probably
get out of the way.
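
The combination Wolpert describes is ordinary Bayesian fusion: multiply the visual likelihood (the red distribution) by the prior (the blue one), point by point, and renormalize to get the belief (the yellow one). A minimal Python sketch; the Gaussian parameters are illustrative, not taken from the program:

    import numpy as np

    # Candidate bounce locations along the pitch (metres from the batsman).
    x = np.linspace(0.0, 20.0, 401)

    def gaussian(x, mean, std):
        return np.exp(-0.5 * ((x - mean) / std) ** 2)

    likelihood = gaussian(x, mean=7.0, std=2.0)   # red: vision of this delivery
    prior = gaussian(x, mean=9.0, std=1.0)        # blue: this bowler's habits

    belief = likelihood * prior                   # yellow: the belief
    belief /= belief.sum()                        # renormalize to a distribution

    print("most probable bounce:", x[np.argmax(belief)])
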
123
00:06:20,091 --> 00:06:22,761
Freeman: The batsman's brain,
like all of ours,
124
00:06:22,761 --> 00:06:26,031
is doing this math
automatically.
125
00:06:26,031 --> 00:06:29,467
We are a species that's honed
for movement prediction.
126
00:06:29,467 --> 00:06:31,303
It's what has made us
127
00:06:31,303 --> 00:06:36,274
the planet's best hunters
and toolmakers.
128
00:06:36,274 --> 00:06:39,211
We already have robots that
are faster and more accurate
129
00:06:39,211 --> 00:06:42,147
than we are.
130
00:06:42,147 --> 00:06:45,216
But we have to program
their every move.
131
00:06:45,216 --> 00:06:48,854
For robots to walk down
the evolutionary road
132
00:06:48,854 --> 00:06:50,722
we've already traveled,
133
00:06:50,722 --> 00:06:53,726
they're going to have to learn
to move on their own.
134
00:06:55,160 --> 00:06:57,162
What happens then?
135
00:06:57,162 --> 00:07:01,967
Will they evolve complex brains
like ours?
136
00:07:04,302 --> 00:07:09,575
Robot builders Josh Bongard
of the University of Vermont
137
00:07:09,575 --> 00:07:13,211
and Hod Lipson
of Cornell University
138
00:07:13,211 --> 00:07:15,981
are trying to answer
that question.
139
00:07:15,981 --> 00:07:18,850
Increasingly, we see
that interaction with the world,
140
00:07:18,850 --> 00:07:21,820
with the physical world,
is important for intelligence.
141
00:07:21,820 --> 00:07:24,522
You can't just build a brain
in a jar.
142
00:07:24,522 --> 00:07:29,161
Freeman: Hod and Josh's goal
is to build a machine
143
00:07:29,161 --> 00:07:33,265
that's smart enough to learn how
to move around all by itself.
144
00:07:33,265 --> 00:07:36,801
They've created a menagerie
of strange robotic forms
145
00:07:36,801 --> 00:07:38,904
along the way.
146
00:07:38,904 --> 00:07:41,974
But their work starts
with the computer program
147
00:07:41,974 --> 00:07:45,642
designed to evolve robot bodies.
148
00:07:45,643 --> 00:07:48,013
It simulates various body plans
149
00:07:48,013 --> 00:07:51,117
and then tries various
strategies to get them to move.
150
00:07:51,999 --> 00:07:54,201
Okay, so,
let's walk our way through --
151
00:07:54,201 --> 00:07:57,071
no pun intended -- an actual
evolutionary simulation.
152
00:07:58,738 --> 00:08:00,274
So, in this case,
153
00:08:00,274 --> 00:08:03,610
we've told the computer that we
want a robot that has two legs,
154
00:08:03,610 --> 00:08:05,979
but we want the computer
to figure out
155
00:08:05,979 --> 00:08:07,246
how to get the robot
156
00:08:07,247 --> 00:08:10,783
to orchestrate the movement
of the robot's legs.
157
00:08:10,783 --> 00:08:13,220
And here, we see something
a little bit surprising,
158
00:08:13,220 --> 00:08:17,124
that evolution hasn't discovered
the solution that we use.
159
00:08:17,124 --> 00:08:20,460
Sometimes when we run
this evolutionary process,
160
00:08:20,460 --> 00:08:23,563
it produces something familiar
like walking,
161
00:08:23,563 --> 00:08:26,765
and in other cases, it produces
something that's not familiar,
162
00:08:26,766 --> 00:08:29,469
something we wouldn't have
come up with on our own.
163
00:08:29,469 --> 00:08:32,773
Freeman:
It's survival of the fittest
164
00:08:32,773 --> 00:08:35,142
or perhaps the least awkward.
165
00:08:35,241 --> 00:08:38,111
Just as Mother Nature
selects generations
166
00:08:38,111 --> 00:08:40,947
based on their ability
to survive,
167
00:08:40,947 --> 00:08:43,349
so does the simulation.
168
00:08:43,349 --> 00:08:44,885
The computer deletes the robots
169
00:08:44,885 --> 00:08:46,753
that aren't doing
a very good job,
170
00:08:46,753 --> 00:08:48,655
and the computer
then takes the robots
171
00:08:48,655 --> 00:08:50,490
that are doing
a slightly better job
172
00:08:50,490 --> 00:08:53,026
and makes
modified copies of them
173
00:08:53,026 --> 00:08:55,996
and repeats this process
over and over again.
174
00:08:55,996 --> 00:09:01,034
And after a while, the computer
starts to discover robots
175
00:09:01,034 --> 00:09:03,103
that, in this case,
are able to walk
176
00:09:03,103 --> 00:09:06,540
from the left side of the screen
to the right side of the screen.
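
The loop Josh describes (score every candidate, delete the worst, make modified copies of the best, repeat) is a plain evolutionary algorithm. A minimal sketch, assuming a stand-in fitness function; in the real system each genome is scored by simulating its body in physics:

    import random

    def distance_walked(genome):
        # Stand-in fitness: the real system runs the physics simulation
        # and measures how far the body travelled (hypothetical here).
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=50, genome_len=8, generations=200, noise=0.1):
        pop = [[random.random() for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=distance_walked, reverse=True)
            survivors = pop[:pop_size // 2]        # delete the worst half
            children = [[g + random.gauss(0, noise)
                         for g in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children             # modified copies
        return max(pop, key=distance_walked)

    best = evolve()
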
177
00:09:08,775 --> 00:09:12,579
Freeman:
This is evolution on steroids.
178
00:09:12,579 --> 00:09:16,550
What took mother nature
millions of years
179
00:09:16,550 --> 00:09:20,152
takes the computer
just a few hours.
180
00:09:20,153 --> 00:09:23,757
Overnight, the computer tests
thousands of generations,
181
00:09:23,757 --> 00:09:28,996
and eventually it produces
a robot that meets the goal.
182
00:09:28,996 --> 00:09:31,164
When the simulation
makes something
183
00:09:31,164 --> 00:09:33,367
that looks
particularly interesting,
184
00:09:33,367 --> 00:09:37,069
Hod and Josh take that body plan
and build it.
185
00:09:37,070 --> 00:09:39,105
Now they can test whether
186
00:09:39,105 --> 00:09:42,275
the strategies for moving
learned in simulation
187
00:09:42,275 --> 00:09:44,877
work as well in the real world.
188
00:09:44,878 --> 00:09:48,381
So, this robot
is called the QuadraTot,
189
00:09:48,381 --> 00:09:51,651
and it's basically a robot
that learns how to walk
190
00:09:51,651 --> 00:09:53,453
using evolutionary
robotic techniques.
191
00:09:53,854 --> 00:09:55,789
And so, what we can see here
192
00:09:55,789 --> 00:09:59,192
is a particular example
of how this robot learns.
193
00:09:59,192 --> 00:10:02,263
This is one of
the earliest gaits that it did,
194
00:10:02,263 --> 00:10:05,965
and we can see that it's not
moving very far or very fast.
195
00:10:05,965 --> 00:10:08,202
It's kind of like a child
196
00:10:08,202 --> 00:10:11,203
during its very early behaviors
of crawling.
197
00:10:11,204 --> 00:10:13,140
It's trying out
different things.
198
00:10:13,140 --> 00:10:15,943
Some things work better.
Some things work less well.
199
00:10:15,943 --> 00:10:18,877
And it's taking the experiences
and learning from them
200
00:10:18,878 --> 00:10:20,848
and gradually improving
its gait.
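
The trial-and-error gait improvement described here can be sketched as a simple hill climber: perturb the current gait, and keep the perturbation only if the robot travels farther. The scoring function below is a placeholder for a real trial on the hardware:

    import random

    def measure_distance(gait):
        # Placeholder for a real trial: run the gait on the robot and
        # measure how far it walked (hypothetical scoring function).
        return -sum((g - 1.0) ** 2 for g in gait)

    gait = [random.random() for _ in range(6)]  # e.g. joint amplitudes, phases
    best = measure_distance(gait)
    for trial in range(300):
        candidate = [g + random.gauss(0, 0.05) for g in gait]
        score = measure_distance(candidate)
        if score > best:                        # keep what works better
            gait, best = candidate, score
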
201
00:10:20,848 --> 00:10:24,284
Freeman: There are many robots
that can move well
202
00:10:24,284 --> 00:10:28,487
while executing
a specific predesigned task.
203
00:10:28,488 --> 00:10:32,125
But Hod and Josh's robots
204
00:10:32,125 --> 00:10:34,961
must start to learn
by themselves from scratch
205
00:10:34,961 --> 00:10:37,698
in an unknown environment.
206
00:10:37,698 --> 00:10:39,966
Each robot can sense its own progress,
207
00:10:39,966 --> 00:10:42,369
and like a baby
learning to crawl,
208
00:10:42,369 --> 00:10:44,271
it becomes more aware
of its body
209
00:10:44,471 --> 00:10:45,807
with every step
and every tumble.
210
00:10:46,000 --> 00:10:48,970
Hod and Josh believe
this self-awareness
211
00:10:48,970 --> 00:10:54,475
gradually builds into
a basic form of consciousness.
212
00:10:54,476 --> 00:10:56,577
Often,
we've raised the question,
213
00:10:56,577 --> 00:10:58,880
is something conscious,
or is it not?
214
00:10:58,880 --> 00:11:00,548
But it's really not
a black-and-white thing.
215
00:11:00,548 --> 00:11:01,783
It's more about
216
00:11:01,783 --> 00:11:05,419
to what degree an entity's able
to conceive of itself,
217
00:11:05,420 --> 00:11:08,090
to simulate itself,
to think about itself,
218
00:11:08,090 --> 00:11:09,824
to be self-aware.
219
00:11:09,824 --> 00:11:13,761
As robots learn to move
in more complex ways,
220
00:11:13,761 --> 00:11:17,198
it's possible they will develop
levels of consciousness
221
00:11:17,198 --> 00:11:20,469
equal to ours and maybe beyond.
222
00:11:21,836 --> 00:11:23,904
But according to one scientist,
223
00:11:23,905 --> 00:11:26,875
for a robot
to become truly conscious...
224
00:11:28,310 --> 00:11:31,213
...it must develop feelings.
225
00:11:33,000 --> 00:11:35,337
What is consciousness?
226
00:11:35,837 --> 00:11:39,874
The answer depends
on who you talk to.
227
00:11:39,874 --> 00:11:43,778
A doctor's definition would be
different from a priest's.
228
00:11:43,778 --> 00:11:47,114
But we all agree that
our high-level consciousness
229
00:11:47,114 --> 00:11:49,951
is what separates us
from other organisms
230
00:11:49,951 --> 00:11:52,987
and, of course, from robots.
231
00:11:52,987 --> 00:11:57,525
What would it take
for robots to become conscious?
232
00:11:57,525 --> 00:12:01,863
Can they get there
on logic alone?
233
00:12:01,863 --> 00:12:05,567
Or must they also learn to feel?
234
00:12:07,735 --> 00:12:12,306
Professor Pentti Haikonen
from the University of Illinois
235
00:12:12,306 --> 00:12:15,810
believes machines
will only become conscious
236
00:12:15,810 --> 00:12:18,679
when they can
experience emotions.
237
00:12:18,679 --> 00:12:22,249
It's a belief he has held
since he was very young
238
00:12:22,249 --> 00:12:26,287
when he first contemplated
what it meant to be conscious.
239
00:12:26,287 --> 00:12:31,492
When I was 4 or 5 years old,
I was standing in our kitchen,
240
00:12:31,492 --> 00:12:36,497
and suddenly I was struck
by the mystery of existence...
241
00:12:37,899 --> 00:12:41,402
...how and why I was me
242
00:12:41,402 --> 00:12:45,940
and why I was not my sister
or my brother.
243
00:12:45,940 --> 00:12:49,844
How did I get inside myself?
244
00:12:51,078 --> 00:12:53,113
Freeman:
As he got older,
245
00:12:53,114 --> 00:12:56,250
Pentti realized that
what made him feel conscious
246
00:12:56,250 --> 00:12:58,285
of being inside his head
247
00:12:58,285 --> 00:13:01,689
were his emotional reactions
to the people and objects
248
00:13:01,689 --> 00:13:03,757
in the world around him.
249
00:13:05,593 --> 00:13:09,097
The neural processes that
are behind our consciousness
250
00:13:09,097 --> 00:13:11,999
take place inside our brain,
251
00:13:11,999 --> 00:13:14,302
but we don't see things
that way.
252
00:13:14,302 --> 00:13:16,905
For instance,
when you cut your finger,
253
00:13:16,905 --> 00:13:20,775
the pain is in the finger
or so it appears,
254
00:13:20,775 --> 00:13:24,045
but, actually,
the pain is in here.
255
00:13:24,045 --> 00:13:27,247
To feel is to be conscious.
256
00:13:29,817 --> 00:13:33,854
Freeman:
Our brain's raw experience
of the world around us
257
00:13:33,854 --> 00:13:36,557
is just a series
of electrical impulses
258
00:13:36,557 --> 00:13:39,560
generated by our senses.
259
00:13:39,560 --> 00:13:44,332
However, we translate these
impulses into mental images
260
00:13:44,332 --> 00:13:48,203
by making emotional associations
with them.
261
00:13:50,004 --> 00:13:52,573
A sound is pleasing.
262
00:13:52,573 --> 00:13:54,909
A view is peaceful.
263
00:13:54,909 --> 00:13:57,944
Consciousness,
according to Pentti,
264
00:13:57,945 --> 00:14:02,616
is just a rich mosaic of
emotionally laden mental images.
265
00:14:02,616 --> 00:14:06,887
He believes that to have
a truly conscious machine,
266
00:14:06,887 --> 00:14:09,257
you must give it the power
267
00:14:09,257 --> 00:14:13,026
to associate sensory data
with emotions.
268
00:14:13,027 --> 00:14:17,598
And in this robot,
he has begun that process.
269
00:14:17,598 --> 00:14:19,968
This is the first robot
270
00:14:19,968 --> 00:14:23,503
that utilizes
associative neural networks.
271
00:14:23,504 --> 00:14:26,774
It is the same kind of learning
that we humans use.
272
00:14:26,774 --> 00:14:28,743
When we see and hear something,
273
00:14:28,743 --> 00:14:31,545
we make a connection
between those things,
274
00:14:31,545 --> 00:14:35,382
and later on when we see or hear
the other thing,
275
00:14:35,382 --> 00:14:38,252
the other thing
comes to our mind.
276
00:14:38,252 --> 00:14:42,622
Freeman: The XCR-1
experiences the world
277
00:14:42,623 --> 00:14:45,459
directly through its senses
like we do.
278
00:14:45,459 --> 00:14:50,330
On board are the basics
of touch, sight, and sound.
279
00:14:51,799 --> 00:14:56,103
Pentti has begun the process of
giving it emotional associations
280
00:14:56,103 --> 00:14:58,604
to specific sensory data,
281
00:14:58,605 --> 00:15:00,875
like the color green.
282
00:15:00,875 --> 00:15:04,245
Pentti places a green object
in front of the robot,
283
00:15:04,245 --> 00:15:05,646
which it recognizes.
284
00:15:05,646 --> 00:15:07,281
Green.
285
00:15:07,281 --> 00:15:10,550
Then he gives green
a bad association --
286
00:15:10,551 --> 00:15:13,587
a smack on the backside.
287
00:15:13,587 --> 00:15:18,891
This associative learning
is similar to how little children learn.
288
00:15:18,892 --> 00:15:20,061
Hurt.
289
00:15:20,061 --> 00:15:24,698
And you say that this is
not good or this is good,
290
00:15:24,698 --> 00:15:28,968
or you may also
smack the little child.
291
00:15:28,969 --> 00:15:30,571
Hurt.
292
00:15:30,571 --> 00:15:32,073
I don't recommend that.
293
00:15:33,307 --> 00:15:35,909
Green bad.
294
00:15:35,909 --> 00:15:38,313
Freeman:
The robot's mental image
of the green object
295
00:15:38,313 --> 00:15:42,249
is now associated
with the emotion bad.
296
00:15:42,249 --> 00:15:46,955
And from now on,
it will avoid the green bottle.
297
00:15:46,955 --> 00:15:50,324
But it's not all pain
for the XCR-1.
298
00:15:50,324 --> 00:15:52,293
Just like we teach the robot
299
00:15:52,293 --> 00:15:55,228
to associate pain
with the green object,
300
00:15:55,228 --> 00:15:57,131
we can teach the robot
301
00:15:57,131 --> 00:16:00,166
to associate, also,
pleasure with objects,
302
00:16:00,167 --> 00:16:04,505
in this case
with the blue object, like this.
303
00:16:04,505 --> 00:16:05,506
Blue.
304
00:16:05,506 --> 00:16:07,540
Freeman:
To give blue a good association,
305
00:16:07,541 --> 00:16:11,545
Pentti gently strokes
the top of the robot.
306
00:16:11,545 --> 00:16:14,181
Blue good.
307
00:16:15,000 --> 00:16:18,237
This simple experiment
demonstrates
308
00:16:18,237 --> 00:16:24,376
that this robot
has mental images of objects
309
00:16:24,376 --> 00:16:26,679
and mental content.
310
00:16:26,679 --> 00:16:29,181
Freeman: It's still early
in its development,
311
00:16:29,181 --> 00:16:33,384
but the XCR-1 has learned
the basics of emotional reaction
312
00:16:33,385 --> 00:16:35,988
from fear...
313
00:16:35,988 --> 00:16:38,324
Green bad.
314
00:16:44,696 --> 00:16:47,400
Green bad.
315
00:16:50,869 --> 00:16:52,204
...to desire.
316
00:16:52,204 --> 00:16:55,039
Blue good.
317
00:16:55,040 --> 00:16:59,645
♪ Now's my time for love ♪
318
00:16:59,644 --> 00:17:02,513
♪ lonely moments seem to... ♪
319
00:17:02,514 --> 00:17:05,851
As a more advanced version
of the XCR-1
320
00:17:05,851 --> 00:17:07,820
fills its memory
with mental images...
321
00:17:07,820 --> 00:17:09,321
Dentist bad.
322
00:17:09,321 --> 00:17:11,090
...it will start to be able
323
00:17:11,090 --> 00:17:13,625
to react to new situations
on its own
324
00:17:13,624 --> 00:17:16,695
and eventually
experience the world
325
00:17:16,696 --> 00:17:20,065
much like
any emotionally driven being.
326
00:17:20,065 --> 00:17:22,034
It is my great dream
327
00:17:22,034 --> 00:17:25,871
to build a robot
that is one day able to ask,
328
00:17:25,871 --> 00:17:29,975
"How did I get inside myself?"
329
00:17:29,975 --> 00:17:33,111
Freeman:
Once robots reach this point,
330
00:17:33,111 --> 00:17:35,981
what's to stop them
from moving on
331
00:17:35,981 --> 00:17:39,417
and becoming conscious
of things we're not?
332
00:17:39,418 --> 00:17:43,755
This man thinks robots will
become the future of humanity
333
00:17:43,755 --> 00:17:47,292
because they'll have something
we lack.
334
00:17:47,292 --> 00:17:51,630
Their brains will have
the capacity for genius
335
00:17:51,630 --> 00:17:56,235
long after the last human
ever says "Eureka."
336
00:17:57,000 --> 00:18:00,936
For Archimedes,
Eureka happened in the bathtub.
337
00:18:00,937 --> 00:18:05,942
Einstein was riding a streetcar
when relativity dawned on him.
338
00:18:05,942 --> 00:18:09,980
These brilliant minds
had a flash of inspiration
339
00:18:09,980 --> 00:18:12,949
and drove all of humanity
forward.
340
00:18:12,949 --> 00:18:16,186
But the scientific questions
of today,
341
00:18:16,186 --> 00:18:19,622
probing shoals
of subatomic particles
342
00:18:19,622 --> 00:18:22,424
and our vast genetic code,
343
00:18:22,424 --> 00:18:24,094
have become so complex
344
00:18:24,094 --> 00:18:27,631
that they take teams
of thousands of researchers
345
00:18:27,631 --> 00:18:29,232
to solve.
346
00:18:29,232 --> 00:18:34,403
Is the age of the
single scientific genius over?
347
00:18:34,403 --> 00:18:37,674
Not if machines have their way.
348
00:18:41,911 --> 00:18:44,880
Data scientist Michael Schmidt
349
00:18:44,880 --> 00:18:48,317
sees the world
filled with intricate beauty --
350
00:18:48,317 --> 00:18:50,886
the flowering of a rose,
351
00:18:50,887 --> 00:18:53,389
the veins branching on a leaf,
352
00:18:53,389 --> 00:18:56,960
the flight of a bumblebee.
353
00:18:56,960 --> 00:18:59,796
But below the surface
of nature's wonders,
354
00:18:59,796 --> 00:19:02,165
Michael also sees
a treasure trove
355
00:19:02,165 --> 00:19:05,835
of uncharted
mathematical complexity.
356
00:19:05,835 --> 00:19:09,305
Schmidt:
Well, I love coming out here.
Nature is beautiful.
357
00:19:09,305 --> 00:19:13,610
There are equations hidden
in every plant and every bee
358
00:19:13,610 --> 00:19:17,080
and the ecosystems
involved in this garden.
359
00:19:17,080 --> 00:19:19,014
And part of science
is figuring out
360
00:19:19,014 --> 00:19:20,950
what causes those things
to happen.
361
00:19:20,950 --> 00:19:26,356
Freeman: Science is our effort
to make sense of nature,
362
00:19:26,356 --> 00:19:31,927
and this quest has given us
some very famous discoveries.
363
00:19:31,928 --> 00:19:33,296
In Newton's time,
364
00:19:33,296 --> 00:19:36,232
he was able to figure out a very
important rule in physics,
365
00:19:36,232 --> 00:19:37,800
which is the law of gravity.
366
00:19:37,801 --> 00:19:39,569
It predicts how this apple falls
367
00:19:39,569 --> 00:19:41,905
and the forces
that act upon this apple.
368
00:19:41,905 --> 00:19:44,773
Today in science, we're
interested in similar problems
369
00:19:44,774 --> 00:19:46,743
but not just about
how the apple falls
370
00:19:46,743 --> 00:19:48,878
but the massive complexity
that follows
371
00:19:48,878 --> 00:19:51,314
from this very simple dynamic
to the world around us.
372
00:19:51,314 --> 00:19:54,283
For example,
when I drop this apple,
373
00:19:54,283 --> 00:19:56,152
the apple stirs up dust.
374
00:19:56,152 --> 00:19:58,054
This dust could hit a flower,
375
00:19:58,054 --> 00:20:01,691
and a bee may be less likely
to pollinate that flower.
376
00:20:01,691 --> 00:20:04,160
And the entire ecosystem
in this garden
377
00:20:04,160 --> 00:20:07,130
could change dramatically
from that single event.
378
00:20:07,130 --> 00:20:10,767
Freeman: Scientists understand
the basic forces of nature,
379
00:20:10,767 --> 00:20:12,669
but making precise predictions
380
00:20:12,669 --> 00:20:15,338
about what will happen
in the real world
381
00:20:15,338 --> 00:20:18,040
with its staggering complexity
382
00:20:18,040 --> 00:20:21,376
is overwhelming
to the human mind.
383
00:20:21,377 --> 00:20:24,213
So, one of the reasons why
it's extremely difficult
384
00:20:24,213 --> 00:20:26,516
for humans to understand
and figure out
385
00:20:26,516 --> 00:20:28,651
the equations
and the laws of nature
386
00:20:28,651 --> 00:20:31,288
is literally the number
of variables that are at play.
387
00:20:31,288 --> 00:20:33,189
There could be
thousands of variables
388
00:20:33,189 --> 00:20:34,723
that influence a system
389
00:20:34,723 --> 00:20:36,926
that we're only just beginning
to tease apart.
390
00:20:36,926 --> 00:20:39,095
In fact, there are
so many of these equations,
391
00:20:39,095 --> 00:20:41,097
we'll never be able
to finish analyzing them
392
00:20:41,097 --> 00:20:42,598
if we do it by hand.
393
00:20:44,934 --> 00:20:47,336
Freeman: In 2006,
394
00:20:47,336 --> 00:20:50,173
Michael began developing
intelligent computer software
395
00:20:50,173 --> 00:20:52,508
that could observe
complex natural systems
396
00:20:52,508 --> 00:20:56,846
and derive meaning
from what seems like chaos.
397
00:20:56,846 --> 00:21:00,849
So, what I have here
is a double pendulum.
398
00:21:00,850 --> 00:21:03,352
If you look at it,
it consists of two arms.
399
00:21:03,352 --> 00:21:06,055
One arm
swings along the top axis,
400
00:21:06,055 --> 00:21:08,257
and the second arm is attached
to the bottom of the first arm,
401
00:21:08,257 --> 00:21:10,860
and it's two pendulums
that are hooked together,
402
00:21:10,860 --> 00:21:12,962
one pendulum
at the end of the other.
403
00:21:12,962 --> 00:21:15,932
Now, the pendulum
is a great example of complexity
404
00:21:15,932 --> 00:21:19,035
because it exhibits some of
the most complex behavior
405
00:21:19,035 --> 00:21:21,637
that we're aware of,
which is called chaos.
406
00:21:21,637 --> 00:21:24,140
So, when you collect data
from this sort of device,
407
00:21:24,140 --> 00:21:25,908
it looks almost
completely random,
408
00:21:25,908 --> 00:21:28,311
and there doesn't appear
to be any sort of pattern.
409
00:21:28,311 --> 00:21:30,312
But because this is
a physical deterministic system,
410
00:21:30,312 --> 00:21:31,781
a pattern does exist.
411
00:21:31,781 --> 00:21:36,052
Freeman:
Finding a pattern amidst
the chaos of the double pendulum
412
00:21:36,052 --> 00:21:39,089
has stumped scientists
for decades.
413
00:21:45,961 --> 00:21:50,700
But then Michael
had a flash of inspiration.
414
00:21:50,700 --> 00:21:55,004
Why not grow new ideas
the same way nature created us,
415
00:21:55,004 --> 00:21:57,840
using evolution?
416
00:21:57,840 --> 00:22:01,677
He called his program Eureka.
417
00:22:01,677 --> 00:22:06,115
Eureka starts with a primordial
soup of random equations
418
00:22:06,115 --> 00:22:08,518
and checks how closely they fit
419
00:22:08,518 --> 00:22:11,487
the behavior
of the double pendulum.
420
00:22:11,487 --> 00:22:15,191
If they don't fit,
the computer kills them.
421
00:22:15,191 --> 00:22:20,194
If they do, the computer moves
them into the next generation,
422
00:22:20,195 --> 00:22:23,566
where they mutate and try
to get an even closer fit.
423
00:22:23,566 --> 00:22:26,869
Eventually,
a winning equation emerges,
424
00:22:26,869 --> 00:22:29,739
one that Archimedes
would be proud of.
425
00:22:29,739 --> 00:22:31,508
Eureka!
426
00:22:33,676 --> 00:22:35,712
Schmidt: And I'm running
our algorithm now.
427
00:22:35,712 --> 00:22:37,879
On the left pane
are the lists of the equations
428
00:22:37,880 --> 00:22:41,350
that Eureka has thought up
for this double pendulum.
429
00:22:41,350 --> 00:22:44,320
Walking up, we can see
we increase the complexity,
430
00:22:44,320 --> 00:22:47,490
and we're also increasing
the agreement with the data.
431
00:22:47,490 --> 00:22:48,958
And eventually, as you go up,
432
00:22:48,958 --> 00:22:51,795
you start to get an extremely
close agreement with the data,
433
00:22:51,795 --> 00:22:53,896
and eventually
you snap on to a truth
434
00:22:53,896 --> 00:22:57,667
where you get a large
improvement in the accuracy.
435
00:22:57,667 --> 00:23:01,403
And we can actually look in here
and see exactly what pops out.
436
00:23:01,403 --> 00:23:03,640
For example here,
you might notice we have a 9.8,
437
00:23:03,640 --> 00:23:05,541
and if you remember
from physics courses,
438
00:23:05,541 --> 00:23:08,511
that is the acceleration
due to gravity on Earth.
439
00:23:08,511 --> 00:23:10,445
What's very important
is the difference
440
00:23:10,446 --> 00:23:12,648
between the two angles
of the double pendulum.
441
00:23:12,648 --> 00:23:14,083
This pops out.
442
00:23:14,083 --> 00:23:16,252
Essentially,
we've used this software
443
00:23:16,252 --> 00:23:18,721
and the data we've collected
to model chaos,
444
00:23:18,721 --> 00:23:22,157
and we've teased out the
solution directly from the data.
445
00:23:22,158 --> 00:23:24,293
Freeman:
Eureka has not only discovered
446
00:23:24,293 --> 00:23:28,431
a single equation to explain
how a double pendulum moves.
447
00:23:28,431 --> 00:23:32,068
It has found meaning
in what looks like chaos --
448
00:23:32,068 --> 00:23:36,571
something no human or machine
has done before.
449
00:23:36,572 --> 00:23:39,876
Schmidt: So, we could collect
an entirely new data set,
450
00:23:39,876 --> 00:23:41,309
run this process again,
451
00:23:41,310 --> 00:23:43,213
and even though the data
is completely different --
452
00:23:43,213 --> 00:23:45,213
we could have
different observations --
453
00:23:45,214 --> 00:23:48,084
we can still identify
the underlying truth,
454
00:23:48,084 --> 00:23:51,221
the underlying pattern,
which is this equation.
455
00:23:53,956 --> 00:23:57,426
Freeman: To Michael, the future
of scientific exploration
456
00:23:57,426 --> 00:23:59,761
isn't inside our heads.
457
00:23:59,761 --> 00:24:02,030
It's inside machines.
458
00:24:02,031 --> 00:24:04,500
Whether they're looking
at patterns of data
459
00:24:04,500 --> 00:24:08,337
from genetics, particle physics,
or meteorology,
460
00:24:08,337 --> 00:24:13,443
programs like Eureka can evolve
inspiration on demand,
461
00:24:13,443 --> 00:24:17,445
finding basic truths
about nature
462
00:24:17,446 --> 00:24:19,815
that no human ever could.
463
00:24:19,815 --> 00:24:21,351
We're gonna reach a point
464
00:24:21,351 --> 00:24:25,054
where we decide
what we want to discover
465
00:24:25,054 --> 00:24:27,790
and we let the machines
figure this out for us.
466
00:24:27,790 --> 00:24:30,626
Eureka can find
these relationships
467
00:24:30,626 --> 00:24:34,396
without human bias
and without human limitations.
468
00:24:35,931 --> 00:24:39,134
We created robots to serve us.
469
00:24:39,134 --> 00:24:44,641
As the machines learn their own
ways to move, feel, and think,
470
00:24:44,641 --> 00:24:47,777
they will eventually
grow out of that role.
471
00:24:47,777 --> 00:24:51,213
What if they start
working together?
472
00:24:51,213 --> 00:24:54,549
Could they build
their own society,
473
00:24:54,550 --> 00:24:59,689
one made by the robots
for the robots?
474
00:25:02,000 --> 00:25:07,438
There's no species on Earth
more successful than us.
475
00:25:07,438 --> 00:25:09,173
We owe that success
476
00:25:09,173 --> 00:25:12,911
to the powerful computer
inside our heads.
477
00:25:12,911 --> 00:25:16,948
But it takes more than one brain
to conquer a planet.
478
00:25:16,948 --> 00:25:20,317
Homo sapiens thrive
because we have learned
479
00:25:20,318 --> 00:25:25,290
to make those computers
work together as a society.
480
00:25:25,290 --> 00:25:31,596
What will happen when robots
put their heads together?
481
00:25:36,567 --> 00:25:40,070
Roboticist by day
and gourmet chef by night,
482
00:25:40,071 --> 00:25:42,674
Professor Dennis Hong
of Virginia Tech
483
00:25:42,674 --> 00:25:45,810
is a specialist
in building cooperative robots.
484
00:25:45,810 --> 00:25:49,815
But he also sees cooperation
outside the lab.
485
00:25:49,815 --> 00:25:51,849
So,
we don't really think about it,
486
00:25:51,849 --> 00:25:54,786
but everything in our daily
lives involves cooperation.
487
00:25:54,786 --> 00:25:58,089
For example, cooking oftentimes
is thought of as a solo act,
488
00:25:58,089 --> 00:26:00,124
but if you think about it,
a lot of people are involved
489
00:26:00,124 --> 00:26:02,626
and a lot of careful
coordination is required
490
00:26:02,627 --> 00:26:04,695
to make it happen.
491
00:26:04,695 --> 00:26:06,631
Oh, thank you, CHARLI.
492
00:26:08,032 --> 00:26:09,834
Take this tomato as an example.
493
00:26:09,834 --> 00:26:12,837
This tomato most likely
started its life as a seed,
494
00:26:12,837 --> 00:26:14,506
where a group of breeders
495
00:26:14,506 --> 00:26:17,041
need to choose
the right sequence of genes
496
00:26:17,041 --> 00:26:19,143
for a plump, juicy,
tasty tomato.
497
00:26:19,143 --> 00:26:24,015
The seeds needed to be planted,
grown, harvested,
498
00:26:24,015 --> 00:26:26,685
then the tomatoes
needed to get to the market.
499
00:26:26,685 --> 00:26:32,123
Freeman: Food production is
a complex web of coordination.
500
00:26:32,123 --> 00:26:34,357
But as good as it is,
501
00:26:34,358 --> 00:26:37,963
human cooperation
has its limits.
502
00:26:37,963 --> 00:26:38,930
Hong: Oops.
503
00:26:43,735 --> 00:26:45,537
Freeman:
Every day, like most of us,
504
00:26:45,537 --> 00:26:48,373
Dennis has to contend
with the prime example
505
00:26:48,373 --> 00:26:51,241
of human cooperation
gone wrong --
506
00:26:51,242 --> 00:26:52,243
traffic.
507
00:26:52,243 --> 00:26:54,111
The problem is, us being human,
508
00:26:54,111 --> 00:26:56,948
we all want to
get to our destination
509
00:26:56,948 --> 00:26:59,918
as quickly as possible,
thus we have traffic jams.
510
00:26:59,918 --> 00:27:02,153
Freeman:
If it wasn't for traffic lights,
511
00:27:02,153 --> 00:27:04,422
which are, in reality,
very simple robots,
512
00:27:04,422 --> 00:27:08,692
it would be almost impossible
to get anywhere.
513
00:27:08,693 --> 00:27:10,695
These traffic lights,
they talk to each other.
514
00:27:10,695 --> 00:27:12,564
They communicate
with other traffic lights
515
00:27:12,564 --> 00:27:13,596
at other intersections.
516
00:27:13,597 --> 00:27:14,799
And they have cameras,
517
00:27:14,799 --> 00:27:17,101
so they actually see
the traffic patterns
518
00:27:17,101 --> 00:27:19,938
and make decisions for us,
for humans.
519
00:27:19,938 --> 00:27:21,539
Oh, there you go.
520
00:27:21,539 --> 00:27:23,742
Thank you, traffic light.
521
00:27:23,742 --> 00:27:26,711
Freeman:
Traffic is a nuisance.
522
00:27:26,711 --> 00:27:29,047
But other failures
of human cooperation
523
00:27:29,047 --> 00:27:31,249
are much more serious...
524
00:27:31,249 --> 00:27:32,916
And often deadly.
525
00:27:32,917 --> 00:27:34,886
[ Machine-gun fire ]
526
00:27:34,886 --> 00:27:37,788
Dennis believes
a society of robots
527
00:27:37,788 --> 00:27:40,759
can be much better collaborators
than we are.
528
00:27:40,759 --> 00:27:43,862
So, in collaboration
with Daniel Lee
529
00:27:43,862 --> 00:27:45,930
at the university
of Pennsylvania,
530
00:27:45,930 --> 00:27:50,368
he designed a group of robots
to compete in the robocup,
531
00:27:50,368 --> 00:27:53,337
an international
robotic soccer championship.
532
00:27:53,337 --> 00:27:56,608
RoboCup is an autonomous robot
soccer competition,
533
00:27:56,608 --> 00:27:59,477
which means that
you have a team of robots,
534
00:27:59,477 --> 00:28:02,247
you press "Start," And then
nobody touches anything.
535
00:28:02,247 --> 00:28:05,083
And the robots need to look
around, see where the ball is,
536
00:28:05,083 --> 00:28:07,752
need to coordinate and actually
play a game of soccer.
537
00:28:07,752 --> 00:28:12,389
Freeman: Dennis' soccer robots,
called DARwIn-OP,
538
00:28:12,389 --> 00:28:14,459
are fully autonomous.
539
00:28:14,459 --> 00:28:16,861
They use complex sensors
and software
540
00:28:16,861 --> 00:28:19,063
to navigate the playing field.
541
00:28:19,063 --> 00:28:21,266
And they have
a serious competitive edge
542
00:28:21,266 --> 00:28:22,834
over their human counterparts.
543
00:28:22,834 --> 00:28:27,104
Teammates can read
each other's minds.
544
00:28:27,104 --> 00:28:29,074
So, if you look at
human soccer players,
545
00:28:29,074 --> 00:28:31,008
obviously they're great
at what they do.
546
00:28:31,008 --> 00:28:33,144
They communicate
sometimes by shouting,
547
00:28:33,144 --> 00:28:34,679
sometimes by a subtle gesture,
548
00:28:34,679 --> 00:28:36,714
but, again,
it's not really accurate,
549
00:28:36,714 --> 00:28:39,350
and they cannot share
all the information together
550
00:28:39,350 --> 00:28:40,918
at the same time in real time,
551
00:28:40,918 --> 00:28:43,555
but robots can do that.
552
00:28:45,223 --> 00:28:49,227
Freeman: Each robot knows the
exact location and destination
553
00:28:49,227 --> 00:28:52,130
of the other robots
at all times.
554
00:28:52,130 --> 00:28:54,298
They can adjust their strategy
555
00:28:54,298 --> 00:28:56,433
and even their roles
as necessary.
556
00:28:56,434 --> 00:29:00,004
Hong:
Depending on where the ball is,
where the opponents are,
557
00:29:00,004 --> 00:29:02,107
they dynamically
switch their roles.
558
00:29:02,107 --> 00:29:03,708
So the goalie becomes a striker,
559
00:29:03,708 --> 00:29:05,677
a striker becomes a goalie
or defense.
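
Because every robot knows every teammate's pose in real time, role switching can be a pure function of the shared world state. A simplified sketch of the idea, not the teams' actual code:

    import math

    def assign_roles(robots, ball):
        # Re-derive roles from the shared world state every control cycle.
        def dist_to_ball(r):
            return math.hypot(r['x'] - ball[0], r['y'] - ball[1])
        ordered = sorted(robots, key=dist_to_ball)
        ordered[0]['role'] = 'striker'      # closest robot attacks the ball
        ordered[-1]['role'] = 'goalie'      # farthest robot drops back
        for r in ordered[1:-1]:
            r['role'] = 'defender'
        return robots

    team = [{'x': 0.0, 'y': 0.0}, {'x': 3.0, 'y': 1.0}, {'x': 5.0, 'y': 4.0}]
    print(assign_roles(team, ball=(4.0, 4.0)))
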
560
00:29:05,677 --> 00:29:10,682
Freeman:
They may not be as agile as Pelé
or bend it like Beckham,
561
00:29:10,682 --> 00:29:13,683
but they are able
to dribble past their opponents,
562
00:29:13,684 --> 00:29:18,289
pass the ball,
score a goal...
563
00:29:18,289 --> 00:29:20,458
And even celebrate.
564
00:29:23,528 --> 00:29:25,563
Dennis believes RoboCup
565
00:29:25,563 --> 00:29:28,833
is just the beginning
of robot societies.
566
00:29:28,833 --> 00:29:33,204
Dennis imagines a connected
community of thinking machines
567
00:29:33,204 --> 00:29:35,573
that would be
far more sophisticated
568
00:29:35,573 --> 00:29:37,575
than human communities.
569
00:29:37,575 --> 00:29:40,577
He calls it cloud robotics.
570
00:29:40,578 --> 00:29:43,347
Hong: Cloud robotics is a
shared network of intelligence.
571
00:29:43,347 --> 00:29:46,150
It's similar to what we call
common sense in humans.
572
00:29:46,150 --> 00:29:48,519
So,
just like those smaller robots
573
00:29:48,519 --> 00:29:50,387
that play soccer for RoboCup,
574
00:29:50,388 --> 00:29:53,424
they share common data,
team data, to achieve the goal,
575
00:29:53,424 --> 00:29:55,593
in this case,
winning the soccer game.
576
00:29:55,593 --> 00:29:56,760
For cloud robotics,
577
00:29:56,761 --> 00:29:59,030
robots from the furthest corners
of the world,
578
00:29:59,030 --> 00:30:00,998
they can all connect
to the cloud
579
00:30:00,998 --> 00:30:03,968
and share information and
intelligence to do their job.
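
Mechanically, cloud robotics boils down to a shared store of skills and observations: learn once, reuse everywhere. A toy sketch; the class and skill names are invented for illustration:

    class Cloud:
        """A shared network of intelligence for connected robots."""
        def __init__(self):
            self.skills = {}

        def upload(self, name, skill):
            self.skills[name] = skill      # one robot contributes a skill

        def download(self, name):
            return self.skills.get(name)   # any robot can fetch it

    cloud = Cloud()
    cloud.upload('open_door', lambda: 'push handle down, then pull')
    # A robot on the other side of the world, moments later:
    skill = cloud.download('open_door')
    print(skill())
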
580
00:30:03,968 --> 00:30:08,339
Freeman: Humans spend a lifetime
mastering knowledge,
581
00:30:08,339 --> 00:30:12,710
but future robots could learn it
all in microseconds.
582
00:30:12,710 --> 00:30:15,947
They could create their own
hyper-connected network
583
00:30:15,947 --> 00:30:18,382
using the same spirit
of cooperation
584
00:30:18,382 --> 00:30:21,218
that built human society
585
00:30:21,218 --> 00:30:25,890
without the selfishness
and greed that hold us back.
586
00:30:25,890 --> 00:30:29,560
Robots operate by a very
well-defined set of rules.
587
00:30:29,560 --> 00:30:32,296
The human impetus to break them
is just not there.
588
00:30:32,296 --> 00:30:35,866
Freeman: Robots already know
how to talk to one another.
589
00:30:35,867 --> 00:30:39,404
But now a scientist in Berlin
590
00:30:39,404 --> 00:30:43,240
has taken robotic communication
a step further.
591
00:30:44,309 --> 00:30:48,047
His machines are speaking a
language he doesn't understand.
592
00:30:50,000 --> 00:30:53,138
Motakay tokima.
593
00:30:54,472 --> 00:30:57,808
Did you understand
what I just said?
594
00:30:57,808 --> 00:30:59,644
Of course you didn't
595
00:30:59,644 --> 00:31:03,581
because I wasn't speaking
any known human language.
596
00:31:03,581 --> 00:31:05,882
But it wasn't nonsense.
597
00:31:05,883 --> 00:31:08,786
It was a robot language.
598
00:31:08,786 --> 00:31:11,890
We humans
took tens of thousands of years
599
00:31:11,890 --> 00:31:15,293
to develop our complex means
of communication.
600
00:31:15,293 --> 00:31:18,263
Now robots
are following our lead,
601
00:31:18,263 --> 00:31:21,365
and they're doing it
at light speed.
602
00:31:21,365 --> 00:31:23,400
Someday soon,
603
00:31:23,400 --> 00:31:28,907
robots may decide to exclude us
from their conversation.
604
00:31:31,976 --> 00:31:34,979
Robot: Tokima.
605
00:31:34,979 --> 00:31:36,915
Lucabo.
606
00:31:36,915 --> 00:31:38,917
Miyoto.
607
00:31:38,917 --> 00:31:41,217
Motakay.
608
00:31:41,218 --> 00:31:45,557
Tokima, kymamu.
609
00:31:45,557 --> 00:31:47,859
Tokima.
610
00:31:47,859 --> 00:31:49,927
Simeta.
611
00:31:49,928 --> 00:31:51,262
Tokima.
612
00:31:55,767 --> 00:31:57,836
Motakay.
613
00:32:02,373 --> 00:32:03,641
Steels: Without language,
614
00:32:03,641 --> 00:32:07,511
our species would never be
where it is today.
615
00:32:07,511 --> 00:32:10,947
It's the most magnificent thing
616
00:32:10,948 --> 00:32:14,618
that has ever been created
by humanity.
617
00:32:14,618 --> 00:32:16,687
If you look at ourselves,
618
00:32:16,687 --> 00:32:20,424
then it's pretty clear
that without language,
619
00:32:20,424 --> 00:32:21,493
we would not be able
620
00:32:21,493 --> 00:32:23,727
to do the kinds of things
that we're doing.
621
00:32:26,297 --> 00:32:30,701
Freeman: Luc Steels, a professor
of artificial intelligence,
622
00:32:30,701 --> 00:32:32,536
sees language as the key
623
00:32:32,536 --> 00:32:35,339
to developing
true robot intelligence.
624
00:32:35,339 --> 00:32:38,341
Steels: What I'm trying
to understand is,
625
00:32:38,342 --> 00:32:41,212
how can we
synthesize this process
626
00:32:41,212 --> 00:32:45,683
so that we can start up
a kind of evolution in a robot
627
00:32:45,683 --> 00:32:48,019
or in a population of robots
628
00:32:48,019 --> 00:32:51,222
that will also
lead to the growth
629
00:32:51,222 --> 00:32:55,293
of a rich communication system
like we have.
630
00:32:55,293 --> 00:32:58,996
Freeman: Machines already
communicate with each other,
631
00:32:58,996 --> 00:33:00,665
but these are based
632
00:33:00,665 --> 00:33:03,167
on predetermined,
human-coded languages.
633
00:33:03,167 --> 00:33:05,636
Luc wants to know
634
00:33:05,636 --> 00:33:08,672
how future robot societies
might communicate
635
00:33:08,672 --> 00:33:12,109
given the chance to make
a language on their own.
636
00:33:12,109 --> 00:33:15,879
Luc gives his robots the basic
ingredients of language,
637
00:33:15,880 --> 00:33:17,982
like potential sounds to use,
638
00:33:17,982 --> 00:33:21,085
and possible ways
to join them together.
639
00:33:21,085 --> 00:33:25,189
But what the robots say
is up to them.
640
00:33:25,189 --> 00:33:27,458
We put in learning mechanisms,
641
00:33:27,458 --> 00:33:29,761
we put in invention mechanisms,
642
00:33:29,761 --> 00:33:32,263
mechanisms so that they can
coordinate their language.
643
00:33:32,263 --> 00:33:35,765
They can kind of negotiate
how they're gonna speak,
644
00:33:35,766 --> 00:33:40,004
but we don't put in our language
or our concepts.
645
00:33:40,004 --> 00:33:44,274
Freeman: It's not enough for
the robots to know how to speak.
646
00:33:44,275 --> 00:33:47,511
They need to have something
to speak about.
647
00:33:47,511 --> 00:33:49,647
Luc's next step
648
00:33:49,647 --> 00:33:53,585
is to teach the robots how
to recognize their own bodies.
649
00:33:53,585 --> 00:33:56,221
Steels:
In order to learn language,
650
00:33:56,221 --> 00:34:00,023
you actually have to
learn about your own body
651
00:34:00,023 --> 00:34:03,527
and the movements
of your own body.
652
00:34:03,527 --> 00:34:06,263
So, what you see here
is an internal model
653
00:34:06,264 --> 00:34:09,600
that the robot
is building of itself.
654
00:34:09,600 --> 00:34:12,802
What this robot
is trying to learn here
655
00:34:12,802 --> 00:34:14,538
is the relationship
656
00:34:14,538 --> 00:34:18,208
between all these different
sensory channels
657
00:34:18,208 --> 00:34:20,444
and its own motor commands.
658
00:34:20,445 --> 00:34:24,448
Freeman: As a robot watches
itself move in the mirror,
659
00:34:24,447 --> 00:34:28,819
it forms a 3-d model
of its limbs and joints.
660
00:34:28,820 --> 00:34:31,789
It stores this information
in sense memory
661
00:34:31,789 --> 00:34:35,660
and is now ready to talk
to another robot about movement.
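
One simple way to build such an internal model is to regress the sensory consequences of motor commands: send commands, watch the mirror, and fit the mapping. A least-squares sketch with synthetic data standing in for the robot's camera:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "mirror" data: motor commands sent vs. positions observed.
    commands = rng.uniform(-1, 1, size=(200, 3))       # three joint commands
    true_map = np.array([[0.5, 0.0], [0.3, 0.4], [0.0, 0.6]])
    observed = commands @ true_map + rng.normal(0, 0.01, size=(200, 2))

    # Fit the body model: which commands produce which visible motion.
    model, *_ = np.linalg.lstsq(commands, observed, rcond=None)

    # The robot can now predict how a new command will look in the mirror.
    print(commands[:1] @ model)
    print(observed[:1])
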
662
00:34:38,129 --> 00:34:39,563
Tokima.
663
00:34:39,563 --> 00:34:41,799
So, now, this robot is talking.
664
00:34:41,799 --> 00:34:44,102
He's asking for an action.
665
00:34:44,102 --> 00:34:47,772
This robot is doing -- you know,
stretching the arm.
666
00:34:47,772 --> 00:34:50,975
No, this is not
what was requested,
667
00:34:50,975 --> 00:34:55,679
and he's showing again
what the right action is.
668
00:34:55,679 --> 00:34:58,515
Freeman: She's unsuccessful
in her first attempt,
669
00:34:58,515 --> 00:35:00,250
but eventually the robot learns
670
00:35:00,251 --> 00:35:04,388
that "Tokima" Means
"Raise two arms."
671
00:35:04,388 --> 00:35:07,758
After repeating this process
with different words,
672
00:35:07,758 --> 00:35:09,192
they try again.
673
00:35:09,193 --> 00:35:10,328
Homakey.
674
00:35:10,328 --> 00:35:12,596
Another request.
675
00:35:12,596 --> 00:35:14,599
He's doing the action.
676
00:35:16,467 --> 00:35:19,370
Yes, this is the right
kind of action.
677
00:35:19,370 --> 00:35:20,871
So, in other words,
678
00:35:20,871 --> 00:35:23,640
this robot has learned the word
from the other one
679
00:35:23,640 --> 00:35:25,643
and vice versa.
680
00:35:25,643 --> 00:35:28,312
They now have a way
to talk about actions.
681
00:35:28,312 --> 00:35:33,350
Freeman: The robots' language
is already so well-developed,
682
00:35:33,350 --> 00:35:35,520
they can teach it to Luc.
683
00:35:35,520 --> 00:35:39,624
Let's see what, you know,
what he asks me to do.
684
00:35:39,624 --> 00:35:41,158
Motakay.
685
00:35:41,158 --> 00:35:43,259
Okay, motakay.
686
00:35:44,394 --> 00:35:47,431
No, this is not right,
so he's showing it to me.
687
00:35:47,431 --> 00:35:50,401
Okay, I'm learning this gesture
now.
688
00:35:50,401 --> 00:35:52,303
Motakay.
689
00:35:52,303 --> 00:35:54,037
Motakay.
690
00:35:54,037 --> 00:35:56,138
Motakay is this.
691
00:35:56,139 --> 00:35:59,042
Okay, I got it right.
692
00:35:59,042 --> 00:36:02,947
So, now I'm going to
use the gesture with him.
693
00:36:02,947 --> 00:36:05,416
Motakay.
694
00:36:05,416 --> 00:36:08,819
Okay, yes,
you're doing the right thing.
695
00:36:08,819 --> 00:36:10,754
Thank you.
696
00:36:10,754 --> 00:36:13,723
As the robots
repeat this process,
697
00:36:13,724 --> 00:36:15,960
they generate words and actions
698
00:36:15,960 --> 00:36:18,763
that have real meanings
for one another.
699
00:36:18,763 --> 00:36:21,832
And so
the robots' vocabulary grows.
700
00:36:21,832 --> 00:36:24,201
Every new word they create
701
00:36:24,201 --> 00:36:27,872
is one more
that we can't understand.
702
00:36:27,872 --> 00:36:29,874
Is it only a matter of time
703
00:36:29,874 --> 00:36:34,278
before they lock us out of
the conversation completely?
704
00:36:34,278 --> 00:36:38,381
Steels: And I think
it's actually totally possible,
705
00:36:38,382 --> 00:36:42,086
but society will kind of
have to find the balance
706
00:36:42,086 --> 00:36:45,321
between what it is
that we want robots for
707
00:36:45,322 --> 00:36:49,092
and how much autonomy
are we willing to give them.
708
00:36:49,092 --> 00:36:53,030
Freeman: If we're giving robots
autonomy to move,
709
00:36:53,030 --> 00:36:54,932
to feel,
to make their own language,
710
00:36:54,932 --> 00:36:57,901
could that be enough
for them to surpass us?
711
00:36:57,902 --> 00:37:02,073
After all,
what's robot for "Exterminate"?
712
00:37:03,140 --> 00:37:04,709
But one Japanese scientist
713
00:37:04,709 --> 00:37:08,245
doesn't see the future
as robots versus humans.
714
00:37:08,245 --> 00:37:11,749
In fact, he is purposefully
engineering their intersection.
715
00:37:14,004 --> 00:37:17,074
We know that Homo sapiens
cannot be the end of evolution.
716
00:37:20,010 --> 00:37:25,482
But will our descendants
be biological or mechanical?
717
00:37:25,482 --> 00:37:28,018
Some believe
that intelligent machines
718
00:37:28,018 --> 00:37:31,521
will eventually become
the dominant creatures on Earth.
719
00:37:31,522 --> 00:37:33,456
But the next evolutionary step
720
00:37:33,456 --> 00:37:36,593
may not be robot
replacing human.
721
00:37:36,593 --> 00:37:42,565
It could be a life-form
that fuses man and machine.
722
00:37:42,565 --> 00:37:45,969
This is Yoshiyuki Sankai.
723
00:37:45,969 --> 00:37:49,606
Inspired by authors
like Isaac Asimov,
724
00:37:49,606 --> 00:37:53,610
he has always dreamed of fusing
human and robotic life-forms
725
00:37:53,610 --> 00:37:57,481
into something he calls the
Hybrid Assistive Limb system...
726
00:38:00,001 --> 00:38:03,104
Or HAL.
727
00:38:03,104 --> 00:38:05,506
Sankai:
That one is one of my dreams.
728
00:38:05,506 --> 00:38:08,510
We could develop
such kinds of devices,
729
00:38:08,510 --> 00:38:10,878
like the robot suit HAL system,
730
00:38:10,878 --> 00:38:15,216
for supporting humans
and their physical movements.
731
00:38:15,216 --> 00:38:19,954
Freeman: And now,
after 20 years of research,
732
00:38:19,954 --> 00:38:21,890
he has succeeded.
733
00:38:21,890 --> 00:38:26,227
HAL assists the human body by
reading the brain's intentions
734
00:38:26,227 --> 00:38:30,831
and providing assistive power to
support the wearer's movement.
735
00:38:30,831 --> 00:38:35,903
When she wishes or tries to move,
her brain generates intentions,
736
00:38:35,903 --> 00:38:39,674
and the robot detects
these intention signals
737
00:38:39,674 --> 00:38:43,445
and assists
her movements.
738
00:38:44,000 --> 00:38:47,070
Freeman: When the brain
signals a muscle to move,
739
00:38:47,070 --> 00:38:50,206
it transmits a pulse
through the spinal cord
740
00:38:50,206 --> 00:38:52,608
and into the area of movement.
741
00:38:52,609 --> 00:38:54,211
This bioelectric signal
742
00:38:54,211 --> 00:38:57,047
is detectable
on the surface of the skin.
743
00:38:57,047 --> 00:39:01,318
Yoshiyuki designed the HAL
suit to pick up these impulses
744
00:39:01,318 --> 00:39:04,320
and then activate
the appropriate motors
745
00:39:04,320 --> 00:39:07,591
in order to assist the body
in its movement.
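
The control idea can be reduced to a few lines: once the bioelectric signal on the skin crosses a noise threshold, drive the joint motor in proportion to it, so the suit moves with the wearer. The gain and threshold below are illustrative, not HAL's real parameters:

    GAIN = 40.0        # assist torque per unit of muscle signal (illustrative)
    THRESHOLD = 0.05   # ignore sensor noise below this level (illustrative)

    def assist_torque(emg_signal):
        if abs(emg_signal) < THRESHOLD:
            return 0.0                 # no movement intention detected
        return GAIN * emg_signal       # amplify the wearer's intention

    for sample in [0.01, 0.08, 0.20, -0.10]:
        print(sample, '->', assist_torque(sample))
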
746
00:39:07,591 --> 00:39:11,928
The human brain is directly
controlling the robotic suit.
747
00:39:11,928 --> 00:39:15,365
It's not just
a technological breakthrough.
748
00:39:15,365 --> 00:39:18,935
Yoshiyuki already
has HAL suits at work
749
00:39:18,935 --> 00:39:23,907
in rehabilitation clinics
in Japan.
750
00:39:23,907 --> 00:39:28,277
So, if some part of the body has
problems, like paralysis...
751
00:39:28,277 --> 00:39:30,947
So this would help
such patients,
752
00:39:30,947 --> 00:39:33,050
and a handicapped person
can use it.
753
00:39:35,451 --> 00:39:37,920
Freeman: People
who haven't walked in years
754
00:39:37,920 --> 00:39:39,956
are now on the move again
755
00:39:39,956 --> 00:39:43,126
thanks to
these brain-powered robot legs.
756
00:39:43,126 --> 00:39:48,465
Yoshiyuki has also developed
a model for the torso and arm
757
00:39:48,465 --> 00:39:50,133
that can provide
758
00:39:50,133 --> 00:39:53,403
up to 200 kilograms
of extra lifting power,
759
00:39:53,403 --> 00:39:56,907
turning regular humans
into strongmen.
760
00:39:56,907 --> 00:40:00,243
But it's not all about strength.
761
00:40:02,145 --> 00:40:03,813
He believes
762
00:40:03,814 --> 00:40:06,149
the merging of robotic machinery
and human biology
763
00:40:06,149 --> 00:40:10,753
will allow us to preserve
great achievements in movement.
764
00:40:10,753 --> 00:40:14,424
Athletes like Tiger Woods
or Roger Federer
765
00:40:14,424 --> 00:40:17,661
bring unique skill and artistry
to their sports.
766
00:40:17,661 --> 00:40:21,964
However, when they die,
so does their movement.
767
00:40:21,964 --> 00:40:23,766
But since the HAL suit
768
00:40:23,766 --> 00:40:27,403
can detect and memorize
the movements of its wearer,
769
00:40:27,403 --> 00:40:31,341
that knowledge
doesn't have to disappear.
770
00:40:31,341 --> 00:40:36,345
If some of these athletes
like Tiger Woods,
771
00:40:36,345 --> 00:40:40,016
if they wear it
and they swing it,
772
00:40:40,016 --> 00:40:45,288
all the data -- motion data
and physiological data --
773
00:40:45,288 --> 00:40:48,257
are also gathered in the computers.
774
00:40:50,160 --> 00:40:53,496
Freeman: We once built great
libraries to preserve knowledge
775
00:40:53,496 --> 00:40:56,933
expressed through writing
for future generations.
776
00:40:56,933 --> 00:41:02,938
Yoshiyuki wants to create
a great library of movement.
777
00:41:02,939 --> 00:41:07,043
By merging our bodies
with robotic exoskeletons,
778
00:41:07,043 --> 00:41:09,712
we will not only be stronger.
779
00:41:09,712 --> 00:41:11,982
We will all move as well
780
00:41:11,982 --> 00:41:16,652
as the most talented athletes
and artists.
781
00:41:16,652 --> 00:41:19,722
The last century
of popular culture
782
00:41:19,722 --> 00:41:24,227
has focused on apocalyptic
scenarios of robotic mutiny.
783
00:41:24,227 --> 00:41:28,765
But the HAL suit
opens up a different future.
784
00:41:28,765 --> 00:41:34,170
We tend to think about robotics
as an alternative life-form
785
00:41:34,170 --> 00:41:37,840
that may someday replace
or compete with humans.
786
00:41:37,840 --> 00:41:41,178
But I think
the reality of the matter is
787
00:41:41,178 --> 00:41:45,047
that, increasingly, we'll see
humans and robots cooperate
788
00:41:45,047 --> 00:41:48,550
and actually become
one kind of species
789
00:41:48,551 --> 00:41:50,753
both physically and mentally.
790
00:41:50,753 --> 00:41:53,223
Schmidt: Absolutely,
I think robots are the future.
791
00:41:53,223 --> 00:41:55,023
I think we need to rely on them.
792
00:41:55,024 --> 00:41:58,194
Otherwise, we will stagnate
and make no more progress.
793
00:41:58,194 --> 00:42:03,699
Eventually, life on Earth
will come to an end.
794
00:42:03,699 --> 00:42:05,868
What is our legacy?
795
00:42:05,868 --> 00:42:11,374
We will leave nothing
unless we leave consciousness.
796
00:42:11,374 --> 00:42:14,777
We need conscious robots
everywhere.
797
00:42:14,777 --> 00:42:17,914
That will be our legacy.
798
00:42:17,914 --> 00:42:19,950
That will be the legacy
of mankind.
799
00:42:22,185 --> 00:42:26,154
Robots are rapidly
becoming smarter, more agile,
800
00:42:26,155 --> 00:42:28,658
and are developing human traits
801
00:42:28,658 --> 00:42:33,563
like consciousness, emotions,
and inspiration.
802
00:42:33,563 --> 00:42:37,800
Will they leave us behind
on the evolutionary highway,
803
00:42:37,801 --> 00:42:41,504
or will humans join the machines
in a new age?
804
00:42:41,504 --> 00:42:46,109
Evolution is unpredictable...
805
00:42:46,109 --> 00:42:47,177
And is bound to surprise us.