1
00:00:01,050 --> 00:00:05,180
Let's talk about what AI actually
is. So, what is AI--well
2
00:00:05,180 --> 00:00:07,970
actually this is a big discussion we
have to have
3
00:00:07,970 --> 00:00:12,030
as a field--what is AI? Well, we're
going to be building machines--software,
4
00:00:12,030 --> 00:00:12,469
you know,
5
00:00:12,469 --> 00:00:13,700
that does something.
6
00:00:13,700 --> 00:00:17,320
What's our goal? What does it mean to
build an artificial intelligence?
7
00:00:17,320 --> 00:00:19,699
Well, there have been multiple schools of thought on
this.
8
00:00:19,699 --> 00:00:22,230
One school of thought is that what we
should really be doing is building machines
9
00:00:22,230 --> 00:00:22,949
10
00:00:22,949 --> 00:00:26,739
that think like people, right. Intelligence is
about thinking, and this is artificial.
11
00:00:26,739 --> 00:00:28,940
What's the natural intelligence--I guess
that's us.
12
00:00:28,940 --> 00:00:31,930
So we want to build these machines that
somehow go through the thinking
13
00:00:31,930 --> 00:00:34,810
processes that people do.
14
00:00:34,810 --> 00:00:37,620
Alright, there is actually a science that studies this,
15
00:00:37,620 --> 00:00:40,090
and it's not really AI anymore.
16
00:00:40,090 --> 00:00:43,890
This is some mix of cognitive
science and computational neuroscience
17
00:00:43,890 --> 00:00:45,770
really trying to understand the brain.
18
00:00:45,770 --> 00:00:48,700
And it's very important but it's not what this
course is going to be about.
19
00:00:48,700 --> 00:00:51,910
So another thing that people at times have
thought AI should be is that we should be
20
00:00:51,910 --> 00:00:54,330
building machines that act like people.
21
00:00:54,330 --> 00:00:56,980
Okay, so we should say: who cares
about how they think, they can think in some
22
00:00:56,980 --> 00:01:03,190
strange, alien, silicon way, but the action,
the behavior has to be like what we know from people.
23
00:01:03,190 --> 00:01:06,750
This is actually a very early definition.
This is straight from Alan Turing,
24
00:01:06,750 --> 00:01:11,040
the definition that, really, all you can
check is behavior. Is the behavior
25
00:01:11,040 --> 00:01:12,610
like an intelligent human?
26
00:01:12,610 --> 00:01:16,469
So this led to things like the Turing test where you put a robot on one
27
00:01:16,469 --> 00:01:20,090
chat channel and a human on the other and
then you have an interrogator who chats
28
00:01:20,090 --> 00:01:23,760
with both of them and tries to tell which one
is the robot and which one is the human.
29
00:01:23,760 --> 00:01:26,840
And this is a really good idea because
provided you can't actually see them--there's no
30
00:01:26,840 --> 00:01:29,580
video, right, where you know the robot's the one with the blinking lights.
31
00:01:29,580 --> 00:01:30,429
32
00:01:30,429 --> 00:01:32,049
So provided it's just over chat
33
00:01:32,049 --> 00:01:35,060
you can really kind of test anything.
It's open-ended: do they have hobbies,
34
00:01:35,060 --> 00:01:38,719
can they answer a general question
about a chess configuration,
35
00:01:38,719 --> 00:01:42,669
right. The problem was that, with the Turing test, in
order to really do well,
36
00:01:42,669 --> 00:01:45,969
you don't just concentrate on
programming intelligence, you concentrate
37
00:01:45,969 --> 00:01:46,729
on things like,
38
00:01:46,729 --> 00:01:48,040
don't spell too well,
39
00:01:48,040 --> 00:01:51,079
humans don't do that. And so you build in
some typos,
40
00:01:51,079 --> 00:01:54,299
and then you think, wait a minute, if I get
asked about the square root of thirty-five,
41
00:01:54,299 --> 00:01:56,920
I better not have an answer.
42
00:01:56,920 --> 00:02:00,829
And so you go through basically trying to
mimic things that probably you didn't
43
00:02:00,829 --> 00:02:03,920
really value in the human in the first
place. On the other hand, you've got to be
44
00:02:03,920 --> 00:02:06,240
really sure that you have a favorite
Shakespeare play
45
00:02:06,240 --> 00:02:08,989
'cause the interrogator always asks
that.
46
00:02:08,989 --> 00:02:12,159
Okay, so that's thinking like people and acting like
people, and the realization was this
47
00:02:12,159 --> 00:02:15,510
really wasn't going anywhere in terms of
building machines that were useful in
48
00:02:15,510 --> 00:02:16,849
say industry,
49
00:02:16,849 --> 00:02:19,650
and so the realization was maybe it's
not about mimicking people.
50
00:02:19,650 --> 00:02:23,240
We've already got those, right. Maybe we
should do something else. Maybe what we should
51
00:02:23,240 --> 00:02:26,500
be doing is building machines that think
rationally. So, whatever thought
52
00:02:26,500 --> 00:02:29,580
processes are, they should be correct.
What does it mean to have a correct thought process?
53
00:02:29,580 --> 00:02:32,189
It's kind of a prescriptive
thing.
54
00:02:32,189 --> 00:02:36,189
And this actually has a long history in the
logicist and philosophical tradition
55
00:02:36,189 --> 00:02:39,139
going all the way back, say to
Aristotle's laws of thought.
56
00:02:39,139 --> 00:02:39,830
This is how you think
57
00:02:39,830 --> 00:02:43,209
in order to kind of not make a mistake
in your deductions.
58
00:02:43,209 --> 00:02:47,290
And this tradition actually still shows
up in various places in AI.
59
00:02:47,290 --> 00:02:48,380
By and large,
60
00:02:48,380 --> 00:02:52,400
this wasn't the winner, and the reason it
wasn't the winner is because our ability
61
00:02:52,400 --> 00:02:55,779
to write down how to do logical
deduction
62
00:02:55,779 --> 00:02:57,809
turned out to be relatively fragile,
63
00:02:57,809 --> 00:03:01,629
and in any case, when we were learning
about how to incorporate uncertainty, we
64
00:03:01,629 --> 00:03:04,849
also had this realization that really it
wasn't about how you think, but about the
65
00:03:04,849 --> 00:03:06,289
actions you take in the end.
66
00:03:06,289 --> 00:03:09,370
So the winner for this course is that AI, for us,
67
00:03:09,370 --> 00:03:12,509
is the science of making machines that act
rationally.
68
00:03:12,509 --> 00:03:15,329
So what's that mean? We only care about what they do,
69
00:03:15,329 --> 00:03:19,419
and our requirement on what they do is
that they achieve their goals optimally.
70
00:03:19,419 --> 00:03:22,829
You may be looking at this, and you may be thinking, okay, rational--rational means I make a
71
00:03:22,829 --> 00:03:26,719
level-headed decision, I don't get angry. So we want to build machines that don't get angry.
72
00:03:26,719 --> 00:03:28,689
Well you know, I don't know, uh...
73
00:03:28,689 --> 00:03:31,230
if you think back to GLaDOS maybe
that's good, maybe we shouldn't
74
00:03:31,230 --> 00:03:34,819
build machines that get angry. Um... Skynet
got a little angry.
75
00:03:34,819 --> 00:03:38,229
So maybe we shouldn't build machines that get angry.
76
00:03:38,229 --> 00:03:38,919
But when we say rational that's not what we mean.
77
00:03:38,919 --> 00:03:42,369
Rational has a very technical meaning. It
means that you maximally achieve your pre-defined goals.
78
00:03:42,369 --> 00:03:46,069
So the input to an AI
is a goal,
79
00:03:46,069 --> 00:03:50,299
and rationality means you achieve it in
the best possible way.
80
00:03:50,299 --> 00:03:52,749
Rationality--all that matters is what you do.
81
00:03:52,749 --> 00:03:56,269
It doesn't matter what thought process
you go through, right. If I have a
82
00:03:56,269 --> 00:03:57,099
robot vacuum cleaner,
83
00:03:57,099 --> 00:04:02,279
and it just makes some optimal grid on
the ground and cleans up all the dirt, great.
84
00:04:02,279 --> 00:04:06,229
If it sits in the corner and thinks, alright, where
shall I clean? Well if I go diagonally
85
00:04:06,229 --> 00:04:09,289
there will be a place left over. And then
it cleans everything up--fine, it doesn't matter.
86
00:04:09,289 --> 00:04:10,760
They're equally rational
87
00:04:10,760 --> 00:04:12,560
for that task in that context.
88
00:04:12,560 --> 00:04:14,470
There may be advantages to the thinking
robot,
89
00:04:14,470 --> 00:04:17,389
there may be advantages to the kind of
more reactive reflex robot.
90
00:04:17,389 --> 00:04:19,949
We'll talk about that in the next class.
91
00:04:19,949 --> 00:04:23,169
Goals are all expressed through utilities. So
we're going to spend a lot of time in this course talking
92
00:04:23,169 --> 00:04:25,270
about what a utility is.
93
00:04:25,270 --> 00:04:28,659
And in the end remember that being
rational means maximizing your expected utility.
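One way to write that down, as a minimal sketch (the symbols here--a set of actions A, outcome probabilities P, and a utility function U--are this note's notation, not anything defined in the lecture):

\[
a^{*} \;=\; \arg\max_{a \in A} \; \mathbb{E}\!\left[\,U \mid a\,\right]
      \;=\; \arg\max_{a \in A} \; \sum_{s} P(s \mid a)\, U(s)
\]

That is, for each available action, average the utility of its possible outcomes weighted by how likely they are, and pick the action with the highest average.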
94
00:04:28,659 --> 00:04:31,199
95
00:04:31,199 --> 00:04:34,330
Okay, so this course, really, we should have called it computational rationality. We're going to teach you
96
00:04:34,330 --> 00:04:36,669
computational methods--this is a computer
science course, and
97
00:04:36,669 --> 00:04:38,820
it's all going to be about this idea of
rationality:
98
00:04:38,820 --> 00:04:40,879
maximally achieving your goals.
99
00:04:40,879 --> 00:04:44,169
Okay, you say what about artificial? I didn't really say anything about artificiality,
100
00:04:44,169 --> 00:04:45,580
that's kind of orthogonal.
101
00:04:45,580 --> 00:04:47,000
And what about intelligence? Well,
102
00:04:47,000 --> 00:04:49,930
intelligence is a tricky thing. The
philosophers are still working on that.
103
00:04:49,930 --> 00:04:52,970
When they get back to us on what intelligence is, well probably we'll just
104
00:04:52,970 --> 00:04:53,760
ask them then what consciousness is.
105
00:04:53,760 --> 00:04:56,680
But when they get back to us on
intelligence, we're gonna say,
106
00:04:56,680 --> 00:05:00,599
that's great but we're working on rationality right now.
107
00:05:00,599 --> 00:05:03,270
Okay, so if you remember nothing else in the
course,
108
00:05:03,270 --> 00:05:06,779
or if you decide that you really want an AI tattoo,
109
00:05:06,779 --> 00:05:09,639
and you need to distill the course down to one thing,
110
00:05:09,639 --> 00:05:10,540
it would be this:
111
00:05:10,540 --> 00:05:13,099
it would be maximize your expected utility.
112
00:05:13,099 --> 00:05:16,789
And we're gonna spend this entire course
thinking about computational systems
113
00:05:16,789 --> 00:05:17,780
that do this.
114
00:05:17,780 --> 00:05:21,830
And in order to do that we've got, you
know, however many weeks left in which we
115
00:05:21,830 --> 00:05:23,400
will unpack this definition.
116
00:05:23,400 --> 00:05:25,560
The first part of the course deals with
the maximize:
117
00:05:25,560 --> 00:05:28,970
How do I figure out which action is best?
That has to do with the consequences of
118
00:05:28,970 --> 00:05:32,180
that action, the context of that action,
whether there are adversaries.
119
00:05:32,180 --> 00:05:35,240
We're then going to have to unpack this idea of utility. What is utility?
120
00:05:35,240 --> 00:05:37,840
What does it mean to have a function that
describes my goals?
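As a minimal sketch of what such a function could look like--a hypothetical utility for the robot-vacuum example from earlier; the outcome features and the weights are made up for illustration, not anything defined in the lecture:

# Hypothetical utility function for the robot-vacuum example.
# An outcome is summarized by how many squares got cleaned, how much
# battery was used, and whether the robot got stuck; the weights are
# invented for illustration.

def utility(squares_cleaned: int, battery_used: float, got_stuck: bool) -> float:
    """Map an outcome to a single number; higher is better."""
    score = 10.0 * squares_cleaned   # reward for cleaning
    score -= 1.0 * battery_used      # small penalty for energy spent
    if got_stuck:
        score -= 100.0               # getting stuck is very bad
    return score

print(utility(squares_cleaned=12, battery_used=30.0, got_stuck=False))  # 90.0
print(utility(squares_cleaned=12, battery_used=5.0, got_stuck=True))    # 15.0

A rational agent then prefers whichever outcomes this function scores higher; the remaining piece of the definition, expectation, handles the fact that the agent can only choose actions, not outcomes.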
121
00:05:37,840 --> 00:05:40,569
And then, kind of the kicker in here
that's a little bit hidden is what is
122
00:05:40,569 --> 00:05:40,830
123
00:05:40,830 --> 00:05:42,379
this deal about expectation?
124
00:05:42,379 --> 00:05:44,570
Well if I take an action I don't know
what's gonna happen.
125
00:05:44,570 --> 00:05:48,530
So optimizing my goals
rationally doesn't mean always being successful.
126
00:05:48,530 --> 00:05:52,070
Life is full of risks. It has to do with
doing the right thing in kind of the
127
00:05:52,070 --> 00:05:54,260
appropriate weighted average.
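Here is a minimal sketch of that weighted average--the two actions, their outcome probabilities, and the utilities below are invented numbers, just to show the computation:

# Expected utility = the probability-weighted average of the utilities
# of an action's possible outcomes. All numbers are made up for illustration.

actions = {
    # action name: list of (probability, utility) pairs for its outcomes
    "safe_route":  [(1.0, 50.0)],                 # always a modest payoff
    "risky_route": [(0.7, 100.0), (0.3, -60.0)],  # usually great, sometimes bad
}

def expected_utility(outcomes):
    """Weighted average: sum of probability * utility over the outcomes."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))   # safe_route 50.0, risky_route 52.0

# The rational choice maximizes expected utility, even though the risky
# route sometimes turns out badly: rationality is about doing well on
# average, not about succeeding every single time.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best)  # risky_route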
128
00:05:54,260 --> 00:05:57,080
And so we're going to have to unpack this
notion of what it means to do the right
129
00:05:57,080 --> 00:06:00,729
thing on average, and that'll get us into
probabilistic inference, and that will
130
00:06:00,729 --> 00:06:01,909
occupy the middle third of the course.