3 - Total Probability - Artificial Intelligence for Robotics (English subtitles)

Let me begin my story in a world where our robot resides. Let's assume the robot has no clue where it is. Then we would model this with a function, which I'm going to draw into this diagram over here, where the vertical axis is the probability for any location in this world, and the horizontal axis corresponds to all the places in this one-dimensional world. The way I'm going to model the robot's current belief about where it might be, its confusion, is by a uniform function that assigns equal weight to every possible place in this world. That is the state of maximum confusion.

Now, to localize, the world has to have some distinctive features. Let's assume there are three different landmarks in the world: there is a door over here, there's a door over here, and a third one way back here. For the sake of the argument, let's assume they all look alike, so they're not distinguishable, but you can distinguish a door from the non-door area, from the wall. Now let's see how the robot can localize itself. Assume it senses, and it senses that it's standing right next to a door. So all it knows now is that it is located, likely, next to a door. How would this affect our belief?
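A minimal sketch of that state of maximum confusion, assuming a discrete one-dimensional world of n grid cells (the cell count and variable names are illustrative, not from the lecture):

    # Uniform prior over a discrete 1-D world: with no information at all,
    # the robot is equally likely to be in any of the n grid cells.
    n = 10  # number of grid cells (illustrative choice)
    p = [1.0 / n] * n
    print(p)  # every cell gets probability 0.1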
Here is the critical step for localization. If you understand this step, you understand localization. The measurement of a door transforms our belief function, defined over possible locations, into a new function that looks pretty much like this: for the three locations adjacent to doors, we now have an increased belief of being there, whereas all the other locations have a decreased belief. This is a probability distribution that assigns higher probability to being next to a door, and it's called the posterior belief, where the word "posterior" means it is the belief after a measurement has been taken. Now, the key aspect of this belief is that we still don't know where we are. There are three possible door locations, and in fact, it might be that the sensors were erroneous and we accidentally saw a door where there is none. So there is still a residual probability of being in these other places, but these three bumps together really express our current best belief of where we are. This representation is absolutely core to probability and to mobile robot localization.

Now let's assume the robot moves. Say it moves to the right by a certain distance. Then we can shift the belief according to the motion, and the result might look about like this: this bump over here made it to here, this one went over here, and this one over here. Obviously, this robot knows its heading direction; it's moving to the right in this example, and it knows roughly how far it moved. Now, robot motion is somewhat uncertain. We can never be certain where the robot moved, so the shifted bumps are a little bit flatter than the ones before the motion. The process of moving those beliefs to the right side is technically called a convolution.
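A sketch of both updates in the spirit of this lecture, assuming a cyclic one-dimensional grid world, a simple hit/miss sensor model, and a motion model that mostly lands on the intended cell but sometimes overshoots or undershoots by one. The map, the door positions, and the weights 0.6/0.2 and 0.8/0.1/0.1 are illustrative assumptions, not values given in the lecture:

    # Illustrative map: three doors, the first two closer together than the third.
    world = ['door', 'wall', 'door', 'wall', 'wall', 'wall', 'wall', 'door', 'wall', 'wall']
    n = len(world)

    def sense(p, measurement, p_hit=0.6, p_miss=0.2):
        """Measurement update: multiply the prior by the likelihood of the
        measurement at each cell, then normalize to get the posterior."""
        q = [p[i] * (p_hit if world[i] == measurement else p_miss) for i in range(n)]
        s = sum(q)
        return [x / s for x in q]

    def move(p, step, p_exact=0.8, p_overshoot=0.1, p_undershoot=0.1):
        """Motion update (convolution): shift the belief by `step` cells to the
        right, spreading some probability to neighboring cells to model motion
        uncertainty. The world is treated as cyclic for simplicity."""
        q = []
        for i in range(n):
            q.append(p_exact * p[(i - step) % n]
                     + p_overshoot * p[(i - step - 1) % n]
                     + p_undershoot * p[(i - step + 1) % n])
        return q

The design choice here is the one used throughout histogram filtering: sensing multiplies and renormalizes, while moving redistributes probability mass, so each bump flattens a little with every motion and sharpens with every informative measurement.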
Let's now assume the robot senses again, and for the sake of the argument, let's assume it sees itself right next to a door again, so the measurement is the same as before. Now the most amazing thing happens. We end up multiplying our belief, which is now the prior to the second measurement, with a function that looks very much like this one over here, with a peak at each door, and out comes a belief that looks like the following: there are a couple of minor bumps, but the only really big bump is this one over here. It corresponds to this bump over there in the prior, and it's the only place in the prior that really corresponds to the measurement of a door, whereas all the other door locations have a low prior belief. As a result, this function is really interesting. It's a distribution that focuses most of its weight onto the correct hypothesis, the robot being at the second door, and it provides very little belief to places far away from doors. At this point, our robot has localized itself.

If you understood this, you understand probability, and you understand localization. So congratulations, you understand probability and localization. You might not know it yet, but that's really a core aspect of understanding a whole bunch of things I'm going to teach you in this class today.
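Putting the steps together, here is a hedged end-to-end run of the sense-move-sense cycle described above, reusing the illustrative world, sense(), and move() from the earlier sketch. With these made-up values, the move step happens to line up with the spacing of the first two doors, so the final belief should concentrate on the second door (cell 2 in that map), mirroring the story in the lecture:

    # Full localization cycle: uniform prior -> sense door -> move right -> sense door.
    p = [1.0 / n] * n          # state of maximum confusion
    p = sense(p, 'door')       # first measurement: three bumps, one at each door
    p = move(p, 2)             # robot moves 2 cells to the right (illustrative step)
    p = sense(p, 'door')       # second measurement: belief collapses near one door
    print([round(x, 3) for x in p])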
