English subtitles for 044 How Do GANs Work (Step 2)

Step two is basically the same thing, but now the networks have been trained a bit more. We're going to go through the whole process again, to reiterate it and understand it better.

So, step two: noise goes into the generator, and the generator generates images which don't look as random anymore. They're a bit clearer, and we'll understand why in just a second, once we're finishing up with step two. Basically, through the backpropagation of the error, the generator understood where it was making mistakes, it adjusted its weights, and now it's generating images which look a little bit more like dogs.

And now we're going to train the discriminator again, and for that we need some images of dogs, a new batch. We put that batch into the discriminator along with the generated images, and it outputs some values. Now we need to compare those values to the actual labels that we want, because we know which images are dogs and which are not: the generated ones at the top are not dogs, and the new batch are dogs. So we tell that to the discriminator; the discriminator calculates the error, backpropagates the error through its network, learns from that, and its weights are updated. So next time it will do a better job at discriminating dogs from non-dogs. It's learning, so it's now at level two, metaphorically speaking.
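To make that discriminator update concrete, here is a minimal sketch in PyTorch. This is not code from the course; the architectures, the optimizer settings, and names like generator, discriminator, and real_dogs are placeholder assumptions. It only illustrates the logic just described: real images are labelled 1, generated images are labelled 0, the error is backpropagated, and only the discriminator's weights change.

```python
import torch
import torch.nn as nn

# Placeholder networks for images flattened to 784 values; the course's
# actual architectures are assumptions here, not shown in the video.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=2e-4)


def discriminator_step(real_dogs):
    """One discriminator update: real dogs are labelled 1, generated images 0."""
    batch_size = real_dogs.size(0)

    noise = torch.randn(batch_size, 100)
    fake_dogs = generator(noise).detach()      # detach: don't touch the generator here

    real_labels = torch.ones(batch_size, 1)    # "these are dogs"
    fake_labels = torch.zeros(batch_size, 1)   # "these are not dogs"

    d_optimizer.zero_grad()
    loss = (criterion(discriminator(real_dogs), real_labels)
            + criterion(discriminator(fake_dogs), fake_labels))
    loss.backward()       # backpropagate the error through the discriminator
    d_optimizer.step()    # only the discriminator's weights are updated
    return loss.item()
```

Detaching the generated batch is what keeps this step from affecting the generator: the discriminator simply gets better at telling dogs from non-dogs.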
And now we want to train the generator. So we can set the real dog images aside, and we're going to take the same generated images and put them through the discriminator's network, and it's going to output some values. As you can see, these values are lower: if I go back, you'll see the values were higher before, and now they've dropped, because the discriminator has learned that these don't really look like dogs. But notice that they didn't drop all the way down to a complete zero. We'll talk about that further down, when we're doing step three; for now, just take note of it.

What we want to discuss now is: how does the generator actually get better? Well, again we take these values and we calculate the errors, because we want these values to be ones; that's what the generator wants, to trick the discriminator. These errors are backpropagated through the discriminator to the generator, and the generator's weights are updated.
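Continuing the same sketch (same placeholder networks, loss, and names as above, still not the course's actual code), the generator update just described looks something like this: the generated images go through the discriminator, the target for them is 1, and the error flows back through the discriminator into the generator, but only the generator's weights are stepped.

```python
g_optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)


def generator_step(batch_size=64):
    """One generator update: the generator wants the discriminator to output 1."""
    noise = torch.randn(batch_size, 100)
    fake_dogs = generator(noise)          # no detach: gradients must reach the generator

    target = torch.ones(batch_size, 1)    # the generator is trying to be taken for real dogs
    d_out = discriminator(fake_dogs)

    g_optimizer.zero_grad()
    loss = criterion(d_out, target)
    loss.backward()       # the error flows back through the discriminator into the generator
    g_optimizer.step()    # only the generator's weights are updated; d_optimizer is untouched
    return loss.item()
```

Here the discriminator is used purely as a measuring device: gradients pass through it during the backward pass, but since only g_optimizer.step() is called, only the generator changes. This is the mechanical version of the "can you tell me what I did wrong?" conversation described next.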
The way to think about it intuitively is that these networks are huge. What we've drawn here are very small representations of those networks, just images to show that this is indeed a network, but in reality they are much, much bigger. And there's a communication happening between the generator and the discriminator. The generator creates these images and says: "Hey discriminator, I have some images here. Do you think these are images of dogs or not, and what do you think the probabilities are?" The discriminator looks at them and says: "Well, you know, those don't really look like dogs to me. I'll give them about a 20 percent probability, a 10 percent probability, a 50 percent probability." And then the generator is like: "OK, you're totally right, I was trying to trick you, these are not dogs. But can you tell me what I did wrong? Where did I go wrong?"

And the discriminator in this case might look at the images and say (this is the intuitive understanding of the backpropagation and gradient descent process) something like: "Look, I checked your images against the images of dogs I've actually seen, and I know that dogs normally have eyes, for instance. Your dogs have some little black dots which don't really look like eyes to me, so you're missing out on eyes. Or your dogs don't have enough paws, or they don't have tails, or they don't look three-dimensional enough; they look very flat, and real dogs look three-dimensional to me."

So that's this process of backpropagation. That's what the error is, in human language: the discriminator telling the generator all of that. But in reality, of course, it's just telling the generator: "Hey, this part of your image doesn't feel right, this part of the image doesn't feel right", basically dealing with individual pixels and how they should be updated for these images to better resemble dogs. And then the weights of the generator's neural network are updated, so next time the generator will do a better job.

So, step three.
