1
00:00:00,450 --> 00:00:02,530
Hello and welcome to this tutorial.
2
00:00:02,760 --> 00:00:07,520
All right so in the previous tutorial we took care of the first step, updating the weights of the neural
3
00:00:07,530 --> 00:00:09,130
network of the discriminator.
4
00:00:09,360 --> 00:00:11,700
And now we're going to tackle the second step.
5
00:00:11,700 --> 00:00:15,390
Updating the weights of the neural network of, this time, the generator.
6
00:00:15,630 --> 00:00:18,830
So it's going to be easier than the first big step.
7
00:00:18,840 --> 00:00:23,710
We won't need to break down the error between a real error and a fake error to compute a total error.
8
00:00:23,850 --> 00:00:29,490
This time there will only be one error, that is, the error between the prediction of the discriminator, whether
9
00:00:29,490 --> 00:00:33,900
or not the image generated by the generator should be accepted yes or no.
10
00:00:34,080 --> 00:00:36,550
And the target which will be equal to 1.
11
00:00:36,570 --> 00:00:42,630
Why will this be equal to one? That's because this time we want the generator to have some weights that
12
00:00:42,720 --> 00:00:50,400
allow its brain to produce some images that look like real images, and therefore we want to push the
13
00:00:50,400 --> 00:00:56,550
predictions close to a target of one. This time we're training the brain of the generator to be able to
14
00:00:56,550 --> 00:00:59,680
generate some images that look like real images.
15
00:00:59,850 --> 00:01:04,680
So that's another key point to understand: the target will be equal to one.
16
00:01:04,770 --> 00:01:10,410
Even if this time the image that will be the input of the discriminator will be the fake image of the
17
00:01:10,410 --> 00:01:11,490
generator.
18
00:01:11,490 --> 00:01:12,330
All right.
19
00:01:12,330 --> 00:01:13,350
So let's do this.
20
00:01:13,350 --> 00:01:15,710
It's going to be faster than previously.
21
00:01:15,810 --> 00:01:22,830
We're going to start by initializing the gradient of the generator with respect to the weights to zero.
22
00:01:23,070 --> 00:01:25,270
And let's do that efficiently.
23
00:01:25,380 --> 00:01:30,510
We simply need to take that line of code again because that's the same thing we did for the gradient
24
00:01:30,510 --> 00:01:31,690
of the discriminator.
25
00:01:31,830 --> 00:01:38,370
So I am pasting it in here and then replacing the D by, of course, a G.
26
00:01:38,490 --> 00:01:44,200
We want to initialize the gradient of the generator with respect to the weights this time.
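The gradient reset described above can be sketched like this. The name netG follows the tutorial; the tiny linear layer is just an assumed stand-in for the real deconvolutional generator:

```python
import torch
import torch.nn as nn

# Tiny stand-in for the real generator network (assumption for illustration).
netG = nn.Linear(100, 4)

# Produce some stale gradients, as if left over from a previous iteration.
netG(torch.randn(2, 100)).sum().backward()

# Reset the gradients of the generator, mirroring netG.zero_grad() in the tutorial.
netG.zero_grad()
```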
27
00:01:44,200 --> 00:01:45,380
Then next step.
28
00:01:45,630 --> 00:01:49,600
Well the next step would naturally be to get the input.
29
00:01:49,710 --> 00:01:55,590
But the thing is we already have the inputs you know the input is going to be this fake image or should
30
00:01:55,590 --> 00:02:02,580
I say this mini batch of fake images that are going to be again the input of the discriminator.
31
00:02:02,580 --> 00:02:07,560
So we already have the input we already have the fake images of the mini batch and therefore we're directly
32
00:02:07,560 --> 00:02:09,190
going to get the targets.
33
00:02:09,230 --> 00:02:13,200
And so this time, according to you, what is the target going to be?
34
00:02:13,200 --> 00:02:17,070
Is it going to be a mini batch of zeros or of ones?
35
00:02:17,340 --> 00:02:23,100
Well as I explained in the beginning of the tutorial this time we want to push the predictions to one
36
00:02:23,250 --> 00:02:28,910
because we want the discriminator to accept that the fake images are real images.
37
00:02:28,950 --> 00:02:35,760
So the target for all the input fake images of the mini batch should be all ones and therefore I'm taking
38
00:02:36,420 --> 00:02:44,810
this line of code, copying it and pasting it right here to get my targets of ones. Great.
39
00:02:44,850 --> 00:02:47,630
My target is already wrapped into a variable.
40
00:02:47,700 --> 00:02:48,440
Perfect.
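A minimal sketch of this target of ones (the batch size here is an assumption; in the real code it comes from the mini-batch of fake images):

```python
import torch

batch_size = 8  # assumed size; in practice taken from the mini-batch of fakes

# For the generator update, every fake image gets the label 1 ("real"),
# because we want the discriminator to accept the fakes as real images.
target = torch.ones(batch_size)
```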
41
00:02:48,510 --> 00:02:50,920
I'm allowed to move on to the next step.
42
00:02:50,940 --> 00:02:52,310
So now what is the next step.
43
00:02:52,500 --> 00:02:59,910
Well the next step is to get the output, the output of the discriminator when the inputs are our fake images.
44
00:02:59,910 --> 00:03:07,590
Therefore I'm getting a new variable, output, and I'm taking my neural network of the discriminator,
45
00:03:07,590 --> 00:03:15,720
netD, and I'm feeding this neural network of the discriminator with the mini batch of fake input images.
46
00:03:15,840 --> 00:03:21,180
And so for each of these fake images I'm going to get the discrimination that is I'm going to get a
47
00:03:21,180 --> 00:03:26,980
discriminating number between 0 and 1. If this number is close to zero, the image will be rejected.
48
00:03:27,180 --> 00:03:31,670
And if this number is close to 1 the image will be accepted.
49
00:03:31,680 --> 00:03:38,660
Now something important: remember that in the previous step we detached the gradient of fake.
50
00:03:38,670 --> 00:03:40,290
This time we're not going to do it.
51
00:03:40,290 --> 00:03:41,180
Why is that.
52
00:03:41,280 --> 00:03:45,960
Because we want to keep the gradient of fake. We want to keep the gradient of fake because we're going
53
00:03:45,960 --> 00:03:49,410
to update the weights of the neural network of the generator.
54
00:03:49,500 --> 00:03:53,410
And to update these weights we will actually need the gradient of fake.
55
00:03:53,540 --> 00:03:57,210
So that's why it's important here not to detach it.
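The difference between detaching and not detaching can be sketched like this. The two tiny linear layers are assumed stand-ins for the real generator and discriminator networks:

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the real netG and netD (assumption for illustration).
netG = nn.Linear(10, 4)
netD = nn.Linear(4, 1)

fake = netG(torch.randn(2, 10))

# Discriminator step (previous tutorial): detach, so no gradient reaches netG.
netD(fake.detach()).sum().backward()
assert all(p.grad is None for p in netG.parameters())  # generator untouched

# Generator step (this tutorial): no detach, so gradients flow back into netG.
netD.zero_grad()
netD(fake).sum().backward()
```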
56
00:03:57,210 --> 00:03:57,870
All right.
57
00:03:57,870 --> 00:04:01,520
Next step now that we have the output and the target.
58
00:04:01,620 --> 00:04:07,770
Well we are ready to get the error of prediction but this time this error of prediction is going to
59
00:04:07,770 --> 00:04:14,100
be the error related to the generator because we will back propagate this error back into the neural
60
00:04:14,100 --> 00:04:22,210
network of the generator as opposed to before where we back propagated the total error back to the discriminator.
61
00:04:22,410 --> 00:04:27,270
So that's important now to understand that this error is related to the generator and therefore I'm
62
00:04:27,270 --> 00:04:31,540
going to call it errG: e-r-r-G.
63
00:04:31,680 --> 00:04:38,310
Then I'm going to use my criterion, which is going to compute the loss error between the output
64
00:04:38,940 --> 00:04:45,120
and the target: the output, which is the output of the discriminator when the input is the fake image,
65
00:04:45,480 --> 00:04:48,590
and the target, which is the mini batch full of ones.
66
00:04:48,690 --> 00:04:55,450
All right so now that we have the error we can back propagate it in the neural network of the generator.
67
00:04:55,590 --> 00:04:57,200
So I'm taking this error.
68
00:04:57,320 --> 00:04:59,620
errG, then dot.
69
00:04:59,670 --> 00:05:07,280
And then applying the backward function, which, keep in mind, so far only computes the gradients. But then
70
00:05:07,760 --> 00:05:13,370
we are going to use the optimizer of the generator to make sure that this time it's going to be the
71
00:05:13,370 --> 00:05:16,580
weights of the generator that will be updated.
72
00:05:16,580 --> 00:05:25,910
Therefore I'm exactly going to take the optimizer of the generator, optimizerG, to update the weights
73
00:05:26,450 --> 00:05:29,040
of the neural network of the generator.
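The backward-then-step sequence can be sketched like this. The names errG and optimizerG follow the tutorial; the tiny network and the dummy loss are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Tiny stand-in generator (assumption); optimizerG follows the tutorial's naming.
netG = nn.Linear(10, 1)
optimizerG = optim.Adam(netG.parameters(), lr=0.01)

weights_before = netG.weight.detach().clone()

errG = netG(torch.randn(3, 10)).pow(2).mean()  # dummy stand-in loss (assumption)
netG.zero_grad()
errG.backward()    # only computes the gradients
optimizerG.step()  # actually updates the generator's weights
```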
74
00:05:29,270 --> 00:05:30,210
And here we go.
75
00:05:30,230 --> 00:05:31,640
The second step is done.
76
00:05:31,910 --> 00:05:33,130
So congratulations.
77
00:05:33,140 --> 00:05:36,130
Now basically the difficult part of the training is done.
78
00:05:36,200 --> 00:05:37,400
Now it's time for fun.
79
00:05:37,410 --> 00:05:43,010
We're gonna print the losses inside the loop, so we're going to stay in the loop. Then we're going to
80
00:05:43,010 --> 00:05:49,520
save the real images and also of course the fake images and then eventually after the training the fake
81
00:05:49,520 --> 00:05:55,800
images and the real image will appear in this results folder that will contain the final results.
82
00:05:55,850 --> 00:05:58,080
So let's do all this fun stuff.
83
00:05:58,130 --> 00:06:02,480
That will be in the last tutorial of this module. I'm super excited to show you the results.
84
00:06:02,510 --> 00:06:06,560
It's going to be something it's going to be pure computer vision creation.
85
00:06:06,560 --> 00:06:12,170
So prepare yourself a good coffee or a good tea, sit comfortably in your chair, and get ready for the
86
00:06:12,170 --> 00:06:13,200
final results.
87
00:06:13,250 --> 00:06:15,080
Until then enjoy computer vision.