1
00:00:00,360 --> 00:00:02,890
Hello and welcome to this new tutorial.
2
00:00:02,910 --> 00:00:08,790
We're now moving on to the second big step of this implementation. We have already created one brain:
3
00:00:08,850 --> 00:00:10,470
The brain of the generator.
4
00:00:10,590 --> 00:00:14,620
And now we're about to create the second brain of our deep convolutional GANs.
5
00:00:14,790 --> 00:00:20,640
I'm talking of course about the brain of the discriminator and we're going to proceed exactly the same
6
00:00:20,640 --> 00:00:23,690
way as we proceeded for the generator.
7
00:00:23,820 --> 00:00:30,300
We're first going to define a class that will define the architecture of the neural network for the
8
00:00:30,300 --> 00:00:31,430
discriminator.
9
00:00:31,560 --> 00:00:38,220
Then this class will also include a forward function to forward propagate the signal inside this new
10
00:00:38,250 --> 00:00:38,960
network.
11
00:00:39,150 --> 00:00:45,720
And then once the class is made, we'll create a discriminator object, that is, we'll create the brain of
12
00:00:45,720 --> 00:00:46,860
the discriminator.
13
00:00:46,860 --> 00:00:52,840
So let's do this; let's tackle the second big step of our deep convolutional GANs.
14
00:00:52,920 --> 00:01:01,620
So we are starting with class, of course. Then we need to give a name to this new class that will define
15
00:01:01,620 --> 00:01:02,690
the discriminator.
16
00:01:02,820 --> 00:01:09,700
And since we called the class of our generator G, we're going to call this one D, which makes sense.
17
00:01:09,810 --> 00:01:19,030
And so D again is going to inherit from nn.Module, and so inside the parentheses we put nn.Module again.
18
00:01:19,470 --> 00:01:26,820
All right, then a colon, and then we go inside the class to start with the __init__ function that will define
19
00:01:27,000 --> 00:01:29,250
the architecture of the neural network.
20
00:01:29,580 --> 00:01:37,050
So it's the same: we start with def, then double underscore, init, double underscore again, and
21
00:01:37,050 --> 00:01:38,880
some parentheses.
22
00:01:38,880 --> 00:01:46,320
And inside we only put self which will refer to the future instances of our class that will be created.
23
00:01:46,320 --> 00:01:52,650
That is the object and that will allow us to associate the specific variables that are attached to the
24
00:01:52,650 --> 00:01:56,930
object; that is, I'm talking about the properties of the object.
25
00:01:57,090 --> 00:02:00,330
The object will be the neural network of the discriminator.
26
00:02:00,360 --> 00:02:07,830
So these properties and variables will be the different modules composing the architecture of the discriminator.
27
00:02:07,830 --> 00:02:16,980
All right, so now a colon, we go inside the function, and same as before, we start by activating the inheritance using
28
00:02:16,980 --> 00:02:26,360
the super function, which takes as arguments our discriminator class D and our object self.
29
00:02:26,490 --> 00:02:32,780
Then don't forget, we add a dot here, double underscore, init, double underscore again,
30
00:02:32,910 --> 00:02:34,710
and some parentheses.
31
00:02:34,710 --> 00:02:41,320
Perfect, inheritance activated. All right, and now things are going to get interesting.
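As a quick sketch, here is what has been typed so far, assuming torch.nn has already been imported as nn earlier in the file, as in the generator tutorial:

import torch.nn as nn  # assumed to be imported at the top of the file

# Defining the class for the brain of the discriminator
class D(nn.Module):

    def __init__(self):
        # Activating the inheritance from nn.Module
        super(D, self).__init__()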
32
00:02:41,340 --> 00:02:44,520
I'm going to try to make you guess what we're going to start with.
33
00:02:44,610 --> 00:02:50,700
We have to start with the first module of this architecture of the neural network of the discriminator.
34
00:02:50,700 --> 00:02:56,940
And so, according to you, what is this first module going to be? Is it going to be an inverse convolution
35
00:02:56,940 --> 00:03:03,190
like before, or is it going to be a convolution, or is it going to be a full connection?
36
00:03:03,570 --> 00:03:05,670
Well think about this for a second.
37
00:03:05,850 --> 00:03:13,620
And in the meantime, I will get my main again, which represents the meta-module that will contain the different
38
00:03:13,890 --> 00:03:16,940
modules composing the architecture of the neural network.
39
00:03:16,980 --> 00:03:18,500
So exactly like before.
40
00:03:18,510 --> 00:03:24,210
So self.main, and this module will be a big sequence of layers.
41
00:03:24,270 --> 00:03:25,980
That is a big sequence of modules.
42
00:03:26,190 --> 00:03:31,980
And therefore I'm setting this main meta-module to be an object of the
43
00:03:32,130 --> 00:03:36,990
nn.Sequential class, with parentheses.
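At this point the meta-module is just an empty sequence waiting for its modules; a minimal sketch:

        # self.main is the meta-module: a big sequence of all the modules of the discriminator.
        self.main = nn.Sequential(
            # the convolutions, batch normalizations and activations will be listed here
        )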
44
00:03:37,200 --> 00:03:38,030
And here we go.
45
00:03:38,160 --> 00:03:40,140
Let's see if you got this right.
46
00:03:40,140 --> 00:03:43,360
For this first module of this architecture.
47
00:03:43,650 --> 00:03:44,880
Let's see.
48
00:03:44,880 --> 00:03:47,310
All right so to answer this question right.
49
00:03:47,400 --> 00:03:51,080
Well we need to understand what the discriminator is going to do.
50
00:03:51,210 --> 00:03:59,040
It's going to take as inputs a generated image coming from the generator and return as output a discriminating
51
00:03:59,040 --> 00:04:02,190
number, which will be a value between 0 and 1.
52
00:04:02,250 --> 00:04:08,940
Therefore, it takes as input a generated image, the generated image created by the generator.
53
00:04:09,060 --> 00:04:16,200
Well that means it takes as input an image and therefore the first module of our architecture is simply
54
00:04:16,200 --> 00:04:19,680
going to be a convolution not an inverse convolution.
55
00:04:19,680 --> 00:04:21,330
That's only for the generator.
56
00:04:21,330 --> 00:04:28,350
But this time a convolution: it's going to take as input an image, an image created by the generator, and
57
00:04:28,440 --> 00:04:29,880
through the convolution.
58
00:04:30,030 --> 00:04:35,940
And then of course some more convolutions we will get in the end a simple vector of one element that
59
00:04:35,940 --> 00:04:40,080
will contain the discriminating number a value between 0 and 1.
60
00:04:40,110 --> 00:04:41,010
So there we go.
61
00:04:41,040 --> 00:04:44,140
You have the idea and now it's going to be the same as before.
62
00:04:44,220 --> 00:04:50,520
We're going to add a series of different convolutions with different parameters, and we'll batch normalize the feature
63
00:04:50,520 --> 00:04:51,030
maps.
64
00:04:51,060 --> 00:04:54,460
And of course we'll apply some rectification.
65
00:04:54,660 --> 00:04:58,490
And I'll let you find out about a little surprise regarding the rectification.
66
00:04:58,500 --> 00:05:02,900
It's not going to be a ReLU; it's going to be something a little more sophisticated.
67
00:05:03,070 --> 00:05:05,090
We will see that in one or two minutes.
68
00:05:05,280 --> 00:05:10,920
But let's start with our first module that is our convolution.
69
00:05:11,030 --> 00:05:17,210
So to get this first module, we're going to create a new object of the Conv2d class, and therefore
70
00:05:17,240 --> 00:05:23,510
I'm taking my nn module, dot, and then I'm getting the Conv2d class.
71
00:05:23,510 --> 00:05:25,320
Here it is perfect.
72
00:05:25,370 --> 00:05:28,130
There is now something less easy.
73
00:05:28,220 --> 00:05:30,730
The parameters of this Conv2d class.
74
00:05:30,800 --> 00:05:35,630
So it's actually not that hard, and I'm going to give you something else to guess: what do you think is
75
00:05:35,630 --> 00:05:38,530
going to be the parameter we have to input here,
76
00:05:38,570 --> 00:05:41,400
for the first argument of this Conv2d class?
77
00:05:41,510 --> 00:05:46,200
Remember, that corresponds to in_channels, that is, the dimensions of the input.
78
00:05:46,400 --> 00:05:53,810
So as we just said, the input of our discriminator, and therefore of this first module, that is, the convolution,
79
00:05:54,200 --> 00:05:57,320
is an image an image created by the generator.
80
00:05:57,320 --> 00:06:01,720
So, to answer the question: what is going to be the input of this first convolution?
81
00:06:01,850 --> 00:06:07,260
We simply need to look at the output of the generator and the output of the generator.
82
00:06:07,290 --> 00:06:11,830
Remember is these three channels here of the image.
83
00:06:11,900 --> 00:06:19,340
So the input of our discriminator is nothing else than 3, the three channels of the generated image
84
00:06:19,490 --> 00:06:21,150
created by the generator.
85
00:06:21,500 --> 00:06:22,210
Perfect.
86
00:06:22,370 --> 00:06:23,700
And then it's going to be the same.
87
00:06:23,720 --> 00:06:30,120
We're going to specify a number of feature maps, a kernel size, a stride, a padding, and choose true or false
88
00:06:30,120 --> 00:06:31,490
for the bias.
89
00:06:31,520 --> 00:06:32,900
So here we go let's do it.
90
00:06:32,900 --> 00:06:40,370
We're going to choose 64 feature maps, then a size of 4 for the kernels.
91
00:06:40,420 --> 00:06:48,450
So these are going to be squares of size four by four, then a stride of 2 and a padding of 1.
92
00:06:48,620 --> 00:06:56,140
And finally, last but not least, we choose to have no bias, and therefore, like before, I'm choosing
93
00:06:56,140 --> 00:06:58,080
bias = False.
94
00:06:58,100 --> 00:06:58,790
All right.
95
00:06:58,850 --> 00:07:02,150
First module done, congratulations.
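As a sketch, this first module placed inside nn.Sequential, with the parameters just chosen, is:

            # First convolution: 3 input channels (the generated image), 64 feature maps,
            # 4x4 kernels, a stride of 2, a padding of 1, and no bias.
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),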
96
00:07:02,180 --> 00:07:05,730
Now, second module. What do you think the second
97
00:07:05,840 --> 00:07:11,130
operation is going to be? Well, we're not going to apply a batch norm yet.
98
00:07:11,210 --> 00:07:13,130
We will leave that for the next convolution.
99
00:07:13,220 --> 00:07:19,340
However we always need to apply rectification and that's the surprise I wanted to keep from you but
100
00:07:19,340 --> 00:07:24,680
I'll tell it to you right now: we're not going to use the usual rectifier activation function.
101
00:07:24,710 --> 00:07:26,560
We're not going to use the ReLU function.
102
00:07:26,560 --> 00:07:29,790
We're going to use a LeakyReLU.
103
00:07:30,200 --> 00:07:33,530
So what I'm going to do now I'm going to go to Google
104
00:07:36,220 --> 00:07:37,180
here is Google.
105
00:07:37,240 --> 00:07:42,880
And here is the LeakyReLU. The LeakyReLU is very close to the ReLU.
106
00:07:43,000 --> 00:07:50,320
But besides having the max of zero and x, which actually is the formula for the ReLU, we have this addition
107
00:07:50,320 --> 00:07:54,780
of the negative slope multiplied by the min of zero and x.
108
00:07:54,850 --> 00:07:57,630
So that's a slight change in the formula.
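Written out, the formula just described is:

\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} \times \min(0, x)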
109
00:07:57,790 --> 00:08:01,030
And if you put that into a graph you can see the difference.
110
00:08:01,180 --> 00:08:08,620
The ReLU is a horizontal line equal to zero here for the negative values of x and then the straight
111
00:08:08,620 --> 00:08:11,680
line y equals x. And for the LeakyReLU?
112
00:08:11,860 --> 00:08:18,290
We have this slight change here which gives some negative values for the negative values of x.
113
00:08:18,520 --> 00:08:20,350
So that's the slight difference.
114
00:08:20,350 --> 00:08:26,210
I thought it was good to show you this, and now, why do we choose to use a LeakyReLU instead of a ReLU?
115
00:08:26,380 --> 00:08:28,870
Well again that's an artist's job.
116
00:08:28,870 --> 00:08:35,570
It comes from experimentation and research: basically, for the convolutions of the discriminator,
117
00:08:35,620 --> 00:08:41,380
it works better with the LeakyReLU than with a simple ReLU.
118
00:08:41,420 --> 00:08:43,600
So I'm getting back to Python.
119
00:08:43,670 --> 00:08:48,180
And therefore, if you all agree with the LeakyReLU, let's apply a LeakyReLU.
120
00:08:48,770 --> 00:08:56,030
So to apply the LeakyReLU, we take our nn module, then dot, and then we will find very easily the
121
00:08:56,030 --> 00:08:56,630
LeakyReLU.
122
00:08:56,650 --> 00:08:57,480
Here it is.
123
00:08:57,770 --> 00:09:04,460
And we need to put in several arguments, of course: the negative slope, and we'll choose not 0.01
124
00:09:04,490 --> 00:09:06,930
but 0.2.
125
00:09:07,070 --> 00:09:10,880
Again a value chosen from experimentation and research.
126
00:09:11,020 --> 00:09:12,230
So 0.2.
127
00:09:12,530 --> 00:09:18,350
And then we have a second argument, inplace, which defaults to False. But rather than keep inplace at False,
128
00:09:18,430 --> 00:09:26,990
we will set it to True. Again, if we go back to the PyTorch documentation, and by the way, yes, this is the
129
00:09:26,990 --> 00:09:34,270
PyTorch documentation that I highly recommend going through while watching the tutorials, or even any
130
00:09:34,270 --> 00:09:34,770
time.
131
00:09:34,850 --> 00:09:37,640
It's great documentation; it's very clear.
132
00:09:37,640 --> 00:09:43,880
And so if we look at this second argument, inplace, well, you can see that's an argument that, if set to
133
00:09:43,880 --> 00:09:49,010
True, can actually do the operation in place; the default value is False.
134
00:09:49,040 --> 00:09:52,890
But we chose to set it to true to benefit from this option.
135
00:09:52,890 --> 00:09:53,560
All right.
136
00:09:53,630 --> 00:09:56,090
So let's go back to Python.
137
00:09:56,150 --> 00:10:00,260
There we are, and we are done now with the LeakyReLU.
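So the module just added inside nn.Sequential, as a sketch, is:

            # LeakyReLU rectification with a negative slope of 0.2, applied in place.
            nn.LeakyReLU(0.2, inplace=True),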
138
00:10:00,500 --> 00:10:01,020
All right.
139
00:10:01,040 --> 00:10:05,450
Let's not forget the comma and let's move on to the next module.
140
00:10:05,480 --> 00:10:09,220
So the next module is going to be another convolution.
141
00:10:09,290 --> 00:10:15,440
Rest assured we'll get many convolutions as we got many inversed convolutions.
142
00:10:15,440 --> 00:10:18,970
So we have a long way to go before making the architecture.
143
00:10:19,070 --> 00:10:27,720
So let's do it let's add the second convolution and then that comes to the third is.
144
00:10:27,770 --> 00:10:30,560
And let's specify the arguments.
145
00:10:30,590 --> 00:10:37,070
So now you understand clearly the logic the input of this new convolution is the output of the previous
146
00:10:37,070 --> 00:10:42,020
convolution and therefore it is going to be the 64 feature maps.
147
00:10:42,020 --> 00:10:44,260
And now we need to choose a now output.
148
00:10:44,270 --> 00:10:50,450
So since it's actually the inverse operation as the previous one which was inverse convolution.
149
00:10:50,630 --> 00:10:56,480
Well this time we're not going to reduce the number of feature maps each time for each new convolution
150
00:10:56,930 --> 00:10:59,430
as you might have guessed we're going to do the exact opposite.
151
00:10:59,470 --> 00:11:00,740
We are going to increase.
152
00:11:00,800 --> 00:11:07,170
And more specifically multiply by 2 the number of feature maps in each new convolution.
153
00:11:07,280 --> 00:11:12,590
Therefore the size of outputs here that is the new number of feature maps we're getting as the output
154
00:11:12,590 --> 00:11:17,320
of this convolution is going to be one hundred and twenty years.
155
00:11:17,390 --> 00:11:18,510
There we go.
156
00:11:18,830 --> 00:11:23,060
I think now you can almost finish it yourself but we'll never know.
157
00:11:23,060 --> 00:11:24,500
Please stay with me.
158
00:11:24,650 --> 00:11:33,800
Then we're going to choose a size of four for the kernel strive to a padding of one and no bias bicycle's
159
00:11:33,950 --> 00:11:36,210
false right.
160
00:11:36,290 --> 00:11:38,770
Second convolution done.
161
00:11:38,900 --> 00:11:39,680
Perfect.
162
00:11:39,680 --> 00:11:41,390
Next step next step.
163
00:11:41,510 --> 00:11:49,880
As I just hinted, it is to apply a batch normalization, to batch normalize each of the 128 new
164
00:11:49,880 --> 00:11:53,260
feature maps and so that's exactly the same as before.
165
00:11:53,270 --> 00:11:59,720
We're going to take our nn module, then dot, and then we're going to apply a BatchNorm2d.
166
00:11:59,900 --> 00:12:07,030
And since we're batch normalizing the 128 feature maps, we will input exactly
167
00:12:07,090 --> 00:12:10,370
one hundred and twenty-eight, exactly like before.
168
00:12:10,370 --> 00:12:14,220
No new mystery. Comma, and on to the next step.
169
00:12:14,280 --> 00:12:14,700
All right.
170
00:12:14,760 --> 00:12:19,190
Now that's going to be the same: we're going to apply a new LeakyReLU.
171
00:12:19,250 --> 00:12:20,930
So I'm copying this.
172
00:12:21,200 --> 00:12:23,890
I am pasting it right here.
173
00:12:24,080 --> 00:12:25,580
And good news.
174
00:12:25,580 --> 00:12:27,350
We have the exact same arguments.
175
00:12:27,350 --> 00:12:29,080
We don't have to replace anything.
176
00:12:29,330 --> 00:12:33,260
We keep a slope of 0.2 and inplace set to True.
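Putting together the second convolution, its batch normalization and the new LeakyReLU, the block just built can be sketched as:

            # Second convolution: 64 feature maps in, 128 feature maps out.
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            # Batch normalization of the 128 new feature maps.
            nn.BatchNorm2d(128),
            # Same rectification as before.
            nn.LeakyReLU(0.2, inplace=True),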
177
00:12:33,530 --> 00:12:35,270
Next step the next step.
178
00:12:35,300 --> 00:12:42,170
As you might guess, and I strongly recommend this game of trying to type the next module in the architecture
179
00:12:42,440 --> 00:12:53,260
before me, the next module is going to be nn dot, and a new convolution of course: Conv2d, parentheses.
180
00:12:53,330 --> 00:12:57,140
And now of course you guess what we're going to input and what we're going to output.
181
00:12:57,230 --> 00:13:04,730
We're going to input what was output by the previous convolution which was 128 feature maps.
182
00:13:04,730 --> 00:13:12,020
That becomes the new input of this new convolution, and for the output, well, we increase again this time:
183
00:13:12,050 --> 00:13:14,800
The number of feature maps multiplied by two.
184
00:13:14,960 --> 00:13:22,330
So we're going to get the new output of two hundred and fifty-six feature maps. All right.
185
00:13:22,490 --> 00:13:33,080
And then again we choose a kernel of size 4, a stride of two, and a padding of one, and again absolutely
186
00:13:33,080 --> 00:13:37,700
no bias, bias equals False. Perfect. Comma.
187
00:13:37,730 --> 00:13:43,320
Next step: the next step is to, guess what, apply a batch normalization.
188
00:13:43,370 --> 00:13:48,320
So we take our nn module, dot, BatchNorm2d.
189
00:13:48,560 --> 00:13:53,990
And as arguments we are going to input the number of new feature maps that we got and that we want to
190
00:13:53,990 --> 00:13:55,540
batch normalize.
191
00:13:55,670 --> 00:13:59,500
And since what we've got is 256 new feature maps.
192
00:13:59,570 --> 00:14:08,840
Well, we're going to input here 256 to batch normalize these new 256 feature maps. Perfect. Next step: the
193
00:14:08,840 --> 00:14:10,690
LeakyReLU again.
194
00:14:10,730 --> 00:14:15,330
So nn dot LeakyReLU. There it is.
195
00:14:15,500 --> 00:14:19,140
And that's the same I don't know why I'm typing this again.
196
00:14:19,280 --> 00:14:24,200
I just need to copy and paste it and there we go.
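So the third block, following the same doubling pattern, looks like this sketch:

            # Third convolution: 128 feature maps in, 256 feature maps out.
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),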
197
00:14:24,200 --> 00:14:26,140
Now another convolution again.
198
00:14:26,180 --> 00:14:31,150
So this time let's not miss it, and copy this. Copy.
199
00:14:32,240 --> 00:14:41,800
Paste. So this time we have 256 feature maps for the input, and we double it for the output.
200
00:14:41,870 --> 00:14:48,980
So we get five hundred and twelve new feature maps in the output of this new convolution and then same
201
00:14:49,090 --> 00:14:54,420
kernel size of four, a stride of two, a padding of 1, and no bias.
202
00:14:54,500 --> 00:14:59,420
Great. Then our batch norm again. I'm copying this,
203
00:14:59,430 --> 00:15:02,200
I'm pasting it just below.
204
00:15:02,240 --> 00:15:11,270
I am not forgetting to replace the previous 256 feature maps with the new 512 feature maps that we
205
00:15:11,270 --> 00:15:12,650
have to batch norm.
206
00:15:12,680 --> 00:15:18,940
Then we apply our LeakyReLU with the same parameters.
207
00:15:19,040 --> 00:15:22,410
So I just need to copy this and paste it here.
208
00:15:22,570 --> 00:15:25,250
I hope you are doing all this faster than me.
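And the fourth block, doubling the number of feature maps once more; a sketch:

            # Fourth convolution: 256 feature maps in, 512 feature maps out.
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),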
209
00:15:25,330 --> 00:15:28,290
All right then almost over.
210
00:15:28,300 --> 00:15:32,010
We have one final convolution to add.
211
00:15:32,050 --> 00:15:34,660
This is the last one, and I'm pasting it here.
212
00:15:34,960 --> 00:15:42,880
And so we now get 512 feature maps as the input, and the question is, since this is the
213
00:15:42,880 --> 00:15:47,160
last convolution, what is the output going to be?
214
00:15:47,170 --> 00:15:49,860
Please don't answer 1024.
215
00:15:49,910 --> 00:15:51,750
We're done with the convolutions here.
216
00:15:51,850 --> 00:15:54,700
We stop playing the game of doubling the feature maps.
217
00:15:54,700 --> 00:15:59,190
No, this time we have to specify the final output
218
00:15:59,230 --> 00:16:01,060
of the discriminator.
219
00:16:01,130 --> 00:16:06,680
And so if you understood the role of the discriminator you should be able to guess the output here.
220
00:16:06,850 --> 00:16:10,940
Well I'm going to tell you right now it is actually one.
221
00:16:11,210 --> 00:16:11,780
One.
222
00:16:11,800 --> 00:16:17,990
That's because the discriminator is returning a discriminating number, a number between 0 and 1.
223
00:16:18,150 --> 00:16:23,910
So that's a simple vector of one dimension containing this number and therefore that's why we have a
224
00:16:23,920 --> 00:16:29,920
1 here. And then, just a little trap, you cannot really guess this, but we're going to keep a kernel size
225
00:16:29,920 --> 00:16:37,600
of 4, but then we're going to choose a stride of one and a padding of 0, and then it is the same: no
226
00:16:37,600 --> 00:16:38,890
bias.
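As a sketch, this final convolution with the parameters just given is:

            # Final convolution: 512 feature maps in, 1 output channel,
            # 4x4 kernel, a stride of 1, a padding of 0, and no bias.
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),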
227
00:16:38,890 --> 00:16:41,140
All right we're almost done.
228
00:16:41,170 --> 00:16:45,670
We have one final thing to do for this architecture.
229
00:16:45,880 --> 00:16:49,040
If you look at what we did for the generator.
230
00:16:49,150 --> 00:16:52,850
Well you can see that after the last inverse convolution.
231
00:16:52,960 --> 00:16:58,480
Well, we applied an activation function, again to break the linearity, and we're going to do the same
232
00:16:58,480 --> 00:17:00,030
for the discriminator.
233
00:17:00,040 --> 00:17:03,540
But this time it's not going to be a hyperbolic tangent.
234
00:17:03,850 --> 00:17:06,050
According to you what is it going to be.
235
00:17:06,070 --> 00:17:10,920
It's actually a very classic activation function to use at the end of a neural network.
236
00:17:10,920 --> 00:17:18,010
It is the sigmoid function, and if you went deep into the theory of deep convolutional GANs, and
237
00:17:18,010 --> 00:17:20,520
especially the part related to the discriminator.
238
00:17:20,650 --> 00:17:26,170
Well you would understand that the sigmoid function is the best choice because it returns some values
239
00:17:26,170 --> 00:17:32,550
between 0 and 1 and for the discriminator we actually want to be between 0 and 1.
240
00:17:32,590 --> 00:17:33,590
Why is that.
241
00:17:33,790 --> 00:17:40,590
Well, the answer to that question is in the name of the discriminator: it's for discriminating reasons.
242
00:17:40,840 --> 00:17:48,550
The principle of the discriminator is that zero corresponds to the rejection of the image and one corresponds
243
00:17:48,550 --> 00:17:50,420
to the acceptance of the image.
244
00:17:50,560 --> 00:17:54,880
So 0 means we reject the image and one means we accept the image.
245
00:17:54,880 --> 00:18:02,100
Therefore the name discriminator and so basically we want to be between 0 and 1 to do some discrimination.
246
00:18:02,650 --> 00:18:05,660
And that's why the output is actually a value between 0 and 1.
247
00:18:05,800 --> 00:18:11,980
Because then, using a classic threshold technique, well, the values below 0.5 we will consider as
248
00:18:11,980 --> 00:18:16,790
zeros, so we will reject the image, and for the values above 0.5,
249
00:18:16,800 --> 00:18:20,730
we will consider them as ones and we will accept the image.
250
00:18:20,890 --> 00:18:27,790
So that's very important to understand, and therefore the sigmoid function, which not only breaks up
251
00:18:27,790 --> 00:18:35,260
the linearity but also returns a value between 0 and 1, is the optimal choice for the activation function
252
00:18:35,410 --> 00:18:38,580
at the level of the output of the discriminator.
253
00:18:38,610 --> 00:18:45,130
All right, so if you're convinced, we're going to get our nn module, then dot, and then we get
254
00:18:45,250 --> 00:18:47,900
our sigmoid activation function.
255
00:18:48,070 --> 00:18:54,700
So it's an object of the Sigmoid class, which will represent the sigmoid activation function itself, and
256
00:18:54,970 --> 00:18:58,800
we don't have any argument to input.
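So the last module of the sequence is simply:

            # Sigmoid activation: squashes the output to a discriminating value between 0 and 1.
            nn.Sigmoid()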
257
00:18:58,840 --> 00:19:04,210
So congratulations, you are done with the architecture of the second brain
258
00:19:04,240 --> 00:19:06,490
that you had to create for this DCGAN.
259
00:19:06,560 --> 00:19:08,850
So now our DCGAN has its two brains.
260
00:19:08,860 --> 00:19:13,310
Let's make the forward function to propagate the signal inside the second brain.
261
00:19:13,390 --> 00:19:17,200
We'll make it in the next tutorial and until then enjoy computer vision.
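To recap, here is a sketch of the complete discriminator class built in this tutorial; the forward function is left for the next tutorial:

import torch.nn as nn

# Defining the discriminator
class D(nn.Module):

    def __init__(self):
        super(D, self).__init__()
        self.main = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )
        # The forward function to propagate the signal will be added in the next tutorial.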