1
00:00:01,870 --> 00:00:03,460
Hello and welcome to this video.
2
00:00:03,490 --> 00:00:06,630
I will explain YOLOv7 training on custom objects.
3
00:00:06,640 --> 00:00:09,700
This time the training will be conducted on Google Colab.
4
00:00:09,700 --> 00:00:12,010
YOLOv7 is installed on Google Colab.
5
00:00:12,130 --> 00:00:17,590
However, before we begin training we must first prepare the annotated dataset and create a configuration
6
00:00:17,590 --> 00:00:18,070
file.
7
00:00:18,100 --> 00:00:20,440
The first thing we will do is prepare the dataset.
8
00:00:20,470 --> 00:00:24,070
However, preparing the dataset cannot be done directly on Google Colab.
9
00:00:24,070 --> 00:00:27,280
In this example, preparing the dataset will be done on Windows.
10
00:00:27,310 --> 00:00:30,160
First, we make a folder in which to save the dataset.
11
00:00:30,190 --> 00:00:34,010
In this case we will make a folder in the D: directory. To create a new folder:
12
00:00:34,030 --> 00:00:34,630
Right click.
13
00:00:34,630 --> 00:00:35,680
New folder.
14
00:00:36,950 --> 00:00:38,390
In this example, we name it
15
00:00:38,390 --> 00:00:40,310
dataset colab, that is, a folder for datasets to be used on Colab.
16
00:00:40,310 --> 00:00:42,440
You can use datasets that you have annotated.
17
00:00:46,980 --> 00:00:50,070
In this video, we will use an annotated face mask dataset.
18
00:00:50,070 --> 00:00:55,400
The face mask dataset can be accessed and downloaded at the following URL.
19
00:00:55,440 --> 00:01:00,270
There is a face mask dataset and a split_dataset.py that can be used to split the dataset.
20
00:01:00,990 --> 00:01:02,580
Download the following dataset.
21
00:01:10,130 --> 00:01:12,470
Also download split_dataset.py.
22
00:01:22,050 --> 00:01:23,880
Wait until the download is finished.
23
00:01:29,640 --> 00:01:32,040
When you're finished, go to the downloads folder.
24
00:01:37,670 --> 00:01:41,240
Next move these two files to the folder that was previously created.
25
00:01:42,620 --> 00:01:48,470
In this example, the dataset colab folder in the D: directory looks like this. Then press Ctrl+X.
26
00:01:54,780 --> 00:01:56,520
Paste by pressing Ctrl+V.
27
00:02:04,180 --> 00:02:06,040
Next extract the dataset.
28
00:02:12,650 --> 00:02:18,470
In this example, we will extract using the built-in Windows 11 tools: right click, then Extract All.
29
00:02:23,800 --> 00:02:25,090
The face mask dataset.
30
00:02:29,800 --> 00:02:30,700
Click Extract.
31
00:02:37,660 --> 00:02:39,670
Wait until the extraction is finished.
32
00:02:44,930 --> 00:02:50,810
The following is the annotated face mask dataset. Next, split the dataset into train, validation,
33
00:02:50,810 --> 00:02:51,720
and test data.
34
00:02:51,740 --> 00:02:55,010
The split results must match the YOLOv7 folder structure.
35
00:02:55,040 --> 00:02:57,710
The YOLOv7 folder structure is shown below.
36
00:02:57,740 --> 00:03:03,350
The images folder contains images, while the labels folder contains annotations. Each folder contains
37
00:03:03,350 --> 00:03:05,420
train, val, and test folders.
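The folder layout described above can be sketched in Python. This is an illustrative helper, not part of the downloaded scripts; the root folder name is an assumption for this example.

```python
# Sketch of the YOLOv7 dataset layout the split must produce:
# images/{train,val,test} and labels/{train,val,test}.
from pathlib import Path


def make_yolov7_layout(root: str) -> list:
    """Create the images/ and labels/ folders, each with train, val, and test subfolders."""
    created = []
    for top in ("images", "labels"):
        for split in ("train", "val", "test"):
            p = Path(root) / top / split
            p.mkdir(parents=True, exist_ok=True)
            created.append(p)
    return created


if __name__ == "__main__":
    # "face_mask_dataset" is an assumed name matching this example.
    for p in make_yolov7_layout("face_mask_dataset"):
        print(p)
```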
38
00:03:05,420 --> 00:03:10,600
We have previously downloaded the Python code for dataset splitting, namely split_dataset.py.
39
00:03:11,480 --> 00:03:16,160
To split, open a command prompt in this folder by clicking the address bar and typing cmd.
40
00:03:16,640 --> 00:03:18,110
After that press enter.
41
00:03:22,750 --> 00:03:25,150
Make sure you have Python installed before splitting.
42
00:03:26,650 --> 00:03:30,400
Use the following command to do the splitting: python split_dataset
43
00:03:30,400 --> 00:03:31,180
.py
44
00:03:34,890 --> 00:03:36,090
Next, --train.
45
00:03:37,760 --> 00:03:40,820
The --train argument is used to set the train data percentage.
46
00:03:42,390 --> 00:03:43,500
We write 80.
47
00:03:46,630 --> 00:03:48,160
Next, --validation.
48
00:03:49,750 --> 00:03:56,230
The --validation argument is used to set the validation data percentage; we write 10. Next, --test.
49
00:03:57,970 --> 00:04:01,000
The --test argument is used to set the percentage of test data.
50
00:04:01,120 --> 00:04:02,320
We write ten.
51
00:04:03,970 --> 00:04:05,260
Next, --folder.
52
00:04:06,780 --> 00:04:10,650
The --folder argument specifies the location of the dataset before it is split.
53
00:04:10,710 --> 00:04:12,840
In this case, the face mask folder.
54
00:04:15,640 --> 00:04:20,540
Next, --dest. The --dest argument specifies the folder in which the split results are stored.
55
00:04:20,560 --> 00:04:24,040
It will be saved in the face mask dataset folder in this example.
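The exact internals of the downloaded split_dataset.py are not shown in the video. As a minimal sketch, a script taking the percentages described above typically shuffles the file names and slices them like this (the function name and seed are assumptions, not the real script):

```python
# Illustrative 80/10/10 split, NOT the downloaded split_dataset.py.
import random


def split_files(names, train=80, validation=10, test=10, seed=0):
    """Shuffle the file names and split them by the given percentages."""
    assert train + validation + test == 100
    names = list(names)
    random.Random(seed).shuffle(names)  # fixed seed for a reproducible split
    n = len(names)
    n_train = n * train // 100
    n_val = n * validation // 100
    return {
        "train": names[:n_train],
        "val": names[n_train:n_train + n_val],
        "test": names[n_train + n_val:],  # remainder goes to test
    }


if __name__ == "__main__":
    splits = split_files([f"img_{i}.jpg" for i in range(100)])
    print({k: len(v) for k, v in splits.items()})  # {'train': 80, 'val': 10, 'test': 10}
```

A real script would additionally move each image and its matching .txt label file into the images/ and labels/ subfolders.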
56
00:04:26,740 --> 00:04:27,580
Press Enter.
57
00:04:35,770 --> 00:04:37,910
Wait until the splitting process is finished.
58
00:04:37,930 --> 00:04:40,400
When finished, we return to Windows Explorer.
59
00:04:40,420 --> 00:04:43,750
There is a face mask dataset folder, which is the result of the split.
60
00:04:44,020 --> 00:04:47,710
The dataset will then be compressed to make it easier to upload to Google Drive.
61
00:04:47,740 --> 00:04:53,950
In this example, we will compress using the built-in Windows 11 tools: right click, then click Compress
62
00:04:53,950 --> 00:04:54,760
to zip file.
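If you prefer scripting the compression instead of the right-click menu, the same ZIP can be produced with Python's standard library. The folder name below is an assumption matching this example.

```python
# Alternative to the Windows 11 "Compress to ZIP file" menu item:
# zip the split dataset folder with the standard library.
import shutil


def zip_folder(folder: str, out_name: str) -> str:
    """Create <out_name>.zip containing the folder's contents; returns the archive path."""
    return shutil.make_archive(out_name, "zip", root_dir=folder)


if __name__ == "__main__":
    # "face_mask_dataset" is the assumed split-result folder from this example.
    print(zip_folder("face_mask_dataset", "face_mask_dataset"))
```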
63
00:04:59,730 --> 00:05:01,620
Wait until the compression is finished.
64
00:05:09,850 --> 00:05:10,890
Here are the results.
65
00:05:17,120 --> 00:05:20,000
In the next video, we will create a configuration file.
66
00:05:20,030 --> 00:05:20,870
See you then.