In this lecture, we will consider correlation-based matching strategies for locating a robot on a map given laser range data. This map registration process provides a very precise complement to odometry-based localization.
First, we should introduce the LIDAR depth sensor. LIDAR stands for light detection and ranging, and it provides distance measurements, often engineered into a laser scanner to provide two-dimensional data.
The laser scanner we will model in this lecture takes depth measurements in polar coordinates, where a continuous distance reading r is made at discrete angles theta. Here, theta encompasses 270 degrees, not a full circle.
The laser scanner can only see 10 to 30 meters away. This range restriction means that distance measurements, shown here as black dots, can only be found within the area in green. Thus, due to the rays emanating from a single point and the limited range, the robot can only see the dotted lines and not the lines in brown.
Just as in the previous lecture, a two-dimensional occupancy grid map will be used in localization, where a light-colored cell represents a high probability of an obstacle and a dark-colored cell represents a low probability. The cells here are meant to replicate the laser scan shown in the previous slide, when the robot is approaching a corner in a hallway.
Because the robot lives in a finite grid world, the grid must sometimes be expanded, since the robot can escape the boundaries. In this case, the map representation should increase in size as the robot turns and travels down the corridor shown in the top left of this map, or else information will be lost.
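One way to grow the map without losing information is to pad the grid with unknown cells and shift the map origin accordingly. A minimal sketch, assuming a log-odds grid where 0 means unknown; the pad size is an illustrative choice:

```python
import numpy as np

def expand_grid(grid, origin, pad=50, prior=0.0):
    """Pad the log-odds grid on all sides, keeping old cell contents.

    origin: (row, col) cell of the world frame's (0, 0), shifted by pad.
    """
    grid = np.pad(grid, pad, mode="constant", constant_values=prior)
    return grid, (origin[0] + pad, origin[1] + pad)
```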
In addition to mapping the laser data, as discussed in week 3, we can access map data and try to find the robot's pose in the map given the laser data. The complementary stages of mapping and localization, when performed together, are known as SLAM, simultaneous localization and mapping, which is a major research topic in robotics.
In the localization problem, we have two sets of information. First, the occupancy grid map provides ground-truth knowledge of what the robot should expect to observe in the world. Second, the set of LIDAR scan measurements provides information on what the robot is observing at the current time.
The LIDAR scan measurements must be discretized according to the map representation, as discussed in week three, in order to be compared to the information from the occupancy map.
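A minimal sketch of this discretization step, assuming a fixed grid resolution in meters per cell and an origin cell for the world frame; both values are illustrative:

```python
import numpy as np

def world_to_cells(points, resolution=0.05, origin=(200, 200)):
    """Map 2D points in meters to integer (row, col) grid indices."""
    rows = np.floor(points[:, 1] / resolution).astype(int) + origin[0]
    cols = np.floor(points[:, 0] / resolution).astype(int) + origin[1]
    return rows, cols
```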
With these two pieces of information, the goal is to find the best robot pose on the map that explains the measured observations.
Searching over all possible poses of the robot can be difficult, but based on the odometry information discussed in the last lecture, we have some tricks to make the search easier.
We can constrain the search to a limited number of poses based on odometry information. Because we track the robot over time, we have the last known position of the robot and odometry information on how far the robot most likely moved. Thus, the most likely pose for the robot now, given a new set of laser data, is probably close to where the odometry predicts the robot to be. This prediction means that we can refine our search to poses near the prediction and be more confident in the validity of our search results.
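A minimal sketch of such a constrained search window, centered on the odometry prediction; the window half-widths and step counts are illustrative assumptions:

```python
import numpy as np

def candidate_poses(pred, dxy=0.25, dtheta=np.radians(10), steps=5):
    """Enumerate (x, y, theta) poses in a small window around `pred`."""
    xs = pred[0] + np.linspace(-dxy, dxy, steps)
    ys = pred[1] + np.linspace(-dxy, dxy, steps)
    thetas = pred[2] + np.linspace(-dtheta, dtheta, steps)
    return [(x, y, t) for x in xs for y in ys for t in thetas]
```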
We measure each pose p in the search based on a map registration metric. One metric is to consider the sum of the map values m, at coordinates x and y, where the laser returns r hit.
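A minimal sketch of that sum, reusing the hypothetical world_to_cells helper from above: transform the scan points by the candidate pose, discretize them, and sum the map values at the cells they hit.

```python
import numpy as np

def registration_score(grid, points, pose, resolution=0.05, origin=(200, 200)):
    """Sum of map values m(x, y) at the cells hit by the laser returns."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the sensor-frame points by theta and translate to (x, y).
    world = points @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    rows, cols = world_to_cells(world, resolution, origin)
    inside = (rows >= 0) & (rows < grid.shape[0]) & \
             (cols >= 0) & (cols < grid.shape[1])
    return grid[rows[inside], cols[inside]].sum()  # higher is better
```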
This correlation metric can be modified to suit the application at hand. In our case, the value of a map cell will be a log-odds ratio, so laser returns that are seen at a map location with a high probability of occupancy will strongly increase the registration metric score. Laser returns at map locations known as free cells will decrease the metric score. Additionally, the correlation can be scaled so that returns from far distances affect the metric calculation less than nearby laser returns.
We register the robot on the map at the pose that maximizes the registration metric.
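Combining the sketches above, this selection might look like the following, where odom_prediction, grid, and scan_points are hypothetical inputs produced by the earlier steps:

```python
best_pose = max(candidate_poses(odom_prediction),
                key=lambda p: registration_score(grid, scan_points, p))
```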
Thus, when the odometry is calculated, it uses this pose to predict the new position of the robot forward in time.
In addition to considering merely the laser returns, we can also consider the points that the laser rays penetrated. This calculation can further corroborate our map registration. It requires considerably more computation, however.
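A minimal sketch of that idea: sample points along each beam from the sensor to the return, and penalize rays that pass through cells the map believes are occupied. The sampling step and penalty weight are illustrative assumptions, and the per-ray loop is where the extra computation comes from.

```python
import numpy as np

def ray_penalty(grid, points, pose, resolution=0.05, origin=(200, 200),
                step=0.05, weight=0.5):
    """Penalty for beams that pass through supposedly occupied cells."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    sensor = np.array([x, y])
    world = points @ np.array([[c, s], [-s, c]]) + sensor
    penalty = 0.0
    for end in world:
        n = max(int(np.linalg.norm(end - sensor) / step), 1)
        # Points along the ray, excluding the endpoint (the return itself).
        ray = sensor + np.linspace(0, 1, n, endpoint=False)[:, None] * (end - sensor)
        rows, cols = world_to_cells(ray, resolution, origin)
        inside = (rows >= 0) & (rows < grid.shape[0]) & \
                 (cols >= 0) & (cols < grid.shape[1])
        # Log-odds are positive where the map believes an obstacle is;
        # a beam passing through such cells contradicts the map.
        penalty += grid[rows[inside], cols[inside]].sum()
    return weight * penalty  # subtract this from the registration score
```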
Capturing pose uncertainty using a simple Gaussian on position and angle may not provide a feasible approach. In the next lecture, we will present a pose filter that can capture bi-modal uncertainty and non-linear models in a computationally tractable way.