This is a full-length course from Treehouse. We at freeCodeCamp are longtime fans of their learning platform. They were kind enough to let our non-profit make this course freely available on our YouTube channel. If you like this course, Treehouse has a lot more courses like this one; the link is in the description, along with time codes to the different sections in this course.
Hi, my name is Pasan. I'm an instructor here at Treehouse, and welcome to Introduction to Algorithms. Whether you are a high school or college student, a developer in the industry, or someone who is learning to code, you have undoubtedly run into the term algorithm. For many people, this word is kind of scary; it represents this body of knowledge that seems just out of reach: only people with computer science degrees know about algorithms. To others, it brings up feelings of imposter syndrome: you might already know how to code, but you're not a "real" developer because you don't know anything about algorithms. Personally, it made me frame certain jobs as above my skill level because the interview contained algorithm questions. Whatever your reasons are, in this course our goal is to dispel all those feelings and get you comfortable with the basics of algorithms.
Like any other subject, I like to start my courses with what the course is and is not. In this course we're going to cover the very basic set of knowledge that you need as a foundation for learning about algorithms. This course is less about specific algorithms and more about the tools you will need to evaluate algorithms: to understand how they perform, compare them to each other, and make a statement about the utility of an algorithm in a given context. Don't worry, none of this will be theoretical, and we will learn these concepts by using well-known algorithms.

In this course we will also be writing code, so I do expect you to have some programming experience if you intend to continue with this topic. You can definitely stick around even if you don't know how to code, but you might want to learn the basics of programming in the meantime. In this course we will be using the Python programming language. Python reads a lot like regular English and is the language you will most likely encounter when learning about algorithms these days. If you don't know how to code, or if you know how to code in a different language, check out the notes section of this video for links to other content that might be useful to you. As long as you understand the fundamentals of programming, you should be able to follow along pretty well. If you're a JavaScript developer, or a student who's learning JavaScript, for example, chances are good that you'll still be able to understand the code we write later. I'll be sure to provide links along the way if you need anything to follow up on.

Let's start with something simple: what is an algorithm?
An algorithm is a set of steps, or instructions, for completing a task. This might sound like an oversimplification, but really, that's precisely what an algorithm is. A recipe is an algorithm. Your morning routine when you wake up is an algorithm. And the driving directions you follow to get to a destination are also an algorithm. In computer science, the term algorithm more specifically means the set of steps a program takes to finish a task. If you've written code before, any code really, then generally speaking you have written an algorithm.

Given that much of the code we write can be considered an algorithm, what do people mean when they say you should know about algorithms? Consider this: let's say I'm a teacher in a classroom and I tell everyone I have an assignment for them. On their desks they have a picture of a maze, and their task is to come up with a way to find the quickest way out of the maze. Everyone does their thing and comes up with a solution. Every single one of these solutions is a viable solution and is a valid example of an algorithm: the steps one needs to take to get out of the maze. But from being in classrooms, or any group of any sort, you know that some people will have better ideas than others; we all have a diverse array of skill sets. Over time, our class picks the best of these solutions, and any time we want to solve a maze, we go with one of those solutions. This is what the field of algorithms is about. There are many problems in computer science, but some of them are pretty common regardless of what project you're working on. Different people have come up with different solutions to these common problems, and over time the field of computer science has identified several that do the job well for a given task.
When we talk of algorithms, we're referring to two points. We're primarily saying there's an established body of knowledge on how to solve particular problems well, and it's important to know what those solutions are. Why is that important? If you're unaware that a solution exists, you might try to come up with one yourself, and there's a likelihood that your solution won't be as good or efficient, whatever that means, compared to those that have been thoroughly reviewed. But there's a second component to it as well. Part of understanding algorithms is not just knowing that an algorithm exists, but understanding when to apply it. Understanding when to apply an algorithm requires properly understanding the problem at hand, and this, arguably, is the most important part of learning about algorithms and data structures. As you progress through this content, you should be able to look at a problem and break it down into distinct steps. When you have a set of steps, you should then be able to identify which algorithm or data structure is best for the task at hand. This concept is called algorithmic thinking, and it's something we're going to try and cultivate together as we work through our content.

Lastly, learning about algorithms gives you a deeper understanding of complexity and efficiency in programming. Having a better sense of how your code will perform in different situations is something that you'll always want to develop and hone. Algorithmic thinking is why algorithms also come up in big tech interviews. Interviewers don't care as much that you are able to write a specific algorithm in code, but more about the fact that you can break a seemingly insurmountable problem into distinct components and identify the right tools to solve each distinct component. And that is what we plan on doing in this course, though we're going to focus on some of the tools and concepts you'll need to be aware of before we can dive into the topic of algorithms. If you're ready, let's get started.
Hey again. In this video we're going to do something unusual: we're going to play a game. And by "we" I mean me and my two friends here, Brittany and John. This game is really simple, and you may have played it before. It goes something like this: I'm going to think of a number between 1 and 10, and they have to guess what the number is. Easy, right? When they guess a number, I'll tell them if their guess is too high or too low. The winner is the one with the fewest tries.

All right, John, let's start with you. I'm thinking of a number between 1 and 10; what is it? (Between you and me, the answer is 3.) "Quick question: does the range include 1 and 10?" That is a really good question. What John did right there was establish the bounds of our problem. No solution works on every problem, and an important part of algorithmic thinking is to clearly define what the problem set is and clarify what values count as inputs. Yes, 1 and 10 are both included. "Is it 1?" Too low. "Is it 2?" Too low. "Is it 3?" Correct!

Okay, so that was an easy one. It took John three tries to get the answer. Let's switch over to Brittany and play another round using the same number as the answer. Okay, Brittany, I'm thinking of a number between 1 and 10, inclusive, so both 1 and 10 are in the range. What number am I thinking of? "Is it 5?" Too high. "Is it 2?" Too low. "Is it 3?" Correct! All right, so what we had there were two very different ways of playing the same game.
Somehow, with even such a simple game, we saw different approaches to figuring out a solution. To go back to algorithmic thinking for a second, this means that with any given problem there's no one best solution. Instead, what we should try and figure out is which solution works better for the current problem. In this first pass at the game, they both took the same number of turns to find the answer, so it's not obvious who has the better approach, and that's mostly because the game was easy.

Let's try this one more time. This time the answer is 10. All right, John, you first. "Is it 1?" Too low. "Is it 2?" Still too low. "Is it 3?" Too low. "Is it 4?" Too low. "Is it 5?" Still too low. "Is it 6?" Too low. "Is it 7?" Too low. "Is it 8?" Too low. "Is it 9?" Too low. "Is it 10?" Correct, you got it! Okay, so now the same thing, but with Brittany this time. "Is it 5?" Too low. "Is it 8?" Too low. "Is it 9?" Still too low. "It's 10!"

All right, so here we start to see a difference between their strategies. When the answer was 3, they both took the same number of turns. This is important: when the number was larger, but not that much larger, 10 in this case, we start to see that Brittany's strategy did better; she took 4 tries while John took 10. We've played two rounds so far, and we've seen a different set of results based on the number they were looking for. If you look at John's way of doing things, then the answer being 10, the round we just played, is his worst-case scenario: he will take the maximum number of turns, 10, to guess it. When we picked a random number like 3, it was hard to differentiate which strategy was better, because they both performed exactly the same. But in John's worst-case scenario, a clear winner in terms of strategy emerges. In terms of algorithmic thinking, we're starting to get a sense that the specific value they're searching for may not matter as much as where that value lies in the range that they've been given. Identifying this helps us understand our problem better. Let's do this again for a range of numbers from 1 to 100. We'll start by picking 5 as the answer, to trick them.
Okay, so this time we're going to run through the exercise again, this time from 1 to 100, and both 1 and 100 are included. "Is it 1?" At this point, without even having to run through it, we can guess how many tries John is going to take. Since he starts at 1 and keeps going, he's going to take five tries, as we're about to see. "Is it 5?" Cool, correct. Okay, now for Brittany's turn. "Is it 50?" Too high. "Is it 25?" Still too high. "Is it 13?" Too high. "Is it 7?" Too high. "Is it 4?" Too low. "Is it 6?" Too high. "Is it 5?" Correct!

Let's evaluate. John took five tries; Brittany, on the other hand, took seven tries, so John wins this round. But again, in determining whose strategy is preferred, there's no clear winner right now. What this tells us is that it's not particularly useful to look at the easy answers, where we arrive at the number fairly quickly because it's at the start of the range. Instead, let's try one where we know John is going to do poorly. Let's look at his worst-case scenario, where the answer is 100, and see how Brittany performs in such a scenario.

Okay, John, let's do this one more time, 1 through 100. "Is it 1?" We can fast-forward this scene because, well, we know what happens: John takes 100 tries. Hi, Brittany, you're up. "Is it 50?" Too low. "Is it 75?" Too low. "88?" Too low. "94?" Too low. "Is it 97?" Too low. "99?" Too low. "100!" Okay, so that took Brittany seven turns again, and this time she is the clear winner. If you compare their individual performances for the same number set, you'll see that Brittany's approach leaves John's in the dust. When the answer was 5, right around the start of the range, John took five turns; but when the answer was 100, right at the end of the range, he took 100 tries. It took him 20 times the number of tries to get that answer compared to Brittany. On the other hand, if you compare Brittany's efforts: when the number was 5 she took seven tries, and when the number was 100 she took the same number of tries. This is pretty impressive. If we pretend that the number of tries is the number of seconds it takes Brittany and John to run through their attempts, this is a good estimate for how fast their solutions are. Okay, we've done this a couple of times, and Brittany and John are getting tired, so let's take a break. In the next video we'll talk about the point of this exercise.
In the last video we ran through an exercise where I had some of my co-workers guess what number I was thinking of. So what was the point of that exercise? You might be thinking, hey, I thought I was here to learn about algorithms. The exercise we just did was an example of a real-life situation you will run into when building websites and apps and writing code. Both approaches taken by John and Brittany to find the number I was thinking of are examples of searching for a value. It might be weird to think that there's more than one way to search, but as you saw in the game, the speed at which the result was obtained differed between John and Brittany.

Think about this problem from the perspective of a company like Facebook. At the time of this recording, Facebook has 2.19 billion active users. Let's say you're traveling in a different country and meet someone you want to add on Facebook. You go into the search bar and type out this person's name. If we simplify how the Facebook app works, it has to search across those 2.19 billion records and find the person you are looking for. The speed at which you find this person really matters. Imagine what kind of experience it would be if, when you searched for a friend, Facebook put up a spinning activity indicator and said "come back in a couple hours." I don't think we'd use Facebook as much if that were the case. From the company's perspective, working on making search as fast as possible, using different strategies, really matters.

Now, I said that the two strategies Brittany and John used were examples of search; more specifically, these are search algorithms. The strategy John took, where he started at the beginning of the range and just counted one number after the other, is a type of search called linear search. It is also called sequential search, which is a better description of how it works, or even simple search, since it really is quite simple. But what makes his approach an algorithm, as opposed to just looking for something? Remember, we said that an algorithm is a set of steps or instructions to complete a task. Linear search is a search algorithm, and we can define it like this. We start at the beginning of the list, or the range of values. Then we compare the current value to the target; if the current value is the target value that we're looking for, we're done. If it's not, we move on sequentially to the next value in the list and then repeat step 2. If we reach the end of the list, then the target value is not in the list.
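To make those steps concrete, here is a minimal sketch of linear search in Python. The function name and the sample list are my own choices for illustration; the course builds up to its own implementation later.

def linear_search(values, target):
    """Return the position of target in values, or None if it's absent."""
    # Step 1: start at the beginning and visit each position in order.
    for position in range(len(values)):
        # Step 2: compare the current value to the target.
        if values[position] == target:
            return position  # Found it; we're done.
        # If not, the loop moves on to the next value and repeats step 2.
    # We reached the end of the list, so the target is not in the list.
    return None

print(linear_search([1, 2, 3, 4, 5], 3))  # 2 (third position, counting from 0)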
This definition has nothing to do with programming, and in fact you can use it in the real world. For example, I could tell you to walk into a bookstore and find me a particular book, and one of the ways you could do it is using the linear search algorithm. You could start at the front of the bookstore and read the cover or the spine of every book to check that it matches the book you're looking for. If it doesn't, you go to the next book and repeat until you either find it or run out of books.

What makes this an algorithm is the specificity of how it is defined. In contrast to just jumping into a problem and solving it as we go along, an algorithm follows a certain set of guidelines, and we use the same steps to solve the problem each time we face it. An important first step to defining the algorithm isn't the algorithm itself, but the problem we're trying to solve. So our first guideline is that an algorithm must have a clear problem statement. It's pretty hard to define an instruction set when you don't have a clear idea of what problem you're trying to solve. In defining the problem, we need to specify how the input is defined and what the output looks like when the algorithm has done its job. For linear search, the input can be generally described as a series of values, and the output is a value matching the one we're looking for. Right now we're trying to stay away from anything code-related, so this problem statement definition is pretty generic, but once we get to code we can tighten it up.

Once we have a problem, an algorithm is a set of steps that solves it. Given that, the next guideline is that an algorithm definition must contain a specific set of instructions in a particular order. We really need to be clear about the order in which these instructions are executed. Taking our simple definition of linear search: if I switched up the order and said "move sequentially to the next value" before specifying that first comparison step, then if the first value were the target, our algorithm wouldn't find it, because we'd have moved to the second value before comparing. Now, you might think, okay, that's just an avoidable mistake and kind of common sense. The thing is, computers don't know any of that and just do exactly as we tell them.
So specific order is really important. The third guideline is that each step in our algorithm definition must not be a complex one; it needs to be explicitly clear. What I mean by that is that you shouldn't be able to break any of the steps down into additional subtasks; each step needs to be a distinct one. We can't define linear search as "search until you find this value," because that can be interpreted in many ways and further broken down into many more steps. It's not clear.

Next, and this one might seem obvious: algorithms should produce a result. If they didn't, how would we know whether the algorithm works or not? To be able to verify that our algorithm works correctly, we need a result. Now, when using a search algorithm, the end result can actually be nothing, which indicates that the value wasn't found, but that's perfectly fine. There are several ways to represent nothing in code, and as long as the algorithm can produce some result, we can understand its behavior.

The last guideline is that the algorithm should actually complete, and cannot take an infinite amount of time. If we let John loose in the world's largest library and asked him to find a novel, we'd have no way of knowing whether he succeeded unless he came back to us with a result.
Okay, so a quick recap: what makes an algorithm an algorithm and not just something you do? One, it needs to have a clearly defined problem statement, input, and output. When using linear search, the input needs to be just a series of values, but to actually use Brittany's strategy there's one additional precondition, so to speak. If you think about her strategy, it required that the numbers be sorted in ascending order. This means that where the input for John is just a series of values, the input to Brittany's algorithm needs to be a sorted series of values. So: a clearly defined problem statement, a clearly defined input, and a clearly defined output. Second, the steps in the algorithm need to be in a very specific order. The steps also need to be distinct; you should not be able to break them down into further subtasks. Next, the algorithm should produce a result. And finally, the algorithm should complete in a finite amount of time.

These guidelines not only help us define what an algorithm is, but also help us verify that the algorithm is correct. Executing the steps in an algorithm for a given input must result in the same output every time. If, in the game I played, the answer was 50 every time, then every single time John must take 50 turns to find out that the answer is 50. If somehow he takes 50 turns in one round, then 30 the next, we technically don't have a correct algorithm. Consistent results for the same set of values is how we know that the algorithm is correct.

I should stress that we're not going to be designing any algorithms on our own; we'll start off and spend most of our time learning the tried-and-true algorithms that are known to efficiently solve problems. The reason for talking about what makes for a good algorithm, though, is that the same set of guidelines makes for good algorithmic thinking, which is one of the most important skills we want to cultivate.
When we encounter a problem, before rushing in and thinking about solutions, what we want to do is work through the guidelines. First we break the problem down into some number of smaller problems, where each problem can be clearly defined in terms of an input and an output.

Now that we know how to generally define an algorithm, let's talk about what it means to have a good algorithm. An important thing to keep in mind is that there's no single way to measure whether an algorithm is the right solution, because it is all about context. Earlier we touched on two concepts: correctness and efficiency. Let's define correctness more clearly, because before we can evaluate an algorithm on efficiency, we need to ensure its correctness. Before we define our algorithms, we start by defining our problem. In the definition of that problem we have a clearly defined input, satisfying any preconditions, and a clearly defined output. An algorithm is deemed correct if, on every run of the algorithm against all possible values in the input data, we always get the output we expect. Part of correctness also means that for any possible input, the algorithm should always terminate, or end. If these two things are not true, then our algorithm isn't correct.
If you were to pick up an algorithms textbook and look up correctness, you would run into a bunch of mathematical theory. This is because, traditionally, algorithm correctness is proved by mathematical induction, which is a form of reasoning used in mathematics to verify that a statement is correct. This approach involves writing what is called a specification and a correctness proof. We won't be going into that in this course. Proof by induction is an important part of designing algorithms, but we're confident that you can understand algorithms, both in terms of how and when to use them, without getting into the math. So if you pick up a textbook and feel daunted, don't worry; I do too, but we can still figure things out without it.

All right, so once we have a correct algorithm, we can start to talk about how efficient an algorithm is. Remember that this efficiency ultimately matters because it helps us solve problems faster and deliver a better end-user experience in a variety of fields. For example, algorithms are used in the sequencing of DNA, and more efficient sequencing algorithms allow us to research and understand diseases better and faster. But let's not get ahead of ourselves. We'll start simple, by evaluating John's linear search algorithm in terms of its efficiency.
First, what do we mean by efficiency? There are two measures of efficiency when it comes to algorithms: time and space. Sounds really cool and very sci-fi, huh? Efficiency measured by time, something you'll hear called time complexity, is a measure of how long it takes the algorithm to run. Time complexity can be understood generally, outside the context of code and computers, because how long it takes to complete a job is a universal measure of efficiency: the less time you take, the more efficient you are. The second measure of efficiency is called space complexity, and this one is pretty computer-specific: it deals with the amount of memory taken up on the computer. Good algorithms need to balance between these two measures to be useful. For example, you can have a blazingly fast algorithm, but it might not matter if the algorithm consumes more memory than you have available. Both of these concepts, time and space complexity, are measured using the same metric, but it is a very technical-sounding metric, so let's build up to it slowly and start simple.

A few videos ago I played a game with Brittany and John where they tried to guess the number I was thinking of; effectively, they were searching for a value. So how do we figure out how efficient each algorithm is, and which algorithm is more suited to our purposes? If we consider the number of tries they took to guess, or search for, the value as an indicator of the time they take to run through the exercise, that is a good indicator of how long the algorithm runs for a given set of values. This measurement is called the running time of an algorithm, and we'll use it to define time complexity. In the game, we played four rounds; let's recap those here, focusing on John's performance.
In round one we had 10 values, the target was 3, and John took 3 turns. In round two we had 10 values, the target was 10, and John took 10 turns. In round three we had 100 values, the target was 5, and John took 5 tries. And finally, in round four, when the target was 100, given 100 values, John took 100 tries.

On paper it's hard to gauge anything about this performance. When it comes to anything with numbers, though, I like to put it up on a graph and compare visually. On the vertical, or y, axis, let's measure the number of tries it took John to guess the answer, or the running time of the algorithm. On the horizontal, or x, axis, what do we put? For each turn we have a number of values as well as a target value. We could plot the target value on the horizontal axis, but that leaves some context and meaning behind: it's far more impressive that John took five tries when the range went up to 100 than when he took three tries for a maximum of 10 values. We could plot the maximum range of values instead, but then we're leaving out the other half of the picture. There are data points, however, that satisfy both requirements. If we only plot the rounds where the target, the number John was looking for, was the same as the maximum of the range of values, we have a data point that captures both the size of the data set and his effort.

There's an additional benefit to this approach as well. There are three ways we can measure how well John does, or, in general, how well any algorithm does. First, we can check how well John does in the best-case, or good, scenarios from the perspective of his strategy. In a range of 100 values, the answer being a low number like 3, at the start of the range, is a good scenario; he can guess it fairly quickly, and an answer of 1 is his best-case scenario. Or we could check how well he does on average: we could run this game a bunch of times and average out the running time. That would give us a much better picture of John's performance over time, but our estimates would be too high if the value he was searching for was at the start of the range, or far too low if it was at the end.

Let's imagine a scenario where Facebook naively implements linear search when finding friends. They looked at the latest U.S. Census, saw that 50% of names start with the letters A through J, which is the first 40% of the alphabet, and thought, okay, on average linear search serves us well. But what about the rest of those whose names start with a letter after J in the alphabet? Searching for my name would take longer than the average, and much longer for someone whose name starts with the letter Z. So while measuring the running time of an algorithm on average might seem like a good strategy, it won't necessarily provide an accurate picture. By picking the maximum in the range, we're measuring how our algorithm does in the worst-case scenario. Analyzing the worst-case scenario is quite useful, because it indicates that the algorithm will never perform worse than we expect; there's no room for surprises.
Back to our graph: we're going to plot the number of tries, a proxy for the running time of the algorithm, against the number of values in the range, which we'll shorten to n. n here also represents John's worst-case scenario: when n is 10 he takes 10 turns, and when n is 100 he takes 100 turns. But these two values alone are insufficient to really get any sort of visual understanding. Moreover, it's not realistic: John may take a long time to work through 100 numbers, but a computer can do that in no time. To evaluate the performance of linear search in the context of a computer, we should probably throw some harder and larger ranges of values at it. The nice thing is that, by evaluating a worst-case scenario, we don't actually have to do that work. We know what the result will be: for a given value of n, using linear search, it will take n tries to find the value in the worst-case scenario. So let's add a few values in here to build out this graph.

Okay, so we have a good picture of what this is starting to look like: as the values get really large, the running time of the algorithm gets large as well. We sort of already knew that. Before we dig into this runtime any deeper, let's switch tracks and evaluate Brittany's work. By having something to compare against, it should become easier to build a mental model around time complexity.

The algorithm John used, linear search, seemed familiar to us, and you could understand it because it's how most of us search for things in real life anyway. Brittany's approach, on the other hand, got results quickly but was a bit harder to understand, so let's break it down. Just like John's, Brittany's approach started with a series of values, or a list of numbers, as her input. But where John just started at the beginning of the list and searched sequentially, Brittany's strategy is to always start in the middle of the range. From there, she asks a comparison question: is the number in the middle of the range equal to the answer she's looking for? And if it's not, is it greater than or less than the answer? If the middle number is greater than the answer, she can eliminate it along with all the values greater than it; if it's less than the answer, she can eliminate it along with all the values less than it. With the range of values she's left with, she repeats this process until she arrives at the answer. Let's visualize how she did this by looking at round three.
In round three, the number of values in the range was 100 and the answer was 5. The bar here represents the range of values, 1 on the left, 100 on the right, and this pointer represents the value Brittany chooses to evaluate. She starts in the middle, at 50, and asks: is it equal to the answer? I say it's too high, so this tells her that the value she is evaluating is greater than our target value, which means there's no point in searching any of the values to the right of 50, that is, values greater than 50 in this range. So she can discard those values altogether; she only has to consider values from 1 to 50 now. The beauty of this strategy, and the reason why Brittany was able to find the answer in so few turns, is that with every value she evaluates, she can discard half of the current range.

On her second turn she picks the value in the middle of the current range, which is 25. She asks the same question; I say that the value is too high again, and this tells her that she can discard everything greater than 25, so the range of values drops to 1 through 25. Again she evaluates the number in the middle, roughly, so that'd be 13 here. I tell her this is still too high; she discards the values greater than it and moves to the value at 7, which is still too high. Then she moves to 4, which is now too low. She can discard everything less than 4, which leaves the numbers 4 through 7. Here she picks 6, which is too high, and that leaves only one value: 5. This seems like a lot of work, but being able to get rid of half the values with each turn is what makes this algorithm much more efficient.

Now, there's one subtlety to using binary search, and you might have caught on to it: for this search method to work, as we've mentioned, the values need to be sorted. With linear search it doesn't matter whether the values are sorted; since a linear search algorithm just progresses sequentially, checking every element in the list, if the target value exists in the list it will be found. But let's say this range of 100 values was unsorted. Brittany would start at the middle position, find some value like 14, and ask whether it was too low or too high. I say it's too high, so she discards everything past that middle position, as if all those values were greater than 14. Now, this example starts to fall apart here because, well, Brittany knows which numbers are less than 14 and which are greater; she doesn't need an actual range of values to solve this. A computer, however, does need that. Remember, search algorithms are run against lists containing all sorts of data; it's not always just a range of values containing numbers. In a real use case of binary search, which we're going to implement in a bit, the algorithm wouldn't return the target value, because we already know that; it's a search algorithm, so we're providing something to search for. Instead, what it returns is the position in the list that the target occupies. Without the list being sorted, a binary search algorithm would discard half the positions after each comparison, which over here could include the position where our target value actually is. Eventually we'd get a result back saying the target value doesn't exist in the list, which is inaccurate.
Earlier, when defining linear (simple) search, I said that the input was a list of values and the output was the target value, or, more specifically, the position of the target value in the list. So with binary search there's also that precondition: the input list must be sorted. Let's formally define binary search. First, the input: a sorted list of values. The output: the position in the list of the target value we're searching for, or some sort of value indicating that the target does not exist in the list. Remember our guidelines for defining an algorithm; let me put those up again really quick. The steps in the algorithm need to be in a specific order; the steps also need to be very distinct; the algorithm should produce a result; and finally, the algorithm should complete in a finite amount of time. Let's use those to define this algorithm.

Step 1: we determine the middle position of the sorted list. Step 2: we compare the element in the middle position to the target element. Step 3: if the elements match, we return the middle position and end. If they don't match, in step 4 we check whether the element in the middle position is smaller than the target element; if it is, we go back to step 2 with a new list that goes from the middle position of the current list to the end of the current list. In step 5, if the element in the middle position is greater than the target element, then, again, we go back to step 2 with a new list that goes from the start of the current list to the middle position of the current list. We repeat this process until the target element is found, or until a sublist contains only one element. If that single-element sublist does not match the target element, then we end the algorithm, indicating that the element does not exist in the list.
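As with linear search, here is a minimal sketch of those five steps in Python, written against a sorted list. The function name and the choice of None for a missing target are my own; the course builds its own version shortly.

def binary_search(sorted_values, target):
    """Return the position of target in sorted_values, or None if absent."""
    first = 0
    last = len(sorted_values) - 1
    while first <= last:
        # Step 1: determine the middle position of the current sublist.
        midpoint = (first + last) // 2
        # Steps 2 and 3: compare the middle element to the target.
        if sorted_values[midpoint] == target:
            return midpoint
        # Step 4: the middle element is smaller, so search the back half.
        elif sorted_values[midpoint] < target:
            first = midpoint + 1
        # Step 5: the middle element is greater, so search the front half.
        else:
            last = midpoint - 1
    # The sublist shrank to nothing without a match: not in the list.
    return None

print(binary_search(list(range(1, 101)), 5))  # 4 (the position of 5)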
Okay, so that is the magic behind how Brittany managed to solve the round much faster. In the next video, let's talk about the efficiency of binary search.

We have a vague understanding that Brittany's approach is better in most cases, but just like with linear search, it helps to visualize this. Much like we did with linear search, when determining the efficiency of an algorithm, and remember, we're still only looking at efficiency in terms of time, time complexity as it's called, we always want to evaluate how the algorithm performs in the worst-case scenario. Now, you might be thinking, well, that doesn't seem fair, because given a series of data, if the target value we're searching for is somewhere near the front of the list, then linear search may perform just as well, if not slightly better, than binary search. And that is totally true. Remember, a crucial part of learning algorithms is understanding what works better in a given context. When measuring efficiency, though, we always use the worst-case scenario as a benchmark, because, remember, the algorithm can never perform worse than the worst case.

Let's plot these values on the graph we started earlier, with the number of tries, or the runtime of the algorithm, on the y-axis and the maximum number of values in the series, or n, on the horizontal axis. To represent the worst-case scenario, we have two data points: when n equals 10, Brittany took four tries using binary search, and when n equals 100, it took seven tries. But even side by side, these data points are sort of meaningless. Remember that while there is quite a difference between the runtime of linear search and binary search at an n value of 100, for a computer that shouldn't matter. What we should check is how the algorithm performs at levels of n that might actually slow a computer down. As n grows larger and larger, how do these algorithms compare to one another? Let's add that to the graph.
Okay, now a picture starts to emerge: as n gets really large, the performance of these two algorithms differs significantly. The difference is kind of staggering, actually. Even with the simple game we saw that binary search was better, but now we have a much more complete idea of how much better. For example, when n is 1,000, the runtime of linear search, measured by the number of operations or turns, is also 1,000; for binary search, it takes just 10 operations. Now let's look at what happens when we increase n by a factor of 10. At 10,000, linear search takes 10,000 operations, while binary search takes 14. For an increase by a factor of 10, binary search only needs four more operations to find a value. If we increase n by a factor of 10 once more, to a value of 100,000, binary search takes only 17 operations. It is blazing fast.

What we've done here is plot on a graph how the algorithm performs as the input set it is working on increases. In other words, we've plotted the growth rate of the algorithm, also known as the order of growth. Different algorithms grow at different rates, and by evaluating their growth rates we get a much better picture of their performance, because we know how the algorithm will hold up as n grows larger. This is so important that it is, in fact, the standard way of evaluating an algorithm, and it brings us to a concept called Big O. You might have heard this term thrown about, and if you found it confusing, don't worry: we've already built up a definition in the past few videos; we just need to bring it all together.
Let's start with a common statement you'll see in studies on algorithms: Big O is a theoretical definition of the complexity of an algorithm as a function of the size. Wow, what a mouthful. This sounds really intimidating, but it's really not. Let's break it down. Big O is a notation used to describe complexity, and what I mean by notation is that it simplifies everything we've talked about down into a single variable. An example of complexity written in terms of Big O looks like this: O(n). As you can see, it starts with an uppercase letter O; that's why we call it Big O, it's literally a big O. The O comes from "order of magnitude of complexity," so that's where we get the Big O from. Complexity here refers to the exercise we've been carrying out in measuring efficiency: if it takes Brittany 4 tries when n is 10, how long does the algorithm take when n is 10 million? When we use Big O for this, the variable used, which we'll get to, distills that information down so that by reading the variable you get a big-picture view, without having to run through data points and graphs like we just did.

It's important to remember that complexity is relative. When we evaluate the complexity of the binary search algorithm, we're doing it relative to other search algorithms, not all algorithms. Big O is a useful notation for understanding both time and space complexity, but only when comparing amongst algorithms that solve the same problem. The last bit in that definition of Big O is "a function of the size," and all this means is that Big O measures complexity as the input size grows, because it's not important to understand how an algorithm performs on a single data set, but on all possible data sets. You will also see Big O referred to as the upper bound of the algorithm, and what that means is that Big O measures how the algorithm performs in the worst-case scenario. So that's all Big O is: nothing special, just a notation that condenses the data points and graphs we've built up down to one variable.
Okay, so what do these variables look like? For John's strategy, linear search, we say that it has a time complexity of O(n): that's Big O with an n inside the parentheses. For Brittany's strategy, binary search, we say that it has a time complexity of O(log n): that's Big O with something called a log and an n inside the parentheses. Don't worry if you don't understand that yet; we'll go into it in more detail later in the course. Each of these has a special meaning, but it helps to work through all of them to get a big-picture view, so over the next few videos let's examine what are called common complexities, or common values of Big O, that you will run into and should internalize.

In our discussions of complexity, we made one assumption: that the algorithm as a whole had a single measure of complexity. That isn't true, and we'll get to how we arrive at these measures for the entire algorithm at the end of this exercise, but each step in the algorithm has its own space and time complexity.
43:12
for we're done if it's not we'll move on sequentially to the next value in the list and repeat
43:17
step two if we reach the end of the list then the target value is not in the list
43:23
let's go back to step two for a second comparing the current value to the target
43:28
does the size of the data set matter for this step when we're at step two we're already at
43:34
that position in the list and all we're doing is reading the value to make a comparison reading the value is a single
43:41
operation and if we were to plot it on a graph of runtime per operations against
43:46
n it looks like this a straight line that takes constant time regardless of
43:52
the size of n since this takes the same amount of time in any given case we say
43:58
that the run time is constant time it doesn't change in big o notation we represent this as
44:04
big o with a 1 inside parentheses now when i first started learning all this i
44:10
was really confused as to how to read this even if it was in my own head should i say big o of one
44:16
when you see this written you're going to read this as constant time so reading a value in a list is a constant time
44:23
operation this is the most ideal case when it comes to run times because input size does not matter and we know that
44:30
regardless of the size of n the algorithm runtime will remain the same the next step up in complexity so to
44:37
speak is the situation we encountered with the binary search algorithm
44:42
traditionally explaining the time complexity of binary search involves math i'm going to try to do it both with
44:48
and without when we played the game using binary search we notice that with every turn we
44:55
were able to discard half of the data but there's another pattern that emerges
45:00
that we didn't explore let's say n equals 10. how long does it take to find an item at the 10th
45:06
position of the list we can write this out so we go from 10 to 5 to 8 to 9 and
45:12
then down to 10. here it takes us four tries to cut down the list to just one element and find
45:18
the value we're looking for let's double the value of n to 20 and see how long it takes for us to find an
45:25
item at the 20th position so we start at 20 and then we pick 10 from there we go to 15 17 19 and finally 20.
45:33
so here it takes us five tries okay let's double it again so that n is 40 and we try to find the item in the
45:40
40th position so when we start at 40 the first midpoint we're going to pick is 20 from
45:46
there we go to 30 then 35 37 39 and then 40.
45:51
notice that every time we double the value of n the number of operations it
45:57
takes to reduce the list down to a single element only increases by 1.
46:02
there's a mathematical relationship to this pattern and it's called a logarithm of n
46:08
you don't really have to know what logarithms truly are but i know that some of you like underlying explainers
46:13
so i'll give you a quick one if you've taken algebra classes you may have learned about exponents here's a
46:20
quick refresher 2 times 1 equals 2. now this can be written as 2 raised to the first power
46:27
because it is our base case two times one is two now two times two is four this can be
46:33
written as two raised to the second power because we're multiplying two twice first we multiply two times one
46:40
then the result of that times 2. 2 times 2 times 2 is 8 and we can write
46:46
this as 2 raised to the 3rd power because we're multiplying 2 3 times
46:51
in 2 raised to 2 and 2 raised to 3 the 2 and 3 there are called exponents and
46:57
they define how the number grows with 2 raised to 3 we start with the
47:02
base value and multiply itself 3 times the inverse of an exponent is called a
47:08
logarithm so if i say log to the base 2 of 8 equals 3 i'm basically saying the
47:15
opposite of an exponent instead of saying how many times do i have to multiply this value i'm asking
47:20
how many times do i have to divide 8 by two to get the value one this takes three operations
47:27
what about the result of log to the base two of sixteen that evaluates to four
47:33
so why does any of this matter notice that this is sort of how binary search works
47:38
log to the base 2 of 16 is 4. if n was 16 how many triads does it take
47:45
to get to that last element well we start in the middle at 8 that's too low so we move to 12 then we move to
47:51
14 then to 15 and then to 16 which is 5 tries or log to the base 2 of 16 plus 1.
48:00
in general for a given value of n the number of tries it takes to find the worst case scenario
48:06
is log of n plus one and because this pattern is overall a
48:12
logarithmic pattern we say that the runtime of such algorithms is logarithmic
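We can sanity-check that doubling pattern with a few lines of Python; a quick sketch using the standard math module:

import math

# Worst-case tries for binary search: log base 2 of n, rounded down, plus 1.
for n in (10, 20, 40, 16, 100, 10_000, 100_000):
    tries = int(math.log2(n)) + 1
    print(f"n = {n:>7} -> about {tries} tries in the worst case")

Running this reproduces the counts we saw by hand: 4 tries for 10 values, 5 for 20, 6 for 40, and only 17 for 100,000.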
If we plot these data points on our graph, a logarithmic runtime looks like this. In Big O notation, we represent a logarithmic runtime as O(log n), written as Big O with log n inside the parentheses, or sometimes with ln n inside the parentheses. When you see this, read it as "logarithmic time." As you can see on the graph, as n grows really large, the number of operations grows very slowly and eventually flattens out. Since this line sits below the line for a linear runtime, which we'll look at in a second, you might often hear algorithms with logarithmic runtimes called sublinear. Logarithmic, or sublinear, runtimes are preferred to linear because they're more efficient, but in practice linear search has its own set of advantages, which we'll take a look at in the next video.
Next up, let's look at the situation we encountered with the linear search algorithm. We saw that, in the worst-case scenario, whatever the value of n was, John took exactly that many tries to find the answer. As in linear search, when the number of operations to determine the result in the worst-case scenario is at most the same as n, we say that the algorithm runs in linear time. We represent this as O(n). You can read that as "Big O of n," like I just said, or you can say "linear time," which is more common. When we put that up on a graph against constant time and logarithmic time, we get a line that looks like this. Any algorithm that sequentially reads the input will have linear time. So remember: any time you know a problem involves reading every item in a list, that means a linear runtime.

As you saw from the game we played, Brittany's strategy, using binary search, was clearly better, and we can see that on the graph. So if we had the option, why would we use linear search, which runs in linear time? Remember that binary search had a precondition: the input set had to be sorted. While we won't be looking at sorting algorithms in this course, as you learn more about algorithms you'll find that sorting algorithms have varying complexities themselves, just like search does, so we have to do additional work prior to using binary search. For this reason, in practice, linear search ends up being more performant up to a certain value of n, because the cost of sorting first and then searching with binary search adds up.
The next common complexity you will hear about is when an algorithm runs in quadratic time. If the word quadratic sounds familiar, it's because you might have heard it in math class: quadratic describes an operation raised to the second power, or when something is squared. Let's say you and your friends are playing a tower defense game, and to start it off you're going to draw a map of the terrain. This map is going to be a grid, and you pick a random number to determine how large the grid is. Let's set n, the size of the grid, to 4. Next, you need to come up with a list of coordinates so you can place towers and enemies and stuff on this map. So how would we do this? If we start out horizontally, we'd have coordinate points that go (1,1), (1,2), (1,3), and (1,4). Then you go up one level vertically, and we have the points (2,1), (2,2), (2,3), and (2,4). Go up one more, and you have the points (3,1), (3,2), (3,3), and (3,4), and on the last row you have the points (4,1), (4,2), (4,3), and (4,4).

Notice that we have a pattern here: for each row, we take the row value and create a point by pairing it with every column value. The range of values goes from 1 to the value of n, so we can think of it generally this way: for each value in the range from 1 to n, we create a point by combining that value with every value in the range from 1 to n again. Doing it this way, for each value in the range of 1 to n we create n points, and we end up with 16 points, which is also n times n, or n squared.
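That pairing pattern is just a loop inside a loop. Here's a minimal sketch in Python; the variable names are my own:

n = 4  # the size of the grid

# For each row value, pair it with every column value: n * n points.
coordinates = []
for row in range(1, n + 1):
    for column in range(1, n + 1):
        coordinates.append((row, column))

print(len(coordinates))  # 16, which is n squared
print(coordinates)       # (1, 1), (1, 2), ... (4, 4)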
This is an algorithm with a quadratic runtime, because for any given value of n, we carry out n squared operations. Now, I picked a relatively easy, so to speak, example here, because in English, at least, we often denote map sizes by height times width, so we would call this a 4-by-4 grid, which is just another way of saying 4 squared, or n squared. In Big O notation we would write this as O(n²), or say that this is an algorithm with a quadratic runtime. Many sorting algorithms have a worst-case quadratic runtime, which you'll learn about soon. In addition to quadratic runtimes, you may also run into cubic runtimes as you encounter different algorithms: in such an algorithm, for a given value of n, the algorithm executes n raised to the third power operations. These aren't as common as quadratic algorithms, though, so we won't look at any examples, but I think they're worth mentioning. Thrown up on our graph, quadratic and cubic runtimes look like this. This is starting to look pretty expensive, computationally, as they say. We can see here that for small changes in n, there's a pretty significant change in the number of operations we need to carry out.
The next worst-case runtime we're going to look at is one called quasilinear, and it's sort of easier to understand, for lack of a better word, by starting with the Big O notation. Quasilinear runtimes are written out as O(n log n). We learned what log n was, right? A logarithmic runtime, where as n grows, the number of operations only increases by a small factor. With a quasilinear runtime, what we're saying is that for every value of n, we execute log n operations, hence the runtime of n times log n. You saw earlier, with the quadratic runtime, that for each value of n we conducted n operations; it's sort of the same here, in that as we go through the range of values in n, we're executing log n operations. In comparison to other runtimes, a quasilinear algorithm has a runtime that lies somewhere between a linear runtime and a quadratic runtime.

So where would we expect to see this kind of runtime in practical use? Well, sorting algorithms are one place you will definitely see it. Merge sort, for example, is a sorting algorithm that has a worst-case runtime of O(n log n). Let's take a look at a quick example. Say we start off with a list of numbers that looks like this, and we need to sort it. Merge sort starts by splitting the list into two lists down the middle. It then takes each sublist and splits that in half down the middle again. It keeps doing this until we end up with lists of just a single number. When we're down to single numbers, we can do one sort operation and merge these sublists back in the opposite direction. The first part of merge sort cuts those lists into sublists with half the numbers; this is similar to binary search, where each comparison operation cuts down the range to half the values. You know that the worst-case runtime in binary search is log n, so these splitting operations have the same runtime: O(log n), or logarithmic. But splitting in half isn't the only thing we need to do with merge sort; we also need to carry out comparison operations so we can sort those values. And if you look at each step of this algorithm, we carry out n comparison operations per step, which brings the worst-case runtime of this algorithm to n times log n, also known as quasilinear. Don't worry if you didn't understand how merge sort works; that wasn't the point of this demonstration. We will be covering merge sort soon, in a future course.
56:21
we will be covering merge sorts soon in a future course the run times we've looked at so far are
56:27
all called polynomial runtimes an algorithm is considered to have a polynomial runtime if for a given value
56:34
of n its worst case runtime is in the form of n raised to the k power where k
56:40
just means some value so it could be n squared where k equals 2 for a quadratic runtime n cubed for a cubic runtime and
56:47
so on all of those are in the form of n raised to some power anything that is bounded by this and
56:54
what i mean by that is if we had a hypothetical line on our graph of n raised to the k power anything that
57:00
falls under this graph is considered to have a polynomial runtime algorithms with an upper bound or a
57:07
runtime with a big o value that is polynomial are considered efficient algorithms and are likely to be used in
57:14
practice now the next class of runtimes that we're going to look at are runtimes
57:19
that we don't consider efficient and these are called exponential runtimes
57:25
with these runtimes as n increases slightly the number of operations increases exponentially and as we'll see
57:32
in a second these algorithms are far too expensive to be used an exponential runtime is an algorithm
57:39
with a big o value of some number raised to the nth power imagine that you wanted to break into a
57:45
locker that had a padlock on it let's assume you forgot your code this lock takes a two digit code and the
57:52
digits for the code range from zero to nine you start by setting the dials to zero
57:57
and then with the first dial remaining on zero you change the second dial to one and try and open it if it doesn't
58:04
work you set it to two then try again you would keep doing this and if you still haven't succeeded with the second
58:10
dial set to 9 then you go back to that first dial set it to 1 and start the second dial over
58:16
the range of values you'd have to go through is 0 0 to 9 9 which is 100
58:22
values this can be generalized as 10 to the second power since there are 10 values
58:28
on each dial raised to two dials searching through each individual value
58:33
until you stumble on the right one is a strategy called brute force and brute force algorithms have exponential run
58:40
times here there are two dials so n is 2 and each dial has 10 values so again we can
58:46
generalize this algorithm as 10 raised to n where n represents the number of dials
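to make the counting concrete, here's a tiny brute-force sketch — the function and its names are our own illustration, not code from the course

```python
from itertools import product

def crack(correct_code, dials):
    # try every combination in order: 10 ** dials attempts in the worst case
    for attempt, combo in enumerate(product(range(10), repeat=dials), start=1):
        if combo == correct_code:
            return attempt
    return None

print(crack((9, 9), dials=2))     # 100 -- the worst case for two dials
print(crack((9, 9, 9), dials=3))  # 1000 -- one extra dial, ten times the work
```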
58:52
the reason that this algorithm is so inefficient is because with just one more dial on the lock the number of
58:58
operations increases significantly with three dials the number of combinations in the worst case scenario
59:05
where the correct code is the last digit in the range is 10 raised to 3 or 1 000
59:10
values with an additional wheel it becomes 10 raised to 4 or 10 000 values
59:16
as n increases the number of operations increases exponentially to a point where
59:21
it's unsolvable in a realistic amount of time now you might think well any computer
59:27
can crack a four digit numerical lock and that's true because n here is sufficiently small but this is the same
59:34
principle that we use for passwords in a typical password field implemented
59:39
well users are allowed to use letters of the english alphabet so up to 26 characters numbers from 0 to 9 and a set
59:46
of special characters of which there can be around 33 so typically that means each character
59:52
in a password can be one out of 69 values this means that for a one character
59:58
password it takes 69 raised to the nth power so 69 to the 1 which equals 69 operations in the
1:00:05
worst case scenario to figure out the password just increasing n to 2 increases the
1:00:11
number of operations needed to guess the password to 69 squared or
1:00:17
4761 operations now usually on a secure website there isn't really a limit on password length but in general
1:00:24
passwords are limited to around 20 characters in length with each character being a possible 69
1:00:30
values and there being 20 characters the number of operations needed to guess the password in the worst case scenario is
1:00:38
69 raised to the 20th power or approximately 6 followed by 36 zeros
1:00:44
an intel cpu with five cores can carry
1:00:50
out roughly 65 000 million instructions per second that's a funny number i know to crack our 20-character
1:00:57
passcode in this very simplistic model it would take this intel cpu
1:01:02
on the order of 10 raised to the 18th power years to brute force the password
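as a sanity check, here's that arithmetic in python — our own back-of-the-envelope using the figures quoted above

```python
combinations = 69 ** 20            # about 6.2e36 -- "6 followed by 36 zeros"
ops_per_second = 65_000 * 10 ** 6  # 65,000 million instructions per second
seconds = combinations / ops_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")        # roughly 2.9e18 -- on the order of 10 ** 18
```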
1:01:08
so while this algorithm would eventually produce a result it is so inefficient that it's pointless
1:01:14
this is one of the reasons why people recommend you have longer passwords since brute forcing is exponential in
1:01:20
the worst case each character you add multiplies the number of combinations by the size of the character set
1:01:26
the next class of exponential algorithms is best highlighted by a popular problem
1:01:31
known as the traveling salesman the problem statement goes like this given a list of cities and the distance
1:01:38
between each pair of cities what is the shortest possible route that visits each
1:01:43
city and then returns to the origin city this seems like a simple question but let's start with a simple case three
1:01:50
cities a b and c to figure out what the shortest route is we need to come up with all the possible
1:01:57
routes with three cities we have six routes in theory at least some of these routes can
1:02:03
be discarded because abc is the same as c b a but in the opposite direction
1:02:08
but as we know sometimes going from a to c through b may take a different route than c to a through b so
1:02:15
we'll stick to the six routes and from there we could determine the shortest no big deal
1:02:20
now if we increase this to four cities we jump to 24 combinations the mathematical relationship that
1:02:26
defines this is called a factorial and is written out as n followed by an exclamation point
1:02:33
factorials are basically n times n minus one repeated until you reach the number
1:02:39
one so for example the factorial of three is three times two times one which
1:02:44
is six which is the number of combinations we came up with for three cities the factorial of four is four times
1:02:51
three times two times one or 24 which is the number of combinations we arrived at
1:02:56
with four cities in solving the traveling salesman problem the most efficient algorithm
1:03:03
will have a factorial runtime or a combinatorial runtime as it's also called
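you can see the factorial growth for yourself with python's standard library — this snippet is our own illustration

```python
from itertools import permutations
from math import factorial

cities = ["A", "B", "C"]
routes = list(permutations(cities))  # every possible ordering of the cities
print(len(routes), factorial(3))     # 6 6

print(factorial(4))   # 24 routes for four cities
print(factorial(10))  # 3628800 -- and that's only ten cities
```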
1:03:09
at low values of n algorithms with a factorial runtime may be used but with an n value of say 200 it would take
1:03:16
longer than humans have been alive to solve the problem for sake of completeness let's plot a
1:03:22
combinatorial runtime on our graph so that we can compare an algorithm such as one that solves the
1:03:28
traveling salesman problem has a worst case runtime of big o of n factorial
1:03:34
studying exponential runtimes like this is useful for two reasons first in studying how to make such
1:03:40
algorithms efficient we develop strategies that are useful across the board and can potentially be used to
1:03:46
make existing algorithms even more efficient second it's important to be aware of problems that take a long time to solve
1:03:54
knowing right off the bat that a problem is somewhat unsolvable in a realistic time means you can focus your efforts on
1:04:01
other aspects of the problem as beginners though we're going to steer clear of all this and focus our efforts
1:04:07
on algorithms with polynomial runtimes since we're much more likely to work with and learn about such algorithms
1:04:14
now that we know some of the common complexities in the next video let's talk about how we determine the
1:04:19
complexity of an algorithm because there are some nuances over the last few videos we took a look
1:04:26
at common complexities that we would encounter in studying algorithms but the question remains how do we determine
1:04:32
what the worst case complexity of an algorithm is earlier i mentioned that even though we
1:04:38
say that an algorithm has a particular upper bound or worst case runtime each step in a given algorithm can have
1:04:45
different run times let's bring up the steps for binary search again assuming the list is sorted the first
1:04:52
step is to determine the middle position of the list in general this is going to be a
1:04:57
constant time operation many programming languages hold on to information about the size of the list
1:05:04
so we don't actually need to walk through the list to determine the size now if we didn't have information about
1:05:10
the size of the list we would need to walk through counting each item one by one until we reached the end of the list
1:05:18
and this is a linear time operation but realistically this is a big o of 1 or
1:05:24
constant time step 2 is to compare the element in the middle position to the target element
1:05:31
we can assume that in most modern programming languages this is also a constant time operation because the
1:05:37
documentation for the language tells us it is step 3 is our success case and the
1:05:42
algorithm ends this is our best case and so far we have only incurred two constant time
1:05:49
operations so we would say that the best case run time of binary search is constant time
1:05:55
which is actually true but remember that best case is not a useful metric
1:06:00
step 4 if we don't match is splitting the list into sub-lists assuming the worst case scenario the
1:06:07
algorithm would keep splitting into sub-lists until a single element list is reached with the value that we're
1:06:13
searching for the run time for this step is logarithmic since we discard half the
1:06:18
values each time so in our algorithm we have a couple steps that are constant time and one
1:06:25
step that is logarithmic overall when evaluating the run time for an algorithm we say that the algorithm has
1:06:32
as its upper bound the same runtime as the least efficient step in the algorithm
1:06:38
think of it this way let's say you're participating in a triathlon which is a race that has a swimming running and a
1:06:45
cycling component you could be a phenomenal swimmer and a really good cyclist but you're a pretty
1:06:51
terrible runner no matter how fast you are at swimming or cycling your overall race time is
1:06:57
going to be impacted the most by your running race time because that's the part that takes you the longest
1:07:03
if you take an hour and 30 minutes to finish the running component 55 minutes to swim and
1:07:08
38 minutes to bike it won't matter if you can fine tune your swimming technique down to finish in 48 minutes
1:07:15
and your cycle time to 35 because you're still bounded at the top by your running
1:07:20
time which is close to almost double your bike time similarly with the binary search
1:07:26
algorithm it doesn't matter how fast we make the other steps they're already as fast as they can be
1:07:32
in the worst case scenario the splitting of the list down to a single element list is what will impact the overall
1:07:38
running time of your algorithm this is why we say that the time complexity or run time of the algorithm
1:07:44
in the worst case is big o of log n or logarithmic as i alluded to though your algorithm
1:07:50
may hit a best case runtime and in between the two best and worst case have an average run time as well
1:07:57
this is important to understand because algorithms don't always hit their worst case but this is getting a bit too
1:08:02
complex for us for now we can safely ignore average case performances and focus only on the worst case in the
1:08:10
future if you decide to stick around we'll circle back and talk about this more
1:08:15
now that you know about algorithms complexities and big o let's take a break from all of that and write code in
1:08:21
the next video [Music]
1:08:28
so far we've spent a lot of time in theory and while these things are all important things to know you get a much
1:08:33
better understanding of how algorithms work when you start writing some code as i mentioned earlier we're going to be
1:08:39
writing python code in this and all subsequent algorithm courses if you do have programming experience
1:08:46
but in another language check the notes section of this video for an implementation in your language
1:08:52
if you don't have any experience i'll try my best to explain as we go along on the video you're watching right now
1:08:59
you should see a launch workspaces button we're going to use a treehouse coding environment called workspaces to write all
1:09:06
of our code if you're familiar with using python in a local environment then feel free to
1:09:12
keep doing so workspaces is an in-browser coding environment and will take care of all the setup and
1:09:18
installation so you can focus on just writing and evaluating code workspaces
1:09:24
is quite straightforward to use on the left here we have a file navigator pane which is currently empty since we
1:09:31
haven't created a new file on the top we have an editor where we write all our code and then below that
1:09:37
we have a terminal or a command line prompt where we can execute the scripts that we write let's add a new file here
1:09:43
so at the top in the editor area we're going to go to file new file and we'll name this linear
1:09:50
underscore search dot py in here we're going to define our linear
1:09:57
search algorithm as a standalone function we start with the keyword def which
1:10:03
defines a function or a block of code and then we give it the name linear
1:10:08
underscore search this function will accept two arguments first the list we're searching through
1:10:15
and then the target value we're looking for both of these arguments are enclosed in a set of parentheses and there's no
1:10:22
space between the name of the function and the arguments after that we have a colon
1:10:28
now there might be a bit of confusion here since we already have this target value what are we searching for unlike
1:10:35
the game we played at the beginning where john's job was to find the value in a true implementation of linear
1:10:42
search we're looking for the position in the list where the value exists if the target is in the list then we
1:10:48
return its position and since this is a list that position is going to be denoted by an index value
1:10:55
now if the target is not found we're going to return none the choice of what to return in the failure case may be
1:11:01
different in other implementations of linear search you can return -1 since that isn't
1:11:07
typically an index value you can also raise an exception which is python speak for indicating an error
1:11:13
occurred now i think for us the most straightforward value we can return here is none now let's add a comment to
1:11:19
clarify this so hit enter to go to the next line and then we're going to add three
1:11:26
single quotes and then below that on the next line we'll say returns the position or the index
1:11:33
position of the target if found else returns none
1:11:40
and then on the next line we'll close off those three quotes this is called a doc string and is a
1:11:46
python convention for documenting your code the linear search algorithm is a sequential algorithm that compares each
1:11:53
item in the list until the target is found to iterate or loop or walk through our
1:11:59
list sequentially we're going to use a for loop now typically when iterating over a list
1:12:05
in python we would use a loop like this we'd say for item in list
1:12:11
this assigns the value at each index position to that local variable item
1:12:16
we don't want this though since we primarily care about the index position instead we're going to use the range
1:12:23
function in python to create a range of values that start at 0 and end at the
1:12:29
number of items in the list so we'll say for i i stands for index here
1:12:35
in range starting at 0 and going all the way up to the length of the list
1:12:42
we can get the number of items in the list using the len function now going back to our talk on complexity
1:12:48
and how individual steps in an algorithm can have their own run times this is a line of code that we would have to be
1:12:55
careful about python keeps track of the length of a list so this function call here len list
1:13:02
is a constant time operation now if this were a naive implementation let's say we
1:13:07
wrote the implementation of the list and we iterate over the list every time we call this length function then we've
1:13:15
already incurred a linear cost okay so once we have a range of values that represent index positions in this
1:13:21
list we're going to iterate over that using the for loop and assign each index value to this local variable i using
1:13:29
this index value we can obtain the item at that position using subscript notation on the list
1:13:35
now this is also a constant time operation because the language says so now once we have this
1:13:43
value which we'll get by using subscript notation so we'll say list i we'll check if
1:13:49
it matches the target so if the value at i equals target
1:13:55
well if it does then we'll return that index value because we want the position and once we hit this return statement
1:14:02
we're going to terminate our function if the entire for loop is executed and we don't hit this return statement then the
1:14:08
target does not exist in the list so at the bottom here we'll say return none
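put together, the function we've just walked through looks like this (note that, as in the video, the parameter named list shadows python's built-in):

```python
def linear_search(list, target):
    """
    Returns the index position of the target if found, else returns None
    """
    for i in range(0, len(list)):
        if list[i] == target:
            return i
    return None
```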
1:14:14
even though all the individual operations in our algorithm run in constant time
1:14:19
in the worst case scenario this for loop here will have to go through the entire range of values and read every single
1:14:26
element in the list therefore giving the algorithm a big o value of n or running in linear time now
1:14:34
if you've written code before you've definitely written code like this a number of times and i bet you didn't know that all along you were implementing
1:14:40
what is essentially a well-known algorithm so i hope this goes to show you that algorithms are a pretty approachable topic
1:14:48
like everything else this does get advanced but as long as you take things slow there's no reason for it to be
1:14:53
impossible remember that not just any block of code counts as an algorithm to be a
1:14:58
proper implementation of linear search this block of code must return a value
1:15:04
must complete execution in a finite amount of time and must output the same
1:15:09
result every time for a given input set so let's verify this with a small test
1:15:15
let's write a function called verify that accepts an index value
1:15:20
if the value is not none it prints the index position if it is none it informs us that the target was not found in the
1:15:27
list so def verify and this is going to take an index value
1:15:33
and we'll say if index is not none then we'll print
1:15:40
target found at index
1:15:47
followed by a colon and then the index else
1:15:54
we'll say target
1:16:01
not found in list okay using this function let's define a
1:16:06
range of numbers now so this will be a list numbers and we'll just go from 1 to
1:16:14
let's say 10.
1:16:20
now if you've written python code before you know that i can use a list comprehension to make this easier but
1:16:26
we'll keep things simple we can now use our linear search function to search for the position of a
1:16:32
target value in this list so we can say result equal linear underscore search
1:16:39
and we're going to pass in the numbers list that's the one we're searching through and we want to look for the position where the value 12 exists
1:16:47
and then we'll verify this result
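assembled, the test script from this walkthrough comes out to roughly:

```python
def verify(index):
    if index is not None:
        print("Target found at index: ", index)
    else:
        print("Target not found in list")

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

result = linear_search(numbers, 12)
verify(result)  # Target not found in list

result = linear_search(numbers, 6)
verify(result)  # Target found at index:  5
```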
1:16:53
if our algorithm works correctly the verify function should inform us that the target did not exist so make sure you save the file which you can do by
1:16:58
going up to file and save or hitting command s and then below in the terminal
1:17:06
you're going to type out python linear search or you can hit tab and it
1:17:11
should auto complete linear search dot py as you can see correct the target was
1:17:16
not found in the list so the output of our script is what we expect for our second test let's search for the
1:17:22
value 6 in the list so you can copy this command c to copy and then paste it
1:17:28
again and we'll just change 12 here to 6 and then come back down to the terminal
1:17:33
hit the up arrow to execute the same command again and hit enter you'll notice that i forgot to hit save so it
1:17:39
did not account for that new change we'll try that again and there you'll see that if it works
1:17:45
correctly which it did the index should be number five run the program on your
1:17:50
end and make sure everything works as expected our algorithm returned a result in each case it executed in a finite time and
1:17:58
the results were the ones we expect in the next video let's tackle binary search
1:18:03
in the last video we left off with an implementation of linear search let's do the same for binary search so
1:18:09
that we get an understanding of how this is represented in code so we'll do this in a new file back to
1:18:15
file new file and we'll name this one binary search
1:18:22
dot py like before we're going to start with a function named binary search so we'll
1:18:27
say def binary underscore search that takes a list and a target
1:18:34
if you remember binary search works by breaking the array or list down into
1:18:39
smaller sets until we find the value we're looking for we need a way to keep track of the
1:18:45
position of the list that we're working with so let's create two variables first and last to point to the beginning and
1:18:52
end of the array so first equal zero now if you're new to programming
1:18:59
list positions are represented by index values that start at zero instead of one
1:19:04
so here we're setting first to zero to point to the first element in the list
1:19:09
last is going to point to the last element in the list so we'll say last equal
1:19:15
len list minus one now this may be confusing to you so a quick sidebar to explain what's
1:19:22
going on let's say we have a list containing 5 elements if we called len on that list
1:19:28
we should get 5 back because there are 5 elements but remember that because the position
1:19:33
numbers start at 0 the last value is not at position 5 but at 4. in nearly all
1:19:39
programming languages getting the position of the last element in the list is obtained by determining the length of
1:19:46
the list and deducting 1 which is what we're doing okay so we know what the first and last
1:19:52
positions are when we start the algorithm for our next line of code we're going to create a while loop
1:19:58
a while loop takes a condition and keeps executing the code inside the loop until the condition evaluates to false
1:20:06
for our condition we're going to say to keep executing this loop until the value
1:20:11
of first is less than or equal to the value of last so while first less than or equal to
1:20:19
last well why you ask why is this our condition well let's work through this
1:20:24
implementation and then a visualization should help inside the while loop we're going to
1:20:30
calculate the midpoint of our list since that's the first step of binary search
1:20:35
midpoint equal so we'll say first plus last
1:20:41
and then we'll use the floor division double slash here divided by two now the two forward slashes here are
1:20:48
what python calls a floor division operator what it does is it rounds down to the nearest whole number so if we
1:20:55
have an eight element array first is zero last is 7 if we divided 0 plus 7
1:21:02
which is 7 by 2 we would get 3.5 now 3.5 is not a valid index position so we
1:21:08
round that down to 3 using the floor division operator
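a quick one-liner illustration of the difference, if you want to try it yourself:

```python
print(7 / 2)   # 3.5 -- true division, not a valid index position
print(7 // 2)  # 3   -- floor division rounds down to a whole number
```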
1:21:15
okay so now we have a midpoint the next step of binary search is to evaluate whether the value at this midpoint is the same as the target we're looking for so we'll say if list
1:21:23
value at midpoint equals the target well if it is then we'll go ahead and
1:21:30
return the midpoint so we'll say return midpoint the return statement terminates our
1:21:36
algorithm and over here we're done this is our best case scenario
1:21:42
next we'll say else if list at midpoint
1:21:48
or value at midpoint is less than the target now here if the value is less the
1:21:53
value at midpoint is less than the target then we don't care about any of the values lower than the midpoint so we
1:22:00
redefine first to point to the value after the midpoint so we'll say midpoint plus 1.
1:22:07
now if the value at the midpoint is greater than the target then we can discard the values after the midpoint
1:22:14
and redefine last to point to the value prior to the midpoint so we'll say else
1:22:22
last equal midpoint minus 1. let's visualize this we're going to
1:22:28
start with a list of nine integers to make this easier to understand let's specify these integers to be of the same
1:22:35
value as its index position so we have a range of values from 0 to 8.
1:22:40
our target is the worst case scenario we're looking for the position of the value 8. at the start our algorithm sets
1:22:47
first to point to the index 0 and last to point to the length of the list minus
1:22:53
1 which is 8. next we hit our while loop the logic of this loop is going to be executed as
1:22:59
long as the value of first is not greater than the value of last or as we've defined it we're going to keep
1:23:06
executing the contents of the loop as long as first is less than or equal to last
1:23:12
on the first pass this is true so we enter the body of the loop the midpoint is first plus last divided
1:23:18
by two and rounded down so we get a nice even four the value at this position is
1:23:24
four now this is not equal to the target so we move to the first else if
1:23:29
four is less than eight so now we redefine first to point to midpoint plus
1:23:34
one which is five first is still less than last so we run
1:23:40
through the body of the loop again the midpoint is now six six is less than eight so we move first
1:23:47
to point to seven seven is still less than or equal to eight so we go for another iteration of
1:23:53
the loop the midpoint is seven oddly enough and seven is still less than the target so
1:23:59
we move first to point to eight first is equal to last now but our condition says
1:24:05
keep the loop going as long as first is less than or equal to last so this is
1:24:11
our final time through the loop the midpoint is now 8 which makes the value at the midpoint equal to the
1:24:18
target and we finally exit our algorithm and return the position of the target
1:24:23
now what if we had executed all this code and never hit a case where midpoint equal the target well that would mean
1:24:30
the list did not contain the target value so after the while loop at the bottom
1:24:36
we will return none
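here's the complete iterative function as described in this video:

```python
def binary_search(list, target):
    first = 0
    last = len(list) - 1

    while first <= last:
        midpoint = (first + last) // 2

        if list[midpoint] == target:
            return midpoint
        elif list[midpoint] < target:
            first = midpoint + 1
        else:
            last = midpoint - 1

    return None
```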
1:24:41
we have several operations that make up our binary search algorithm so let's look at the runtime of each step we start by assigning values to first and
1:24:48
last the value assigned to last involves a call to the len function to get the size
1:24:54
of the list but we already know this is a constant time operation in python so both of these operations run in constant
1:25:01
time inside the loop we have another value assignment and this is a simple division
1:25:07
operation so again the runtime is constant in the next line of code we're reading a
1:25:12
value from the list and comparing the midpoint to the target both of these
1:25:17
again are constant time operations the remainder of the code is just a series
1:25:22
of comparisons and value assignments and we know that these are all constant time operations as well
1:25:28
so if all we have are a series of constant time operations why does this algorithm have in the worst case a
1:25:35
logarithmic runtime it's hard to evaluate by just looking at the code but the while loop is what
1:25:41
causes the run time to grow even though all we're doing is a comparison operation by redefining first
1:25:48
and last over here or rather in the last two steps over here we're asking the
1:25:54
algorithm to run as many times as it needs until first is equal or greater
1:26:00
than last now each time the loop does this the size of the data set the size of the
1:26:05
list grows smaller by a certain factor until it approaches a single element
1:26:11
which is what results in the logarithmic runtime okay just like with linear search let's
1:26:17
test that our algorithm works so we'll go back to linear search dot py and we're going to copy paste
1:26:23
so command c to copy if you're on a mac then go back to binary search and at the bottom
1:26:30
we're going to paste in that verify function okay we'll also go back and grab this
1:26:36
numbers list you know what let's go ahead and copy all of these things so numbers and the two verify cases we'll paste that in
1:26:45
as well and the only thing we need to change here is instead of calling linear search this is going to call binary search
1:26:54
okay we'll hit command s to save the file and then i'm going to drag up my console and we'll run python binary
1:27:02
search dot py and hit enter and you'll see just like before we get the same results back
1:27:08
now note that an extremely important distinction needs to be made here the numbers list that we've defined
1:27:15
for our test cases right here has to be sorted the basic logic of
1:27:21
binary search relies on the fact that if the target is greater than the midpoint then our potential values lie to the
1:27:28
left or vice versa since the values are sorted in ascending order if the values
1:27:34
are unsorted our implementation of binary search may return none even if
1:27:39
the value exists in the list and just like that you've written code to implement two search algorithms how
1:27:46
fun was that hopefully this course has shown you that it isn't a topic to be afraid of and
1:27:51
that algorithms like any other topic with code can be broken down and understood piece by piece
1:27:58
now we have a working implementation of binary search but there's actually more than one way to write it so in the next
1:28:04
video let's write a second version i'm going to create a new file
1:28:09
as always file new file and we'll name this recursive
1:28:15
underscore binary underscore search dot py
1:28:21
okay so we're going to add our new implementation here so that we don't get rid of that first implementation we
1:28:27
wrote let's call this new function recursive binary search unlike our previous implementation this
1:28:33
version is going to behave slightly differently in that it won't return the index value of the target element if it
1:28:40
exists instead it will just return a true value if it exists and a false if it doesn't
1:28:46
so recursive underscore binary underscore search
1:28:52
and like before this is going to take a list it accepts a list and a target to look for in that list
1:28:59
we'll start the body of the function by considering what happens if an empty list is passed in in that case we would
1:29:06
return false so we'll say if the length of the list which is one way to figure out if it's empty is equal
1:29:12
to zero then we'll return false now you might be thinking that in the
1:29:18
previous version of binary search we didn't care if the list was empty well we actually did but in a roundabout sort
1:29:25
of way so in the previous version of binary search our function had a loop
1:29:30
and that loop condition was true when first was less than or equal to last so
1:29:36
as long as it's less than or equal to last we continue the loop now if we have an empty list then first
1:29:42
is greater than last and the loop would never execute and we return none at the bottom
1:29:48
so this is the same logic we're implementing here we're just doing it in a slightly different way if the list is
1:29:54
not empty we'll implement an else clause now here we'll calculate the midpoint
1:30:01
by dividing the length of the list by 2 and rounding down again there's no use of first and last
1:30:08
here so we'll say length of list and then using the floor division operator we'll divide that by 2.
1:30:15
if the value at the midpoint which we'll check by saying if list using
1:30:21
subscript notation we'll say midpoint as the index now if this value at the
1:30:27
midpoint is the same as the target then we'll go ahead and return true
1:30:34
so far this is more or less the same except for the value that we're returning
1:30:40
let me actually get rid of all that
1:30:45
okay all right so if this isn't the case let's implement an else clause now here
1:30:50
we have two situations so first if the value at the midpoint is less than the target so if
1:30:58
value at midpoint is less than the target
1:31:04
then we're going to do something new we're going to call this function again
1:31:09
this recursive binary search function that we're in the process of defining we're going to call that again and we're
1:31:16
going to give it the portion of the list that we want to focus on in the previous version of binary search we moved the
1:31:23
first value to point to the value after the midpoint now here we're going to create a new
1:31:29
list using what is called a slice operation and create a sub list that starts at midpoint plus 1 and goes all
1:31:37
the way to the end we're going to specify the same target as a search target and when this
1:31:43
function call is done we'll return the value so we'll say return the return is
1:31:48
important then we'll call this function again recursive binary search
1:31:55
and this function takes a list and here we're going to use that subscript notation to perform a slice operation by
1:32:02
using two indexes a start and an end so we'll say our new list that we're passing in needs to start at midpoint
1:32:08
plus one and then we'll go all the way to the end and this is a bit of
1:32:13
python syntactic sugar so to speak if i don't specify an end index python knows
1:32:19
to just go all the way to the end all right so this is our new list that we're working with and we need a target we'll just pass it
1:32:26
through if you're confused bear with me just like before we'll visualize this at the end okay we have another else case here
1:32:34
and this is a scenario where the value at the midpoint is greater than the target
1:32:39
which means we only care about the values in the list from the start going up to the midpoint now in this case as
1:32:45
well we're going to call the binary search function again and specify a new list to work with this time the list is
1:32:52
going to start at the beginning and then go all the way up to the midpoint so it looks the same we'll say return
1:32:58
recursive binary search
1:33:03
we're going to pass in a list here so if we just put a colon here without a start index python knows to
1:33:10
start at the beginning and we're going to go all the way up to the midpoint the target here is the same
1:33:16
and this is our new binary search function so let's see if this works
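assembled, the recursive version we've just walked through looks like this:

```python
def recursive_binary_search(list, target):
    if len(list) == 0:
        return False

    midpoint = len(list) // 2

    if list[midpoint] == target:
        return True
    elif list[midpoint] < target:
        # search the sub list to the right of the midpoint
        return recursive_binary_search(list[midpoint + 1:], target)
    else:
        # search the sub list to the left of the midpoint
        return recursive_binary_search(list[:midpoint], target)
```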
1:33:22
actually yes down here we'll make some space and we'll define a verify function
1:33:29
we're not going to copy paste the previous one because we're not returning none or an integer here so we'll verify the result
1:33:36
that we pass in and we'll say print target found
1:33:41
and this is just going to say true or false whether we found it okay so like before we need a numbers
1:33:47
list and we'll do something one two three four all the way up to eight
1:33:54
okay and now let's test this out so we'll call our recursive
1:34:00
binary search function and we'll pass in the numbers list
1:34:05
and the target here is 12. we're going to verify this
1:34:11
verify the result make sure it works and then we'll call it again this time making sure that we give it a target
1:34:16
that is actually in the list so here we'll say 6 and we'll verify this again
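the test script described here comes out to roughly:

```python
def verify(result):
    print("Target found: ", result)

numbers = [1, 2, 3, 4, 5, 6, 7, 8]

result = recursive_binary_search(numbers, 12)
verify(result)  # Target found:  False

result = recursive_binary_search(numbers, 6)
verify(result)  # Target found:  True
```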
1:34:23
make sure you hit command s to save and then in the console below we're
1:34:28
going to type out python recursive underscore binary underscore search dot py
1:34:34
run it and you'll see that we've verified that search works while we can't verify the index position
1:34:39
of the target value which is a modification to how our algorithm works we can guarantee by running across all
1:34:46
valid inputs that search works as intended so why write a different search
1:34:52
algorithm here a different binary search algorithm and what's the difference between these two implementations anyway
1:34:58
the difference lies in these last four lines of code that you see here
1:35:05
we did something unusual here now before we get into this a small word of advice
1:35:11
this is a confusing topic and people get confused by it all the time don't worry that doesn't make you any
1:35:18
less of a programmer in fact i have trouble with it often and always look it up including when i made this video
1:35:24
this version of binary search is a recursive binary search a recursive function is one that calls
1:35:31
itself this is hard for people to grasp sometimes because there are few easy
1:35:37
analogies that make sense but you can think of it sort of this way so let's say you have this book that contains
1:35:43
answers to multiplication problems you're working on a problem and you look up an answer
1:35:49
in the book the answer for your problem says add 10 to the answer for problem 52
1:35:56
okay so you look up problem 52 and there it says add 12 to the answer for problem
1:36:02
85 well then you go and look up the answer to problem 85 and finally instead of
1:36:08
redirecting you somewhere else that answer says 10. so you take that 10 and
1:36:13
then you go back to problem 52 because remember the answer for problem 52 was to add 12 to the answer for problem 85
1:36:22
so you take that 10 and since you now have the answer to problem 85 you add
1:36:27
10 to 12 to get 22. then you go back to your original problem where it said to add 10 to the
1:36:33
answer for problem 52 so you add 10 to 22 and you get 32 to end up with your
1:36:38
final answer so that's a weird way of doing it but this is an example of recursion
1:36:44
the solution to your first lookup in the book was the value obtained by another
1:36:49
lookup in the same book which was followed by yet another lookup in the same book the book told you to check the
1:36:55
book until you arrived at some base value our function works in a similar manner
1:37:01
so let's visualize this with an example of list like before we have a nine element list
1:37:07
here with values zero through eight the target we're searching for is the
1:37:12
value eight we'll check if the list is empty by calling len on it this list is not empty
1:37:18
so we go to the else clause next we calculate the midpoint 9 divided by 2 is 4.5 rounded down is 4 so our first
1:37:26
midpoint value is 4. we'll perform our first check is the value at the midpoint equal to the
1:37:32
target not true so we go to our else clause we'll perform another check here is the
1:37:38
value at the midpoint less than the target now in our case this is true earlier when we evaluated this condition
1:37:44
we simply change the value of first here we're going to call the recursive binary search function again and give it
1:37:51
a new list to work with the list starts at midpoint plus 1 so at
1:37:57
index position 5 all the way to the end notice that this call to recursive
1:38:02
binary search inside of recursive binary search includes a return statement
1:38:08
this is important and we'll come back to that in a second so now we're back at the top
1:38:14
of a new call to recursive binary search with effectively a new list although
1:38:20
technically just a sub list of the first one the list here contains the numbers 6 7
1:38:26
and 8. starting with the first check the list is not empty so we move to the else
1:38:32
the midpoint in this case length of the list 3 divided by 2 rounded down is 1.
1:38:39
is the value of the midpoint equal to the target well the value at that position is 7 so no in the else we
1:38:46
perform the first check is the value at the midpoint less than the target indeed
1:38:51
it is so we call recursive binary search again and provided a new list
1:38:56
this list starts at midpoint plus 1 and goes to the end so in this case that's a single element list
1:39:03
since this is a new call to recursive binary search we start back up at the top
1:39:09
is the list empty no the midpoint is zero is the value at the midpoint the same as
1:39:15
the target it is so now we can return true remember a minute ago i pointed out that
1:39:22
when we call recursive binary search from inside the function itself it's preceded by a return statement
1:39:29
that plays a pretty important role here so back to our visualization we start at the top and recall binary
1:39:36
search with a new list but because that's got a return statement before it what we're saying is hey when you run
1:39:42
binary search on this whatever value you get back return it to the function that
1:39:48
called you then at the second level we call binary search again along with another return
1:39:53
statement like with the first call we're instructing the function to return a value back to the code that called it
1:40:01
at this level we find the target so the function returns true back to the caller but since this inner function was also
1:40:08
called by a function with instructions to return it keeps returning that true value back up until we reach the very
1:40:15
first function that called it going back to our book of answers recursive binary
1:40:20
search instructs itself to keep working on the problem until it has a concrete answer
1:40:25
once it does it works its way backwards giving the answer to every function that called it until the original caller has
1:40:33
an answer now like i said at the beginning this is pretty complicated so you should not be
1:40:39
concerned if this doesn't click honestly recursion is not something you walk away fully
1:40:44
understanding after your first try i'm really not lying when i say i have a pretty hard time with recursion
1:40:51
now before we move on i do want to point out one thing even though the implementation of
1:40:56
recursion is harder to understand it is easier in this case to understand how we arrive at the logarithmic run
1:41:03
time since we keep calling the function with smaller lists let's take a break here in the next video let's talk a bit
1:41:11
more about recursion and why it matters [Music]
1:41:19
in the last video we wrote a version of binary search that uses a concept called recursion
1:41:25
recursion might be a new concept for you so let's formalize how we use it
1:41:30
a recursive function is one that calls itself in our example the recursive binary
1:41:36
search function called itself inside the body of the function
1:41:42
when writing a recursive function you always need a stopping condition and typically we start the body of the
1:41:48
recursive function with this stopping condition it's common to call this stopping condition the base case
1:41:55
in our recursive binary search function we had two stopping conditions
1:42:01
the first was what the function should return if an empty list is passed in
1:42:07
it seems weird to evaluate an empty list because you wouldn't expect to run search on an empty list but if you look
1:42:14
at how our function works recursive binary search keeps calling itself and
1:42:19
with each call to itself the size of the list is cut in half if we searched for a target that didn't
1:42:25
exist in the list then the function would keep halving the list until it got
1:42:30
to an empty list consider a three element list with numbers one two three where we're
1:42:36
searching for a target of four on the first pass the midpoint is 2 so
1:42:42
the function would call itself with the list 3. on the next pass the midpoint is 0 and
1:42:48
the target is still greater so the function would call itself this time passing in an empty list because an
1:42:55
index of 0 plus 1 in a single element list doesn't exist when we have an empty list this means
1:43:02
that after searching through the list the value wasn't found this is why we define an empty list as a
1:43:08
stopping condition or a base case that returns false if it's not an empty list then we have an entirely different set
1:43:15
of instructions we want to execute first we obtain the midpoint of the list
1:43:20
once we have the midpoint we can introduce our next base case or stopping condition
1:43:25
if the value at the midpoint is the same as the target then we return true
1:43:31
with these two stopping conditions we've covered all possible paths of logic through the search algorithm you can
1:43:38
either find the value or you don't once you have the base cases the rest of the implementation of the recursive
1:43:45
function is to call the function on smaller sub-lists until we hit one of
1:43:50
these base cases going back to our visualization for a second we see that recursive binary search calls itself a
1:43:57
first time which then calls itself again for the initial list we started with the
1:44:03
function only calls itself a few times before a stopping condition is reached the number of times a recursive function
1:44:10
calls itself is called the recursion depth now the reason i bring all of this up is
1:44:16
because if after you start learning about algorithms you decide you want to go off and do your own research you may
1:44:22
start to see a lot of algorithms implemented using recursion the way we implemented binary search the
1:44:29
first time is called an iterative solution now when you see the word iterative it
1:44:35
generally means the solution was implemented using a loop structure of some kind a recursive solution on the other hand
1:44:42
is one that involves a set of stopping conditions and a function that calls itself computer scientists and computer
1:44:49
science textbooks particularly from back in the day favor and are written in what are called
1:44:55
functional languages in functional languages we try to avoid changing data that is given to a
1:45:01
function in our first version of binary search we created first and last variables using
1:45:07
the list and then modified first and last as we needed to arrive at a solution functional languages don't like to do
1:45:14
this all this modification of variables and prefer a solution using recursion
1:45:19
a language like python which is what we're using is the opposite and doesn't like recursion in fact python has a
1:45:27
maximum recursion depth after which our function will halt execution python
1:45:32
prefers an iterative solution
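you can see this limit for yourself — a small illustration of our own, not from the course:

```python
import sys

# cpython refuses to recurse past a fixed depth (commonly 1000)
# and raises RecursionError instead of overflowing the stack
print(sys.getrecursionlimit())

# the limit can be raised, though an iterative rewrite is usually the better fix
sys.setrecursionlimit(5000)
```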
1:45:38
now i mentioned all of this for two reasons if you decide that you want to learn how to implement the algorithm in a language of your choice that's not python then
1:45:45
you might see a recursive solution as the best implementation in that particular language
1:45:51
i'm an ios developer for example and i work with a language called swift swift is different from python in that
1:45:58
it doesn't care about recursion depth and does some neat tricks where it doesn't even matter how many times your
1:46:03
function calls itself so if you want to see this in swift code then you need to know how recursion
1:46:09
works well and now you have some idea now the second reason i bring it up is actually way more important and to find out on to
1:46:16
the next video at the beginning of this series i mentioned that there were two ways of measuring the efficiency of an algorithm
1:46:23
the first was time complexity or how the run time of an algorithm grows as n grows larger
1:46:29
the second is space complexity we took a pretty long route to build up this example but now we're in a good
1:46:35
place to discuss space complexity space complexity is a measure of how
1:46:40
much working storage or extra storage is needed as a particular algorithm grows
1:46:47
we don't think about it much these days but every single thing we do on a computer takes up space in memory in the
1:46:54
early days of computing considering memory usage was of paramount importance because memory was limited and really
1:47:01
expensive these days we're spoiled our devices are rich with memory this is okay when we
1:47:07
write everyday code because most of us aren't dealing with enormously large data sets
1:47:13
when we write algorithms however we need to think about this because we want to design our algorithms to perform as
1:47:19
efficiently as they can as the size of the data set n grows really large
1:47:25
like time complexity space complexity is measured in the worst case scenario using big-o notation
1:47:32
since you are familiar with the different kinds of complexities let's dive right into an example
1:47:38
in our iterative implementation of binary search the first one we wrote that uses a while loop let's look at
1:47:45
what happens to our memory usage as n gets large let's bring up that function
1:47:52
let's say we start off with a list of 10 elements now inspecting the code we see
1:47:57
that our solution relies heavily on these two variables first and last
1:48:02
first points to the start of the list and last to the end when we eliminate a set of values we
1:48:08
don't actually create a sub list instead we just redefine first and last as you see here
1:48:15
to point to a different section of the list since the algorithm only considers the values between first and last when
1:48:22
determining the midpoint by redefining first and last as the algorithm proceeds we can find a
1:48:29
solution using just the original list this means that for any value of n
1:48:34
the space complexity of the iterative version of binary search is constant or
1:48:40
that the iterative version of binary search takes constant space remember that we would write this as big
1:48:47
o of one this might seem confusing because as n grows we need more storage to account
1:48:53
for that larger list size now this is true but that storage is not what space complexity cares about
1:49:00
measuring we care about what additional storage is needed as the algorithm runs and tries
1:49:06
to find a solution if we assume something simple say that for a given size of a list represented
1:49:12
by a value n it takes n amount of space to store it whatever that means
1:49:18
then for the iterative version of binary search regardless of how large the list
1:49:23
is at the start middle and end of the algorithm process the amount of storage
1:49:29
required does not get larger than n and this is why we consider it to run in
1:49:34
constant space now this is an entirely different story with the recursive version however in
1:49:40
the recursive version of binary search we don't make use of variables to keep track of which portion of the list we're
1:49:46
working with instead we create new lists every time with a subset of values or sub-lists
1:49:53
with every recursive function call let's assume we have a list of size n
1:49:58
and in the worst case scenario the target element is the last in the list calling the recursive implementation of
1:50:05
binary search on this list and target would lead to a scenario like this
1:50:10
the function would call itself and create a new list that goes from the midpoint to the end of the list
1:50:17
since we're discarding half the values the size of the sub list is n/2
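a quick demo of where that extra space comes from — our own illustration: in python a slice really does allocate a brand-new list

```python
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
right_half = numbers[5:]      # slicing copies: a brand-new list of n/2 elements

print(right_half is numbers)  # False -- separate storage was allocated
print(len(numbers), len(right_half))  # 10 5
```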
1:50:22
this function will keep calling itself creating a new sub list that's half the size of the current one until it arrives
1:50:29
at a single element list and a stopping condition this pattern that you see here where the
1:50:35
size of the sublist is reduced by a factor on each execution of the algorithmic logic well we've seen that
1:50:41
pattern before do you remember where this is exactly how binary search works
1:50:46
it discards half the values every time until it finds a solution now we know that because of this pattern the running
1:50:53
time of binary search is logarithmic in fact the space complexity of the recursive version of binary search is
1:51:00
the same if we start out with a memory allocation of size n that matches the list
1:51:06
on each function call of recursive binary search we need to allocate additional memory of size n/2 then n/4
1:51:14
and so on until we have a sub list that is either empty or contains a single value because of this we say that the
1:51:22
recursive version of the binary search algorithm takes logarithmic space with
1:51:27
a big o of log n now there's an important caveat here this totally depends on the language
1:51:34
remember how i said that a programming language like swift can do some tricks to where recursion depth doesn't matter
1:51:40
the same concept applies here if you care to read more about this concept it's called tail call
1:51:46
optimization it's called tail call optimization because if you think of a function as having a head and a tail
1:51:54
if the recursive function call is the last line of code in the function as it
1:51:59
is in our case we call this tail recursion since it's the last part of the function that calls
1:52:06
itself now the trick that swift does to reduce the amount of space and therefore
1:52:11
computing overhead to keep track of this recursive calls is called tail call
1:52:16
optimization or tail call elimination it's one of those things that you'll see thrown around a loss in algorithm
1:52:23
discussions but may not always be relevant to you now what if any of this is relevant to
1:52:29
us well python does not implement tail call optimization so the recursive
1:52:34
version of binary search takes logarithmic space if we had to choose between the two
1:52:40
implementations given that time complexity or run time of both versions
1:52:45
the iterative and the recursive version are the same we should definitely go with the iterative implementation in
1:52:51
python since it runs in constant space okay that was a lot but with
1:52:57
all of this we've now established two important ways to distinguish between algorithms that handle the same task and
1:53:04
determine which one we should use we've arrived at what i think is a good spot to take a long break and let all of
1:53:11
these new concepts sink in but before you go off to the next course let's take a few minutes to recap everything we've
1:53:17
learned so far while we did implement two algorithms in this course in actual code much of what
1:53:24
we learned here was conceptual and will serve as building blocks for everything we're going to learn in the future so
1:53:30
let's list all of it out the first thing we learned about and arguably the most important was
1:53:35
algorithmic thinking algorithmic thinking is an approach to problem solving that involves breaking a
1:53:41
problem down into a clearly defined input and output along with a distinct set of steps that solves the problem by
1:53:48
going from input to output algorithmic thinking is not something you develop overnight by taking one
1:53:55
course so don't worry if you're thinking i still don't truly know how to apply what i learned here
1:54:00
algorithmic thinking sinks in after you go through several examples in a similar fashion to what we did today
1:54:07
it also helps to apply these concepts in the context of a real example which is another thing we will strive to do
1:54:13
moving forward regardless it is important to keep in mind that the main goal here is not to
1:54:18
learn how to implement a specific data structure or algorithm off the top of your head i'll be honest i had to look
1:54:25
up a couple code snippets for a few of the algorithms myself in writing this course but in going through this you now know
1:54:32
that binary search exists and can apply to a problem where you need a faster search algorithm
1:54:39
unlike most courses where you can immediately apply what you have learned to build something cool learning about
1:54:44
algorithms and data structures will pay off more in the long run the second thing we learned about is how
1:54:51
to define and implement algorithms we've gone over these guidelines several times i won't bore you here again at the end
1:54:58
but i will remind you that if you're often confused about how to effectively break down a problem in code to
1:55:04
something more manageable following those algorithm guidelines is a good place to start
1:55:09
next we learned about big o and measuring the time complexity of algorithms this is a mildly complicated
1:55:16
topic but once you've abstracted the math away it isn't as hazy a topic as it seems
1:55:21
now don't get me wrong the math is pretty important but only for those designing and analyzing algorithms
1:55:27
our goal is more about how to understand and evaluate algorithms we learned about common run times like
1:55:35
constant linear logarithmic and quadratic runtimes these are all fairly
1:55:40
new concepts but in time you will immediately be able to distinguish the runtime of an algorithm based on the
1:55:46
code you write and have an understanding of where it sits on an efficiency scale you will also in due time internalize
1:55:53
runtimes of popular algorithms like the fact that binary search runs in logarithmic time and constant space
1:56:00
and be able to recommend alternative algorithms for a given problem all in all over time the number of tools
1:56:07
in your tool belt will increase next we learned about two important search algorithms and the situations in
1:56:14
which we select one over the other we also implemented these algorithms in code so that you got a chance to see
1:56:20
them work we did this in python but if you are more familiar with a different language and haven't gotten the chance to check
1:56:27
out the code snippets we've provided you should try your hand at implementing it yourself it's a really good exercise to
1:56:34
go through finally we learned about an important concept and a way of writing algorithmic
1:56:39
code through recursion recursion is a tricky thing and depending on the language you write code with you may run
1:56:46
into it more than others it is also good to be aware of because as we saw in our implementation of
1:56:52
binary search whether recursion was used or not affected the amount of space we used
1:56:58
don't worry if you don't fully understand how to write recursive functions i don't truly know either the
1:57:04
good part is you can always look these things up and understand how other people do it anytime you encounter recursion in our
1:57:12
courses moving forward you'll get a full explanation of how and why the function is doing what it's doing
1:57:18
and that brings us to the end of this course i'll stress again that the goal of this course was to get you prepared
1:57:24
for learning about more specific algorithms by introducing you to some of the tools and concepts you will need
1:57:31
moving forward so if you're sitting there thinking i still don't know how to write many algorithms or how to use algorithmic
1:57:37
thinking that's okay we'll get there just stick with it as always have fun and happy coding
Introduction to Data Structures
1:57:46
[Music]
1:57:53
hi my name is passant i'm an instructor at treehouse and welcome to the introduction to data structures course
1:57:59
in this course we're going to answer one fundamental question why do we need more data structures than a programming
1:58:05
language provides before we answer that question some housekeeping if you will
1:58:11
in this course we're going to rely on concepts we learned in the introduction to algorithms course
1:58:16
namely big-o notation space and time complexity and recursion if you're unfamiliar with those concepts
1:58:23
or just need a refresher check out the prerequisite courses listed in addition this course does assume that
1:58:29
you have some programming experience we're going to use data structures that come built into nearly all programming
1:58:36
languages as our point of reference while we will explain the basics of how these structures work we won't be going
1:58:42
over how to use them in practice if you're looking to learn how to program before digging into this content
1:58:49
check the notes section of this video for helpful links if you're good to go then awesome let's
1:58:54
start with an overview of this course the first thing we're going to do is to explore a data structure we are somewhat
1:59:00
already familiar with arrays if you've written code before there's a high chance you have used an array
1:59:06
in this course we're going to spend some time understanding how arrays work what are the common operations on an array
1:59:13
and what are the run times associated with those operations once we've done that we're going to
1:59:18
build a data type of our own called a linked list in doing so we're going to learn that
1:59:23
there's more than one way to store data in fact there's way more than just one way
1:59:29
we're also going to explore what motivates us to build specific kinds of structures and look at the pros and cons
1:59:34
of these structures we'll do that by exploring four common operations accessing a value searching
1:59:41
for a value inserting a value and deleting a value after that we're actually going to
1:59:46
circle back to algorithms and implement a new one a sorting algorithm in the introduction to algorithms
1:59:52
course we implemented a binary search algorithm a precondition to binary
1:59:57
search was that the list needed to be sorted we're going to try our hand at sorting a list and open the door to an entirely
2:00:04
new category of algorithms we're going to implement our sorting algorithm on two different data
2:00:10
structures and explore how the implementation of one algorithm can differ based on the data structure being
2:00:16
used we'll also look at how the choice of data structure potentially influences the run time of the algorithm
2:00:23
in learning about sorting we're also going to encounter another general concept of algorithmic thinking called
2:00:29
divide and conquer along with recursion divide and conquer will be a fundamental tool that we will
2:00:34
use to solve complex problems all in due time in the next video let's talk about arrays
2:00:41
a common data structure built into nearly every programming language is the array
2:00:46
arrays are a fundamental data structure and can be used to represent a collection of values but it is much more
2:00:52
than that arrays are also used as building blocks to create even more custom data types and structures
2:00:58
in fact in most programming languages text is represented using the string type and under the hood strings are just
2:01:05
a bunch of characters stored in a particular order in an array before we go further and dig into arrays
2:01:12
what exactly is a data structure a data structure is a way of storing data when programming it's not just a
2:01:19
collection of values and the format they're stored in but the relationship between the values in the collection as
2:01:25
well as the operations applied on the data stored in the structure an array is one of very many data
2:01:32
structures in general an array is a data structure that stores a collection of
2:01:37
values where each value is referenced using an index or a key
2:01:42
a common analogy for thinking about arrays is as a set of train cars each car has a number and these cars are
2:01:49
ordered sequentially inside each car the array or the train in this analogy stores some data
2:01:57
while this is the general representation of an array it can differ slightly from one language to another but for the most
2:02:03
part all these fundamentals remain the same in a language like swift or java
2:02:08
arrays are homogeneous containers which means they can only contain values of the same type
2:02:14
if you use an array to store integers in java it can only store integers
2:02:20
in other languages arrays are heterogeneous structures that can store any kind of value in python for example
2:02:27
you can mix numbers and text with no issues now regardless of this nuance the fundamental concept of an array is the
2:02:34
index this index value is used for every operation on the array from accessing
2:02:40
values to inserting updating and deleting in python the language we're going to be
2:02:46
using for this course it's a tiny bit confusing the type that we generally refer to as
2:02:51
an array in most languages is best represented by the list type in python
2:02:57
python does have a type called array as well but it's something different so we're not going to use it
2:03:03
while python calls it a list when we use a list in this course we'll be talking about concepts that apply to arrays as
2:03:10
well in other languages so definitely don't skip any of this there's one more thing
2:03:15
in computer science a list is actually a different data structure than an array and in fact we're going to build a list
2:03:22
later on in this course generally though this structure is called a linked list as opposed to just
2:03:28
list so hopefully the terminology isn't too confusing to properly understand how arrays work
2:03:35
let's take a peek at how arrays are stored under the hood an array is a contiguous data structure
2:03:42
this means that the array is stored in blocks of memory that are right beside each other with no gaps
2:03:48
the advantage of doing this is that retrieving values is very easy in a non-contiguous data structure we're
2:03:55
going to build one soon the structure stores a value as well as a reference to where the next value is
2:04:01
to retrieve that next value the language has to follow that reference also called a pointer to the next block of memory
2:04:08
this adds some overhead which as you will see increases the runtime of common operations a second ago i mentioned that
2:04:16
depending on the language arrays can either be homogeneous containing the same type of value or heterogeneous
2:04:22
where any kind of value can be mixed this choice also affects the memory layout of the array
2:04:28
for example in a language like c swift or java where arrays are homogeneous
2:04:34
when an array is created since the kind of value is known to the language compiler and you can think of the
2:04:40
compiler as the brains behind the language it can choose a contiguous block of memory that fits the array size and
2:04:47
values created if the values were integers assuming an integer took up space represented by one
2:04:54
of these blocks then for a five item array the compiler can allocate five
2:04:59
blocks of equally sized memory in python however this is not the case
2:05:04
we can put any value in a python list there's no restriction the way this works is a combination of
2:05:11
contiguous memory and the pointers or references i mentioned earlier
2:05:16
when we create a list in python there is no information about what will go into that array which makes it hard to
2:05:23
allocate contiguous memory of the same size there are several advantages to having contiguous memory
2:05:29
since the values are stored beside each other accessing the values happens in almost constant time so this is a
2:05:36
characteristic we want to preserve the way python gets around this is by allocating contiguous memory and storing
2:05:43
in it not the value we want to store but a reference or a pointer to the value
2:05:49
that's stored somewhere else in memory by doing this it can allocate equally sized contiguous memory since regardless
2:05:56
of the value size the size of the pointer to that value is always going to be equal this incurs an additional cost
2:06:03
in that when a value is accessed we need to follow the pointer to the block of memory where the value is actually
2:06:09
stored but python has ways of dealing with these costs that are outside the scope of this course
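if you're curious you can see this reference behavior for yourself here's a small illustrative snippet using the built-in id function which reports an object's identity and in cpython that happens to be its memory address

```python
# A python list can mix types because each slot holds a reference,
# not the value itself; the objects live elsewhere in memory.
mixed = [42, "hello", 3.14]

# id() reports each object's identity (in CPython, its memory address),
# hinting that the values are stored outside the list's own block.
for item in mixed:
    print(type(item).__name__, id(item))
```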
2:06:15
now that we know how an array stores its values let's look at common operations that we execute on an array
2:06:21
regardless of the kind of data structure you work with all data structures are expected to carry out four kinds of
2:06:27
operations at minimum we need to be able to access and read values stored in the structure we need
2:06:33
to be able to search for an arbitrary value we also need to be able to insert a value at any point into the structure
2:06:40
and finally we need to be able to delete values from the structure let's look at how these operations are
2:06:45
implemented on the array structure in some detail starting with access elements in an array are identified
2:06:52
using a value known as an index and we use this index to access and read the
2:06:57
value most programming languages follow a zero-based numbering system when it
2:07:02
comes to arrays and all this means is that the first index value is equal to zero not one
2:07:08
generally speaking when an array is declared a base amount of contiguous memory is allocated as the array storage
2:07:16
computers refer to memory through the use of an address but instead of keeping a reference to all the memory allocated
2:07:22
for an array the array only has to store the address of the first location
2:07:27
because the memory is contiguous using the base address the array can calculate the address of any value by using the
2:07:34
index position of that value as an offset if you want to be more specific think of
2:07:40
it this way let's say we want to create an array of integers and then each integer takes up
2:07:45
a certain amount of space in memory that we'll call m let's also assume that we know how many
2:07:50
elements we're going to create so the size of the array is some number of elements we'll call n
2:07:56
the total amount of space that we need to allocate is n times the space per item m
2:08:01
if the array keeps track of the location in memory where the first value is held so let's label that m0 then it has all
2:08:09
the information it needs to find any other element in the list when accessing a value in an array we
2:08:15
use the index to get the first element in the list we use the zeroth index to get the second
2:08:21
we use the index value 1 and so on given that the array knows how much storage is needed for each element it
2:08:28
can get the address of any element by starting off with the address for the first element and adding to that the
2:08:34
index value times the amount of storage per element for example to access the second value
2:08:41
we can start with m0 and to that add m times the index value 1 giving us m1 as
2:08:47
the location in memory for the second element this is a very simplified model but
2:08:52
that's more or less how it works
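here's that calculation written out as a tiny sketch the base address and element size are made-up numbers purely for illustration

```python
base_address = 1000  # hypothetical address of the first element (m0)
m = 4                # hypothetical size of one element in bytes

def address_of(index):
    # address = base address + (index * size per element)
    return base_address + index * m

print(address_of(0))  # 1000 -> first element
print(address_of(1))  # 1004 -> second element (m1)
```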
2:08:59
this is only possible because we know that array memory is contiguous with no gaps let's switch over to some code as i mentioned earlier we're going to be
2:09:05
using python in this course if you don't know how to code or you're interested in this content but know a
2:09:11
language other than python check the notes section of this video for more information while the code will be in
2:09:17
python the concepts are universal and more importantly simple enough that you should have no issue following along in
2:09:24
your favorite programming language and to get started click on the launch workspaces button on the video page that
2:09:31
you're watching right now this should spin up an instance of a treehouse workspace an in-browser coding
2:09:38
environment right now your workspace should be empty and that's expected so let's add a new file in here i'm going
2:09:44
to go to file new file and we'll call this arrays dot py
2:09:51
creating a list in python is quite simple so we'll call this new underscore list
2:09:57
we use a set of square brackets around a set of values to create a list so one and we comma separate them so space two
2:10:06
and space three this allocates a base amount of memory for the array to use or
2:10:12
when i say array know that in python i mean a list since this is python the values aren't
2:10:18
stored in the list's memory directly instead the values 1 2 and 3 are stored
2:10:23
elsewhere in memory and the array stores references to each of those objects
2:10:28
to access a value we use a subscript along with an index value so to get the
2:10:34
first value we use the index 0 and if we were to assign this to another variable
2:10:40
we would say result equal new list we write out new list since this is the
2:10:46
array that we're accessing the value from and then a subscript notation which is a square bracket
2:10:51
and then the index value as we saw since the array has a reference to the base location in memory
2:10:59
the position of any element can be determined pretty easily we don't have to iterate over the entire
2:11:05
list all we need to do is a simple calculation of an offset from the base
2:11:11
memory since we're guaranteed that the memory is contiguous for this reason access is a constant
2:11:18
time operation on an array or a python list this is also why an array crashes if you
2:11:24
try to access a value using an index that is out of bounds of what the array stores
2:11:30
if you've used an array before you've undoubtedly run into an error or a crash where you try to access a value using an
2:11:37
index that was larger than the number of elements in the array since the array calculates the memory address on the fly
2:11:45
when you access a value with an out of bounds index as it's called the memory address returned is not one that's part
2:11:52
of the array structure and therefore cannot be read by the array now in python this is represented by an index
2:11:58
error and we can make this happen by using an index we know our array won't contain
2:12:04
now i'm writing out my code here inside of a text editor which obviously doesn't run the code so let's drag up this
2:12:10
console area here and i'm going to write python to bring up the python interpreter
2:12:18
and in here we can do the same thing so i can say new list equal one
2:12:23
comma two comma three and now this is an interpreter so it's actually going to evaluate our code
2:12:29
all right so now we have a new list if i type out new list it gets printed out into the console
2:12:34
okay i can also do new list square bracket 0 and you'll see that i get the value 1 which is the value stored at the
2:12:41
zeroth index now to highlight that index error we can do new list
2:12:47
and inside the square brackets we can provide an index that we know our array doesn't contain so here i'll say index
2:12:54
10 and if i hit enter you'll see it say index error list index out of range
2:13:00
and those are the basics of how we create and read values from an array in the next video let's take a look at
2:13:06
searching in the last video we learned what happens under the hood when we create an array and read a value using an index
2:13:13
in this video we're going to look at how the remaining data structure operations work on arrays
2:13:19
if you took the introduction to algorithms course we spent time learning about two search algorithms linear
2:13:25
search and binary search while arrays are really fast at accessing values they're pretty bad at
2:13:30
searching taking an array as is the best we can do is use linear search for a worst case
2:13:37
linear runtime linear search works by accessing and reading each value in the list until the element in question is
2:13:44
found if the element we're looking for is at the end of the list then every single element in the list will have been
2:13:50
accessed and compared even though accessing and comparing are constant time operations having to do
2:13:56
this for every element results in an overall linear time let's look at how search works in code
2:14:03
in python we can search for an item in an array in one of two ways
2:14:08
we can use the in operator to check whether a list contains an item so i can
2:14:13
say if one in new underscore list
2:14:19
then print true the in operator actually calls a
2:14:25
contains method that is defined on the list type which runs a linear search
2:14:31
operation in addition to this we can also use a for loop to iterate over the list
2:14:36
manually and perform a comparison operation so i can say
2:14:42
for n in new list
2:14:48
if n equals one then print true
2:14:53
and then after that break out of the loop this is more or less the implementation of linear search
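collected into one runnable snippet the two approaches we just typed look like this

```python
new_list = [1, 2, 3]

# Option 1: the `in` operator, which calls the list's __contains__
# method and runs a linear search under the hood.
if 1 in new_list:
    print(True)

# Option 2: an explicit loop -- linear search spelled out by hand.
for n in new_list:
    if n == 1:
        print(True)
        break
```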
2:14:59
if the array were sorted however we could use binary search but because sort operations incur a cost of their own
2:15:07
languages usually stay away from sorting the list and running binary search since for smaller arrays linear search on its
2:15:13
own may be faster now again remember that since this is an editor this is just a
2:15:19
text file none of these lines of code are evaluated so you can try that out in here so we'll copy that we can come down
2:15:26
here and say python and hit enter and then when it starts up we can paste in our list
2:15:32
and now we can try what we just did so if one in new list
2:15:38
print true and there you go it prints true now because we've already learned about
2:15:44
linear and binary search in a previous course there's nothing new going on here
2:15:50
what's more interesting to look at in my opinion is inserting and deleting values in an array let's start with inserting
2:15:57
in general most array implementations support three types of insert operations
2:16:03
the first is a true insert using an index value where we can insert an
2:16:08
element anywhere in the list this operation has a linear runtime imagine
2:16:13
you wanted to insert an item at the start of the list when we insert into the first position what happens to the
2:16:20
item that is currently in that spot well it has to move to the next spot at index value one what happens to the
2:16:27
second item at index position one that one moves to the next spot at index position two
2:16:33
this keeps happening until all elements have been shifted forward one index position
2:16:39
so in the worst case scenario inserting at the zeroth position of an array every
2:16:44
single item in the array has to be shifted forward and we know that any operation that involves iterating
2:16:51
through every single value means a linear runtime now the second way we can insert an item
2:16:57
into an array is by appending appending although technically an insert operation
2:17:03
in that it inserts an item into an existing array doesn't incur the same runtime cost because appends simply add
2:17:10
the item to the end of the list we can simplify and say that this is a constant time
2:17:18
operation but it depends on the language implementation of array to highlight why that matters let's
2:17:24
consider how lists in python work in python when we create a list the list
2:17:30
doesn't know anything about the size of the list and how many elements we're going to store
2:17:35
creating a new empty list like so so numbers equal a set of empty brackets
2:17:42
so this creates a list and allocates a space of size n plus one
2:17:48
since n here is zero there are no elements in this array in this list
2:17:53
space is allocated for a one element list to start off because the space allocated for the list
2:17:59
and the space used by the list are not the same what do you think happens when we ask
2:18:05
python for the length of this list so i can say len numbers
2:18:10
we correctly get 0 back this means that the list doesn't use the memory allocation as an indicator of its
2:18:17
size because as i mentioned it has allocated space for a one element list
2:18:22
but it returns zero so it determines it in other ways okay so numbers this list currently has
2:18:29
space for one element let's use the append method defined on the type to insert a number at the end
2:18:36
of the list so you can say numbers dot append and i'll pass in 2.
2:18:42
now the memory allocation and the size of the list are the same since the list
2:18:47
contains one element now what if i were to do something like this numbers.append
2:18:53
there needs to be a dot and i'll add another value 200.
2:18:59
now since the list only has an allocation for one item at this point before it can add the new element to the
2:19:06
list it needs to increase the memory allocation and thereby the size of the list it does this by calling a list
2:19:13
resize operation list resizing is quite interesting because it shows the
2:19:18
ingenuity in solving problems like this python doesn't resize the list to
2:19:24
accommodate just the element we want to add instead in this case it would allocate
2:19:30
four blocks of memory to increase the size to a total of four contiguous blocks of memory
2:19:36
it does this so that it doesn't have to resize the list every single time we add an element but at very specific points
2:19:44
the growth pattern of the list type in python is 0 4 8 16 25 35 46 and so on
2:19:54
this means that as the list size approaches these specific values resize is called again if you look at
2:20:02
when the size of the list is four this means that when appending four more values until the size of eight
2:20:09
each of those append operations do not increase the amount of space taken
2:20:14
at specific points however when resizing is triggered space required increases as
2:20:20
memory allocation increases this might signify that the append method has a non-constant space
2:20:27
complexity but it turns out that because some operations don't increase space and
2:20:32
others do when you average all of them out append operations take constant space
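if you want to watch the resizing happen you can use sys.getsizeof which reports a list's current allocation in bytes the exact numbers vary by python version and platform so treat this as an illustrative sketch

```python
import sys

numbers = []
previous_size = sys.getsizeof(numbers)
print('empty list:', previous_size, 'bytes')

# Most appends reuse the existing allocation; only at specific
# lengths does a resize kick in and the byte count jump.
for n in range(20):
    numbers.append(n)
    size = sys.getsizeof(numbers)
    if size != previous_size:
        print('resized at length', len(numbers), '->', size, 'bytes')
        previous_size = size
```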
2:20:39
we say that it has an amortized constant space complexity this also happens with
2:20:45
insert operations if we had a four element array we would have four elements and a memory
2:20:50
allocation of four an insert operation at that point doesn't matter where it happens on the
2:20:55
list but at that point it would trigger a resize inserting is still more expensive though
2:21:01
because after the resize every element needs to be shifted over one the last insert operation that is
2:21:08
supported in most languages is the ability to add one list to another
2:21:13
in python this is called an extend and looks like this so i'll say numbers now let me
2:21:19
actually clear out the console first we'll exit python
2:21:25
we'll clear this out so we're back at the top and we'll start again so i'll say numbers
2:21:31
and we'll set it to an empty list and now we can say numbers dot extend
2:21:37
and as an argument we're going to pass in a new list entirely so here we'll say 4 comma 5 comma 6
2:21:45
and then once i hit enter if i were to print out numbers you'll see that it now contains the values 4 5 and 6.
2:21:52
so extend takes another list to add extend effectively makes a series of
2:21:58
append calls on each of the elements in the new list until all of them have been
2:22:03
appended to the original list this operation has a run time of big o of k
2:22:09
where k represents the number of elements in the list that we're adding to our existing list
2:22:14
the last type of operation we need to consider are delete operations deletes are similar to inserts in that when a
2:22:22
delete operation occurs the list needs to maintain correct index values so
2:22:27
where an insert shifts every element to the right a delete operation shifts every element to the left
2:22:34
just like an insert as well if we delete the first element in the list every single element in the list needs to be
2:22:40
shifted to the left delete operations have an upper bound of big o of n also known as a linear
2:22:47
runtime now that we've seen how common operations work on a data structure that we're quite familiar with let's switch
2:22:54
tracks and build our own data structure [Music]
2:23:02
over the next few videos we're going to build a data structure that you may have worked with before a linked list
2:23:09
before we get into what a linked list is let's talk about why we build data structures instead of just using the
2:23:14
ones that come built into our languages each data structure solves a particular
2:23:19
problem we just went over the basics of the array data structure and looked at the cost of common operations that we carry
2:23:26
out on arrays we found that arrays were particularly good at accessing reading values happens
2:23:32
in constant time but arrays are pretty bad at inserting and deleting both of which run in linear time
2:23:38
linked lists on the other hand are somewhat better at this although there are some caveats and if we're trying to
2:23:44
solve a problem that involves far more inserts and deletes than accessing a linked list can be a better tool than an
2:23:51
array so what is a linked list a linked list is a linear data structure
2:23:57
where each element in the list is contained in a separate object called a node a node models two pieces of
2:24:04
information an individual item of the data we want to store and a reference to the next node in the list
2:24:11
the first node in the linked list is called the head of the list while the last node is called the tail
2:24:17
the head and the tail nodes are special the list only maintains a reference to the head although in some
2:24:23
implementations it keeps a reference to the tail as well this aspect of linked lists is very
2:24:29
important and as you'll see most of the operations on the list need to be implemented quite differently compared
2:24:34
to an array the opposite of the head the tail denotes the end of the list
2:24:40
every node other than the tail points to the next node in the list but tail doesn't point to anything this is
2:24:47
basically how we know it's the end of the list nodes are what are called self-referential objects the definition
2:24:54
of a node includes a link to another node and self-referential here means the
2:24:59
definition of node includes the node itself linked lists often come in two forms a singly linked list where each
2:25:06
node stores a reference to the next node in the list or a doubly linked list
2:25:11
where each node stores a reference to both the node before and after if an array is a train with a bunch of cars in
2:25:19
order then a linked list is like a treasure hunt when you start the hunt you have a piece
2:25:24
of paper with the location of the first treasure you go to that location and you find an item along with a location to
2:25:31
the next item of treasure when you finally find an item that doesn't also include a location you know
2:25:37
that the hunt has ended now that we have a high level view of what a linked list is let's jump into
2:25:42
code and build one together we'll focus on building a singly linked list for this course there are advantages to
2:25:49
having a doubly linked list but we don't want to get ahead of ourselves
2:25:54
let's start here by creating a new file we're going to put all our code for our
2:26:00
linked list so we'll call this linked underscore list dot py and first we're going to create a
2:26:08
class to represent a node say class
2:26:13
node now node is a simple object in that it won't model much so first we'll add a
2:26:21
data variable it's an instance variable here called data and we'll assign the value none
2:26:27
initially and then we'll add one more we'll call this next node and to this we'll assign
2:26:33
none as well so we've created two instance variables data to hold on to the data that we're storing and next
2:26:41
node to point to the next node in the list now we need to add a constructor to make
2:26:46
this class easy to create so we'll add an init method here that takes self and some
2:26:53
data to start off and all we're going to do is assign data to that instance variable we
2:26:59
created so that's all we need to model node before we do anything else though let's
2:27:05
document this so right after the class definition let's create a docs string so
2:27:10
three quotes next line and we'll say an object for storing
2:27:16
a single node of a linked list
2:27:22
and then on the next line we'll say models two attributes
2:27:28
data and the link to the next
2:27:34
node in the list and then we'll close this doc string off
2:27:39
with three more quotation marks okay using the node class is fairly straightforward so we can create a new
2:27:46
instance of node with some data to store now the way we're going to do this is we're going to bring up the console
2:27:51
and we're going to type out like we've been typing out before python followed by the name of the script that we wrote
2:27:57
which is linked underscore list dot py but before we do that we're going to pass an argument to the python
2:28:03
command we're going to say dash i so python dash i and then the name of the script
2:28:09
linked underscore list dot py so what this does is this is going to run the
2:28:14
python repl the read evaluate print loop in the console but it's going to load the
2:28:19
contents of our file into that so that we can use it so i'll hit enter and we have a new
2:28:25
instance going and now we can use the node in here so we can say n1 equal node
2:28:31
and since we defined that constructor we can pass it some data so we'll say 10 here
2:28:36
now if we try to inspect this object the representation returned isn't very useful
2:28:42
which will make things really hard to debug as our code grows so for example if i type out n1 you'll see that
2:28:49
we have a valid instance here but it's not very helpful the way it's printed out so we can customize this by adding a
2:28:56
representation of the object using the repr function now in the terminal still we'll type out exit
2:29:02
like that hit enter to exit the console and then down here
2:29:08
let's add in some room okay and here we'll say def
2:29:14
double underscore repr another set of double underscores and then this function takes the
2:29:20
argument self and in here we can provide a string representation of what we want printed
2:29:26
to the console when we inspect that object inside of a console
2:29:32
so here we'll say return again this is a string representation so inside quotes we'll say
2:29:38
node so this represents a node instance and the data it contains here we'll say
2:29:44
percent s which is a python way of substituting something into a string string
2:29:51
interpolation and outside of the string we can say percent again and here we're saying we want to replace
2:29:57
this percent s with self.data okay let's hit save and before we move on
2:30:04
let's verify that this works so i'm going to come in here type clear to get rid of everything
2:30:11
and then we'll do what we did again and you can just hit the up arrow a couple times to get that command
2:30:17
all right so hit enter and now just so you know every time you run this you start off you know from scratch so n1
2:30:23
that we created earlier not there anymore so let's go ahead and create it n1 equal node
2:30:29
10 and we can type n1 again and hit enter and you have a much better representation now so we can see that we
2:30:35
have a node and it contains the data 10. we can also create another one n2 equal
2:30:40
node that contains the data 20 and now we can say n1.nextnode
2:30:46
equal n2 so n1 now points to n2 and if we say n1.nextnode
2:30:53
you'll see that it points to that node the node containing 20. nodes are the building blocks for a list
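for reference here's the node class as we've built it so far with the repr method included the exact string format is just one readable option

```python
class Node:
    """
    An object for storing a single node of a linked list.
    Models two attributes - data and the link to the next node in the list
    """
    data = None
    next_node = None

    def __init__(self, data):
        self.data = data

    def __repr__(self):
        # One readable format; any descriptive string works here.
        return "<Node data: %s>" % self.data
```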
2:31:00
and now that we have a node object we can use it to create a singly linked list so again i'm going to exit out of
2:31:06
this and then go back to the text editor
2:31:12
and here we'll create a new class so class linked list the linked list class is going to define
2:31:20
a head and this attribute models the only node that the list is going to have
2:31:25
a reference to so here we'll say head and we'll assign none initially and then
2:31:30
like we did earlier let's create a constructor so double underscore init double
2:31:35
underscore this takes self and then inside like before we'll say
2:31:41
self dot head equal none this is the same as doing this so we can actually get rid
2:31:48
of that and just use the constructor okay so again this head attribute models
2:31:54
the only node that the list will have a reference to since every node points to
2:31:59
the next node to find a particular node we can go from one node to the next in a
2:32:04
process called list traversal so in the class constructor here we've set the default value of head to none so
2:32:11
that new lists created are always empty again you'll notice here that i didn't explicitly declare the head attribute at
2:32:18
the top of the class definition and don't worry that's not an oversight the self.head in the initializer means
2:32:25
that it's still created okay so that's all there is to modeling a linked list now we can add methods that make it
2:32:32
easier to use this data structure first a really simple docstring to provide some information
2:32:38
so here we'll create a docstring three quotation marks and then we'll say singly linked list
2:32:44
and then close it off a common operation carried out on data structures is checking whether it
2:32:51
contains any data or whether it's empty at the moment to check if a list is empty we would need to query these
2:32:58
instance variables head and so on every time ideally we would like to not expose the
2:33:04
inner workings of our data structure to code that uses it instead let's make this operation more
2:33:10
explicit by defining a method so we'll say def is empty
2:33:16
and this method takes self as an argument and here we'll say return self.head double equal none
2:33:24
all we're doing here is checking to see if head is none if it is this condition evaluates to
2:33:29
true which indicates the list is empty now before we end this video let's add one more convenience method to calculate
2:33:37
the size of our list the name convenience method indicates that what this method is doing is not providing
2:33:43
any additional functionality that our data structure can't handle right now but instead making existing
2:33:49
functionality easier to use we could calculate the size of our linked list by traversing it every time
2:33:56
using a loop until we hit a tail node but doing that every time is a hassle
2:34:01
okay so we'll call this method size and as always it takes self
2:34:07
unlike calling len on a python list not to be confused with a linked list which
2:34:12
is a constant time operation our size operation is going to run in linear time
2:34:18
the only way we can count how many items we have is to visit each node and call
2:34:23
next until we hit the tail node so we'll start by getting a reference to the head we'll say current
2:34:30
equal self.head let's also define a local variable named count with an
2:34:36
initial value of 0 that will increment every time we visit a node once we hit
2:34:42
the tail count will reflect the size of that list next we'll define a while loop that will
2:34:48
keep going until there are no more nodes so say while current
2:34:53
while current is the same as writing out while current does not equal none but
2:34:59
it's more succinct so we'll go with the former if the latter is more precise for you you can go with that
2:35:05
now inside this loop we'll increment the count value so count plus equal one plus equal if you haven't encountered it
2:35:12
before is the same as writing count equal count plus one so if count is zero initially so it's zero plus one is one
2:35:19
and then we'll assign that back to count okay so count plus equal one
2:35:24
next we're going to assign the next node in the list to current so current equal
2:35:30
current dot next node this way once we get to the tail and
2:35:36
call next node current will equal none and the while loop terminates so at the end
2:35:41
we can return count as you can see we need to visit every
2:35:46
node to determine the size meaning our algorithm runs in linear time so let's document this
2:35:52
up in our docs string which we'll add now to size we'll say
2:35:57
returns the number of nodes in the list
2:36:02
takes linear time let's take a break here
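here's the linked list class as it stands so far collected in one place

```python
class LinkedList:
    """
    Singly linked list
    """
    def __init__(self):
        self.head = None

    def is_empty(self):
        # The list is empty when head points at nothing.
        return self.head == None

    def size(self):
        """
        Returns the number of nodes in the list
        Takes linear time
        """
        current = self.head
        count = 0
        while current:
            count += 1
            current = current.next_node
        return count
```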
2:36:08
we can now create lists check if they're empty and check the size in the next video let's start
2:36:13
implementing some common operations at the moment we can create an empty list but nothing else let's define a
2:36:20
method to add data to our list technically speaking there are three ways we can add data to a list
2:36:26
we can add nodes at the head of the list which means that the most recent node we created will be the head and the first
2:36:32
node we created will be the tail or we could flip that around most recent nodes are the tail of the list and the
2:36:38
first node to be added is the head i mentioned that one of the advantages of linked lists over arrays is that
2:36:44
inserting data into the list is much more efficient than into an array this is only true if we're inserting at
2:36:50
the head or the tail technically speaking this isn't an insert and you'll often see this method
2:36:56
called add or prepend if the data is added to the head or append if it's added to the tail
2:37:02
a true insert is where you can insert the data at any point in the list which is our third way of adding data we're
2:37:09
going to circle back on that if we wanted to insert at the tail then the list needs a reference to the tail node
2:37:15
otherwise we would have to start at the head and walk down the length of the list or traverse it to find the tail
2:37:22
since our list only keeps a reference to the head we're going to add new items at the head of the list
2:37:29
now before we add our new method i forgot that i didn't show you in the last video how to actually use the code
2:37:36
we just added and how to check that it works correctly every time we add new code
2:37:41
so like before we're gonna bring up the console and here we're gonna say python dash i
2:37:47
linked underscore list dot pi which should load it load the contents of our file
2:37:53
and now we'll start here by creating a linked list so l equal linked list
2:37:59
and then we'll use a node so n1 equal node
2:38:04
with the value 10 and now we can assign n1 to the nodes or
2:38:09
to the linked list's head attribute so l dot head equal n1 and then
2:38:15
we can see if size works correctly so if we call l dot size and since this is a
2:38:21
method we need a set of parentheses at the end and enter you'll see that we get back one correctly okay so it works
2:38:29
now let's add our new method which we're going to call add add is going to accept some data to add
2:38:36
to the list inside of a node so we'll say def
2:38:41
add and every python method takes self as an argument and then we want to add some
2:38:47
data to this node so we're going to say data for the second argument inside the method first we'll create a
2:38:53
new node to hold on to the data so new underscore node equal
2:38:59
node with the data before we set the new node as the head of the list we need to point the new
2:39:05
node's next property at whatever node is currently at head this way when we set
2:39:10
the new node as the head of the list we don't lose a reference to the old head so new underscore node dot next node
2:39:20
equal self.head now if there was no node at head this
2:39:25
correctly sets next node to none now we can set the new node as the head
2:39:31
of the list so say self.head equal new underscore node because the insert
2:39:39
operation is simply a reassignment of the head and next node properties this is a constant time operation so let's
2:39:46
add that in as a docs string first what the method does so it adds a
2:39:52
new node containing data at the head of the list
2:40:02
this operation takes constant time which is our best case scenario
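written out in full the add method looks like this

```python
def add(self, data):
    """
    Adds a new Node containing data at the head of the list
    This operation takes constant time
    """
    new_node = Node(data)
    # Point the new node at the current head first, so we never
    # lose the reference to the rest of the list.
    new_node.next_node = self.head
    self.head = new_node
```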
2:40:07
okay let's test this out so i'm going to bring the console back up we'll exit out
2:40:13
of our current repl and we'll load the contents of the file again
2:40:20
and now we don't need to create a node like we did earlier so we can say l equal linked
2:40:26
list l.add one okay let's see if this works we'll call
2:40:32
size and if it worked the linked list should now have a size of one there we go you can also do
2:40:39
l dot add 2 l dot add 3 and l dot size should now be three there
2:40:47
we go now if i were to type l and hit enter again what we get in the repl is
2:40:53
nothing useful so like before we'll implement the repr function for our linked list
2:41:00
now i'm just going to copy paste this in and we'll walk through it okay so this is what our implementation
2:41:08
of repr looks like for the linked list object you can grab this code from the notes section of this video
2:41:15
okay so at the top you'll see a docs string where it says it returns a string representation of the list and like
2:41:20
everything we need to do with a linked list we need to traverse it so this is going to take linear time we start by
2:41:27
creating an empty list now i need to distinguish this is a python list not a linked list so we create an empty list
2:41:34
called nodes and two nodes we're going to add strings that have a description that provide a description of each node
2:41:41
but we're not going to use the description that we implemented in the node class because we're going to customize it a bit here
2:41:48
next we start by assigning self.head to current so we sort of have a pointer to
2:41:53
the head node as long as current does not equal none which means we're not at the tail we're going to implement some
2:42:00
logic so in the first scenario if the node assigned to current is the same as the
2:42:05
head then we're going to append this string to our nodes list
2:42:11
and the string is simply going to say that hey this is a head node and it contains some data which
2:42:17
will extract using current.data next scenario is if the node assigned to
2:42:23
current's next node is none meaning we're at the tail node then we'll assign
2:42:28
a different kind of string so it's the same as earlier except we're saying tail here and then finally in any other
2:42:33
scenario which means we're not at the head or the tail we'll simply print the node's value inside and again
2:42:40
we'll extract it using current.data with every iteration of the loop we'll move current forward by calling
2:42:46
current.nextnode and reassigning it and then at the very end when we're done we'll join all the strings that are
2:42:52
inside the nodes list together using the python join method and we'll say that
2:42:59
with every join so when you join these two strings together to make one string you need to put this set of characters
2:43:05
in between
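the code we pasted in should look roughly like this the exact label strings and the arrow separator are presentation choices

```python
def __repr__(self):
    """
    Returns a string representation of the list
    Takes O(n) time
    """
    nodes = []
    current = self.head
    while current:
        if current is self.head:
            nodes.append("[Head: %s]" % current.data)
        elif current.next_node is None:
            nodes.append("[Tail: %s]" % current.data)
        else:
            nodes.append("[%s]" % current.data)
        current = current.next_node
    # Glue the per-node strings together with a separator.
    return '-> '.join(nodes)
```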
2:43:11
all right so let's see what this looks like so i'm going to come down here exit out of the console again clear it out load the contents of the file again and let's try that so we'll say l equal
2:43:18
linked list all right so l dot add one l dot add two
2:43:25
l dot add three that seems enough and then now if i type out l and hit enter we get a nice string
2:43:32
representation of the list so you can see that we add every new node to the head so we added one first one ends up
2:43:39
being the tail because it keeps getting pushed out then two and then finally three so three
2:43:44
is at the head so far we've only implemented a single method which functions much like the
2:43:50
append method on a python list or an array except it adds it to the start of the
2:43:57
linked list it prepends it like append this happens in constant time in the next video let's add the
2:44:04
ability to search through our list for the search operation we're going to define a method that takes a value to
2:44:10
search for and returns either the node containing the value if the value is found or none if it isn't
2:44:17
so right after actually you know what we'll make sure repr is the last function our last method
2:44:24
in our class so we'll add it above it so here we'll say def search self
2:44:30
and then key in the last video we implemented the repr method to provide a string
2:44:36
representation of the list so we're going to use similar logic here to implement the search function we'll
2:44:43
start by setting a local variable current to point to the head of the list
2:44:49
while the value assigned to current is a valid node that is it isn't none
2:44:54
we'll check if the data on that node matches the key that we're searching for
2:44:59
so while current we'll say if current.data
2:45:05
is the key then we'll return current if it does match we'll go ahead and
2:45:11
return it like we've done here but if it doesn't we'll assign the next node in
2:45:16
the list to current and check again so say else current equal current dot next node
2:45:26
once we hit the tail node and haven't found the key current gets set to none and the while
2:45:32
loop exits at this point we know the list doesn't contain the key so we can return
2:45:39
none okay that completes the body of our method let's add a docs string to document this
2:45:46
so up at the top we'll say search for the first node
2:45:52
containing data that matches
2:45:57
the key now this is important because if our linked list contains more than one node
2:46:02
with the same value it doesn't matter we're going to return the first one with this implementation
2:46:08
we'll also say here that it returns the node or none
2:46:13
if not found in the worst case scenario we'll need to check every single node in the list
2:46:20
before we find the key or fail and as a result this operation runs in linear
2:46:25
time so i'll say takes o of n or linear time
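here's the search method in one piece note that the final return none sits outside the while loop an indentation detail that matters as we're about to see

```python
def search(self, key):
    """
    Search for the first node containing data that matches the key
    Returns the node or `None` if not found
    Takes O(n) or linear time
    """
    current = self.head
    while current:
        if current.data == key:
            return current
        else:
            current = current.next_node
    # Only reached once we've walked past the tail without a match.
    return None
```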
2:46:33
so far we haven't seen anything that indicates this data structure has any advantage over an array or a python list
2:46:40
but we knew that i mentioned the strength of linked lists comes in inserts and deletes at specific
2:46:47
positions we'll check that out in the next video but as always before we end this one let's make sure everything
2:46:54
works so we'll load the contents of the file again
2:47:01
l equal linked list and then we'll say l.add 10
2:47:08
l dot add 2 the exact value doesn't matter l dot add
2:47:13
45 and one more l dot add 15. now we can say
2:47:18
l.search and we need to give it a value so we'll say 45 and this returns a node
2:47:24
or none so we'll say n equal and then we'll hit enter if this works
2:47:30
n should be a node okay weirdly n does not work here
2:47:36
at least it says it's not a node which means i made a mistake in typing out our code and looking at it immediately it's
2:47:42
fairly obvious so this return none needs to be outside of the while loop okay so
2:47:48
i'm going to hit save now so make sure it's on the same indentation here which means it's outside the while loop
2:47:54
and then we'll run through this again okay so l is linked list
2:48:02
l.add 10 l dot add 2 l.add
2:48:08
45 and what was the last one we did i believe it was 15 and now we should be able to say
2:48:14
l.search remember we're assigning this to a node to a variable so l.search
2:48:21
45 and there you go we get that node back and we can hit l
2:48:27
and we'll see a representation of our list okay so again in the next video inserts
2:48:32
and deletes at specific positions insert operations on linked lists are quite interesting
2:48:39
unlike arrays where when you insert an element into the array all elements after the particular index need to be
2:48:45
shifted with a linked list we just need to change the references to next on a
2:48:50
few nodes and we're good to go since each node points to the next one by swapping out these references we can
2:48:56
insert a node at any point in the list in constant time much like binary search though there's a
2:49:02
catch to find the node at that position we want to insert we need to traverse the
2:49:08
list and get to that point we just implemented our search algorithm for the linked list type and we know
2:49:14
that this runs in linear time so while actually inserting is fast finding the position in the list you want to insert
2:49:21
it is not this is why i mentioned that there were some caveats to inserting anyway let's see what this looks like in
2:49:28
code we'll define a method named insert that takes data to insert along with an index
2:49:34
position so we'll do this after search right here say def
2:49:40
insert and this takes some data to insert and a
2:49:46
position to insert it at you may be thinking wait a minute linked
2:49:52
lists don't have index positions right and you're correct but we can mimic that behavior by just counting the number of
2:49:59
times we access next node if the index value passed into this argument is 0 that means we want to
2:50:06
insert the new node at the head of the list this is effectively the same behavior as calling add which means the
2:50:13
logic is the same so we don't need to repeat it we can call the add method we wrote earlier so we'll say if
2:50:20
index if index equals 0 or if index is 0 then self dot add
2:50:28
data if the index is greater than 0 then we need to traverse the list to find the
2:50:35
current node at that index so if index is greater than zero
2:50:40
now before we do that we need to create a new node containing the data we want to insert so we'll say new equal node
2:50:48
with some data i'm going to assign index the argument passed to our function to a local
2:50:55
variable named position and the head of the list to a variable named current
2:51:00
position equal index current equal self.head
2:51:07
every time we call current.nextnode meaning we're moving to the next node in the list we'll decrease
2:51:14
the value of position by 1. when position is zero we'll have arrived
2:51:19
at the node that's currently at the position we want to insert in in reality though we don't want to
2:51:24
decrease it all the way to zero imagine we have a list with five nodes and we want to insert a node at position
2:51:32
3. to insert a node at position 3 we need to modify the nodes at positions 2
2:51:37
and 3. node 2's next node attribute is going to point to the new node and the new node's
2:51:44
next node attribute will point to node 3. in this way an insert is a constant time
2:51:49
operation we don't need to shift every single element we just modify a few next node references
2:51:56
in a doubly linked list we can use node 3 to carry out both of these operations
2:52:02
node 3 in a doubly linked list would have a reference to node 2 and we can use this reference to modify all the
2:52:09
necessary links in a singly linked list though which is what we have if we kept decreasing
2:52:15
position until we're at 0 we arrive at node 3. we can then set the new node's next node
2:52:22
property to point to node 3 but we have no way of getting a reference to node 2
2:52:27
which we also need for this reason it's easier to decrease position to just 1 when it equals 1 and
2:52:34
stop at node 2. so in here we'll say while
2:52:40
position is greater than one now while the position is greater than one we'll keep calling next node and
2:52:47
reassigning the current node so current equal current dot next
2:52:53
node and at the same time we'll decrement position so position
2:52:58
equal to position minus one which you can also succinctly
2:53:04
write as minus equal one this way when the position equals one
2:53:10
the loop exits and current will refer to the node at the position before the
2:53:15
insert point so outside the while loop we'll say previous equal current
2:53:22
and next equal current dot next node to make things more clear what i've done
2:53:29
here is name the node before the new one previous and the node after the new one
2:53:34
next all that's left to do now is to insert the new node between previous and next
2:53:39
so we'll say previous dot next node equal
2:53:45
new and then new dot next node equal next
2:53:52
now it seems like there's an issue with variable naming here and i'm most probably conflicting with python's built-in
2:53:58
next function so let's actually go ahead and call this next node and
2:54:04
previous node so that we don't mess things up here previous node
2:54:12
so the dot next node is obviously the attribute on a node but this is just a local variable let's document this
2:54:18
method so up at the top we'll add a docs string and it will say inserts a new node
2:54:26
containing data at index position
2:54:33
insertion takes constant time
2:54:39
but finding the node at the insertion point
2:54:46
takes linear time
2:54:52
let's add this to the next line there we go and then we'll say therefore it takes an
2:54:58
overall linear time
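put together the insert method looks like this

```python
def insert(self, data, index):
    """
    Inserts a new Node containing data at index position
    Insertion takes O(1) time but finding the node at the
    insertion point takes O(n) time.
    Takes overall O(n) time.
    """
    if index == 0:
        self.add(data)

    if index > 0:
        new = Node(data)
        position = index
        current = self.head

        # Stop one node before the insert point so we keep a
        # reference to the node whose next_node we must rewire.
        while position > 1:
            current = current.next_node
            position -= 1

        prev_node = current
        next_node = current.next_node

        prev_node.next_node = new
        new.next_node = next_node
```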
2:55:03
this is why even though we can easily insert a new node without having to shift the rest ultimately adding to
2:55:10
either the head or the tail if you have a reference is much more efficient we have one more operation to add to our
2:55:17
linked list that will make it a robust data structure much like inserts removing a node is
2:55:22
actually quite fast and occurs in constant time but to actually get to the node that we want to remove and modify
2:55:29
the next connections we need to traverse the entire list in our worst case so in the worst case this takes linear time
2:55:36
let's add this operation to our data structure there are two ways we can define the
2:55:41
remove method one where we provide a key to remove as an argument and one where
2:55:46
we provide an index now in the former the key refers to the data the node
2:55:52
stores so in order to remove that node we would first need to search for data that matches the key i'm going to
2:55:58
implement that first method which we'll call remove and i'll leave it up to you to get some practice in and implement a
2:56:05
remove at index method to complete our data structure so we'll add this after the insert method right here
2:56:14
remove is going to accept a key which we'll need to search for before we can remove
2:56:20
a node earlier we defined a search method that found a node containing data that matches a key but we can't use that
2:56:27
method as is for the implementation of remove when we remove a node much like
2:56:32
the insert operation we need to modify the next node references the node before the match needs to point
2:56:39
to the node after the match if we use the search method we defined earlier we get the node we want to
2:56:45
remove as a return value but because this is a singly linked list we can't
2:56:51
obtain a reference to the previous node like i said earlier if this was a doubly linked list we could use the search
2:56:57
method since we would have a reference to that previous node we'll start here by setting a local
2:57:03
variable named current to point to the head let's also define a variable named
2:57:09
previous that will set to none to keep track of the previous node as we traverse the
2:57:15
list finally let's declare a variable named found that we'll set to false
2:57:21
found is going to serve as a stopping condition for the loop that we'll define
2:57:26
we'll use the loop to keep traversing the linked list as long as found is false meaning we haven't found the key
2:57:33
that we're looking for once we've found it we'll set found to true and the loop terminates so let's set up our loop so
2:57:39
we'll say while current and not found
2:57:46
here we're defining a while loop that contains two conditions first we tell the loop to keep iterating
2:57:53
as long as current does not equal none when current equals none this means
2:57:59
we've gone past the tail node and the key doesn't exist the second condition asks the loop to
2:58:05
keep evaluating as long as not found equals true now this might be tricky
2:58:11
because it involves a negation here right now found is set to false so not
2:58:16
found not false equals true this not operator flips the value
2:58:22
when we find the key and we set found to true not found will equal false
2:58:29
then and the loop will stop the end in the while loop means that both conditions current being a valid
2:58:36
node and not found equalling true both have to be true if either one of them evaluates to false
2:58:42
then the loop will terminate now inside the loop there are three situations that we can run into
2:58:48
first the key matches the current node's data and current is still at the head of the list
2:58:54
this is a special case because the head doesn't have a previous node and it's the only node being referenced by the
2:59:00
list let's handle this case so we'll say if current.data
2:59:05
double equals the key and current is self.head which you can write out as
2:59:12
current double equals self.head or current is self.head now if we hit this case
2:59:18
we'll indicate that we found the key by setting found to true and then this means that on the next
2:59:24
pass this is going to evaluate to false because not true will be false
2:59:31
and then the loop terminates once we do that we want to remove the current node and since it's the head node all we need
2:59:38
to do is point head to the second node in the list which we can get by referencing the next node attribute on
2:59:45
current self.head equal current.nextnode
2:59:50
so when we do this there's nothing pointing to that first node so it's automatically removed the next scenario
2:59:57
is when the key matches data in the node and it's a node that's not the head so here we'll say else if current dot
3:00:05
data equal key if the current node contains the key
3:00:10
we're looking for we need to remove it to remove the current node we need to go to the previous node and modify its next
3:00:17
node reference to point to the node after current but first we'll set found
3:00:23
to true and then we'll switch out the references so previous.nextnode
3:00:29
equal current.nextnode so far we haven't written any code to
3:00:34
keep track of the previous node we'll do that in our else case here
3:00:40
so if we hit the else case it means that the current node we're evaluating doesn't contain the data that matches
3:00:46
the key so in this case we'll make previous point to the current node and then set current to the next node so
3:00:52
previous equal current and current equal current.nextnode
3:00:59
and that's it for the implementation of remove now we're not doing anything at the
3:01:04
moment with the node we're removing but it's common for remove operations to return the value being removed so at the
3:01:10
bottom outside the while loop let's return
3:01:16
current and with that we have a minimal implementation of a linked list and your
3:01:21
first custom data structure how cool is that there's quite a bit we can do here to
3:01:26
improve our data structure particularly in making it easy to use but this is a good place to stop
3:01:33
before we move on to the next topic let's document our method so at the top another docstring
3:01:39
and here we'll say removes node containing data that matches the key
3:01:46
also it returns the node or none if the key doesn't exist
3:01:53
and finally this takes linear time because in the worst case scenario we need to search the entire list
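putting those three cases together, here's a minimal sketch of the remove method as described, assuming the same linked list class we've been building:

```python
# inside the LinkedList class
def remove(self, key):
    """Removes the Node containing data that matches the key.

    Returns the node or None if the key doesn't exist.
    Takes linear time because in the worst case we search the entire list.
    """
    current = self.head
    previous = None
    found = False

    while current and not found:
        if current.data == key and current is self.head:
            # match at the head: point head past the current node
            found = True
            self.head = current.next_node
        elif current.data == key:
            # match elsewhere: rewire previous to skip the current node
            found = True
            previous.next_node = current.next_node
        else:
            # no match yet: keep track of previous and keep traversing
            previous = current
            current = current.next_node

    return current
```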
3:01:59
if you'd like to get in some additional practice implementing functionality for linked lists two methods you can work on
3:02:07
are remove at index and node at index to allow you to easily delete or read
3:02:14
values in a list at a given index now that we have a linked list let's
3:02:19
talk about where you can use them the honest answer is not a lot of places
3:02:24
linked lists are really useful structures to build for learning purposes because they're relatively
3:02:29
simple and are a good place to start to introduce the kinds of operations we need to implement for various data
3:02:36
structures it is quite rare however that you will need to implement a linked list on your own
3:02:42
there are typically much better and by that i mean much more efficient data structures that you can use
3:02:48
in addition many languages like java for example provide an implementation of a linked list already
3:02:54
now that we have a custom data structure let's do something with it let's combine the knowledge we have and look at how a
3:03:01
sorting algorithm can be implemented across two different data structures
3:03:07
[Music] now that we've seen two different data
3:03:12
structures let's circle back and apply what we know about algorithms to these new concepts one of the first algorithms you learned
3:03:19
about was binary search and we learned that with binary search there was one precondition the data collection needs
3:03:25
to be sorted over the next few videos let's implement the merge sort algorithm which is one of
3:03:30
many sorting algorithms on both arrays or python lists and the singly linked list we just created
3:03:37
this way we can learn a new sorting algorithm that has real world use cases and see how a single algorithm can be
3:03:44
implemented on different data structures before we get into code let's take a look at how merge sort works
3:03:50
conceptually and we'll use an array to work through this we start with an unsorted array of
3:03:56
integers and our goal is to end up with an array sorted in ascending order
3:04:01
merge sort works like binary search by splitting up the problem into sub problems but it takes the process one
3:04:08
step further on the first pass we're going to split the array into two smaller arrays now in
3:04:14
binary search one of these subarrays would be discarded but that's not what happens here
3:04:19
on the second pass we're going to split each of those subarrays into further smaller evenly sized arrays and we're
3:04:25
going to keep doing this until we're down to single element arrays after that the merge sort algorithm
3:04:32
works backwards repeatedly merging the single element arrays and sorting them
3:04:37
at the same time since we start at the bottom by merging to single element arrays we only need to
3:04:44
make a single comparison to sort the resulting merge array by starting with smaller arrays that are sorted as they
3:04:51
grow merge sort has to execute fewer sort operations than if it sorted the entire
3:04:57
array at once solving a problem like this by recursively breaking down the problem
3:05:02
into subparts until it is easily solved is an algorithmic strategy known as divide and conquer but instead of
3:05:09
talking about all of this in the abstract let's dive into the code this way we can analyze the runtime as we
3:05:15
implement it for our first implementation of merge sort we're going to use an array or a
3:05:21
python list while the implementation won't be different conceptually for a linked list
3:05:27
we will have to write more code because of list traversal and how nodes are arranged so once we have these concepts
3:05:33
squared away we'll come back to that let's add a new file here
3:05:39
we'll call this merge underscore sort dot pi
3:05:45
in our file let's create a new function named merge sort that takes a list and remember when i say list unless i
3:05:52
specify linked list i mean a python list which is the equivalent of an array so
3:05:57
we'll say def merge underscore sort and takes a list
3:06:03
in the introduction to algorithms course we started our study of each algorithm by defining the specific steps that
3:06:10
comprise the algorithm let's write that out as a docstring in here the steps of the algorithm so
3:06:16
that we can refer to it right in our code this algorithm is going to sort the
3:06:22
given list in an ascending order so we'll start by putting that in here as a simple definition
3:06:28
sorts a list in ascending order there are many variations of merge sort
3:06:36
and in the one we're going to implement we'll create and return a new sorted list other implementations will sort the
3:06:43
list we pass in and this is less typical in an operation known as sort in place
3:06:50
but i think that returning a new list makes it easier to understand the code now these choices do have implications
3:06:56
though and we'll talk about them as we write this code for our next bit of the docstring
3:07:02
let's write down the output of this algorithm so returns a new
3:07:07
sorted list merge sort has three main steps the first is the divide step where we
3:07:14
find the midpoint of the list so i'll say divide find the mid point of the list and
3:07:22
divide into sub-lists
3:07:29
the second step is the conquer step where we sort the sub-list that we created in the divide step so we'll say
3:07:35
recursively sort the sub-lists created in previous
3:07:42
step and finally the combine step where we merge these recursively
3:07:48
sorted sub-lists back into a single list so merge the sorted sub-lists
3:07:55
created in previous step when we learned about algorithms we
3:08:01
learned that a recursive function has a basic pattern first we start with a base
3:08:07
case that includes a stopping condition after that we have some logic that breaks down the problem and recursively
3:08:14
calls itself our stopping condition is our end goal a sorted array
3:08:20
now to come up with a stopping condition or a base case we need to come up with the simplest condition that satisfies
3:08:28
this end result so there are two possible values that fit a single element list or an empty
3:08:35
list now in both of these situations we don't have any work to do if we give the merge sort function an
3:08:42
empty list or a list with one element it's technically already sorted we call this naively sorted so let's add that
3:08:49
as our stopping condition we'll say if len list if the length of the list is less than or equal to one
3:08:57
then we can return the list okay so this is a stopping condition
3:09:02
and now that we have a stopping condition we can proceed with the list of steps
3:09:09
first we need to divide the list into sub lists to make our functions easier to
3:09:14
understand we're going to put our logic in a couple different functions instead of one large one so i'll say left
3:09:21
half comma right half equal
3:09:27
split list so here we're calling a split function that splits the list we pass in
3:09:34
and returns two lists split at the midpoint because we're returning two lists we can capture them in two
3:09:40
variables now you should know that this split function is not something that comes built into python this is a global
3:09:47
function that we're about to write next is the conquer step where we sort
3:09:52
each sub-list and return a new sorted sub-list so we'll say left equal
3:09:59
merge sort left half
3:10:04
and right equal merge sort right half
3:10:11
this is the recursive portion of our function so here we're calling merge sort on this divided sub list so we
3:10:18
divide the list into two here and then we call merge sort on it again this further splits that sublist into
3:10:25
two in the next pass through of merge sort this is going to be called again and again and again until we reach our
3:10:32
stopping condition where we have single element lists or empty lists
3:10:37
when we've subdivided until we cannot divide any more then we'll end up with a left and a right half
3:10:44
and we can start merging backwards so we'll say return merge
3:10:51
left and right that brings us to the combined step once two sub-lists are sorted and
3:10:57
combined we can return it now obviously merge sort itself is
3:11:04
written but merge and split haven't been written so all we're going to do here if we run it is raise an error
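collected in one place, a minimal sketch of the top level function so far (following the video the parameter is named list, which shadows python's built-in, and split and merge are still to be written):

```python
def merge_sort(list):
    """Sorts a list in ascending order.

    Returns a new sorted list.

    Divide: find the midpoint of the list and divide into sublists
    Conquer: recursively sort the sublists created in previous step
    Combine: merge the sorted sublists created in previous step
    """
    if len(list) <= 1:
        return list  # a naively sorted list, our stopping condition

    left_half, right_half = split(list)

    left = merge_sort(left_half)
    right = merge_sort(right_half)

    return merge(left, right)
```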
3:11:10
so in the next video let's implement the split operation the first bit of logic we're going to
3:11:16
write is the divide step of the algorithm this step is fairly straightforward and only requires a few
3:11:21
lines of code but is essential to get the sorting process going all right so as we saw earlier we're
3:11:27
going to call the function for the divide step split so we'll say def split
3:11:32
and split is going to take as an argument a list to split up let's document how this function works
3:11:40
so we'll say divide the unsorted list at midpoint
3:11:47
into sub lists and it's always good to say what we're returning as well so
3:11:52
we'll say returns to sub-lists left and right
3:12:00
all right so the first step is to determine the midpoint of this list of this array
3:12:05
we're going to use the floor division operator for this floor division carries out a division
3:12:11
operation and if we get a non-integer value like 2.5 back it just gets rounded
3:12:16
down to two we'll define the midpoint to be the length of the list divided by two
3:12:22
and then rounded down so len list and using the
3:12:28
two forward slashes for the floor division operator we'll put number two after it
3:12:34
okay once we have the midpoint we can use the slicing notation in python to
3:12:40
extract portions of the list we want to return for instance we can define left
3:12:47
as the left sub-list that goes all the way from the start of the list
3:12:52
all the way up to the midpoint without including the midpoint now over here we're using the slicing
3:12:59
syntax which is like using the subscript notation to access a
3:13:04
value from a list but instead we give two index values as a start and stop
3:13:09
if we don't include a start value as i've done here python interprets that as starting from the zeroth index or the
3:13:16
start of the list now similarly we can define right
3:13:22
to be values on the right of the midpoint so starting at the midpoint and
3:13:27
going all the way up to the end of the list so a couple things to note as i said
3:13:33
earlier when you don't include the starting index it interprets it as to start at the very beginning of the list
3:13:39
the index you give as the stopping condition that value is not included in the slice so over here we're starting at
3:13:46
the very beginning of list and we go all the way up to midpoint but not including midpoint and then right starts at
3:13:53
midpoint so it includes that value and then goes all the way to the end of the list now once we have these two sub-lists we
3:14:00
can return them so we'll return left and right notice that we're returning two values here and
3:14:07
then in the merge sort function when we call that split function we're declaring two variables left half
3:14:14
and right half to assign so that we can assign these two sub lists to them
3:14:20
okay and that's all there is to the split function
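here's that split function as a minimal sketch, using floor division for the midpoint and slicing for the two halves:

```python
def split(list):
    """Divide the unsorted list at midpoint into sublists.

    Returns two sublists - left and right.
    """
    mid = len(list) // 2   # floor division rounds a value like 2.5 down to 2

    left = list[:mid]      # from the start up to, but not including, mid
    right = list[mid:]     # from mid, included, to the end of the list

    return left, right
```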
3:14:26
in the next video let's implement the crucial portion of the merge sort logic once we run the split function recursively over the array we should end
3:14:32
up with several single member or empty arrays at this point we need to merge them all
3:14:37
back and sort them in the process which is what our merge function is for the merge function is going to take two
3:14:44
arrays or lists as arguments and to match the naming conventions we used in the split function we'll call this left
3:14:51
and right as well so we'll say def merge takes a left and a right list
3:14:58
now like before let's add some documentation to our function so this function merges to lists or arrays
3:15:07
sorting them in the process and then it returns a new merged list
3:15:15
since our function is going to return a new list let's start by creating one
3:15:21
now in the process of merging we need to sort the values in both lists
3:15:27
to sort we need to compare values from each array or each list so next let's
3:15:33
create two local variables to keep track of index values that we're using for each list
3:15:39
so the convention here is i and j so we'll stick to it so i equals 0 j equals 0.
3:15:45
as we inspect each value in either list we'll use the variables to keep track of
3:15:51
the indexes of those values so we'll use i to keep track of indexes in the left
3:15:56
list and j for indexes in the right list when merging we want to keep sorting the
3:16:02
values until we've worked through both lists so for our loop let's set up two
3:16:08
conditions with an and operator so we'll say while let's just stay up here while i is less
3:16:15
than the length of the
3:16:21
left list and j is less than the length
3:16:27
of the right list then we'll keep executing our loop so here we're ensuring that as long as i is less than
3:16:34
the length of the left list and the and is important and j is less than the length of the right list we're
3:16:41
going to keep executing the code now i and j are both set to zero initially
3:16:46
which means that our first comparison operation will be on the first element of each list respectively so we'll say
3:16:55
if left i so i is zero this is going to get the first value out of the left list
3:17:02
is less than right j and again here
3:17:07
j is zero so we're going to get the first value out of the right list now if the value at index i in the left list is
3:17:14
less than the value at index j in the right list what do we do well that means the value being compared in left is less
3:17:22
than the value in the right and can be placed at position 0 in the new array l
3:17:28
that we created earlier so here we'll say l dot append left
3:17:33
i since we've read and done something with the value at position i let's increment
3:17:41
that value so we move forward to evaluate the next item in the left list
3:17:49
i plus one or we can say i plus equal one okay next is an else statement
3:17:57
and here we don't need to write out the actual condition because
3:18:04
it's implied the if clause already checks that the value in the left is less than the value in the right
3:18:11
so reaching the else clause means that the value in the left is either greater than
3:18:22
or equal to the value in the right so when we hit the else clause and the value at index i in the left list is greater
3:18:34
then we place the value at index j from the right list at the start of the new
3:18:40
one list l and similarly increment j so here we'll say l dot append
3:18:47
right j and then j equal j plus one
3:18:55
doing this doesn't necessarily mean that in one step we'll have a completely sorted array but remember that because
3:19:01
we start with single element arrays and combine with each merge step we will
3:19:06
eventually sort all the values more than one time and by the time the entire process is done all the values are
3:19:13
correctly sorted now this isn't all we need to do in the merge step however there are two
3:19:19
situations we can run into one where the left array is larger than the right and
3:19:24
vice versa so this can occur when an array containing an odd number of elements needs to be split so how do you
3:19:31
split a three element array or list well the left can have two elements and the right can have one or the other way
3:19:38
around in either case our while loop uses an and condition where the variables used to store the
3:19:44
indexes need to be less than the length of the lists if the left list is shorter than the right then the first condition
3:19:51
returns false and the entire loop returns false because it's an and condition
3:19:56
this means that in such an event when the while loop terminates not all the values in the right list will have been
3:20:02
moved over to the new combined list so to account for this let's add two more while loops
3:20:09
the first while loop is going to account for a situation where the right list is
3:20:14
shorter than the left and the previous loop terminated because we reached the end of the right list first
3:20:21
so in this case what we're going to do is simply add the remaining elements in the left to the new list
3:20:27
we're not going to compare elements because we're going to assume that within a list the elements are already sorted
3:20:33
so while i is less than length of left
3:20:38
then it's the same logic l dot append left i
3:20:44
and i plus equal one so the while loop is going to have the
3:20:50
similar condition keep the loop going until it's at the last index inside the body we're incrementing the
3:20:56
index with every iteration of the loop our final loop accounts for the opposite scenario where the left was shorter than
3:21:03
the right the only difference here is that we're going to use the variable j along with the right list so we'll say while j
3:21:10
is less than length of right l dot append
3:21:16
right j and j plus equal one okay let's stop here
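assembled from the steps above, a minimal sketch of the merge function, including the return statement that gets added during testing in the next video:

```python
def merge(left, right):
    """Merges two lists (arrays), sorting them in the process.

    Returns a new merged list.
    """
    l = []
    i = 0  # index into the left list
    j = 0  # index into the right list

    # compare values from both lists until one of them is exhausted
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            l.append(left[i])
            i += 1
        else:
            l.append(right[j])
            j += 1

    # right ended first - copy over whatever remains in left
    while i < len(left):
        l.append(left[i])
        i += 1

    # left ended first - copy over whatever remains in right
    while j < len(right):
        l.append(right[j])
        j += 1

    return l  # the return that the video adds while testing
```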
3:21:23
in the next video let's test out merge sort make sure our code is running correctly and everything is written well
3:21:30
and then we'll wrap up this stage by documenting our code and evaluating the run time of our algorithm in the last
3:21:36
video we completed our implementation for the merge sort algorithm but we didn't test it in any way let's define a
3:21:43
new list at the bottom that contains several numbers you can put whatever you want in there
3:21:48
but make sure that the numbers are not in order i'll call mine a list
3:21:55
and in here we'll say 54 26 or 62 doesn't matter 93 17
3:22:04
77 31 just add enough so that you can make out
3:22:10
that it's sorted okay next we're going to call the merge sort algorithm
3:22:17
and pass in our list let's assign this to some variables so we'll say l equal merge
3:22:22
underscore sort a list and then if it works correctly we should
3:22:29
be able to print this list and see what it looks like so i'm going to hit save down here in the console we'll type out
3:22:35
python merge sort dot pi and before i hit enter i actually
3:22:41
noticed i made an error in the last video but i'll hit enter anyway and you should see the error pop up okay so what
3:22:47
i forgot to do which is a pretty crucial part of our algorithm is in the merge
3:22:52
function i forgot to return the list containing the sorted numbers after carrying out all this logic
3:22:58
so here at the bottom we'll say return l all right we'll save again
3:23:05
and now we'll clear this out and try that one more time and there we go
3:23:11
you should see a sorted list printed out we can write out a more robust function
3:23:16
to test this because with bigger arrays visually evaluating that printed list won't always be feasible so bring this
3:23:23
back down let's get rid of this and we'll call our
3:23:29
function verify sorted and this will take a list
3:23:35
first we're going to check inside the body of the function we'll check the length of the list
3:23:40
if the list is a single element list or an empty list we don't need to do any
3:23:46
unnecessary work because remember it is naively sorted so we'll say if n
3:23:51
equals 0 or if n equals 1
3:23:57
then we'll return true we've verified that it's sorted now to conclude our function we're going
3:24:02
to write out one line of code that will actually do quite a bit of work so first we'll say return
3:24:09
list zero so we'll take the first element out of the list and we'll compare and see if
3:24:14
that's less than the second element in the list okay so first we'll check that
3:24:20
the first element in the list is less than the second element in the list this returns either true or false so we
3:24:26
can return that directly but this isn't sufficient if it were we could trick the verify function by only
3:24:33
sorting the first two elements in the list so to this return statement we're going to use an and operator to add on one
3:24:41
more condition for this condition we're going to make a recursive function call
3:24:46
back to verify sorted and for the argument we're going to pass
3:24:52
in the list going from the second element all the way to the end let's visualize how this
3:24:59
would work we'll use a five element list as an example so we'll call verify sorted and
3:25:05
pass in the entire list this list is not one or zero elements long so we skip that first if statement
3:25:12
there's only one line of code left in the function and first we check that the element at index 0 is less than the
3:25:18
element at index 1. if this is false the function returns immediately with a
3:25:23
false value an and operator requires both conditions to be true for the entire line of code
3:25:30
to return true since the first condition evaluates to false we don't need to bother evaluating
3:25:35
the second the second condition is a recursive call with a sub-list
3:25:41
containing elements from the original list starting at position 1 and going to the end
3:25:46
so on the second call again we can skip that first if statement and proceed to check whether the value at element 0 is
3:25:53
less than the value at element 1. remember that because this list is a sub-list of the original starting at the
3:25:59
element that was the second element in the original list by comparing the elements at position 0
3:26:05
and 1 in the sub list we're effectively comparing the elements at position 1 and
3:26:10
2 in the original list with each recursive call as we create new sub
3:26:16
lists that start at index position 1 we're able to check the entire list
3:26:21
without having to specify any checks other than the first two elements since this is a recursive function it
3:26:28
means we need a stopping condition and we have it already it's that first if condition
3:26:34
as we keep making sub lists once we reach a single element list that element is already sorted by definition so we
3:26:41
can return true since this recursive function call is part of an and condition
3:26:46
it means that every single recursive call has to return true all the way back
3:26:51
to the beginning for our top level function to return true and for the function to say yes this is sorted
3:26:58
now we could have easily done this using an iterative solution and a for loop but this way you get another example of
3:27:04
recursion to work through and understand so let's use this function
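here's that recursive verify function as a minimal sketch (like the version in the video it uses a strict less-than, so a list with duplicate values would report as unsorted):

```python
def verify_sorted(list):
    n = len(list)

    # zero and one element lists are naively sorted
    if n == 0 or n == 1:
        return True

    # first pair in order, and the rest of the list must verify too
    return list[0] < list[1] and verify_sorted(list[1:])
```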
3:27:11
at the bottom we'll say print verify sorted and first we'll pass in a list oops we got rid of that didn't we
3:27:19
okay let me write it out again so a list equal
3:27:24
and i think i have those original numbers here somewhere so we'll say 54
3:27:29
26 93
3:27:36
okay and then we assigned to l the result of calling merge
3:27:41
sort on a list okay so now here we're going to use the verify sorted function
3:27:48
and we'll check first that a list is sorted that should return false and then we'll make the same call we'll pass
3:27:56
in l and this should return true okay so now at the bottom here in the console
3:28:03
we'll call python merge sort dot pi and there we go it returned false for a list
3:28:08
meaning it's not sorted but l is sorted cool so our merge sort function works in
3:28:14
the next video let's talk about the cost of this algorithm if we go back to the top level the merge
3:28:21
sort function what is the run time of this function look like and what about space complexity how does memory usage
3:28:27
grow as the algorithm runs to answer those questions let's look at the individual steps starting with the
3:28:33
split function in the split function all we're doing is finding the midpoint of the list and splitting the list at the
3:28:40
midpoint this seems like a constant time operation but remember that the split function isn't called once it's called
3:28:47
as many times as we need it to to go from the initial list down to a single element list
3:28:53
now this is a pattern we've seen a couple times now and we know that overall this runs in logarithmic time so
3:29:00
let's add that as a comment so here i'll say takes overall
3:29:06
big o of log n time now there's a caveat here but we'll come back to that
3:29:13
so next up is the merge step in the merge step we've broken the original list down into single element lists and
3:29:21
now we need to make comparison operations and merge them back in the reverse order
3:29:26
for a list of size n we will always need to make an n number of merge operations
3:29:31
to get back from single element lists to a merged list this makes our overall runtime big o of
3:29:38
n times log n because that's an n number of merge steps multiplied by log n
3:29:44
number of splits of the original list so to our merge step here let's add a comment we'll say it runs
3:29:51
in overall oops there we go runs in overall linear
3:29:56
time right it takes an n number of steps number of merge steps but now that we
3:30:02
have these two so linear here and logarithmic here we can multiply these and say that the merge sort function the
3:30:10
top level function we can conclude that the runtime of the overall sorting process is big o of n times log n
3:30:20
now what about that caveat i mentioned earlier so if we go back to our split function
3:30:26
here right here there we go
3:30:31
let's take a look at the way we're actually splitting the list so we're using python's list slicing operation
3:30:37
here and passing in two indexes where the split occurs now if you go and poke around the python
3:30:44
documentation which i've done it says that a slicing operation is not a constant time operation and in fact has
3:30:51
a runtime of big o of k where k represents the slice size
3:30:57
this means that in reality this implementation of split does not run in
3:31:03
logarithmic time but k times logarithmic time because there is a slice operation
3:31:09
for each split this means that our implementation is much more expensive so
3:31:15
overall that makes our overall top level merge sort function not n times log n but k n
3:31:22
times log n which is much more expensive now let's get rid of all that
3:31:29
to fix this we would need to remove this slicing operation now we can do that by using a technique we learned in a
3:31:36
previous course in the introduction to algorithms course we looked at two versions of binary
3:31:42
search in python a recursive and an iterative version in the recursive one we use list slicing
3:31:49
with every recursion call but we achieve the same end result using an iterative approach without using list slicing
3:31:56
over there we declared two variables to keep track of the starting and ending positions in the list
3:32:02
we could rewrite merge sort to do the same but i'll leave that as an exercise for you if you want some hints if you
3:32:09
want any direction i've included a link in the notes with an implementation
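if you want to attempt it before checking the notes, here's one hedged sketch of the idea, not the linked implementation: pass start and stop indexes down the recursion instead of slicing at each split (the helper name merge_sort_no_slice is hypothetical, and it reuses the merge function from before):

```python
def merge_sort_no_slice(values, start=0, stop=None):
    """Sketch of a slice-free divide step: index bounds instead of slices,
    so dividing stays logarithmic rather than O(k log n).
    Returns a new sorted list built from values[start:stop].
    """
    if stop is None:
        stop = len(values)

    if stop - start <= 1:
        return values[start:stop]  # at most one element, a constant time copy

    mid = (start + stop) // 2
    left = merge_sort_no_slice(values, start, mid)
    right = merge_sort_no_slice(values, mid, stop)

    return merge(left, right)  # reuses the merge function defined earlier
```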
3:32:16
so that is time complexity now just so we know before moving on for python here our overall run time is not what i've listed
3:32:22
here but this is what the actual run time of the merge sort algorithm looks like so the merge step runs in linear
3:32:29
time and the split step takes logarithmic time for an overall n times
3:32:34
log n and that is how merge sort actually works okay so what about space complexity
3:32:40
the merge sort algorithm takes linear space and this is weird to think about at first but as always a visualization
3:32:47
helps so if we start at the top again with our full list and carry out the split method
3:32:53
until we have single element lists each of these new lists takes up a certain amount of space
3:32:59
so at the second level here we have two lists where each takes up an n by two amount of space
3:33:05
now this makes it seem that the sum of all this space is the additional space needed for merge sort but that's not
3:33:11
actually the case in reality there are two factors that make a difference first not every single one of these sub
3:33:18
lists are created simultaneously at step two we create two n by two size
3:33:24
sub lists when we move to the next step however we don't hold on to the n by two sub lists
3:33:30
and then create four n by four size sub lists for the next split instead after the four n by four size
3:33:38
sub lists are created the n by two ones are deleted from memory there's no reason to hold on to them any longer now
3:33:45
the second point is that our code doesn't execute every path simultaneously
3:33:50
think of it this way when we pass our list to the top level merge sort function
3:33:56
our implementation calls split which returns a left half and a right
3:34:01
half the next line of code then calls merge sort on the left half again
3:34:08
this runs the function the merge sort function again with a new list in that second run of the function split
3:34:14
is called again we get a second left and right half and then again like before we
3:34:20
call merge sort on this left half as well what this means is that the code walks down the left path all the way
3:34:27
down until that initial left half is sorted and merged back into one array
3:34:33
then it's going to walk all the way down the right path and sort that until we're back to that first split with two n by
3:34:40
two sized sublists essentially we don't run all these paths of code at once so the algorithm doesn't
3:34:47
need additional space for every sub-list in fact it is the very last step that
3:34:52
matters in the last step the two sub-lists are merged back into the new sorted list and
3:34:59
returned that sorted list has an equal number of items as the original unsorted list
3:35:06
and since this is a new list it means that at most the additional space the
3:35:11
algorithm will require at a given time is n yes at different points in the algorithm
3:35:17
we require log n amount of space but log n is smaller than n and so we consider
3:35:23
the space complexity of merge sort to be linear because that is the overall factor
3:35:28
okay that was a lot so let's stop here don't worry if you've got questions about merge sort because we're not done
3:35:34
yet over the next few videos let's wrap up this course by implementing merge sort on a linked list
3:35:41
[Music] over the last few videos we implemented
3:35:47
the merge sort algorithm on the array or list type in python merge sort is
3:35:52
probably the most complicated algorithm we've written so far but in doing so we learned about an important concept
3:35:59
divide and conquer we also concluded the last video by figuring out the run time of merge sort
3:36:04
based on our implementation over the next few videos we're going to implement merge sort again this time on
3:36:11
the linked list type in doing so we're going to get a chance to see how the implementation differs
3:36:17
based on the data structure while still keeping the fundamentals of the algorithm the same and we'll also see
3:36:22
how the run time may be affected by the kinds of operations we need to implement
3:36:27
let's create a new file to put our second implementation of merge sort in so file over here new file
3:36:35
and it's going to have a rather long name we'll call this linked list
3:36:40
merge sort with underscores everywhere dot pi we're going to need the linked list
3:36:47
class that we created earlier so we'll start at the top by importing the linked
3:36:52
list class from the linkedlist.py file the way we do that is we'll say from
3:36:58
linked list import linked list
3:37:03
right so that imports the class let's test if this works really quick
3:37:09
we'll just do something like l equal linked list l.add
3:37:15
ten or one doesn't matter print l okay and if i hit save
3:37:22
and then down here we'll say python linked list merge sort dot pi
3:37:28
okay it works so this is how we get some of the code how we reuse the code that we've written in other files into this
3:37:34
current file and get rid of this now okay like we did with the first
3:37:40
implementation of merge sort we're going to split the code up across three functions
3:37:45
the main function merge sort a split function and a merge function
3:37:52
now if you were to look up a merge sort implementation in python both for a regular list an array or a linked list
3:37:58
you would find much more concise versions out there but they're kind of hard to explain so splitting it up into
3:38:04
three will sort of help it you know be easier to understand so we'll call this merge sort at the top level and this
3:38:11
time it's going to take a linked list let's add a docstring to document the
3:38:17
function so say that this function sorts a linked
3:38:22
list in ascending order and like before we'll add the steps in here so we'll say first recursively
3:38:30
divide the linked list into sub lists containing
3:38:37
a single node then we repeatedly
3:38:43
merge these sub-lists to produce sorted sub-lists
3:38:49
until one remains and then finally this function returns a
3:38:54
sorted linked list the implementation of this top level
3:39:01
merge sort function is nearly identical to the array or list version we wrote earlier so first we'll provide a
3:39:07
stopping condition or two if the size of the list is one or it's an empty list
3:39:12
we'll return the linked list since it's naively sorted so if linked
3:39:18
list dot size remember that function we wrote equals one then we'll return
3:39:24
linked list else if linked list dot head
3:39:31
is none meaning it's an empty list then we'll return linked list as well okay
3:39:38
next let's split the linked list into a left and right half
3:39:43
conceptually this is no different but in practice we need to actually traverse the list we'll implement a helper method
3:39:50
to make this easier but we'll say left half comma right half
3:39:56
equal split linked list now once we have two sub lists like
3:40:01
before we can call merge sort the top level function on each
3:40:06
so left equal merge sort left half
3:40:14
and right equal merge sort on the right half
3:40:20
finally we'll merge these two top-level sub-lists and return it so merge left
3:40:26
and right okay nothing new here but in the next video let's implement the split logic
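as a minimal sketch, here's the top level function for the linked list version, with split and merge to be defined over the next videos:

```python
def merge_sort(linked_list):
    """Sorts a linked list in ascending order.

    - Recursively divide the linked list into sublists containing a single node
    - Repeatedly merge the sublists to produce sorted sublists until one remains

    Returns a sorted linked list.
    """
    # stopping conditions: single node or empty lists are naively sorted
    if linked_list.size() == 1:
        return linked_list
    elif linked_list.head is None:
        return linked_list

    left_half, right_half = split(linked_list)

    left = merge_sort(left_half)
    right = merge_sort(right_half)

    return merge(left, right)
```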
3:40:32
the next step in the merge sort algorithm is the divide step or rather an implementation of the split function
3:40:39
so down here we'll call this split like before and this is going to take a linked
3:40:44
list documenting things is good and we've been doing it so far so let's add a
3:40:50
docstring divide the unsorted list at midpoint
3:40:58
into sub-lists now of course when i say sub-lists here i mean sub-linked lists but that's a
3:41:04
long word to say now here's where things start to deviate from the previous version
3:41:10
with the list type we could rely on the fact that finding the midpoint using an index and list slicing to split into two
3:41:17
lists would work even if an empty list was passed in since we have no automatic behavior like
3:41:23
that we need to account for this when using a linked list so our first condition is if the linked list is none
3:41:30
or if it's empty that is if head is equal to none so we'll say if linked list
3:41:37
equal none or you can write is there it doesn't matter or linked list dot head is none
3:41:45
well linked list can be none for example if we call split on a linked list containing a single node a split on such
3:41:51
a list would mean left would contain the single node while right would be none
3:41:56
now in either case we're going to assign the entire list to the left half and assign none to the right so we'll say
3:42:03
left half equal linked list and then right half
3:42:11
equal none you could also assign the single element list or none to left and then create a
3:42:17
new empty linked list assigned to the right half but that's unnecessary work
3:42:22
so now that we've done this we can return left half and right half
3:42:29
so that's our first condition let's add an else clause to account for non-empty
3:42:34
linked lists first we'll calculate the size of the list now this is easy because we've done the work already and
3:42:41
we can just call the size method that we've defined we'll say size equal linked underscore list dot size
3:42:49
using this size we can determine the midpoint so mid equal size and here we'll use that floor division operator
3:42:55
to divide it by two once we have the midpoint we need to get the node at that midpoint
3:43:01
now make sure you hit command s to save here and we're going to navigate back to linkedlist.py
3:43:09
in here we're going to add a convenience method at the very bottom right before the wrapper function right here
3:43:16
and this convenience method is going to return a node at a given index so i'll call this node
3:43:24
at index and it's going to take an index value
3:43:29
this way instead of having to traverse the list inside of our split function we can simply call node at index and pass
3:43:36
it the midpoint index we calculated to give us the node right there so we can perform the split
3:43:41
okay so this method accepts as an argument the index we want to get the node for if this index is zero then
3:43:48
we'll return the head of the list so if index double equals zero return
3:43:55
self.head the rest of the implementation involves traversing the linked list and counting
3:44:01
up to the index as we visit each node so
3:44:13
i'll add an else clause here and we'll start at the head so we'll say current equal self.head
3:44:19
let's also declare a variable called position to indicate where we are in the list
3:44:26
we can use a while loop to walk down the list our condition here is as long as
3:44:32
the position is less than the index value so i'll say while position
3:44:39
is less than index inside the loop we'll assign the next node to current and increment the value
3:44:46
of position by one so current equal current dot next node
3:44:51
position plus equal one
3:44:57
once the position value equals the index value current refers to the node we're looking for and
3:45:03
we can return it we'll say return current let's get rid of all this empty space there we go
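here's that convenience method as a minimal sketch, added to the LinkedList class:

```python
# inside the LinkedList class
def node_at_index(self, index):
    """Returns the Node at the given index, counting from 0."""
    if index == 0:
        return self.head
    else:
        current = self.head
        position = 0

        # walk the list until position reaches the requested index
        while position < index:
            current = current.next_node
            position += 1

        return current
```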
3:45:10
now back in linked list merge sort dot pi
3:45:16
we can use this method to get at the node after we've calculated the midpoint to get the node at the midpoint of the
3:45:23
list so we'll say mid node equal linked
3:45:28
list dot node at index and here i'm going to do something
3:45:34
slightly confusing i'm going to do mid minus 1. remember we're subtracting 1 here
3:45:41
because we used size to calculate the midpoint and like the len function size will always return a
3:45:48
value greater than the maximum index value so think of a linked list with two nodes
3:45:55
size would return two the midpoint though and the way we're calculating the index we always start at
3:46:01
zero which means size is going to be one greater than that so we're going to deduct one from it to get the value we
3:46:07
want but we're using the floor division operator so it's going to round that down even more no big deal with the node
3:46:13
at the midpoint now that we have this mid node we can actually split the list so first we're going to assign the
3:46:19
entire linked list to a variable named left half so left half equal linked list
3:46:27
this seems counterintuitive but will make sense in a second for the right half we're going to assign
3:46:33
a new instance of linked list so right half equal
3:46:38
linked list this newly created list is empty but we can fix that by assigning the node that
3:46:44
comes after the midpoint so after the midpoint of the original linked list we can assign the node that comes after
3:46:51
that midpoint node as the head of this newly created right linked list
3:46:57
so here we'll say right half dot head equal mid node dot next
3:47:03
node once we do that we can assign none to the next node property on mid node to
3:47:10
effectively sever that connection and make what was the mid node now the tail
3:47:15
node of the left linked list so i'll say mid node dot next node
3:47:22
equal none if that's confusing here's a quick visualization of what just happened
3:47:28
we start off with a single linked list and find the midpoint the node that comes after the node at midpoint is
3:47:34
assigned to the head of a newly created linked list and the connection between the midpoint node and the one after is
3:47:41
removed we now have two distinct linked lists split at the midpoint
3:47:47
and with that we can return the two sub lists so we'll return left half and right half
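putting both branches together, a minimal sketch of split for linked lists, assuming the LinkedList class and the node_at_index method we just added:

```python
def split(linked_list):
    """Divide the unsorted linked list at midpoint into sublists."""
    if linked_list is None or linked_list.head is None:
        # nothing to split: the whole list is the left half
        left_half = linked_list
        right_half = None

        return left_half, right_half
    else:
        size = linked_list.size()
        mid = size // 2

        # deduct 1 because size is one greater than the maximum index value
        mid_node = linked_list.node_at_index(mid - 1)

        left_half = linked_list       # the original list becomes the left half
        right_half = LinkedList()     # a new list headed by the node after mid
        right_half.head = mid_node.next_node
        mid_node.next_node = None     # sever the link so mid_node is now a tail

        return left_half, right_half
```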
3:47:54
in the next video let's tackle our merge function in the last video we defined an implementation for the version of the
3:48:01
split function that works on linked lists it contained a tiny bit more code than the array or list version but that was
3:48:07
expected the merge function is no different because like with the split function after we carry out a comparison
3:48:14
operation we also need to swap references to corresponding nodes all right let's add our merge function over
3:48:21
here at the bottom below the split functions we'll call this merge and it's going to take a left
3:48:28
and right now because this can get complicated we're going to document this function
3:48:34
extensively and as always we're going to start with a doc string
3:48:40
so we'll say that this function merges two linked lists
3:48:45
sorting by data in the nodes and it returns a new
3:48:53
merged list remember that in the merge step we're
3:48:58
going to compare values across two linked lists and then return a new linked list with nodes where the data is
3:49:05
sorted so first we need to create that new linked list let's add a comment in here
3:49:10
we'll say create a new linked list that contains nodes from
3:49:18
let's add a new line merging left and right okay and then create the list so merged
3:49:24
equal new linked list to this list we're going to do something
3:49:30
unusual we're going to add a fake head this is so that when adding sorted nodes
3:49:35
we can reduce the amount of code we have to write by not worrying about whether we're at the head of the list once we're
3:49:41
done we can assign the first sorted node as the head and discard the fake head
3:49:46
now this might not make sense at first but not having to worry about whether the new linked list already contains a
3:49:52
head or not makes the code simpler we'll add another comment add a fake head that is discarded
3:50:00
later we'll say merged dot add zero like we've been doing so far we'll
3:50:06
declare a variable named current to point to the head of the list
3:50:12
set current to the head of the linked list and then current equal
3:50:19
merged dot head next we'll get a reference to the head on each of the linked lists left and
3:50:26
right so we'll say obtain head nodes for left and right linked lists
3:50:35
and here we'll say left head equal left dot head
3:50:41
and right head equal right dot head
3:50:48
okay so with that setup out of the way let's start iterating over both lists
3:50:54
so another comment iterate over left and right
3:51:00
until we reach the tail node
3:51:06
of either and we'll do that by saying while left head
3:51:12
or right head so this is a pattern that we've been following all along we're going to
3:51:18
iterate until we hit the tail nodes of both lists and we'll move this pointer forward every time so that we traverse
3:51:24
the list with every iteration if you remember the logic behind this from the earlier version once we hit the tail
3:51:30
node of one list if there are nodes left over in the other linked list we don't need to carry out a comparison operation
3:51:37
anymore and we can simply add those nodes to the merged list the first scenario we'll consider is if
3:51:43
the head of the left linked list is none this means we're already past the tail
3:51:48
of left and we can add all the nodes from the right linked list to the final merged list so here i'll say if
3:51:56
the head node of left is none we're past the tail
3:52:03
add the node from right to merged
3:52:09
linked list so here we'll say if left head
3:52:15
is none current dot next node remember current points to the head of
3:52:21
the merged list that we're going to return so here we're setting its next node reference to the head node on the
3:52:28
right linked list so we'll say right head then when we do that we'll move the
3:52:35
right head forward to the next node so let's say right head
3:52:41
equal right head dot next node
3:52:46
this terminates the loop on the next iteration let's look at a visualization to understand why
3:52:52
let's say we start off with a linked list containing four nodes so we keep calling split on it until we have lists
3:52:58
with just a single head single node linked lists essentially so let's focus on these two down here
3:53:04
that we'll call left and right we haven't implemented the logic for this part yet but here we would compare
3:53:10
the data values and see which one is less than the other so we'll assume that left's head is
3:53:15
less than right's head so we'll set this as the next node in the final merged list
3:53:20
left is now an empty linked list so left dot head equals none on the next pass
3:53:26
through the loop left head is none which is the situation we just implemented
3:53:31
here we can go ahead and now assign right head as the next node in the merged linked list we know that right is also a
3:53:38
single node linked list here's the crucial bit when we move the pointer forward by calling next node on
3:53:45
the right node there is no node and the right linked list is also
3:53:51
empty now which means that both left head and right head are none and either
3:53:56
one of these would cause our loop condition to terminate so what we've done here is encoded a
3:54:02
stopping condition for the loop so we need to document this because it can get fuzzy so right above that line of code
3:54:08
i'll say call next on right to set loop condition
3:54:16
to false okay there's another way we can arrive at this stopping condition and that's in the opposite direction if we start with
3:54:22
the right head being none so here i'm going to add another comment
3:54:28
if
3:54:33
the head node of right is none we're past the tail
3:54:40
then we'll say add the tail node from left to merged linked list
3:54:48
and then we'll add that condition we'll say else if right head is none
3:54:53
now remember we can enter the loop even if one of these heads is none we can still go into
3:54:59
this condition and execute this logic because the loop
3:55:05
condition here is an or statement so even if left head is false if right head returns true because there's a value
3:55:11
there the loop will keep going okay now in this case we want to set the
3:55:17
head of the left linked list as the next node on the merge list so this is simply the opposite of what we did over here
3:55:25
we'll set current dot next node equal to left head
3:55:30
and then we'll move so after doing that we can move the variable pointing to left head forwards which as we saw
3:55:37
earlier is past the tail node and then results in the loop terminating so we'll say left head
3:55:43
equal left head dot next node and we'll add that
3:55:48
comment here as well so we'll say call next on left to set loop condition
3:55:55
to false because here right head is none and now we make left head none these two
3:56:00
conditions we looked at where either the left head or right head were at the tail nodes of our respective
3:56:07
lists those are conditions that we run into when we've reached the bottom of our split where we have single element
3:56:14
linked lists or empty linked lists let's account for our final condition where
3:56:20
we're evaluating a node that is neither the head nor the tail of the list in this condition we need to reach into
3:56:27
the nodes and actually compare the data values to one another before we can decide which node to add first to the
3:56:34
merged list so here this is an else because we've arrived at our third condition third and
3:56:39
final and here we'll say not at either tail node
3:56:45
obtain node data to perform comparison operations so let's get each
3:56:52
of those data values out of the respective nodes so that we can compare it so we'll say left data equal left head dot data
3:57:01
and right data equal right head dot data okay what do we do next well we compare
3:57:08
but first let's add a comment so we'll say if data on left
3:57:14
is less than right set current to left node and then
3:57:21
move actually we'll add this in a second so here we'll say if left data
3:57:26
is less than right data then current dot next node
3:57:32
equal left head and then we'll add a comment and we'll say move
3:57:38
left head to next node on that list so we'll say left head
3:57:43
equal left head dot next node
3:57:49
just as our comment says we'll check if the left data is less than the right
3:57:54
data if it is since we want a list in ascending order we'll assign the left node to be the next node in the merged
3:58:01
list we'll also move the left head forward to traverse down to the next node in that particular list now if left
3:58:09
is larger than right then we want to do the opposite so we'll go back two spaces another comment
3:58:15
if data on left is greater than right set current
3:58:21
to right node okay so else
3:58:26
here we assign the right head to be the next node in the merged list so current.nextnode
3:58:32
equal right head and then comment
3:58:38
move right head to next node so right
3:58:44
head equal right head dot next
3:58:49
node okay after doing that we move the right head pointer to reference the next node in the right list
3:58:56
and finally at the end of each iteration of the while loop so not here but two
3:59:02
spaces back right make sure we're indented not at the same
3:59:10
level as the while itself but inside its body at the same level as the if statements and then there we're going to say
3:59:15
move current to next node so current equal current dot next node
3:59:23
okay don't worry if this is confusing as always we'll look at a visualization in just a bit so we'll wrap up this
3:59:29
function by discarding that fake head we set earlier setting the correct node as head and returning the linked list so
3:59:37
we'll add a comment discard fake head and set first
3:59:44
merged node as head so here we'll say head equal merged dot head dot next
3:59:52
node and then merged dot head equal head and finally return
3:59:59
merged okay we wrote a lot of code here a lot of it was comments but still it's a bunch let's take a quick break in the
4:00:06
next video we'll test this out evaluate our results and determine the runtime of our algorithm
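for reference, here's roughly what the finished merge function looks like, condensed from this walkthrough. this is a sketch that assumes the course's LinkedList class with an add method and Node objects with data and next_node attributes, so the exact names may differ from what's on screen:

    def merge(left, right):
        # create a new linked list with a fake head so current always
        # has a node to attach things to
        merged = LinkedList()
        merged.add(0)
        current = merged.head
        left_head = left.head
        right_head = right.head
        # the or condition keeps the loop going while either list has nodes
        while left_head or right_head:
            if left_head is None:
                # left is past its tail, take the right node and advance right
                current.next_node = right_head
                right_head = right_head.next_node
            elif right_head is None:
                # right is past its tail, take the left node and advance left
                current.next_node = left_head
                left_head = left_head.next_node
            else:
                # not at either tail, compare the data values
                if left_head.data < right_head.data:
                    current.next_node = left_head
                    left_head = left_head.next_node
                else:
                    current.next_node = right_head
                    right_head = right_head.next_node
            # move current to the node we just attached
            current = current.next_node
        # discard the fake head and set the first merged node as head
        merged.head = merged.head.next_node
        return merged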
4:00:12
okay first things first let's test out our code now we'll keep it simple because writing a robust verify function
4:00:19
would actually take up this entire video instead i'll leave that up to you to try as homework
4:00:25
okay so at the very end let's create a new linked list
4:00:33
let's add a few nodes to this so l dot add i'm going to copy paste this so it makes it easier for me
4:00:39
not to have to retype a bunch so i'll add 10 then 2 44
4:00:45
15 and something like 200. okay then we'll go ahead and print l so that we can
4:00:52
inspect this list next let's create a declare variable here so
4:00:59
we'll call this sorted linked list and to this we're going to assign the
4:01:04
result of calling merge sort on l and then we'll print this so sorted
4:01:10
linked list okay since we've taken care of all the logic we know that this gets added in as
4:01:17
nodes and then let's see what this looks like all right so hit save and then bring up the console we're
4:01:23
going to type out python linked_list_merge_sort.py and
4:01:30
then enter okay so we see that linked list we first created remember that what
4:01:36
we add first right that eventually becomes the tail so 10 is the tail 200 is the last one we added so 200 is
4:01:43
the head because i'm calling add it simply adds each one to the head of the list so here we have 10 2 44 15 and 200
4:01:51
in the order we added and then the sorted linked list sorts it out so it's 2 10 15 44 and 200. look at that a
4:02:00
sorted linked list okay so let's visualize this from the top we have a linked list containing five
4:02:06
nodes with integers 10 2 44 15 and 200 as
4:02:12
data respectively our merge sort function calls split on this list the
4:02:17
split function calls size on the list and gets back 5 which makes our midpoint 2.
4:02:23
using this midpoint we can split the list using the node at index method remember that when doing this we deduct
4:02:30
1 from the value of mid so we're going to split here using an index value of 1.
4:02:35
effectively this is the same since we're starting with an index value of 0 1 means we split after node 2. we assign
4:02:42
the entire list to left half then create a new list and assign that to right half
4:02:48
we can assign node 3 at index value 2 as the head of the right list and remove
4:02:53
the references between node two and node three so far so good right
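as a sketch, the split step we just visualized looks something like this in code, assuming the LinkedList type's size and node at index methods mentioned above (exact names may differ):

    def split(linked_list):
        # merge sort's stopping condition returns empty and single node
        # lists before they ever reach this function
        if linked_list is None or linked_list.head is None:
            return linked_list, None
        size = linked_list.size()
        mid = size // 2
        # node_at_index walks the list from the head counting up the
        # index, which is where the O(k) cost discussed later comes from
        mid_node = linked_list.node_at_index(mid - 1)
        left_half = linked_list
        right_half = LinkedList()
        # assign the node after the midpoint as the head of the right
        # list and remove the reference between the two halves
        right_half.head = mid_node.next_node
        mid_node.next_node = None
        return left_half, right_half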
4:02:58
okay so now we're back in the merge sort function after having called split and we have two linked lists
4:03:05
let's focus on just the left half because if you go back and look at our code we're going to call merge sort on
4:03:11
the left linked list again this means the next thing we'll do is run through that split process since
4:03:17
this is a linked list containing two nodes this means that split is going to return a new left and right list each
4:03:23
with one node again we're back in the merge sort function which means that we call merge sort on this left list again
4:03:32
since this is a single node linked list on calling merge sort on it we immediately return before we split since
4:03:38
we hit that stopping condition so we go to the next line of code which is calling merge sort on the right list as
4:03:45
well but again we'll get back immediately because we hit that stopping condition now that we have a left and
4:03:50
right that we get back from calling merge sort we can call merge on them inside the merge function we start by
4:03:57
creating a new linked list and attaching a fake head then we evaluate whether either the left
4:04:03
or the right head is none since neither condition is true we go to the final step where we evaluate the
4:04:08
data in each node in this case the data in the right node is less than the left node so we assign
4:04:15
the right node as the next node in the merged linked list and move the right head pointer forward
4:04:22
in the merged linked list we move our current pointer forward to this new node we've added and that completes one
4:04:27
iteration of the loop on the next iteration right head now
4:04:32
points to none since that was a single node list and we can assign the rest of the left linked list which is
4:04:39
effectively the single node over to the merged linked list here we discard the fake
4:04:44
head move the next node up to be the correct head and return the newly merged
4:04:50
sorted linked list remember that at this point because right head and left head point to none our while loop
4:04:56
terminated so in this way we recursively split and repeatedly merge sub-lists until we're
4:05:03
back with one sorted linked list the merge sort algorithm is a powerful sorting algorithm but ultimately it
4:05:10
doesn't really do anything complicated it just breaks the problem down until it's really simple to solve
4:05:16
remember the technique here which we've talked about before is called divide and conquer so i like to think of merge sort
4:05:22
in this way there's a teacher at the front of the room and she has a bunch of books that she needs to sort into
4:05:27
alphabetical order instead of doing all that work herself she splits that pile into two and hands it to two students at
4:05:34
the front each of those students split it into two more and hand it to the four students
4:05:40
seated behind them as each student does this eventually each student ends up with just two books to compare and
4:05:47
they can sort them very easily and hand them back to the student in front who gave them the pile who repeats the process
4:05:53
in reverse so ultimately it's really simple the work is just efficiently delegated
4:05:59
now back to our implementation here let's talk about runtime so far other than the node swapping we had to do it
4:06:05
seems like most of our implementation is the same right in fact it is including
4:06:10
the problems that we ran into in the list version as well so in the first implementation of merge sort we thought
4:06:17
we had an algorithm that ran in big o of n log n but turns out we didn't why well
4:06:23
the python list slicing operation if you remember actually takes up some amount of time amounting to big o of k
4:06:31
a true implementation of merge sort runs in quasi-linear or log linear time that
4:06:36
is n times log n so we almost got there but we didn't now in our implementation
4:06:42
of merge sort on a linked list we introduce the same problem so if we go back up to
4:06:48
the merge or rather the split function this is where it happens now swapping node references that's a constant time
4:06:54
operation no big deal comparing values also constant time the bottleneck here like list slicing is
4:07:03
in splitting a linked list at the midpoint if we go back to our implementation you
4:07:08
can see here that we use the node at index method which finds the node we want by traversing the list
4:07:16
this means that every split operation incurs a big o of k cost where k here is
4:07:22
the midpoint of the list effectively n by 2 because we have to walk down the
4:07:28
list counting up the index until we get to that node given that overall splits take
4:07:34
logarithmic time our split function just like the one we wrote earlier
4:07:39
incurs a cost of big o of k log n so here we'll say it takes
4:07:46
big o of k times log n now the merge function also like the one
4:07:51
we wrote earlier takes linear time so that one is good that one runs in the expected amount of time so here we'll
4:07:56
say runs in linear time and that would bring our overall run
4:08:03
time so up at the merge sort function we can say this runs in big o of k n times log
4:08:12
n it's okay though this is a good start and one day when we talk about constant
4:08:17
factors and look at ways we can reduce the cost of these operations using different strategies we can come back
4:08:24
and re-evaluate our code to improve our implementation for now as long as you understand how merge sort works
4:08:30
conceptually what the run time and space complexities look like and where the
4:08:35
bottlenecks are in your code that's plenty of stuff if you're interested in learning more
4:08:40
about how we would solve this problem check out the notes in the teachers video in the next video let's wrap this
4:08:46
course up and with that let's wrap up this course in the prerequisite to this course
4:08:52
introduction to algorithms we learned about basic algorithms along with some concepts like recursion and big o that
4:08:59
set the foundation for learning about implementing and evaluating algorithms in this course we learned what a data
4:09:06
structure is and how data structures go hand in hand with algorithms we started off by exploring a data
4:09:12
structure that many of us use in our day-to-day programming arrays or lists as they are known in python
4:09:19
we take a peek under the hood at how arrays are created and stored and examine some of the common operations
4:09:25
carried out on arrays these are operations that we write and execute all the time but here we took a
4:09:31
step back and evaluated the run times of these operations and how they affect the performance of our code
4:09:37
after that we jumped into an entirely new world where we wrote our own data structure a singly linked list
4:09:44
admittedly linked lists aren't used much in day-to-day problem solving but it is a good data structure to start off with
4:09:51
because it is fairly straightforward to understand and not that much different from an array
4:09:56
we carried out the same exercise as we did on arrays in that we looked at common data operations but since this
4:10:02
was a type we defined on our own we implemented these operations ourselves and got to examine with a fine-tooth
4:10:09
comb how our code and the structure of the type affected the runtime of these operations
4:10:15
the next topic we tackled was essentially worlds colliding we implemented a sorting algorithm to sort
4:10:21
two different data structures here we got to see how all of the concepts we've learned so far
4:10:26
algorithmic thinking time and space complexity and data structures all come together to tackle the problem of
4:10:33
sorting data this kind of exercise is one we're going to focus on moving forward as we try to
4:10:39
solve more real-world programming problems using different data structures and algorithms
4:10:45
if you've stuck with this content so far keep up the great work this can be a complex topic but a really interesting
4:10:51
one and if you take your time with it you will get a deeper understanding of programming and problem solving as
4:10:57
always check the notes for more resources and happy coding
Algorithms: Sorting and Searching
4:11:03
[Music]
4:11:10
you may have heard that algorithms and computer science are boring or frustrating they certainly can be hard
4:11:16
to figure out especially the way some textbooks explain them but once you understand what's going on algorithms
4:11:22
can seem fascinating clever or even magical to help further your understanding of
4:11:27
algorithms this course is going to look at two categories sorting algorithms and searching algorithms you could argue
4:11:34
that these are the easiest kinds of algorithms to learn but in learning how these algorithms are designed we'll
4:11:39
cover useful concepts like recursion and divide and conquer that are used in many other sorts of algorithms and can even
4:11:46
be used to create brand new ones by the way all the code samples i'm going to show in the videos will be in
4:11:51
python because it's a popular language that's relatively easy to read but you don't need to know python to benefit
4:11:57
from this course you can see the teacher's notes for each video for info on implementing these algorithms in your
4:12:03
own favorite language our goal with this course is to give you an overview of how sorting and searching
4:12:08
algorithms work but many algorithms have details that can be handled in different ways some of these details may distract
4:12:15
from the big picture so we've put them in the teachers notes instead you don't need to worry about these when
4:12:20
completing the course for the first time but if you're going back and referring to it later be sure to check the teacher's notes for additional info
4:12:27
suppose we have a list of names it's a pretty big list a hundred thousand names long this list could be part of an
4:12:33
address book or social media app and we need to find the locations of individual names within the list
4:12:39
possibly to look up additional data that's connected to the name let's assume there's no existing
4:12:44
function in our programming language to do this or that the existing function doesn't suit our purpose in some way
4:12:50
for an unsorted list our only option may be to use linear search also known as sequential search
4:12:57
linear search is covered in more detail elsewhere on our site check the teacher's notes for a link if you want more details
4:13:03
you start at the first element you compare it to the value you're searching for if it's a match you return it if not
4:13:09
you go to the next element you compare that to your target if it's a match you return it if not you go to
4:13:15
the next element and so on through the whole list the problem with this is that you have to search the entire list every single
4:13:23
time we're not doing anything to narrow down the search each time we have to search all of it
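in python, a minimal linear search sketch looks like the following; the function name and the choice to return the index are my own:

    def linear_search(values, target):
        # compare every element to the target, front to back
        for index, value in enumerate(values):
            if value == target:
                return index    # it's a match, return its location
        return None             # searched the whole list without a match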
4:13:29
if you're searching a big list or searching it repeatedly this amount of time can slow your whole app down to the
4:13:34
point that people may not want to use it anymore that's why it's much more common to use a different algorithm for
4:13:40
searching lists binary search binary search is also covered in more detail elsewhere on our site check the
4:13:47
teacher's notes for a link binary search does narrow the search down for us specifically it lets us get
4:13:53
rid of half the remaining items we need to search through each time it does this by requiring that the list
4:13:59
of values be sorted it looks at the value in the middle of the list
4:14:05
if the value it finds is greater than the target value it ignores all values after the value it's looking at
4:14:11
if the value it finds is less than the target value it ignores all values before the value it's looking at
4:14:17
then it takes the set of values that remain and looks at the value in the middle of that list again if the value
4:14:23
it finds is greater than the target value it ignores all values after the value it's looking at if the value it
4:14:28
finds is less than the target value it ignores all values before the value it's looking at
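and a minimal binary search sketch, again with names of my own choosing, assuming values is already sorted:

    def binary_search(values, target):
        first = 0
        last = len(values) - 1
        while first <= last:
            midpoint = (first + last) // 2
            if values[midpoint] == target:
                return midpoint
            elif values[midpoint] < target:
                # ignore the midpoint and every value before it
                first = midpoint + 1
            else:
                # ignore the midpoint and every value after it
                last = midpoint - 1
        return None   # target isn't in the list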
4:14:33
but as we mentioned binary search requires the list of values you're searching through to be sorted
4:14:39
if the list weren't sorted you would have no idea which half of the values to ignore because either half could contain
4:14:44
the value you're looking for you'd have no choice but to use linear search so before we can use binary search on a
4:14:51
list we need to be able to sort that list we'll look at how to do that next
4:14:56
our end goal is to sort a list of names but comparing numbers is a little easier than comparing strings so we're going to
4:15:02
start by sorting a list of numbers i'll show you how to modify our examples to sort strings at the end of the course
4:15:10
to help make clear the importance of choosing a good sorting algorithm we're going to start with a bad one it's
4:15:16
called bogosort basically bogosort just randomizes the order of the list
4:15:21
repeatedly until it's sorted here's a python code file where we're going to implement bogosort
4:15:28
it's not important to understand this code here at the top although we'll have info on it in the teachers notes if you
4:15:33
really want it all you need to know is that it takes the name of a file that we pass on the command line loads it and
4:15:39
returns a python list which is just like an array in other languages containing all the numbers that it read from the
4:15:45
file let me have the program print out the list of numbers it loads so you can see it we'll call the print method and we'll
4:15:53
pass it the list of numbers save that let's run it real quick
4:15:58
with python bogosort.py oh whoops and we need to
4:16:05
provide it the name of the file here on the command line that we're going to load so it's in
4:16:10
the numbers directory a slash separates the directory name from the file name
4:16:16
five dot text and there's our list of numbers that was loaded from the file
4:16:21
okay let me delete that print statement and then we'll move on bogo sort just randomly rearranges the
4:16:28
list of values over and over so the first thing we're going to need is a function to detect when the list is
4:16:34
sorted we'll write an is sorted function that takes a list of values as a parameter
4:16:42
it'll return true if the list passed in is sorted or false if it isn't
4:16:47
we'll loop through the numeric index of each value in the list from 0 to 1 less than the length of the list like many
4:16:54
languages python list indexes begin at 0 so a list with a length of 5 has indexes
4:17:00
going from 0 through 4. if the list is sorted then every value in it will be less than the one that
4:17:06
comes after it so we test to see whether the current item is greater than the one that follows it
4:17:12
if it is it means the whole list is not sorted so we can return false if we get down here it means the loop
4:17:19
completed without finding any unsorted values python uses white space to mark code blocks so unindenting the code like
4:17:26
this marks the end of the loop since all the values are sorted we can return true
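condensed from those steps, the is sorted function looks like this:

    def is_sorted(values):
        # compare each value to the one that follows it
        for index in range(len(values) - 1):
            if values[index] > values[index + 1]:
                return False   # found a pair out of order
        # the loop completed without finding any unsorted values
        return True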
4:17:32
now we need to write the function that will actually do the so-called sorting the bogosort function will also take the
4:17:38
list of values it's working with as a parameter we'll call our is sorted function to
4:17:44
test whether the list is sorted we'll keep looping until is sorted returns true
4:17:49
python has a ready-made function that randomizes the order of elements in the list since the list isn't sorted we'll call
4:17:56
that function here and since this is inside the loop it'll be randomized over and over until our is
4:18:02
sorted function returns true if the loop exits it means is sorted returned true and the list is sorted so
4:18:09
we can now return the sorted list finally we need to call our bogosort function pass it the list we loaded from
4:18:16
the file and print the sorted list it returns okay let's save this and try running it
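the ready-made randomizing function referred to here is almost certainly random.shuffle, which reorders a list in place, so the finished logic looks roughly like this (the sample list is a hypothetical stand-in for the numbers loaded from the file):

    from random import shuffle

    def bogosort(values):
        # keep randomizing the order until is_sorted, defined above,
        # reports that the list is sorted
        while not is_sorted(values):
            shuffle(values)
        return values

    numbers = [4, 6, 3, 2, 9]   # hypothetical stand-in for 5.txt
    print(bogosort(numbers))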
4:18:22
we do so with python the name of the script bogosort.py and the name of the file we're going to
4:18:29
run it on numbers/5.txt
4:18:35
it looks like it's sorting our list successfully but how efficient is this let's add some
4:18:41
code to track the number of times it attempts to sort the list up here at the top of the bogosort
4:18:46
function we'll add a variable to track the number of attempts it's made we'll name it attempts and we'll set its
4:18:51
initial value to zero since we haven't made any attempts yet with each pass through the loop we'll
4:18:58
print the current number of attempts and then here at the end of the loop after attempting to shuffle the values
4:19:04
we'll add one to the count of attempts
4:19:10
let's save this and let's try running it again a couple times in the console i can just press the up
4:19:17
arrow to bring up the previous command and re-run it so it looks like this first run to sort
4:19:22
this five element list took 363 attempts let's try it again
4:19:27
this time it only took 91 attempts we're simply randomizing the list with each attempt so each
4:19:34
run of the program takes a random number of attempts now let's try this same program with a
4:19:40
larger number of items python bogosort.py numbers
4:19:47
i have a list of eight items set up here in this other file
4:19:53
this time it takes 11 000 attempts only 487 this time
4:20:02
and this time thirteen thousand you can see that the trend is increasing steadily
4:20:07
the problem with bogosort is that it doesn't make any progress toward a solution with each pass
4:20:13
it could generate a list where just one value is out of order but then on the next attempt it could generate a list
4:20:18
where all the elements are out of order again stumbling on a solution is literally a
4:20:23
matter of luck and for lists with more than a few items it might never happen
4:20:28
up next we'll look at selection sort it's a sorting algorithm that's still slow but it's better than bogosort
4:20:36
previously we showed you bogo sort a terrible sorting algorithm that basically randomizes the order of a list
4:20:42
and then checks to see if it happens to be sorted the problem with bogosort is that it
4:20:48
doesn't get any closer to a solution with each operation and so with lists that have more than a few items it'll
4:20:53
probably never finish sorting them now we're going to look at an algorithm named selection sort it's still slow but
4:21:01
at least each pass through the list brings it a little closer to completion our implementation of selection sort is
4:21:08
going to use two arrays an unsorted array and a sorted one some versions move values around within just one array
4:21:15
but we're using two arrays to keep the code simpler the sorted list starts out empty but we'll be moving values from
4:21:22
the unsorted list to the sorted list one at a time with each pass we'll look through each
4:21:27
of the values in the unsorted array find the smallest one and move that to the end of the sorted array
4:21:33
we'll start with the first value in the unsorted array and say that's the minimum or smallest value we've seen so
4:21:39
far then we'll look at the next value and see if that's smaller than the current minimum if it is we'll mark that as the
4:21:45
new minimum then we'll move to the next value and compare that to the minimum again if
4:21:50
it's smaller that becomes the new minimum we continue that way until we reach the end of the list
4:21:56
at that point we know whatever value we have marked as the minimum is the smallest value in the whole list
4:22:02
now here's the part that makes selection sort better than bogo sort we then move that minimum value from the unsorted
4:22:09
list to the end of the sorted list the minimum value isn't part of the unsorted list anymore so we don't have
4:22:15
to waste time looking at it anymore all our remaining comparisons will be on the remaining values in the unsorted list
4:22:23
then we start the process over at this point our list consists of the numbers 8 5 4 and 7. our first minimum is 8.
4:22:31
we start by comparing the minimum to five five is smaller than eight so five becomes the new minimum then we compare
4:22:38
five to four and four becomes the new minimum seven is not smaller than four though so four remains the minimum four
4:22:45
gets moved to the end of the sorted array becoming its second element the process repeats again eight is the
4:22:51
first minimum but five is smaller so that becomes the minimum seven is larger so five stays as the minimum and five is
4:22:57
what gets moved over to the sorted array and so on until there are no more items left in the unsorted array and all
4:23:03
we have left is the sorted array so that's how selection sort works in
4:23:09
general now let's do an actual implementation of it this code here at the top is the same as
4:23:14
we saw in the bogosort example it just loads a python list of numbers from a file
4:23:20
let's implement the function that will do our selection sort we're going to pass in our python list containing all
4:23:26
the unsorted numbers we'll create an empty list that will hold all our sorted values
4:23:33
we'll loop once for each value in the list we call a function named index of min
4:23:39
which we're going to write in just a minute that finds the minimum value in the unsorted list and returns its index
4:23:46
then we call the pop method on the list and pass it the index of the minimum value pop will remove that item from the
4:23:52
list and return it we then add that value to the end of the sorted list going up a level of indentation signals
4:24:00
to python that we're ending the loop after the loop finishes we return the sorted list
4:24:06
now we need to write the function that picks out the minimum value we pass in the list we're going to search
4:24:13
we mark the first value in the list as the minimum it may or may not be the actual minimum but it's the smallest
4:24:19
we've seen on this pass through the list it's also the only value we've seen on this pass through the list so far
4:24:25
now we loop through the remaining values in the list after the first we test whether the value we're
4:24:32
currently looking at is less than the previously recorded minimum if it is then we set the current
4:24:39
index as the new index of the minimum value after we've looped through all the values we return the index of the
4:24:45
smallest value we found lastly we need to actually run our selection sort method and print the
4:24:51
sorted list it returns let's save this and now let's try running it we run the python command and
4:24:58
pass it the name of our script selection_sort.py in the numbers directory i've saved
4:25:04
several data files filled with random numbers one on each line five dot text has five lines eight dot text has eight
4:25:11
lines and to help us really measure the speed of our algorithms ten thousand dot text has ten thousand lines i've even
4:25:17
created a file with a million numbers our script takes the path of a file to load as an argument so i'll give it the
4:25:23
path of our file with five numbers numbers slash five dot text
4:25:29
the script runs reads the numbers in the file into a list calls our selection sort method with that list and then
4:25:35
prints the sorted list let me add a couple print statements within the selection sort function so
4:25:41
you can watch the sort happening don't worry about figuring out the python formatting string that i use it's
4:25:46
just there to keep the two lists neatly aligned i'll add the first print statement before the loop runs at all
4:25:57
i'll have it print out the unsorted list and the sorted list i'll add an identical print statement
4:26:03
within the loop so we can watch values moving from the unsorted list to the sorted list
4:26:09
let's save this and we'll try running the same command
4:26:15
again the output looks like this you can see the unsorted list on the left and
4:26:20
the sorted list on the right initially the sorted list is empty on the first pass it selects the lowest
4:26:26
number 1 and moves it to the sorted list then it moves the next lowest number over four
4:26:33
this repeats until all the numbers have been moved to the sorted list i have another file with eight different
4:26:38
numbers in it let's try our program with that python selection sort dot pi numbers
4:26:45
8.text you can see the same process at work
4:26:51
here notice that this file had some duplicate values too that's okay though because the index of min function only
4:26:58
updates the minimum index if the current value is less than the previous minimum if they're equal it just keeps the first
4:27:04
minimum value it found and waits to move the duplicate value over until the next pass through the list
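pulling this video's pieces together, the two functions look roughly like this in python; i'm writing the helper as index_of_min, though the exact name on screen may differ:

    def selection_sort(values):
        sorted_values = []
        # loop once for each value in the list
        for _ in range(len(values)):
            # find the smallest remaining value, remove it from the
            # unsorted list, and append it to the sorted list
            index_to_move = index_of_min(values)
            sorted_values.append(values.pop(index_to_move))
        return sorted_values

    def index_of_min(values):
        min_index = 0
        for i in range(1, len(values)):
            # strictly less than, so duplicates keep the first minimum found
            if values[i] < values[min_index]:
                min_index = i
        return min_index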
4:27:11
so now we know that the selection sort algorithm works but the data sets we've been giving it to sort are tiny in the real
4:27:18
world algorithms need to work with data sets of tens of thousands or even millions of items and do it fast i have
4:27:26
another file with ten thousand random numbers in it
4:27:33
let's see if selection sort can handle that if i run this as it is now though it'll print out a lot of debug info as it
4:27:40
sorts the list so first i'm going to go into the program and remove the two print statements in the selection sort
4:27:46
function now let's run the program again on the
4:27:53
ten thousand dot text file and see how long it takes python selection sort dot pi
4:27:59
numbers ten thousand dot text one one-thousand two one-thousand three
4:28:06
one-thousand four one-thousand five one-thousand six one-thousand seven one-thousand eight one-thousand nine one-thousand ten
4:28:12
one-thousand eleven one-thousand twelve one-thousand thirteen one-thousand and it prints out all ten thousand of those numbers
4:28:19
neatly sorted it took a little bit though how long well counting the time off vocally isn't very precise and other
4:28:26
programs running on the system can skew the amount of time your program takes to complete let me show you a unix command that's
4:28:33
available here in workspaces which can help you type time followed by a space
4:28:38
and then the command you want to run so this command by itself will print the
4:28:43
contents of our 5.txt file cat as in concatenate numbers 5.text
4:28:50
and this command will do the same thing but it'll also keep track of how long it takes the cat program to complete and
4:28:56
report the result time cat numbers five dot text
4:29:03
the real row in the results is the actual amount of time from when the program started running to when it
4:29:09
completed we can see it finished in a fraction of a second but as we said other programs running on the system can
4:29:15
take cpu resources in which case your program will seem slower than it is so we generally want to ignore the real
4:29:22
result the user result is the amount of time the cpu actually spent running the
4:29:27
program code so this is the total amount of time the code inside the cat program took to run
4:29:33
the sys result is the amount of time the cpu spent running linux kernel calls that your code made the linux kernel is
4:29:41
responsible for things like network communications and reading files so loading the 5.txt file is probably
4:29:47
included in this result in evaluating code's performance we're generally going to want to add together
4:29:53
the user and sys results but cat is a very simple program let's try running
4:29:58
the time command on our code and see if we get a more interesting result time python
4:30:05
selection sort dot pi numbers ten thousand dot text
4:30:13
this takes much longer to complete nearly 12 seconds according to the real time measurement but as we said the real
4:30:20
result is often skewed so let's disregard that if we add the user and sys runtimes
4:30:26
together we get about 6 seconds the time for the program to complete
4:30:32
will vary a little bit each time you run it but if it's doing the same operations it usually won't change more than a
4:30:37
fraction of a second if i run our selection sort script on the same file you can see it completes in roughly the
4:30:43
same time now let's try it on another file with 1 million numbers time python selection
4:30:50
sort dot pi numbers 1 million dot text
4:30:56
how long does this one take i don't even know while designing this course i tried running this command and my workspace
4:31:02
connection timed out before it completed so we'll just say that selection sort takes a very very long time to sort a
4:31:09
million numbers if we're going to sort a list that big we're going to need a faster algorithm
4:31:14
we'll look into alternative sorting algorithms shortly the next two sorting algorithms we look
4:31:20
at will rely on recursion which is the ability of a function to call itself so
4:31:25
before we move on we need to show you how recursion works recursive functions can be very tricky
4:31:31
to understand imagine a row of dominoes stood on end where one domino falling over causes the next domino to fall over
4:31:38
which causes the next domino to fall over causing a chain reaction it's kind of like that
4:31:44
let's suppose we need to write a function that adds together all the numbers in an array or in the case of python a list
4:31:51
normally we'd probably use a loop for this sort of operation the function takes a list of the numbers
4:31:57
we want to add the total starts at zero we loop over every number contained in
4:32:03
the list and we add the current number to the total once we're done looping we return the
4:32:09
accumulated total if we call this sum function with a list of numbers it'll return the total when
4:32:16
we run this program it'll print out that return value 19. let's try it real quick
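here's that loop-based version in full; the list 1 2 7 9 is inferred from the 19 result and from the debug output later in this video, and like the video this shadows python's built-in sum:

    def sum(numbers):
        total = 0
        for number in numbers:
            total += number   # add each number to the running total
        return total

    print(sum([1, 2, 7, 9]))  # prints 19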
4:32:21
python recursion.py oh whoops mustn't forget to save my work here
4:32:28
and run it and we see the result 19. to demonstrate how recursion works let's
4:32:35
revise the sum function to use recursion instead note that recursion is not the most efficient way to add a list of
4:32:41
numbers together but this is a good problem to use to demonstrate recursion because it's so simple one thing before
4:32:48
i show you the recursive version though this example is going to use the python slice syntax so i need to take a moment
4:32:54
to explain that for those not familiar with it a slice is a way to get a series of values from a list
4:33:01
let's load up the python repl or read evaluate print loop so i can demonstrate
4:33:07
we'll start by creating a list of numbers to work with numbers equals a list
4:33:13
with 0 1 2 3 and 4 containing those numbers
4:33:18
like arrays in most other languages python list indexes start at 0 so numbers
4:33:24
1 will actually get the second item from the list with slice notation i can actually get
4:33:30
several items back it looks just like accessing an individual index of a list
4:33:37
but then i type a colon followed by the list index that i want up to but not including
4:33:43
so numbers 1 colon 4 would get us the second up to but not including the fifth
4:33:49
items from the list that is it'll get us the second through the fourth items
4:33:54
now i know what you're thinking and you're right that up to but not including rule is a little counterintuitive but you can just forget
4:34:01
all about it for now because we won't be using a second index with any of our python slice operations in this course
4:34:08
here's what we will be using when you leave the second index off of a python slice it gives you the items from the
4:34:14
first index up through the end of the list wherever that is so numbers 1 colon
4:34:19
with no index following it will give us items from the second index up through the end of the list
4:34:25
numbers 2 colon will give us items from the third index up to the end of the list
4:34:30
you can also leave the first index off to get everything from the beginning of the list numbers
4:34:36
colon 3 will get us everything from the beginning of the list up to but not including the third index
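the whole demonstration in the repl looks like this:

    >>> numbers = [0, 1, 2, 3, 4]
    >>> numbers[1]      # plain indexing gets the second item
    1
    >>> numbers[1:4]    # second item up to but not including index 4
    [1, 2, 3]
    >>> numbers[1:]     # second item through the end of the list
    [1, 2, 3, 4]
    >>> numbers[2:]     # third item through the end of the list
    [2, 3, 4]
    >>> numbers[:3]     # beginning up to but not including index 3
    [0, 1, 2]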
4:34:43
it's also worth noting that if you take a list with only one item and you try to get everything from the non-existent
4:34:49
second item onwards the result will be an empty list
4:34:54
so if i create a list with just one item in it and i try
4:35:00
to access from the second element onwards the second element doesn't exist
4:35:06
so the result will be an empty list don't worry too much about remembering python slice syntax it's not an
4:35:13
essential part of sorting algorithms or recursion i'm only explaining it to help you read the code you're about to see
4:35:20
so i'm going to exit the python repl now that we've covered slices we can convert our sum function to a recursive
4:35:27
function it'll take the list of numbers to add just like before
4:35:32
now here's the recursive part we'll have the sum function call itself we use slice notation to pass the entire list
4:35:39
of numbers except the first one then we add the first number in the list to the result of the recursive function
4:35:45
call and return the result so if we call sum with four numbers first it'll call itself with the
4:35:52
remaining three numbers that call to sum will then call itself with the remaining two numbers and so on
4:35:58
but if we save this and try to run it python recursion.py
4:36:07
well first we get a syntax error it looks like i accidentally indented something i shouldn't have so let me go
4:36:12
fix that real quick there we go that suggested to python
4:36:18
that there was a loop or something there when there wasn't so let's go back to the terminal and try running this again
4:36:24
there we go now we're getting the error i was expecting recursion error maximum recursion depth exceeded
4:36:31
this happens because some gets into an infinite loop it keeps calling itself over and over the reason is that when we
4:36:38
get down to a list of just one element and we take a slice from the non-existent second element to the end
4:36:43
the result is an empty list that empty list gets passed to the recursive call to sum which passes an empty list in its
4:36:50
recursive call to sum and so on until the python interpreter detects too many recursive calls and shuts the program
4:36:57
down what we need is to add a base case to this recursive function a condition where the recursion stops this will keep
4:37:04
it from getting into an infinite loop with the sum function the base case is when there are no elements left in the
4:37:10
list in that case there is nothing left to add and the recursion can stop
4:37:15
a base case is the alternative to a recursive case a condition where recursion should occur for the sum
4:37:21
function the recursive case is when there are still elements in the list to add together
4:37:26
let's add a base case at the top of the function python treats a list that contains one
4:37:33
or more values as a true value and it treats a list containing no values as a false value
4:37:39
so we'll add an if statement that says if there are no numbers in the list we should return a sum of zero that way the
4:37:46
function will exit immediately without making any further recursive calls to itself we'll leave the code for the recursive
4:37:52
case unchanged if there are still numbers in the list the function will call itself with any
4:37:57
numbers after the first then add the return value to the first number in the list let's save this and try running it again
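and the finished recursive version, with the base case on top:

    def sum(numbers):
        # base case: an empty list is falsy, so return 0 and stop recursing
        if not numbers:
            return 0
        # recursive case: the first number plus the sum of the rest
        return numbers[0] + sum(numbers[1:])

    print(sum([1, 2, 7, 9]))  # prints 19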
4:38:05
python recursion dot py and it outputs the sum of the numbers in the
4:38:11
list 19 but it's still not really clear how this worked let's add a couple print
4:38:16
statements that will show us what it's doing we'll show the recursive call to sum and
4:38:21
what it's being called with we'll also add a call to print right
4:38:28
before we return showing which of the calls the sum is returning and what it's returning
4:38:38
let me save this and resize the console a bit
4:38:43
and let's try running it again python recursion.py
4:38:49
since the print calls are inside the sum function the first call to sum with 1 2 7 9
4:38:54
isn't shown only the recursive calls are this first call to sum ignores the first
4:38:59
item in the list 1 and calls itself recursively it passes the remaining items from the list 2 7 and 9.
4:39:07
that call to sum again ignores the first item in the list it receives 2 and again calls itself recursively it passes the
4:39:13
remaining items in the list 7 and 9. that call ignores the 7 and calls itself
4:39:19
with a 9 and the last call shown here ignores the 9 and calls itself with an empty list
4:39:26
at this point none of our recursive calls to sum have returned yet each of them is waiting on the recursive call it
4:39:32
made to sum to complete python and other programming languages use something called a call stack to
4:39:38
keep track of this series of function calls each function call is added to the stack along with the place in the code
4:39:44
that it needs to return when it completes but now the empty list triggers the base case causing the recursion to end and
4:39:51
the sum function to return zero that zero value is returned to its caller the caller adds the zero to the
4:39:58
first and only value in its list nine the result is nine that nine value gets returned to the
4:40:04
caller which adds it to the first value in the list it received seven the result is sixteen
4:40:12
that sixteen value is returned to the caller which adds it to the first value in the list it received two the result
4:40:18
is 18. that 18 value is returned to the caller
4:40:23
which adds it to the first value in the list it received one the result is 19.
4:40:30
that 19 value is returned to the caller which is not the sum function recursively calling itself but our main
4:40:35
program this is our final result which gets printed it's the same result we got from the
4:40:41
loop-based version of our program the end we don't want the print statements in
4:40:46
our final version of the program so let me just delete those real quick and there you have it a very simple
4:40:52
recursive function well the function is simple but as you can see the flow of control is very complex don't worry if
4:40:59
you didn't understand every detail here because we won't be using this particular example again
4:41:05
there are two fundamental mechanisms you need to remember a recursive function needs a recursive case that causes it to
4:41:11
call itself and it also needs to eventually reach a base case that causes the recursion to
4:41:17
stop you've seen bogo sort which doesn't make any progress towards sorting a list with
4:41:22
each pass either it's entirely sorted or it isn't you've seen selection sort which moves
4:41:29
one value over to a sorted list with each pass so that it has fewer items to compare each time
4:41:35
now let's look at an algorithm that speeds up the process further by further reducing the number of comparisons it
4:41:40
makes it's called quick sort here's some python code where we'll implement quick sort again you can
4:41:47
ignore these lines at the top we're just using them to load a file full of numbers into a list
4:41:52
the quick sort algorithm relies on recursion to implement it we'll write a recursive function we'll accept the list
4:41:59
of numbers to sort as a parameter quicksort is recursive because it keeps calling itself with smaller and smaller
4:42:06
subsets of the list you're trying to sort we're going to need a base case where the recursion stops so it doesn't
4:42:12
enter an infinite loop lists that are empty don't need to be sorted and lists with just one element
4:42:18
don't need to be sorted either in both cases there's nothing to flip around so we'll make that our base case if there
4:42:25
are zero or one elements in the list passed to the quick sort function we'll return the unaltered list to the caller
4:42:32
lastly we need to call our quick sort function with our list of numbers and print the list it returns
4:42:42
that takes care of our base case now we need a recursive case we're going to rely on a technique
4:42:48
that's common in algorithm design called divide and conquer basically we're going to take our problem and split it into
4:42:54
smaller and smaller problems until they're easy to solve in this case that means taking our list
4:43:00
and splitting it into smaller lists viewers a suggestion the process i'm about to describe is complex there's
4:43:07
just no way around it if you're having trouble following along remember the video playback controls feel free to
4:43:13
slow the playback down rewind or pause the video as needed after you watch this the first time you may also find it
4:43:20
helpful to rewind and make your own diagram of the process as we go okay ready here goes
4:43:27
suppose we load the numbers from our 8.txt file into a list how do we divide
4:43:32
it it would probably be smart to have our quicksort function divide the list in a way that brings it closer to being
4:43:39
sorted let's pick an item from the list we'll just pick the first item for now four
4:43:45
we'll call this value we've picked the pivot like the center of a seesaw on a playground
4:43:50
we'll break the list into two sublists the first sub-list will contain all the items in the original list that are
4:43:55
smaller than the pivot the second sub-list will contain all the items in the original list that are greater than
4:44:00
the pivot the sub list of values less than and greater than the pivot aren't sorted
4:44:06
but what if they were you could just join the sub lists and the pivot all together into one list and the whole
4:44:12
thing would be sorted so how do we sort the sublist we call our quick sort function recursively on
4:44:18
them this may seem like magic but it's not it's the divide and conquer algorithm design technique at work
4:44:25
if our quick sort function works on the big list then it will work on the smaller list too
4:44:31
for our first sub list we take the first item it's the pivot again that's three
4:44:37
we break the sub list into two sub lists one with everything less than the pivot and one with everything greater than the
4:44:42
pivot notice that there's a value equal to the pivot that gets put into the less than sub-list our finished quicksort function
4:44:50
is actually going to put everything that's less than or equal to the pivot in the first sub-list
4:44:55
but i don't want to say less than or equal to over and over so i'm just referring to it as the less than pivot
4:45:01
sub-list also notice that there are no values greater than the pivot that's okay when
4:45:06
we join the sub-lists back together that just means nothing will be in the return list after the pivot
4:45:12
we still have one sub list that's more than one element long so we call our quick sort function on that too you and
4:45:18
i can see that it's already sorted but the computer doesn't know that so it'll call it anyway just in case
4:45:24
it picks the first element 2 as a pivot there are no elements less than the pivot and only one element greater than
4:45:30
the pivot that's it for the recursive case we've finally hit the base case for our quick
4:45:35
sort function it'll be called on both the empty list of elements less than the pivot and the one item list of elements
4:45:41
greater than the pivot but both of these lists will be returned as they are because there's nothing to sort
4:45:48
so now at the level of the call stack above this the return sorted lists are used in place of the unsorted sub-list
4:45:54
that's less than the pivot and the unsorted sub-list that's greater than the pivot these are joined together into one
4:46:00
sorted list remember that any empty lists get discarded then at the level of the call stack
4:46:06
above that the return sorted lists are used in place of the unsorted sub-lists there again they were already sorted but
4:46:12
the quick sort method was called on them anyway just in case the sub-lists are joined together into
4:46:18
one sorted list at the level of the call stack above that the return sorted list
4:46:23
is used in place of the unsorted sub-list that's less than the pivot so now everything that's less than or equal
4:46:28
to the pivot is sorted now we call quick sort on the unsorted sub-list that's greater than the pivot
4:46:34
and the process repeats for that sub-list we pick the first element six is the
4:46:39
pivot we split the sub-list into sub-lists of elements that are less than and greater than this pivot and we
4:46:45
recursively call the quicksort function until those sub-lists are sorted eventually a sorted sub-list is returned
4:46:52
to our first quick sort function call we combine the sub-list that's less than or equal to the pivot the pivot itself
4:46:59
and the sub-list that's greater than the pivot into a single list and because we recursively sorted the sub lists the
4:47:05
whole list is sorted so that's how the quick sort function is going to work in the next video we'll
4:47:11
show you the actual code quicksort works by picking a pivot value then splitting the full list into two
4:47:17
sub-lists the first sub-list has all the values less than or equal to the pivot and the second sub-list has all the
4:47:23
values greater than the pivot the quick sort function recursively calls itself to sort these sub-lists and then to sort
4:47:30
the sub-lists of those sub-lists until the full list is sorted now it's time to actually implement this
4:47:36
in code we already have the base case written any list passed in that consists of 0 or
4:47:42
1 values will be returned as is because there's nothing to sort now we need to create a list that will
4:47:48
hold all the values less than the pivot that list will be empty at first we do the same for values greater than the
4:47:55
pivot next we need to choose the pivot value for now we just grab the first item from
4:48:00
the list then we loop through all the items in the list following the pivot
4:48:06
we check to see whether the current value is less than or equal to the pivot if it is we copy it to the sub-list of
4:48:13
values less than the pivot otherwise the current value must be
4:48:18
greater than the pivot so we copy it to the other list
4:48:26
this last line is where the recursive magic happens we call quick sort recursively on the sub-list that's less
4:48:33
than the pivot we do the same for the sub-list that's greater than the pivot those two calls will return sorted lists
4:48:40
so we combine the sorted values less than the pivot the pivot itself and the sorted values greater than the pivot
4:48:46
that gives us a complete sorted list which we return this took a lot of prep work are you
4:48:52
ready let's try running it python quick sort
4:48:58
dot pi numbers 8.text it outputs our sorted list
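condensed, the function we just built looks like this; the variable names are my guesses at what's on screen, and the sample list uses the eight numbers we'll see again in the merge sort video:

    def quicksort(values):
        # base case: zero or one values means there's nothing to sort
        if len(values) <= 1:
            return values
        less_than_pivot = []
        greater_than_pivot = []
        pivot = values[0]          # just grab the first item for now
        for value in values[1:]:   # every item after the pivot
            if value <= pivot:
                less_than_pivot.append(value)
            else:
                greater_than_pivot.append(value)
        # recursively sort both sub-lists, then join everything together
        return quicksort(less_than_pivot) + [pivot] + quicksort(greater_than_pivot)

    print(quicksort([4, 6, 3, 2, 9, 7, 3, 5]))  # [2, 3, 3, 4, 5, 6, 7, 9]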
4:49:04
i don't know about you but this whole thing still seems a little too magical to me let's add a couple print
4:49:09
statements to the program so we can see what it's doing first we'll add a print statement right
4:49:14
before the first call to the quick sort function so we can see the unsorted list we'll also add a print right within the
4:49:21
quick sort function right before the recursive calls again this string formatting code is just to keep the info
4:49:27
aligned in columns
4:49:38
let's try running this again and now you can see our new debug output
4:49:43
each time quicksort goes to call itself recursively it prints out the pivot as well as the sub list of items less than
4:49:49
or equal to the pivot if any and the sub list of items greater than the pivot if any you can see that first it sorts the
4:49:56
sub list of items less than the pivot at the top level it goes through a couple levels of
4:50:02
recursion to do that there are actually additional levels of recursion but they're from calls to
4:50:08
quick sort with a list of 0 or 1 elements and those calls return before the print statement is reached
4:50:14
then it starts sorting the second sub list from the top level with items greater than the original pivot
4:50:20
you can see a couple levels of recursion for that sort as well finally when both sublists are
4:50:26
recursively sorted the original call to the quicksort function returns and we get the sorted list back
4:50:32
so we know that it works the next question is how well does it work let's go back to our file of ten thousand
4:50:38
numbers and see if it can sort those first though i'm going to remove our two debug calls to print so it doesn't
4:50:44
produce unreadable output a quick note if you try running this on
4:50:49
a file with a lot of repeated values it's possible you'll get a runtime error maximum recursion depth exceeded
4:50:56
if you do see the teacher's notes for a possible solution now let's try running our quick sort
4:51:02
program against the ten thousand dot text file python quick sort dot pi
4:51:08
numbers 10 000 dot text there we go and it seems pretty fast but
4:51:15
how fast exactly let's run it with the time command to see how long it takes time python
4:51:22
quick sort dot pi numbers 10 000.text
4:51:28
remember we need to ignore the real result and add the user and sys results
4:51:33
it took less than a second of cpu time to sort 10 000 numbers with quicksort
4:51:39
remember that selection sort took about 13 seconds that's a pretty substantial improvement
4:51:45
and with a million numbers selection sort took so long that it never even finished successfully let's see if
4:51:51
quicksort performs any better time python quick sort dot pi
4:51:58
numbers 1 million dot text
4:52:08
not only did quicksort sort a million numbers successfully it only took about 11 seconds of cpu time
4:52:15
quicksort is clearly much much faster than selection sort how much faster that's something we'll discuss in a
4:52:21
later video what we've shown you here is just one way to implement quicksort
4:52:26
although the basic algorithm is always the same the details can vary like how you pick the pivot see the teacher's
4:52:33
notes for more details let's review another sorting algorithm merge sort so that we can compare it
4:52:39
with quick sort merge sort is already covered elsewhere on the site so we won't go into as much detail about it
4:52:46
but we'll have more info in the teacher's notes if you want it both quicksort and merge sort are
4:52:51
recursive the difference between them is in the sorting mechanism itself whereas quicksort splits a list into two
4:52:58
sub-lists that are less than or greater than a pivot value merge sort simply splits the list in
4:53:04
half recursively and then sorts the halves as it merges them back together that's why it's called merge sort
4:53:11
you may recognize this code at the top by now it just loads a file full of numbers into a list
4:53:17
let's define a recursive merge sort function as usual it'll take the list or
4:53:22
sub-list that we want it to sort our base case is the same as with quicksort if the list has zero or one
4:53:28
values there's nothing to sort so we return it as is if we didn't return it means we're in
4:53:34
the recursive case so first we need to split the list in half we need to know the index we should split on so we get
4:53:41
the length of the list and divide it by two so for example if there are eight items in the list we'll want an index of
4:53:47
four but what if there were an odd number of items in the list like seven we can't have an index of 3.5 so we'll need to
4:53:54
round down in that case since we're working in python currently we can take advantage of a special python operator
4:54:01
that divides and rounds the result down the floor division operator it consists
4:54:06
of a double slash now we'll use the python slice syntax to get the left half of the list
4:54:13
we'll pass that list to a recursive call to the merge sort function
4:54:18
we'll also use slice syntax to get the right half of the list and pass that to merge sort as well
4:54:26
now we need to merge the two halves together and sort them as we do it we'll create a list to hold the sorted values
4:54:33
and now we get to the complicated part merging the two halves together and sorting them as we do it
4:54:39
we'll be moving from left to right through the left half of the list copying values over to the sorted values
4:54:44
list as we go this left index variable will help us keep track of our position
4:54:50
at the same time we'll also be moving from left to right through the right half of the list and copying values over
4:54:56
so we need a separate right index variable to track that position as well we'll keep looping until we've processed
4:55:02
all of the values in both halves of the list
4:55:13
we're looking to copy over the lowest values first so first we test whether the current value on the left side is
4:55:20
less than the value on the right side if the left side value is less that's
4:55:26
what we'll copy over to the sorted list
4:55:32
and then we'll move to the next value in the left half of the list otherwise the current value from the
4:55:38
right half must have been lower so we'll copy that value to the sorted list instead
4:55:49
and then we'll move to the next value in the right half of the list that ends the loop at this point one of
4:55:55
the two unsorted halves still has a value remaining and the other is empty we won't waste time checking which is
4:56:01
which we'll just copy the remainder of both lists over to the sorted list the one with the value left will add that
4:56:07
value and the empty one will add nothing all the numbers from both halves should now be copied to the sorted list so we
4:56:13
can return it finally we need to kick the whole process off we'll call the merge sort
4:56:18
function with the list of numbers we loaded and print the result
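assembled in one place, the function described above looks roughly like this (a sketch; names and small details may differ from the course's exact code, and the hard-coded list stands in for the file loading):

```python
def merge_sort(lst):
    # base case: zero or one values means there's nothing to sort
    if len(lst) <= 1:
        return lst

    # recursive case: split the list in half, rounding down
    midpoint = len(lst) // 2
    left = merge_sort(lst[:midpoint])
    right = merge_sort(lst[midpoint:])

    # merge the two sorted halves, copying the lowest values first
    sorted_values = []
    left_index = 0
    right_index = 0
    while left_index < len(left) and right_index < len(right):
        if left[left_index] < right[right_index]:
            sorted_values.append(left[left_index])
            left_index += 1
        else:
            sorted_values.append(right[right_index])
            right_index += 1

    # one half still has values remaining and the other is empty;
    # copy the remainder of both over without checking which is which
    sorted_values += left[left_index:]
    sorted_values += right[right_index:]
    return sorted_values

# kick the whole process off and print the result
numbers = [4, 6, 3, 2, 9, 7, 3, 5]  # stand-in for the loaded file
print(merge_sort(numbers))
```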
4:56:30
let's save this
4:56:36
and we'll try it out on our file with eight numbers python merge sort dot pi
4:56:42
numbers eight dot text and it prints out the sorted list
4:56:47
but again this seems pretty magical let's add some print statements to get some insight into what it's doing
4:56:55
first we'll print the unsorted list so we can refer to it we'll add a print statement right before we call the merge
4:57:01
sort function for the first time then we'll add another print statement
4:57:07
within the merge sort function right after the recursive calls this will show us the sorted left half and right half
4:57:13
that it's returning again don't worry about the fancy python formatting string it just keeps the values neatly aligned
4:57:20
let me resize my console clear the screen and then we'll try running this again
4:57:28
what we're seeing are the values being returned from the recursive merge sort function calls not the original calls to
4:57:34
merge sort so what you see here is after we reach the base case with a list that's only one item in length and the
4:57:40
recursive calls start returning the original list gets split into two unsorted halves four six three and two
4:57:48
and nine seven three and five the first half gets split in half again
4:57:53
four and six and three and two and each of those halves is halved again
4:57:59
into single element lists there's nothing to sort in the single element list so they're returned from
4:58:05
the merge sort function as is those single element lists get merged into two sub lists and sorted as they do
4:58:11
so the four and six sub-list looks the same after sorting as it did before sorting but the three and the two get
4:58:18
sorted as they're combined into a sub-list the new order is two three
4:58:23
the order is shifted again when those two sub-lists get combined back into a single list two three four six
4:58:30
then we recursively sort the right half of the original list nine seven three five
4:58:36
it gets split in half again nine seven and three five and each of those halves get broken into
4:58:43
single element lists there's nothing to sort there so the single element lists are returned as is
4:58:50
the first two are sorted as they're merged seven nine and so are the second three five
4:58:56
and then those two sub lists get sorted as they're combined into another sub list three five seven nine
4:59:04
and finally everything is sorted as it's merged back into the full sorted list two three three four five six seven nine
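laid out as a diagram, that whole recursion looks like this (values taken from the trace above):

```
[4, 6, 3, 2, 9, 7, 3, 5]              the original unsorted list
[4, 6, 3, 2]         [9, 7, 3, 5]     split in half
[4, 6]  [3, 2]       [9, 7]  [3, 5]   halved again
[4][6]  [3][2]       [9][7]  [3][5]   single-element base cases
[4, 6]  [2, 3]       [7, 9]  [3, 5]   sorted as they're merged
[2, 3, 4, 6]         [3, 5, 7, 9]     sub-lists combined
[2, 3, 3, 4, 5, 6, 7, 9]              the full sorted list
```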
4:59:13
that's how merge sort works on a list of eight numbers let's see if it works on a bigger list
4:59:19
first i'll remove the two print statements so we don't get an overwhelming amount of debug output
4:59:28
then i'll run it on a list of ten thousand items python merge sort dot pi numbers ten thousand dot
4:59:36
text not only did it work it was pretty fast but which is faster merge sort or quick
4:59:43
sort we'll look at that next i've removed the call to print that displays the sorted list at the end of
4:59:49
our selection sort quick sort and merge sort scripts that way it'll still run the sort but
4:59:55
the output won't get in the way of our comparing runtimes let's try running each of these scripts
5:00:01
and see how long it takes time python
5:00:07
selection sort we'll do that one first numbers 10 000 dot text
5:00:16
we combine the user and sys results and that gives us about six seconds
5:00:21
now let's try quick sort time python quick sort dot pi numbers
5:00:28
ten thousand dot text much faster less than a second and
5:00:34
finally time python merge sort dot pi numbers ten thousand
5:00:41
dot text a little longer but far less than a
5:00:46
second so even on a list with just 10 000 numbers selection sort takes many
5:00:52
times as long as quicksort and merge sort and remember i ran the selection sort
5:00:57
script on a file with a million numbers and it took so long that my workspace timed out before it completed
5:01:04
it looks like selection sort is out of the running as a viable sorting algorithm it may be easy to understand
5:01:10
and implement but it's just too slow to handle the huge data sets that are out in the real world
5:01:17
now let's try quicksort and merge sort on our file with a million numbers and see how they compare there time python
5:01:25
quicksort dot pi numbers million
5:01:30
dot text looks like it took about 11 seconds of
5:01:37
cpu time now let's try merge sort time python
5:01:42
merge sort dot pi numbers 1 million dot text
5:01:51
that took about 15 seconds of cpu time it looks like quicksort is marginally
5:01:56
faster than merge sort on this sample data we had to learn a lot of details for
5:02:01
each algorithm we've covered in this course developers who need to implement their own algorithms often need to
5:02:07
choose an algorithm for each and every problem they need to solve and they often need to discuss their decisions
5:02:12
with other developers can you imagine needing to describe all the algorithms in this same level of detail all the
5:02:19
time you'd spend all your time in meetings rather than programming that's why big o notation was created as
5:02:26
a way of quickly describing how an algorithm performs as the data set it's working on increases in size
5:02:32
big o notation lets you quickly compare several algorithms to choose the best one for your problem
5:02:38
the algorithms we've discussed in this course are very well known some job interviewers are going to expect you to
5:02:44
know their big o run times so let's look at them remember that the n in big o notation
5:02:50
refers to the number of elements you're operating on with selection sort you need to check each item in the list to
5:02:56
see if it's the lowest so you can move it over to the sorted list so that's n operations
5:03:02
suppose you're doing selection sort on a list of five items and n in this case would be five so that's five operations
5:03:09
before you can move an item to the sorted list but with selection sort you have to loop over the entire list for each item you
5:03:15
want to move there are five items in the list and you have to do five comparisons to move each one so it's more like 5
5:03:22
times 5 operations or if we replace 5 with n it's n times n or n squared
5:03:29
but wait you might say half of that 5 by 5 grid of operations is missing because we're testing one fewer item in the
5:03:35
unsorted list with each pass so isn't it more like one half times n times n
5:03:41
and this is true we're not doing a full n squared operations but remember in big o notation as the
5:03:48
value of n gets really big constants like one half become insignificant and so we discard them
5:03:55
the big o runtime of selection sort is widely recognized as being o n squared
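if you'd like to see that quadratic growth for yourself, here's a small sketch (not from the course) that counts the comparisons selection sort performs:

```python
def selection_sort_comparisons(lst):
    # count the comparisons selection sort makes while sorting a copy
    values = list(lst)
    comparisons = 0
    sorted_values = []
    while values:
        lowest_index = 0
        for i in range(1, len(values)):
            comparisons += 1
            if values[i] < values[lowest_index]:
                lowest_index = i
        sorted_values.append(values.pop(lowest_index))
    return comparisons

# n * (n - 1) / 2 comparisons -- roughly half of n squared,
# and that constant one half is what big o notation discards
print(selection_sort_comparisons(range(5)))    # 10
print(selection_sort_comparisons(range(100)))  # 4950
```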
5:04:01
quicksort requires one operation for each element of the list it's sorting it needs to select a pivot first and
5:04:07
then it needs to sort elements into lists that are less than or greater than the pivot so that's n operations to put that
5:04:14
another way if you have a list of eight items then n is eight so it will take eight operations to split the list
5:04:20
around the pivot but of course the list isn't sorted after splitting it around the pivot just once you have to repeat those eight
5:04:27
operations several times in the best case you'll pick a pivot that's right in the middle of the list so that you're
5:04:33
dividing the list exactly in half then you keep dividing the list in half until you have a list with a length of
5:04:39
one the number of times you need to divide n in half until you reach one is expressed
5:04:44
as log n so you need to repeat n sorting operations log n times that leaves us
5:04:51
with the best case run time for quick sort of o n log n
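as a quick sanity check on that log n figure, python's math module can count the halvings for us:

```python
import math

# how many times can you divide n in half before reaching 1?
print(math.log2(8))          # 3.0 -- a list of 8 splits in half 3 times
print(math.log2(1_000_000))  # ~19.93 -- even a million items halve only ~20 times
```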
5:04:56
but that's the best case what about the worst case well if you pick the wrong pivot you won't be dividing the list
5:05:03
exactly in half if you pick a really bad pivot the next recursive call to quicksort will only reduce the list
5:05:08
length by one since our quicksort function simply picks the first item to use as a pivot
5:05:14
we can make it pick the worst possible pivot repeatedly simply by giving it a list that's sorted in reverse order
5:05:20
if we pick the worst possible pivot every time we'll have to split the list once for every item it contains and then
5:05:27
do n sorting operations on it you already know another sorting algorithm that only manages to reduce
5:05:33
the list by one element with each pass selection sort selection sort has a runtime of o n
5:05:39
squared and in the worst case that's the run time for quicksort as well so which do we consider when trying to
5:05:45
decide whether to use quicksort the best case or the worst case well as long as your implementation
5:05:51
doesn't just pick the first item as a pivot which we did so we could demonstrate this issue
5:05:56
it turns out that on average quicksort performs closer to the best case many quicksort implementations
5:06:03
accomplish this simply by picking a pivot at random on each recursive loop here we are sorting our reverse sorted
5:06:09
data again but this time we pick pivots at random which reduces the number of recursive operations needed
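a sketch of that random-pivot variant (the three-way partition here is just one common way to handle duplicate values; the course's exact version may differ):

```python
import random

def quicksort(values):
    # base case: zero or one values means there's nothing to sort
    if len(values) <= 1:
        return values

    # pick the pivot at random instead of always taking the first item,
    # so a reverse-sorted list no longer triggers the worst case on
    # every recursive call
    pivot = random.choice(values)

    less_than_pivot = [v for v in values if v < pivot]
    equal_to_pivot = [v for v in values if v == pivot]
    greater_than_pivot = [v for v in values if v > pivot]

    return (quicksort(less_than_pivot)
            + equal_to_pivot
            + quicksort(greater_than_pivot))
```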
5:06:16
sure random pivots sometimes give you the best case and sometimes you'll randomly get the worst case but it all
5:06:21
averages out over multiple calls to the quick sort function now with merge sort there's no pivot to
5:06:27
pick your list of n items always gets divided in half log n times
5:06:32
that means merge sort always has a big o runtime of o n log n
5:06:38
contrast that with quicksort which only has a runtime of o n log n in the best case in the worst case quicksort's
5:06:44
runtime is o n squared and yet out in the real world quicksort is more commonly used than merge sort
5:06:51
now why is that if quicksort's big o runtime can sometimes be worse than merge sort's
5:06:56
this is one of those situations where big o notation doesn't tell you the whole story all big o can tell you is
5:07:02
the number of times an operation is performed it doesn't describe how long that operation takes
5:07:08
and the operation merge sort performs repeatedly takes longer than the operation quicksort performs repeatedly
5:07:15
big-o is a useful tool for quickly describing how the runtime of an algorithm increases as the data set it's
5:07:20
operating on gets really really big but you can't always choose between two algorithms based just on their big o
5:07:26
runtimes sometimes there's additional info you need to know about an algorithm to make a good decision
5:07:33
now that we can sort a list of items we're well on our way to being able to search a list efficiently as well we'll
5:07:38
look at how to do that in the next stage [Music]
5:07:46
now that we've covered sorting algorithms the groundwork has been laid to talk about searching algorithms
5:07:52
if you need to search through an unsorted list of items binary search isn't an option because you have no idea
5:07:58
which half of the list contains the item you're looking for your only real option is to start at the beginning and compare
5:08:05
each item in the list to your target value one at a time until you find the value you're looking for
5:08:10
this algorithm is called linear search or sequential search because the search proceeds in a straight line or sequence
5:08:18
even though linear search is inefficient searching for just one name will happen so fast that we won't be able to tell
5:08:24
anything useful about the algorithm's runtime so let's suppose we had a hundred different names and that we
5:08:29
needed to know where they appear in a list of unsorted names here's some code that demonstrates
5:08:35
as usual this code at the top isn't relevant to the search algorithm it's just like the code that loaded a list of
5:08:41
numbers from a file in the previous stage but this code calls a different function load strings that loads a list
5:08:47
of strings in if you want the load strings python code we'll have it for you in the teacher's
5:08:52
notes here's a separate hard-coded list containing the 100 names we're going to search for we'll loop through each name
5:08:59
in this list and pass it to our search function to get the index within the full list where it appears
5:09:05
now let's implement the search function compared to the sorting algorithms this is going to be short the index of item
5:09:12
function takes the python list you want to search through and a single target value you want to search for
5:09:18
now we need to loop over each item in the list the range function gives us a range of numbers from its first argument
5:09:24
up to but not including its second argument so if our list had a length of 5 this would loop over the indexes 0
5:09:31
through 4. we test whether the list item at the current index matches our target
5:09:37
if it does then we return the index of the current item this will exit the index of item function without looping
5:09:44
over the remaining items in the list if we reach the end of the loop without finding the target value that means it
5:09:50
wasn't in the list so instead of returning an index we return the special python value none which indicates the
5:09:56
absence of a value other languages have similar values like nil or null but if yours doesn't you
5:10:03
might have to return a value that would otherwise be impossible like an index of negative 1.
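put together, the function described above looks like this (a sketch; the example names are just placeholders):

```python
def index_of_item(collection, target):
    # range(len(...)) gives indexes from 0 up to, but not
    # including, the length of the list
    for i in range(len(collection)):
        if collection[i] == target:
            # found it -- exit without checking the remaining items
            return i
    # we reached the end of the loop without finding the target
    return None

print(index_of_item(["mary", "alonso", "stephen"], "alonso"))  # 1
print(index_of_item(["mary", "alonso", "stephen"], "zelda"))   # None
```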
5:10:08
now let's call our new search function we start by looping over the list of 100 values we're looking for we're using the
5:10:14
values themselves this time not their indexes within the list so there's no need to mess with python's range
5:10:20
function here's the actual call to the index of item function we pass it the full list of names that we loaded from
5:10:27
the file plus the name we want to search for within that list then we store the index it returns in a variable
5:10:33
and lastly we print the index we get back from the index of item function let's save this and go to our console
5:10:40
and see if it works python linear search dot pi names
5:10:48
unsorted dot text and it'll print out the list of indexes
5:10:53
for each name i actually set it up so that the last two items in the list of names we're going to search for corresponded to the
5:11:00
first and last name within the file so if we open up our unsorted.txt file
5:11:06
we'll see mary rosenberger is the first name and alonso viviano is the last name
5:11:13
and those are the last two values in our list of names we're searching for so it returned an index of zero for that
5:11:19
second to last name and you can see that name here on line one of the file the line numbering starts at one and the
5:11:26
python list indexes start at zero so that makes sense and for the last name it returned an
5:11:31
index of 109873 and you can see that name here on line
5:11:38
109 874 so we can see that it's returning the correct indexes but right now we're just searching for a
5:11:44
hundred different names in a list of one hundred thousand names in the real world we're going to be looking for many more
5:11:50
names than that within much bigger lists than that can we do this any faster yes
5:11:56
but we'll need to use the binary search algorithm and for that to work we need to sort our list of strings we'll do
5:12:02
that in the next video before we can use the binary search algorithm on our list of names we need
5:12:08
to sort it let's do that now we need to load our unsorted list of names from a file sort them and write the sorted names
5:12:14
back out to a new file again this code at the top just loads a file full of strings into a list
5:12:21
we'll use our quick sort method to sort the list of names its code is completely unchanged from when you saw it in the
5:12:27
previous stage we just call our quick sort function on the list of names loaded from the file
5:12:32
and save the list to a variable then we loop through each name in the sorted list
5:12:40
and we print that name that's all there is to it let's save this script and try running it
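for reference, the whole script assembled might look roughly like this (the load strings helper is an assumption on my part -- the course's own version is in the teacher's notes):

```python
import sys

def quicksort(values):
    # unchanged from the earlier stage: the first item is the pivot
    if len(values) <= 1:
        return values
    pivot = values[0]
    less_than_pivot = [v for v in values[1:] if v <= pivot]
    greater_than_pivot = [v for v in values[1:] if v > pivot]
    return quicksort(less_than_pivot) + [pivot] + quicksort(greater_than_pivot)

def load_strings(file_name):
    # assumed implementation: read one name per line into a list
    with open(file_name) as f:
        return [line.rstrip("\n") for line in f]

names = load_strings(sys.argv[1])
sorted_names = quicksort(names)
for name in sorted_names:
    print(name)
```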
5:12:47
python quicksort strings dot pi and we'll pass it the
5:12:54
names unsorted.text file let me resize the console window here a
5:12:59
little bit that prints the sorted list of names out
5:13:05
to the terminal but we need it in a file so we'll do what's called a redirect of the program's output we'll run the same
5:13:12
command as before but at the end we'll put a greater than sign followed by the path to a file that we want the program
5:13:18
output written to names sorted dot text
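spelled out, the command looks something like this (the exact file names and paths are assumptions based on the video's workspace):

```
python quicksort_strings.py names/unsorted.txt > names/sorted.txt
```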
5:13:26
redirecting works not only on linux based systems like workspaces but also on macs and even on windows machines you
5:13:33
just need to be careful because if you redirect to an existing file its contents will be overwritten without
5:13:38
asking you let me refresh the list of files in the sidebar
5:13:44
and you'll see that we now have a new sorted dot text file in the names directory it's the same number of lines as the
5:13:51
unsorted dot text file but all the names are sorted now now we can load this file of sorted
5:13:56
names into a list and we'll be able to use that list with the binary search algorithm we'll see how to do that next
5:14:03
now that we have our list of names sorted we can use the binary search algorithm on it let's see if we can use
5:14:09
it to speed up our search for the indexes of 100 names binary search keeps narrowing down the
5:14:14
list until it has the value it's looking for it's faster than linear search because it discards half the potential
5:14:20
matches each time our code here at the top of our binary search script is unchanged from the
5:14:26
previous scripts we just call the load strings function to load our 100 000 sorted names from a file
5:14:32
here we've hard coded the list of 100 names we're going to search for again it's identical to the list from the
5:14:37
linear search script except that i've again changed the last two names to correspond to the names on the first and
5:14:42
last lines of the file we'll be loading now let's write the function that will implement our binary search algorithm
5:14:49
like the linear search function before it'll take two arguments the first is the list we're going to search through
5:14:55
and the second is the target value we'll be searching for again the binary search function will return the index it found
5:15:01
the value at or the special value none if it wasn't found binary search is faster than a linear
5:15:07
search because it discards half the values it has to search through each time to do this it needs to keep track
5:15:13
of a range that it still needs to search through to start that range is going to include the full list
5:15:19
the first variable will track the lowest index in the range we're searching to start it's going to be 0 the first index
5:15:26
in the full list likewise the last variable will track the highest index in the range we're
5:15:32
searching to start we'll set it to the highest index in the full list if the first and last variables are
5:15:38
equal then it means the size of the search range has shrunk to zero and there is no match until that happens
5:15:44
though we'll keep looping to continue the search we want to divide the list of potential matches in half each time to
5:15:51
do that we need to check the value that's in the middle of the range we're searching in we add the indexes in the first and last
5:15:57
variables then divide by two to get their average we might get a fractional number which can't be used as a list
5:16:04
index so we also round down using python's double slash floor division operator
5:16:10
all this will give us the index of the list element that's the midpoint of the range we're searching we store that in
5:16:16
the midpoint variable whoops looks like my indentation got mixed up there let me fix that real
5:16:22
quick there we go now we test whether the list element at the midpoint matches the target value
5:16:32
if it does we return the midpoint index without looping any further our search is complete
5:16:37
otherwise if the midpoint element's value is less than the target value
5:16:45
then we know that our target value can't be at the midpoint or any index prior to that so we move the new start of our
5:16:51
search range to just after the old midpoint otherwise the midpoint element's value
5:16:56
must have been greater than the target value we know that our target value can't be at the midpoint or any index after that
5:17:03
so we move the new end of our search range to just before the old midpoint by unindenting here we mark the end of
5:17:10
the loop if the loop completes it means the search range shrank to nothing without our finding a match and that
5:17:16
means there's no matching value in the list so we return the special python value none to indicate this
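assembled, the binary search function described above might look like this (a sketch using an inclusive last index; the course's exact code may differ in small details):

```python
def index_of_item(collection, target):
    # track the range we still need to search: initially the whole
    # list, from the first index to the last
    first = 0
    last = len(collection) - 1

    # keep looping until the search range shrinks to nothing
    while first <= last:
        # midpoint of the range, rounded down with floor division
        midpoint = (first + last) // 2

        if collection[midpoint] == target:
            return midpoint        # found it
        elif collection[midpoint] < target:
            first = midpoint + 1   # discard the midpoint and everything before it
        else:
            last = midpoint - 1    # discard the midpoint and everything after it

    # the range shrank to nothing without a match
    return None
```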
5:17:23
lastly just as we did in our linear search script we need to search for each of the 100 names we loop over each name
5:17:30
in our hard-coded list and we call the binary search function with the sorted list of names we're
5:17:35
going to load from the file and the current name we're searching for we store the returned list index in the
5:17:41
index variable and finally we print that variable
5:17:47
let's save this and go to our console and try running it python
5:17:52
binary search dot pi and it's important to give it the name of the sorted file if it loads the
5:17:57
unsorted file the binary search won't work so names sorted dot text
5:18:05
again it prints out the list of indexes for each name i once again set it up so the last two
5:18:10
items in the list of names we're going to search for corresponded to the first and last name in the file
5:18:16
so it returned an index of zero for the second to last name
5:18:21
and you can see that name
5:18:27
here's the second to last name aaron augustine
5:18:32
you can see that name here on line one of the file and for the last name it returned an index of one zero nine eight seven three
5:18:40
and you can see that name here on line one zero nine eight seven four
5:18:49
let's check the third to last name for good measure it looks like an index of
5:18:54
97022 was printed for that name stephen daras
5:19:00
let's search for stephen daras within the file
5:19:05
and here it is on line 97023 remember that line numbers start on one
5:19:11
instead of zero so this actually matches up with the printed list index of 97022
5:19:17
it looks like our binary search script is working correctly let's try our linear search and binary
5:19:23
search scripts out with the time command and see how they compare i've commented out the lines that print the indexes of
5:19:29
matches in the two scripts that way they'll still call their respective search functions with the 100
5:19:36
names we're searching for but they won't actually print the indexes out so we won't have a bunch of output obscuring
5:19:41
the results of the time command first let's try the linear search script
5:19:47
time python linear search dot pi names
5:19:53
and we can just use the unsorted list of names for linear search
5:19:59
remember we want to ignore the real result and add the user and sys results together
5:20:04
it looks like it took about .9 seconds for linear search to find the 100 names in the list of one hundred thousand
5:20:11
now let's try timing the binary search script time python
5:20:16
binary search dot pi names and for this one we need to use the sorted list of names
5:20:25
looks like that took around a quarter second so less than half as long bear in mind that part of this time is
5:20:31
spent loading the file of names into a list the difference between linear search and binary search will be even
5:20:36
more pronounced as you search through bigger lists or search for more items let's wrap up the course by looking at
5:20:43
the big o runtimes for linear search and binary search these are going to be much simpler to calculate than the sorting
5:20:49
algorithms were for linear search you need to do one comparison to the target value for each
5:20:55
item in the list again theoretically we could find the target value before searching the whole list but big o
5:21:01
notation is only concerned with the worst case where we have to search the entire list so for a list of eight items
5:21:07
that means eight operations the big o runtime for linear search is o
5:21:13
n where n is the number of items we're searching through this is also known as linear time
5:21:18
because when the number of items and number of operations are compared on a graph the result is a straight line
5:21:25
linear search looks pretty good until you compare it to binary search for binary search the number of items you
5:21:31
have to search through and therefore the number of operations is cut in half with each comparison
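to put concrete numbers on that halving, a quick sketch (100,000 matching the course's list of names):

```python
import math

# worst-case comparisons to find a name among 100,000 sorted names
print(math.ceil(math.log2(100_000)))  # 17 with binary search
# ...versus up to 100,000 comparisons with linear search
```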
5:21:36
remember the number of times you can divide n by two until you reach one is expressed as log n so the run time of
5:21:43
binary search in big o notation is o log n even for very large values of n that is
5:21:50
very large lists you have to search through the number of operations needed to search is very small binary search is
5:21:56
a very fast efficient algorithm that's our tour of sorting and searching
5:22:01
algorithms be sure to check the teacher's notes for opportunities to learn more thanks for watching