Good afternoon. You guys are talkative. That sounds like a good weekend. I heard like some low noise on Assignment 2, and I thought we could elevate that to a little bit broader level, so if there's somebody who's already gotten fairly far on one or both of the problems on there, and has stumbled across some insight that perhaps was a little bit painful to get through, but you'd like to provide the benefit to your fellow classmates, now would be a great time to offer any bit of advice you think might make the world a happier place. Anyone? No one? [Inaudible]. You're sitting in the wrong seat. You don't sit there.

[Student:] When you make an iterator of like a map or [inaudible] like that, capitalize the word iterator.

Okay. So this insight is that the iterator class that is nested within the map and nested within the set, its name really is capital-I Iterator, so it's capital-M Map of whatever you're putting in there, int or whatnot, and then colon colon capital-I Iterator. It's pretty easy to look at the code and be a little bit -- the lowercase i and the capital I often look about the same in small fonts, too.

[Student:] [Inaudible].

I've no doubt, and that actually is a great bit of insight: you'd think the error would be like "Spell it the right way, doofus," but C++ never gives you an error that says here's what you'd like to do to fix it. It says, according to my internal specs, I have referenced Article 72 of the C++ standard, and all this stuff that is just total gobbledygook. So one of the things you get very good at is letting the compiler direct you to where to look for the problem, but then ignoring what it told you the problem was, because its analysis of it is often not very helpful. You just get good at looking at the line yourself to figure it out. Okay?

[Student:] I keep forgetting to include the classes when I create new objects and things like that.

Yeah. So all those pound includes, right? C++ is very fussy about that. You start using set, you start using map, you need to make sure you've got that pound include of the set and the map and whatnot, and none of the code you write that uses them will make any sense to the compiler until it's been fully informed about what map and set look like. So if you don't do that -- and the same thing for iostream and all these other things you're using; the random library, when you start making those calls, C++ wants to know what the interfaces are and how they're set up. And the way to tell it about that is to get the right pound includes in there. Without them, it won't get you very far at all.

[Student:] I have a question. Why isn't it possible to just give access to every [inaudible]?

Well, you could certainly do that. We could make one sort of big master include, which is like include the whole world, and just grab everything and bring it on in. And what that will do is just slow down all your compiles, because it will be looking through that thing again and again every single time. And even though you're not using it, it will have to have read past it and understood it, and then that turned out to be a wasted effort. In the case of our programs, the amount of headers we're talking about is small enough that you could imagine doing that without really slowing your development down. But for large-scale development, you will typically include exactly those headers you need, to avoid slowing down all your compiles by rereading a lot of things that aren't necessary for this unit.
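Putting both of those pieces of advice together -- the capital-I Iterator name and the pound include it depends on -- here is a minimal sketch. The iterator(), hasNext(), and next() calls follow the course library's usual iterator pattern and should be read as assumptions here, as should the printKeys name:

    #include <iostream>
    #include <string>
    #include "map.h"       // without this, the compiler knows nothing about Map
    using namespace std;

    void printKeys(Map<int> &scores) {
        // Note the capital I: the nested type is spelled Map<int>::Iterator.
        Map<int>::Iterator itr = scores.iterator();
        while (itr.hasNext()) {
            string key = itr.next();    // the iterator hands back keys one at a time
            cout << key << endl;
        }
    }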
[Student:] Careful when you concatenate characters to strings.

Yes.

[Student:] Because it will give you -- because if you do it a certain way, it will give you a bunch of junk.

That is a great point. And we talked about that with strings, but it's a fine time to reiterate it: when you're using [cuts out] plus equals to take a string and extend it with some new characters or another string, one of those operands needs to be a C++ string, which means a variable, a parameter, something that was declared and created as a string. And the things in double quotes, remember, are old-style strings, and until they have been converted they are still old style. So if you have an expression that looks like this, and you try to add a character there, this compiles but it does not do what you want. It takes the old-style string and uses this character as an offset, and causes the string basically to turn into garbage. So if you were to say something equals this, you will not get what you want. You're expecting to get my string extended with that new character. So remember that one or both of the arguments needs to be a string. If you ever need to force it, you can introduce that typecast around it, and that does the promotion. In most situations, the promotion will happen automatically and you won't have to be involved, but this is one case where the legacy of C++ being derived from C means that the old language did have a meaning for this, and they couldn't break that old meaning, so they left it in place for you to stumble across when you least expect it. So that's a great thing to always be keeping in mind when you're doing that concatenation: looking at your arguments, making sure one of them at the very least [cuts out] a C++ string. Way over there.

[Student:] Why does the set pair operator need to have ordering?

Because it actually is more than just equality. So he's asking [cuts out] just say yes, no, are they the same? It's really using ordering to [cuts out] not just are you exactly like this one, but should I put you with the things that are smaller or the things that are larger, because it's largely using that to quickly throw away the parts of the set that it doesn't need to explore to find a match. If you actually made yours just an equality test that returned zero and one all the time --

[Student:] Yeah, and it doesn't work.

It definitely doesn't work. It will end up just losing things. You'll put them in the set, and it won't be able to find them again. That's because you told it they would be in one place, and in fact, they didn't get put there, because in effect your comparison looks a little random. So it really does want a full ordering.

We'll keep going. So Assignment 2 is coming in this Wednesday, right? So hopefully you're making good progress on that, and then we will get out your third assignment, which will be your recursion problem set, which allows you to practice on recursion. We're gonna be talking about recursion all week. The reader chapters that go along with this, pretty much lecture for lecture, are 4, 5, and 6, and this is the place where I just encourage you again -- I think that's some of the best material in the reader, so I encourage you to make some time to do the reading; doing it in advance of lecture especially will pay off the best. So we talked just a little bit about the vocabulary of recursion at the end of Friday. It was very rushed for time, so I'm just gonna repeat some of those words to get us thinking about what the context for solving problems recursively looks like.
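As a quick sketch of that character-concatenation pitfall from a moment ago (the variable names are just for illustration):

    #include <string>
    using namespace std;

    void concatDemo() {
        string s = "hello";
        char ch = '!';

        string ok     = s + ch;             // fine: left operand is a C++ string
        string forced = string("hi") + ch;  // fine: the cast forces the promotion
        // string bad = "hi" + ch;          // compiles, but this is the old C meaning:
        //                                  // it offsets the char pointer by ch's value
        //                                  // and produces garbage
    }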
And then we're gonna go along and do a lot of examples [cuts out]. So the idea is that a recursive function is one that calls itself. That's really all it means at the most trivial level. It says that in the context of writing a function binky, it's gonna make one or more calls to binky itself, passing an argument, as part of solving or doing some task. The idea is that we're using that because the problem itself has some self-similarity, where the answer I was seeking -- so the idea of surveying the whole campus can actually be thought of as: well, if I could get somebody to survey this part of campus and this part of campus, somebody to survey all the freshmen, somebody to [cuts out] and whatnot, those surveys are just smaller instances of the same kind of problem I was originally trying to solve. And if I could recursively delegate those things out, and they themselves may in turn delegate even further to smaller but same-structured problems, to where we could eventually get to something so simple -- in that case, asking ten people for their input -- that it doesn't require any further decomposition, we will have worked our way to this base case, and then we can gather back up all the results and solve the whole thing.

This is gonna feel very mysterious at first, and for some of the examples I give you'll say there are really easy alternatives other than recursion, so it's not gonna seem worth the pain of trying to get your head around it. But eventually, we're gonna work our way up to problems where recursion really is the right solution, and there are no other alternatives that are obvious or simple to do. So the idea throughout this week is actually just a lot of practice. Telling you what the terms mean I think is not actually gonna help you understand it. I think what you need to see is examples. So I'm gonna be doing four or five examples today, four or five examples on Wednesday, and four or five examples on Friday that each kind of build on each other, kind of take the ideas and get a little more sophisticated. But by the end of the week, I'm hoping that you're gonna start to see these patterns, and realize that in some sense the recursive solutions tend to be more alike than different. Once you have your head around how to solve one type of problem, you may very well be able to take the exact same technique and solve several other problems that may sound different at first glance, but in the end, the recursive structure looks the same. So I'd say just hold the discomfort a little bit, and wait to see, as we keep working, which example may be the one that sticks out for you and helps you get it.

So we're gonna start with things that fit in the category of functional recursion. The functional in this case just says that you're writing functions that return some non-void thing: an integer, a string, some vector of results, or whatever that is. That's all it means to be a functional recursion -- it's a recursive function that has a result to it. And because of the recursive nature, it's gonna say that the outer problem's result, the answer to the larger problem, is gonna be based on making one or more calls to the function itself to get the answer to a smaller problem, and then adding them, multiplying them, comparing them to decide how to formulate the larger answer. All recursive code follows the same decomposition into two cases. Sometimes there are some subdivisions within there, but two general cases. The first thing -- the base case.
That's something where the recursion eventually has to stop. We keep saying take the task and break it down, make it a little smaller, but at some point we really have to stop doing that. We can't go on infinitely. There has to be some base case, the simplest possible version of the problem that you can directly solve, where you don't need to make any further recursive calls. So the idea is that it kinda bottoms out there, and then allows the recursion to unwind. The recursive cases, and there may be one or more of these, are the cases where it's not that simple, where the answer isn't directly solvable, but if you had the answer to a smaller, simpler version, you would be able to assemble the answer you're looking for using that information from the recursive call. So that's the structure they all look like: if I'm at the base case, then do the base case and return it; otherwise, make some recursive calls, and use that to return a result from this call.

So let's first look at something -- the first couple examples that I'm gonna show you are actually gonna be so easy that in some sense they're almost gonna be a little bit counterproductive, because they're gonna teach you that recursion lets you do things that you already know how to do. And then I'm gonna work my way up to ones that actually get beyond that. But let's look first at raise to power. C++ has no built-in exponentiation operator. There's nothing that raises a base to a particular exponent in the operator set. So if you want it, you need to write it, or you can use -- there's a math library function called pow for raise to power. We're gonna write our own version of it because it's gonna give us some practice thinking about this. The first one I'm gonna show you is one that should feel very familiar and very intuitive, which is using an iterative formulation. If I'm trying to raise the base to the exponent, then that's really simply multiplying base by itself exponent times. So this one uses a for loop and does so. It starts the result at one, and for exponent-many iterations keeps multiplying to get there. So that one's fine, and it will work perfectly.

I'm gonna show you this alternative way that starts you thinking about what it means to divide a problem up in a recursive strategy. Base to the exponent -- I wanna raise five to the tenth power. If I had around some delegate, some clone of myself that I could dispatch to solve the slightly smaller problem of computing five to the ninth power, then all I would need to do is take that answer and multiply it by one more five, and I'd get five to the tenth. Okay. If I write code that's based on that, then I end up with something here -- and I'm gonna let these two things go through to show us that to compute the answer to five to the tenth power, what I really need is the answer to five to the ninth power, which I get by making a recursive call to the same function I'm in the middle of writing. So this is raise that I'm defining, and in the body of it, it makes a call to raise. That is the mark of a recursive function here. I pass slightly different arguments -- in this case, a one-smaller exponent -- which is getting a little bit closer to that simplest possible case that we will eventually terminate at, where we can say I don't need to further dispatch any delegates or clones out there to do the work: if the exponent I'm raising it to is zero, by definition anything raised to the zero power is one, so I can just stop there.
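Here is roughly what those two versions look like; the function names are mine, and both sketches assume a non-negative exponent:

    // Iterative version: multiply base by itself exp times.
    int raiseIterative(int base, int exp) {
        int result = 1;
        for (int i = 0; i < exp; i++) {
            result *= base;
        }
        return result;
    }

    // Recursive version: a base case plus a recursive case.
    int raise(int base, int exp) {
        if (exp == 0) {
            return 1;                           // base case: anything to the zero power is 1
        }
        return base * raise(base, exp - 1);     // recursive case: one smaller exponent
    }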
So when computing five to the tenth, we're gonna see some recursion at work. Let me take this code into the compiler so that we can see a little bit about how this actually works. So that's exactly the code I have there, but I can say something that I know the answer to. How about that? First, we'll take a look at it doing its work, so five times five times five should be 125. So we can test a couple little values while we're at it. Two to the sixth power, that should be 64, just to see a couple of the cases, just to feel good about what's going on. And then raising, say, 23 to the zero power should be one, as anything raised to the zero power should be. So a little bit of spot testing to feel good about what's going on.

Now I'm gonna go back to this idea like two to the sixth. And I'm gonna set a breakpoint here. Get my breakpoint out. And I'm gonna run this guy in the debugger. It takes a little bit longer to get the debugger up and running, so I'll have to make a little patter while we're going here. And then it tells me right now it's breaking on raise, and I can look around in the debugger. This is a -- did it not pick up my compilation? I think it did not. I must not have saved it right before I did it, because it's actually got the base as 23 and the exponent as zero. It turns out I don't wanna see that case, so I'm gonna go back and try again. I wanna see it -- no, I did not. And I'm just interested in knowing a little bit about the mechanics of what's gonna happen in a recursive situation.

If I look at the first time that I hit my breakpoint, then I'll see that there's a little bit of the beginnings of the student main, some stuff behind it. There's a little bit of magic underneath your stack that you don't really need to know about, but starting from main it went into raise, and the arguments it has there are the base is two, the exponent is six. If I continue from here, then you'll notice the stack got one frame deeper. There's actually another invocation of raise, and in fact, they're both active at the same time. The previous raise that was trying to compute two to the sixth is kind of in stasis back there, waiting for the answer to come back from this version, which is looking to raise two to the fifth power. I continue again, I get two to the fourth. I keep doing this. I'm gonna see these guys kinda stack up, each one of those waiting for the delegate or the clone to come back with that answer, so that then it can do its further work incorporating that result to compute the thing it needed to do. I get down here to raising two to the first power, and then finally I get to two to the zero, so now I've got these eight or so stacked frames, six up there. This one, if I step from here, is gonna hit the base case of returning one, and then we will end up working our way back out. So now, we are at the end of the two to the one case, which is using the answer it got from the other one and multiplying it by two. Now I'm at the two to the two case, and so each of them unfolding in the stack is what's called unwinding. It's popping back off the stack frames that are there and revisiting them, each passing up the information it got back, and eventually telling us that the answer was 64, so I will let that go.
So the idea is that all of those stack frames exist at the same time -- they're all being maintained independently, so the compiler isn't confused by the idea that raise is invoking raise which is invoking raise; each of the raise stack frames is distinct from the other ones, so the invocations are kept separate. So one had two to the sixth, the next one had two to the fifth, and so on. And then eventually we need to make some progress toward that base case, so that we can then stop that recursion and unwind.

Let me actually show you something while I'm here, which is a pretty common mistake to make early on in a recursion: to somehow fail to make progress toward that base case, or to -- not all cases make it to the base case. For example, if I did something where I forgot to subtract one [cuts out], and I said oh yeah, I need to [cuts out]. In this case, I'm passing it exactly the same arguments I got. If I do this and I run this guy, then what's gonna happen is it's gonna go two to the sixth, two to the sixth, two to the sixth, and I'm gonna let go of this breakpoint here because I don't really wanna watch it all happening. And there it goes. Loading 6,493 stack frames, zero percent completed, so that's gonna take a while as you can imagine. And usually, once you see this error message, it's your clue to say I can cancel, I know what happened. The only reason I should've gotten 6,500 stack frames loaded up is because I made a mistake that caused the stack to totally overflow. So the behavior in C++ or C is that when you have so many of those stack frames, eventually the space that's been allocated or set aside for the function stack will be exhausted. It will use all the space it has, and run up against a boundary, and typically report it in some way that suggests that -- sometimes you'll see stack overflow, or a stack out of memory error. In this case, it's showing you the thousands of stack frames that are behind you, and if you were to examine them you would see they all have exactly the same arguments, so they weren't getting anywhere. I'm not gonna actually let it do that because I'm too impatient. Let me fix this code while I'm here.

But other things -- even this code that actually looks correct for some situations also has a subtle bug in it. Even if I fix this, which is that: right now it assumes that the exponent is positive, that it's some number that I can subtract my way down to zero. If I actually miscalled raise and gave it a negative exponent, it would go into infinite recursion as well. If you started it at ten to the negative first power, it would go negative first, negative second, negative third; 6,500 stack frames later, we'd run out of space. In this case, since we're only intending to handle non-negative powers, we could just put in an error check: if the exponent is less than zero, then raise an error and don't even try to do anything with it.

Okay. So let me show you a slightly different way of doing this that's also recursive, but that actually gets the answer a little bit more efficiently. This is a different way of dividing it up, but still using a recursive strategy, which is: if I'm trying to compute five to the tenth power, but I have the answer not of five to the ninth power, but instead the answer of five to the fifth power, and then I multiply that by itself, I would get that five to the tenth power that I seek.
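Here is a sketch of that halving strategy; the odd-exponent case (discussed next) is handled by multiplying in one extra factor of the base, and this sketch again assumes a non-negative exponent:

    int raise(int base, int exp) {
        if (exp == 0) return 1;              // base case
        int half = raise(base, exp / 2);     // solve the half-size problem once
        if (exp % 2 == 0) {
            return half * half;              // even exponent: just square the half answer
        } else {
            return base * half * half;       // odd exponent: square it, tack in one more base
        }
    }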
And then there's a little bit of a case, though, of what if the power I was trying to get was odd. If I was trying to raise it to the eleventh power, I could compute the half power, which gets me to the fifth; multiply that by itself, but then I need one more base multiplied in there to make up for that odd half. Okay. And so I can write another recursive formulation here. Same sort of base case for detecting when we've gotten down, but then in this case the recursive call we make is to base to the exponent divided by two, and then with an if-else on whether the exponent was even or odd, we decide whether to just square that number, or whether to square it and tack in another factor of the base while we're at it. So this one is gonna be much quicker about getting down to it, whereas the one we saw before was gonna put one stack frame down for every successive exponent power. So if you wanted to raise something to the 10th, or 20th, or 30th power, then you were giving yourself 10, 20, 30 stack frames. Something like 30 stack frames is not really something to be worried about, but if you were really trying to make this work on much larger numbers -- which would require some other work, because exponentiation is a very rapidly growing function and would overflow your integers quickly -- this way very quickly divides in half. So it goes from a power of 100, to a power of 50, to 25, to 12, to 6, to 3, to 1, so that dividing in half is a much quicker way to work our way down to that base case and get our answer back, and we're doing a lot fewer calculations than all those multiplies, one per power.

So just a little diversion. Let me tell you something, just a little bit, about style as it applies to recursion. Recursion is really best when you can express it in the most direct, clear, and simple code. It's hard enough to get your head around a recursive formulation without complicating it by having a bunch of extraneous parts where you're doing more work than necessary, or redundantly handling certain things. And one trap that's actually very easy to fall into is thinking there are lots of other base cases you might be able to easily handle, so why not just go ahead and call them out, test for them -- you're at the base case, you're close to the base case. Checking before you might make a recursive call: if you're gonna hit the base case when you make that call, then why make the call? I'll just anticipate and get the answer it would've returned anyway. It can lead to code that looks a little bit like this before you're done. If the exponent's zero, that's one. If the exponent's one, then I can just return the base. If it's two, then I can just multiply the base by itself. If it's three, I can start doing this. Certainly, you can follow this to the point of absurdity, and even for some of the simple cases, it might seem like you're saving yourself a little bit of extra time. You're like, why go back around and let it make another recursive call? I could stop it right here. It's easy to know that answer. But as you do this, it complicates the code. It's a little harder to read. It's a little harder to debug. Really, the expense of making that one extra call, or two extra calls, is not the thing to be worried about.
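The over-anticipated version being described might look something like this -- shown only as the pattern to avoid:

    // "Arm's length" recursion: anticipating cases the recursion would
    // have handled anyway. This is the style to avoid.
    int raise(int base, int exp) {
        if (exp == 0) return 1;
        if (exp == 1) return base;
        if (exp == 2) return base * base;
        if (exp == 3) return base * base * base;
        return base * raise(base, exp - 1);
    }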
What we really want is the cleanest code that expresses what we do, has the simplest cases to test, to examine, and to follow, and doesn't muck it up with things that in effect don't change the computational power of it but just introduce the opportunity for error. Instead of a multiply here, I accidentally put a plus; it might be easy to overlook it and not realize what I've done, and then end up computing the wrong answer when it gets to the base case of two. In fact, if you do it this way, most things would stop at three, but the earlier cases would suddenly become their own special cases that were only hit if you directly called the function with those values. The recursive cases will all stop earlier. It just complicates your testing now, because not everything is going through the same code. So we call this arm's length recursion. I put a big X on that: looking ahead, testing before you get there. Just let the code fall through.

So Dan -- he's not here today, but he talked to me at the end of Friday's class, and it made me wanna just give you a little bit of insight into recursion as it relates to efficiency. Recursion by itself, just the idea of applying a recursive technique to solving a problem, does not give you any guarantee that it's gonna be the best solution or the most efficient solution, nor does it guarantee that it's gonna give you a very inefficient solution. Sometimes people put a bad rap on recursion -- it's like, recursion will definitely be inefficient. That actually is not guaranteed. Recursion often requires exactly the same resources as the iterative approach. It takes the same amount of effort. Surveying the campus -- if you're gonna survey the 10,000 people on campus and get everybody's information back, whether you're doing it divide and conquer, or whether you're sitting out there in White Plaza asking each person one by one, in the end 10,000 people got surveyed. The recursion is not part of what made that longer or shorter. It might actually, depending on what resources you have, work out better. Like if you could get a bunch of people surveying in parallel, you might end up completing the whole thing in less time, but it required more people, and more clipboards, and more paper while the process is ongoing than me standing there with one piece of paper and a clipboard. But then again, it took lots of my time to do it. So in many situations, it's really no better or no worse than the alternative. It makes a little bit of a tradeoff of where the time is spent.

There are situations, though, where recursion can actually make something that was efficient inefficient. There are situations where it can take something that was inefficient and make it efficient. So it's more of a case-by-case basis to decide whether recursion is the right tool if efficiency is one of your primary concerns. I will say that for problems with simple iterative solutions that operate relatively efficiently, iteration is probably the best way to solve it. So like the raise to power -- yeah, you could certainly do that iteratively. There's not some huge advantage that recursion is offering. I'm using these here because they're simple enough to get our head around. We're gonna work our way toward problems where we're gonna find things that require recursion, which is kind of the third point I put up. Why do we learn recursion? What's recursion gonna be good for? First off, recursion is awesome.
There are some problems that you just can't solve using anything but recursion, so the alternatives like "I'll just write it iteratively" won't work. You'll try, but you'll fail. They inherently have a recursive structure to them where recursion is the right tool for the job. Often, it produces the most beautiful, direct, and clear elegant code. The next assignment that will go out has these problems that you do with recursion, and they're each about ten lines long. Some of them are like five lines long. They do incredible things in five lines because of the descriptive power of describing it as: here's a base case, and here's a recursive case, and everything else just follows from applying this and reducing the problem step by step. So you will see things where the recursive code is just beautiful, clean, elegant, easy to understand, easy to follow, easy to test, and solves the problem. And in those cases, it really is a much better answer than trying to hack up some other iterative form that may in the end be no more efficient -- it may be even less efficient. So don't let efficiency be a big fear of what recursion is for you. So I'm gonna do more examples. I've got three more examples, or four I think, today, so I will just keep showing you different things, and hopefully the patterns will start to come out of the mist for you.

A palindrome string is one that reads the same forwards and backwards. So "was it a car or a cat I saw" -- if you read that backwards, it turns out it says the same thing. "Go hang a salami, I'm a lasagna hog." Also handy to have these around when you need to settle a bar bet over what the longest palindrome is. There are certainly ways to do this iteratively. If you were given a string and you were interested to know whether it's a palindrome, you could do this marching -- you're looking at the outside and marching your way into the middle. But we're gonna go ahead and let recursion help us deal with the subproblem of this, and imagine that in the simplest possible form, you could say that a palindrome consists of an interior palindrome, and then the same letter tacked on to the front and the back. So if you look at "was it a car or a cat I saw", there are two Ws there. It starts and it ends with a W, so all palindromes must start and end with the same letter. Okay. Let's check that, and if that matches, then extract that middle and see if it's a palindrome. So it feels like I didn't really do anything. It's like all I did was match two letters, and then I said, by the way, delegate this problem back to myself, making a call to a function I haven't even finished writing, to examine the rest of the letters. And then I need a base case. I've got a recursive case, right? Take off the outer two letters. Make sure they match. Recur on the inside. What is the simplest possible palindrome?

[Student:] One letter?

One letter. One letter makes a good palindrome. One letter is by definition first and last letter are the same letter, so it matches. That's a great base case. Is it the only base case?

[Student:] [Inaudible].

Two letters is also kind of important, but there's actually an even simpler form, right? It's the empty string. So both the empty string and the single-character string are by definition the world's simplest palindromes. They meet the requirement that they read the same forwards and backwards. The empty string forwards and backwards is trivially the same, so that makes it even easier than doing the two-letter case.
So if I write this code to look like that, where if the length of the string is one or fewer -- so that handles both the zero and the one case -- then I return true. Those are trivial palindromes of the easiest immediate detection. Otherwise, I've got a return here that says: if the first character is the same as the last character, and the middle -- so the substring starting at position 1 that goes for length minus two characters, which picks up the interior, discarding the first and last characters -- if that's also a palindrome, then we've got a palindrome. So given the short-circuiting nature of the and, if it looks at the outer two characters and they don't match, it immediately just stops right there and says false. If they do match, then it goes on looking at the next interior pair, which will stack up a recursive call looking at its two things, and eventually we will either catch a pair that doesn't match, and then this false will immediately return its way out, or it will keep going all the way down to that base case, hit a true, and know that we do have a full palindrome there.

So you could certainly write this with a for loop. Actually, writing it with a for loop is almost a little bit trickier, because you have to keep track of which part of the string you're on, and what happens when you get to the middle, and things like that. In some sense, the recursive form really sidesteps that by thinking about it in a more holistic way: the outer letters plus an inner palindrome gives me the answer I'm looking for. So this idea of taking a function you're in the middle of writing and making a call to it as though it worked is something that requires this leap of faith. You haven't even finished describing how isPalindrome operates, and there you are making a call to it, depending on it working, in order to get your function working. It's a very wacky thing to get your head around. It feels a little bit mystical at first. That feeling of being a little bit discombobulated about this is probably pretty normal, but we're gonna keep seeing examples, and hope that it starts to feel a little less unsettling. Anybody wanna ask a question about this so far? Yeah?

[Student:] So I guess create your base case first, then test it? [Inaudible].

That's a great question. So I would say typically that's a great strategy. Get a base case and test against the base case, so the one-character and the two-character strings. And then imagine one layer out: things that will make one recursive call only. So in this case for the palindrome, it's like, what's a two-character string? One has AB. One has AA. So one that is a palindrome, one that isn't, and watch it go through. Then from there you have to almost take that leap and say: it worked for the base case, it worked for one out, and then you have to start imagining that if it worked for a string of length N, it'll work for one of N plus one. Testing it exhaustively across all strings is of course impossible, so you have to move to a sort of larger case where you can't just sit there and trace the whole thing. You'll have to in some sense take it on faith that if it could have computed whether the interior's a palindrome, then adding two characters on the outside and massaging that with that answer should produce the right thing.
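For reference, the function just described looks roughly like this (a sketch; it assumes, as comes up in the next question, that spaces have already been stripped out of the string):

    #include <string>
    using namespace std;

    bool isPalindrome(string s) {
        if (s.length() <= 1) {
            return true;    // empty and one-character strings are trivially palindromes
        }
        // Outer characters must match, and the interior must itself be a palindrome.
        return s[0] == s[s.length() - 1]
            && isPalindrome(s.substr(1, s.length() - 2));
    }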
So some bigger tests that verify that the recursive case, when exercised, does the right thing, and then the pair together -- all your code is going through these same lines: the outer case going down to the recursive case, down to that base case. And that's one of the beauties of writing it recursively: in some sense, once this piece of code works for some simple cases, the idea that it extends to larger cases is almost -- I don't wanna say guaranteed, that maybe makes it sound too easy, but in fact, if it works for cases of N and then N plus one, then it'll work for 100, and 101, and 6,000, and 6,001, and all the way through all the numbers by induction. Question?

[Student:] You have to remove all the [inaudible], all the spaces?

Yeah. So definitely, for "was it a car" to match, I should be taking my spaces out of there to make it right. You are totally correct on that.

So let me show you one where recursion is definitely gonna buy us some real efficiency and some real clarity in solving a search problem. I've got a vector. Let's say it's a vector of strings, or a vector of numbers -- a vector of anything, it doesn't really matter. And I wanna see if I can find a particular entry in it. So unlike the set, which can do a fast contains for you, in a vector, if I haven't done anything special with it, there's no guarantee about where to find something. So if I wanna say, did somebody score 75 on the exam, then I'm gonna have to just walk through the vector starting at slot 0, walk my way to the end, and compare to see if I find a 75. If I get to the end and I haven't found one, then I can say no. So that's what's called linear search. Linear implies this left-to-right sequential processing. And linear search has the property that as the size of the input grows, the amount of time taken to search it grows in proportion. So if you have a 1000-number array and you doubled its size, you would expect that doing a linear search on it should take twice as long for an array that's twice as large.

The strategy we're gonna look at today is binary search, which is gonna try to avoid looking in every one of those boxes to find something. It's gonna take a divide and conquer strategy, and it's gonna require a sorted vector. So in order to do an efficient lookup, it helps if we've done some pre-rearrangement of the data. In this case, putting it into sorted order is gonna make it much easier for us to find something in it, because we have better information about where to look -- so, much faster. So we'll see that surely there was some cost to that, but typically binary search is gonna be used when you have an array you don't make a lot of changes to, so that putting it in sorted order can be done once, and then from that point forward you can search it many times, getting the benefit of the work you did to put it in sorted order. If you plan to sort it just to search it once, then in some sense all the time you spent sorting it would count against you, and you're unlikely to come out ahead.

So the insight we're gonna use is that if we have this in sorted order, and we're trying to search the whole thing -- we're looking for, let's say, the number 75 -- we just look at the middlemost element. So we have this idea that we're looking at the whole vector right now, from the start to the end, and I look at the middle element, and I say it's a 54. I can say: if 75 is in this vector, because it's in sorted order, it can't be anywhere over here.
If 54's right there, everything to the left of 54 must be less than that, and 75 wouldn't be over there. So that means I can just discard that half of the vector from further consideration. So now I have this half to continue looking at, which is the things to the right of 54, all the way to the end. I use the same strategy again. Last time I was searching a vector that had 25 elements; now I've got one that's got just 12. Again, I use the same strategy. Look at the one in the middle. I say oh, it's an 80, and then I say, well, the number I'm looking for is 75. It can't be to the right of the 80. It must be to the left of it. And then that lets me get rid of another quarter of the vector. If I keep doing this, I get rid of half, and then a quarter, and then an eighth, and then a 16th. Very quickly, I will narrow in on the position where, if 75 is in this vector, it has to be -- or I'll be able to conclude it wasn't there at all. If I keep working my way in and I find a 74 and a 76 sitting next to each other, then I'm done. That base case comes when I have such a small little vector there, where my bounds have crossed in such a way that I can say I never found what I was looking for.

So let's walk through this bit of code that puts into C++ the thing I just described. I've got a vector. In this case, I'm using a vector that's containing strings. It could be ints, it could be anything; it doesn't really matter. I've got a start and a stop, which identify the sub-portion of the vector that we're interested in. So the start is the first index to consider; the stop is the last index to consider. So the very first call to this will have start set to zero and stop set to the vector's size minus one. I compute the midpoint index, which is just the sum of the start and stop divided by two, and then I compare the key that I'm looking for to the value at that index. I'm looking right in the middle. If it happens to match, then I return that index. The goal of binary search in this case is to return the index of a matching element within the vector, or to return this not-found negative-one constant if it was unable to find any match anywhere. So when we do find it, at whatever level of the recursion that is, we can just immediately return it. We're done. We found it. It's good. Otherwise, we're gonna make this recursive call that looks at either the left half or the right half. If the key is less than the value we found at the midpoint, then the place we're searching still has the same start position, but is now capped by the element exactly to the left of the midpoint; the right one is the inversion of that: one to the right of the midpoint, with the stop unchanged. So we're taking off half of the elements under consideration at each stage, and eventually I will get down to the simplest possible case. And the simplest possible case isn't that I have a one-element vector and I found it or not. The really simple case is actually that I have zero elements in my vector -- that I just kept moving in the upper and lower bounds until they crossed, which meant that I ran out of elements to check. And that happens when the start index is greater than the stop index. So the start and the stop, if they're equal to each other, mean that you have a one-element vector left to search. For example, if you got to that case where you have that one-element vector left to search, you'll look at that one, and if it matches, you'll be done.
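In code, the search just described looks roughly like this -- a sketch that assumes the course Vector class with [] and size(); the NOT_FOUND constant name is mine:

    const int NOT_FOUND = -1;      // sentinel for "no match anywhere"

    int binarySearch(Vector<string> &v, string key, int start, int stop) {
        if (start > stop) {
            return NOT_FOUND;                      // bounds crossed: nothing left to look at
        }
        int mid = (start + stop) / 2;              // index midway between start and stop
        if (key == v[mid]) {
            return mid;                            // found it: report where
        } else if (key < v[mid]) {
            return binarySearch(v, key, start, mid - 1);   // left half, if anywhere
        } else {
            return binarySearch(v, key, mid + 1, stop);    // right half, if anywhere
        }
    }

    // The first call covers the whole vector:
    //     int where = binarySearch(v, someKey, 0, v.size() - 1);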
Otherwise -- if that one element doesn't match -- you'll end up either decrementing the stop to move past the start, or incrementing the start to move past the stop. And then that next call will hit this base case that says: I looked at the element in a one-element vector, it didn't work out, I can tell you for sure it's not found. If it had been here, I would've seen it. And this is looking at just one element each recursive call, and the recursive calls in this case stack up to a depth based on the log base two of the size, so that if you have 1000 elements, you look at one, and now you have a 500-element collection to look at again. You look at one, you have a 250-element collection, then 125, 60, 30, 15. So at each stage, half of them remain for the further call, and the number of times you can do that for 1000 is the number of times you can divide 1000 by two, which is the log base two of that, which is roughly ten. So if you were looking at a 1000-number array, if it's in sorted order, it takes you ten comparisons to conclusively determine where an element is if it's in there, or that it doesn't exist in the array at all. If you take that 1000-element array and you make it twice as big, so now I have a 2000-number array, how much longer does it take?

[Student:] One more step.

One more step. Just one, right? You look at one, and you have a 1000-number array, so however long it took you to do that 1000-number array, it takes one additional comparison -- kinda one stack frame on top of that -- to work its way down. So this means this is actually super efficient. For example, a million is roughly two to the 20th power, so if you have a million-entry collection to search, it will take you 20 comparisons to say for sure it's here or not, and here's where I found it -- just 20. You go up to two million, it takes 21. It's a very slow-growing function, that logarithm function, so that tells you that this is gonna be a very efficient way of searching a sorted array. This is in a category called divide and conquer: take a problem, divide it typically in half, but sometimes in thirds or some other way, and then -- in this case it's particularly handy that we can throw away some part of the problem, so we divide and focus on just one part to solve the problem.

All right. So this is the first one that's gonna start to really inspire you for how recursion can help you solve problems that you might have no idea how to approach any other way than using a recursive formulation. So this is an exercise that comes out of the reader in Chapter 4, and the context of it is you have N things -- so maybe it's N people in a dorm, 60 people in a dorm -- and you would like to choose K of them. Let's make K a concrete number: four -- four people to go together to Flicks. So of your 60 dorm mates, how many different ways could you pick a subset of size four that doesn't repeat any of the others? So you can pick two people from the first floor, one person from the middle floor, one person from the top floor, but then you can shuffle it up. What if you took all the people from the first floor, or these people from that room, and whatnot? You can imagine there are a lot of machinations that could be present here, and counting them -- it's not quite obvious, unless you go back and start working on your real math skills, how it is that you can write a formula for this. So what I'm gonna give you is a recursive way of thinking about this problem. So I drew a set of the things we're looking at.
So there are N things that we're trying to choose K out of. So right now, I've got 12 or so people, or items, whatever it is. What I'm gonna do is imagine just designating one totally at random. So pick Bob. Bob is one of the people in the dorm. I'm gonna separate him from everybody else mentally in my mind, and draw this line, and mark him with a red flag that says that's Bob. So Bob might go to Flicks or might not go to Flicks. Some of the subsets for going to Flicks will include Bob, and some will not. Okay. So what I'd like to think about is how many different subsets I can make that include Bob, and how many different subsets I can make that don't include Bob. And if I add those together, then that should be the total number of subsets I can make from this collection.

Okay, so the subsets that include Bob -- once I've committed to Bob being in the set, and let's say I'm trying to pick four members out of here, then I have Bob, and I have to figure out how many ways I can pick three people to accompany Bob to the Flicks. So I'm picking from a slightly smaller population: the population went down by one, and the number I'm seeking went down by one, and that tells me all the ways I can pick three people to go with Bob. For the ones that don't include Bob -- Bob's just out of the running -- I look at the remaining population, which is still one smaller, everybody but Bob, and I look for the ways I can still pick four people from there. So what I have here is that trying to compute C of N, K means computing C of N minus one, K minus one and adding it to C of N minus one, K -- that is, C(N, K) = C(N-1, K-1) + C(N-1, K). The first is picking friends to accompany Bob; the second is picking people without Bob. Add those together, and I will have the total number of ways I can pick K things out of N. So we're very much relying on this recursive idea of: if I had the answer -- I don't feel smart enough to know the answer directly, but if I could defer it to someone who was actually willing to do more scrutiny on this thing, if you could tell me how many groups of three you can join with Bob, and how many groups of four you can pick without Bob, I can tell you what the total answer is.

The simplest possible base cases we're gonna work our way down to are when there are just no choices remaining at all. So if you look back at my things that are here, in both cases the population is getting smaller, and in one of the recursive calls, the number of things I'm trying to pick is also getting smaller. So they're both making progress toward shrinking those down to where there are more and more constraints on what I have to choose. For example, on this one, as I keep shrinking the population by one and trying to get the same number of people, eventually I'll be trying to pick three people out of three, where I'm trying to pick K of the K remaining. Well, there's only one way to do that, which is to take everyone. On this one, I'll eventually keep picking people, so the K is always less than the N in this case. The K will eventually bottom out, where I'll say I've already picked all the people. I've already picked four people; I need to pick zero more. And at that point, that's also very easy, right? Picking zero out of a population -- there's only one way to do that. So what we end up with is a very simple base case: if K equals zero -- so I'm not trying to choose any more; I've already committed all the slots.
Or if K is equal to N, where I've discarded a whole bunch of people, and now I'm down to where I'm facing I've gotta get four, and I've got four left. Well, there's only one way to do those things, and that's to take everybody or to take nobody. And then otherwise, I make those two recursive calls -- with Bob, without Bob -- and add them together to get my whole result. That's wacky. I'm gonna read you something, and then we'll call it a day. I brought a book with me. I stole this from my children.
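For reference, a sketch of the choose function just described (the short function name is mine):

    // c(n, k): the number of ways to choose k things from a group of n.
    int c(int n, int k) {
        if (k == 0 || k == n) {
            return 1;               // take nobody, or take everybody: one way each
        }
        // Subsets that include Bob plus subsets that leave Bob out.
        return c(n - 1, k - 1) + c(n - 1, k);
    }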