I have a feeling that wefre on. Confirmed. Can you go down to the pad and Ifll make a couple of announcements. The first is that homework four is posted. I said that I wouldnft announce these types of things. In fact, I think we posted it yesterday. The next one -- I shouldnft have to say this, but can you turn off all amplification in here? I shouldnft have to say this, but the midterm is actually the end of next week, so itfs actually eight days from now. Wefre more panicked than you are, just for the record. We have a lot of work to do on that. Itfs coming along actually quite nicely, I have to say. Thatfs of course next Friday to Saturday or Saturday to Sunday. What wefll do, I think, is today post last yearfs midterm, if you just want to see what one looks like. You will also find out where homework problems come from. Many homework problems started life as midterm and final exam problems. Wefll post one so that you know what it looks like and so on, and then maybe a bit later wefll post the solutions. In fact, as to whether or not you really have to go over last yearfs midterm, I think you actually donft really have to unless you want to. If youfve been following the homework and understanding all of it and the lectures, youfre welcome to do last yearfs midterm. Let me not discourage you from that. I donft think you really need to. Let me say a couple of other things about the midterm. The midterm will cover through lecture eight, which is material wefll cover a little bit today and finish up next Tuesday. It will cover through homework four, so thatfs the coverage of the midterm. Wefll probably put something to that effect on the website so you know. Any questions about the material from last time or the midterm? Wefll continue. What wefre doing today is actually just looking at some extremely useful extensions of least squares, and many of them involve this idea of multi objectively squares, so in multi objectively squares, instead of having just one objective like AX-Y norm squared that you want small, there are actually two, and so you want them simultaneously small. Now one of the problems there is the semantics of that is not clear. It doesnft really make any sense to say please minimize these two things. It makes absolutely no sense. The first thing you have to work out is what is the semantics? By the way, if theyfre not competing, it does make sense, but thatfs an extremely rare case. Otherwise, you have to figure out what even does it mean to minimize two objectives. So we started looking at that last time. As a thought experiment, we did the following. We simply took ever XNRN and we evaluated J1 and J2, the two objectives. You want both small. For every X, we put a point. All the shaded region shows you pairs as wefve written it, J2, J1, which are achievable, and then the clear area here are pairs that are not achievable. We talked about this last time. We talked about the following idea, that if a certain X corresponds to this point, then basically, all the Xs corresponding that map into what is lower and to the left -- these are actually points that are unambiguously better than this one. Everything up and to the right is unambiguously worse. Unambiguously better or worse means that on both objectives, itfs better. The interesting parts are the second and fourth quadrants, because here, itfs ambiguous. In fact, this is where wefre going to have to really work out what the semantics is. 
Here, if you want to compare a point here and here, one of the ways you'd say it is that these two points are not comparable. One has better J1 but worse J2; the other has better J2 but worse J1. They're incomparable is what you say. The boundary here -- these are Pareto optimal points, and this is the Pareto boundary. It's also called the optimal tradeoff curve. These points are characterized in the following way: for any point on here, there's no other point that's better. That's the least you can say, and in fact that's all you can say. If someone says you have a multi-objective problem, you want J1 and J2 small, and they're not yet willing to commit to how you trade off one against the other -- if they simply say, no, I want both objectives small -- you can already say something of substance. You can say that that point is a stupid choice. Why? All of these are better. Or you can say: that would be an attractive choice, but it can't be done. If someone wants to minimize these two objectives, the only non-stupid choices -- non-stupid and feasible choices -- are going to be the points on this boundary. You've already done a lot by saying you can focus your effort on the points on this optimal tradeoff curve. That's the basic idea. This extends to three objectives, four, and so on. It's an idea that's completely obvious and that you probably already have in your head anyway.

There's a very common method to find points on that curve, and it works something like this. But before we do that, I want to talk about what the curve might look like qualitatively. Let me try to do this consistently: I'll draw my first objective there and my second here. This is what it looks like. That tradeoff curve can have lots of different looks. I'm going to draw a couple of them. One would be something like this. It actually has to go down. These are all achievable. That's actually quite interesting. I'll make this three and I'll make this one. This is very, very interesting. Suppose you work out that this is the tradeoff curve -- you'll see how to do this very soon. This has huge meaning and implication for the problem. The way you would describe this casually or informally is this: basically, you would say there isn't much of a tradeoff, because the lowest you could ever get J1 might be over here, and that might be 0.9. That'd be the lowest value of J1 you could get while ignoring J2. The other end might be here -- this might have an asymptote or something -- and this might be 2.6. So here you'd say the smallest you could ever make J2 while ignoring J1 is 2.6. On the other hand, look at this: there are these points right around here, which give up ten percent in each objective and yet get both. This is so obvious that I almost hate to go over it. This is the proverbial knee of the curve. It's an efficient point.

These ideas are extremely important to have, because I guarantee you will be working on problems where you finish something, and the point you find will be right here. That's what's going to happen. My opinion is that it's not good enough to simply return that point. It's not responsible. The correct thing to do is to say: oh yeah, I got J1 down to 0.9. Let's say it's ride quality -- it doesn't really matter what it is. Say I got the ride quality down to 0.9. They'd say, that's great. You'd say: but you know what?
This is when you go back and there's a design review. You'd say: you know what, though? It turns out that if we accept a ride quality of 1.0, I can do it with one quarter of the fuel. If you don't point out that there's this point here, I think you're actually being irresponsible. The same goes for over here. If someone says, find me a point, and you find this point -- it'd be like circuit design. You could say: oh, I can make that thing clock at 2.6 gigahertz. But actually, if it clocks at 2.45, I'll use one-half the power. As to which is the best choice, it depends. But the point is, to me it's irresponsible if you don't point this fact out. Whenever you do least squares and things like that -- anything involving this -- you should always, just as a matter of responsible engineering, do studies like this to check. You wiggle things around to see if things could dramatically change.

This is one where there's essentially no tradeoff. To really get no tradeoff, you do this. That's absolutely no tradeoff. This point -- actually, now it's great. This is the one case where you can say that is the optimal point. That's the only time when a biobjective or multi-objective problem has a unique, well-defined answer where the semantics is clear. That point is good. It's the best one. No other point would be reasonable here; any other point would be worse than that point. This is when there's absolutely no tradeoff.

Now let's look at the other extreme. The other extreme looks like this. You have a point there and a point there, and it might look something like that. That might be the tradeoff curve. Now there's a tradeoff. In fact, this is the opposite: the tradeoff is almost linear, in the sense that when you give up one, you gain in the other by a fixed amount. This is what people call a strong tradeoff, and one of the names for the slope is the exchange rate. You're actually exchanging J1 for J2 when you move here. When you go from this design to this design, what have you done? You're exchanging J2 for J1: you're doing better on J1 by giving up on J2. The slope literally is sometimes called, on the streets, the exchange rate. These are conceptual models. We're going to leave them up here and come back to them.

Let's look at the idea of the weighted-sum objective, which comes up independent of any discussion of tradeoff curves. It's completely natural, if you have two objectives and you want to come up with some answer, to just add them with some weight in between: I add this objective plus mu times that one, and the idea is that mu gives a relative weight between J1 and J2.

Question? Great question. I was trying to go real fast and kind of avoid that question, but I'll do it. Could the optimal tradeoff curve look like that? You want me to draw it in like that? It cannot. It actually has to be convex here. It has to curve up, and that's because for least squares problems, J1 and J2 are both convex functions. That's not part of this class; I was drawing them the way they must look in 263. If you have non-convex functions, they can absolutely look like that. But I will show you one shape they can never look like, convex or not. It cannot look like this, ever. That's not possible.
First of all, these points are not Pareto optimal, because if that's a feasible design, everything above and to the right of this point has a technical name: it's a bad design. That means this part is not part of the tradeoff curve. In this case, the tradeoff curve for something that looks like that is actually discontinuous: it's got this point and then it's got this line segment. These things can happen in the general case with general objectives. They can't happen with quadratic objectives like you'll see in 263.

Back to the weighted-sum objective. It turns out you can interpret it easily on a (J2, J1) plot, and it goes this way. If I look at level curves of the composite objective -- that's J1 plus mu J2 -- in the (J2, J1) plane, these are nothing but lines with slope minus mu. If you were to minimize J1 plus mu J2, here's what you're doing: you're taking this line with slope minus mu, which is fixed, and you simply move it down until it last has contact with the set of achievable points. That is always a point on the Pareto optimal curve, and actually there's a lot more interesting stuff about that point. Another thing: if you were to zoom in locally, the local slope there would be exactly minus mu. Let me summarize. By minimizing J1 plus mu J2, you will find a point on the tradeoff curve -- that's fact number one. Number two, you will find a point where the local exchange rate is exactly mu.

This picture, though simple, explains everything. If I increase mu, what happens? Let's think about it. If I increase mu, what you're really saying is: you know what? I care more about J2 than I said before. Presumably, we'll find a new point where J2 is smaller. You're going to pay for that: J1 is going to go up. That's the way these things work. Let's just see if you can get that visually. It's very simple. If you crank mu up -- that's the weight -- you simply change the slope like that and do the same experiment. You take this thing and move it until it just touches, and sure enough, that's the new point for the new slope. Here I cranked up mu by a factor of three or something. Sure enough, it's a new point on the optimal curve, and indeed it has reduced J2, and to pay for that, it has increased J1.

If you have the ability to minimize the weighted sum of the objectives, you can now sweep out the optimal tradeoff curve by simply sweeping mu over some range and minimizing the weighted-sum objective, and you will sweep out these points. By the way, if the mus you choose are not over a big enough range, you will sweep out just a little tiny piece. In practice, a lot of people take mu on a log scale, because it usually has to cover a pretty big range. This is just a practical detail. Conceptually, you simply solve this problem for lots of values of mu, store the design and the (J2, J1) achieved, and plot that. You have the optimal tradeoff curve. That's exactly how these curves were created. Not only that, but the picture gives you a lot of geometric intuition about what happens when you mess with mu.
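Since the claim that a weighted-sum minimizer always lands on the tradeoff curve is doing real work here, it may help to see the standard one-line argument, written a bit more formally than it was said in lecture:

\[
x^\star \in \operatorname*{argmin}_x \; J_1(x) + \mu J_2(x), \qquad \mu > 0.
\]

If some achievable \(x'\) were unambiguously better, i.e. \(J_1(x') \le J_1(x^\star)\) and \(J_2(x') \le J_2(x^\star)\) with at least one inequality strict, then

\[
J_1(x') + \mu J_2(x') < J_1(x^\star) + \mu J_2(x^\star),
\]

which contradicts the optimality of \(x^\star\). So no achievable point beats the minimizer on both objectives, which is exactly Pareto optimality.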
Now I want to go back to my two tradeoffs. Here's a problem where there is no tradeoff; let's first do the one where there's a slight but very small tradeoff. J2, J1, and I'm going to put in a slight but small tradeoff like that. Now let's talk about minimizing J1 plus mu J2. What happens as I vary mu? Well, when you fix mu, you get a slope like this, and you simply march down this thing until you first lose contact with it, and you get a point there. Now you change mu a lot, like that, and you go down here and you get a new point. What you should see is that the x's are not changing much. Over a huge range of mu, you're getting points right around there. In other words, you're getting the knee of the curve over some huge range of mus. Everybody see this?

Here's what you'll notice. Number one, the actual design you get is largely insensitive to mu. If you crank mu to ten to the eight, you might start getting something up here, and if mu is ten to the minus eight, you might start getting a point down here. But for a huge range of mus in the middle, you have a lot of mus and you're just tracing out this little tiny piece here. By the way, if you see that, it means you're looking at a problem where there's not much tradeoff. The two objectives are not particularly competing in this case. That's the idea.

Let's do the other one now -- honestly, I don't know why I drew it with J1 vertical -- the one where there's a strong tradeoff. Here's the curve, like that. Now let's talk about minimizing a weighted sum. What happens as you vary mu and minimize the weighted-sum objective? It's very sensitive. Basically, for mu below some number, you get points here -- let's say it flattens out over here. For mu above some number, you start getting points over here. Right when mu is around this slope, as you sweep mu through that value, this thing jumps tremendously. Everybody see that? That's the point here. You will see this, and it has a meaning: it means you've got a nearly linear tradeoff. If the tradeoff in here were exactly linear, you'd actually get an amazing thing where the weighted-sum minimizer would jump from this point all the way to that point with nothing in between. It would be absolutely discontinuous. For quadratic functions like we're looking at, that can't happen; in general, it can. When you get many dimensions, you can get all of these phenomena. You can get parts where the surface is angled, and other parts where it's very flat, and as you mess with the weights, things will jump from one place to another. In other regions -- in that case it's a tangent hyperplane touching this optimal tradeoff surface -- you mess with the normal and it kind of rolls around and doesn't do very much. You can get all of these phenomena, and it's important to understand these ideas.

Now let's talk about how you would actually do this for a biobjective least squares problem. How do you minimize this? The way we take two quadratic objectives and reduce them to a problem we've already solved is to note that this norm squared plus mu times that norm squared is just the norm squared of a stacked vector. It's absolutely nothing more. You can check: the top part of the stack is Ax minus y. The bottom part is square root of mu times Fx minus square root of mu times g. Now, the norm squared of a stacked vector is the norm squared of the top plus the norm squared of the bottom. The norm squared of the top is that term; the norm squared of the bottom is that one.
If you like, you can put the square root of mu in both of these places; when you square it and pull it outside, it looks like that. That means we're done, because this we know how to do. This is no problem. We'll call that A-tilde or something like that. In MATLAB it's even simpler. If you really want to do this, it's something like [A; sqrt(mu)*F], then backslash, then [y; sqrt(mu)*g]. There you go. There's the code for it. You shouldn't have to write that down.

The formula for it is this: it's going to be (Atilde transpose Atilde) inverse times Atilde transpose ytilde. Well, Atilde transpose Atilde -- you can work out analytically what that is, and you see something actually quite beautiful: it's A transpose A plus mu F transpose F. Over here, you get A transpose y plus mu F transpose g. So the minimizer is x = (A^T A + mu F^T F)^{-1} (A^T y + mu F^T g). It just works out. It's a very pretty formula.

Let's look at some examples just to see how this works. These are going to be really simple examples. This is our friend, the unitless mass on a frictionless table. We have a ten-second period. We apply forces x1 through x10, each one for a second in turn. We're just going to have one output: we don't care about the velocity, we care about the position at t = 10, and so we have y = a^T x where a is in R^10. I think you may remember what a looks like: it goes down by one each step -- it's large for the first entry and goes down to a half or something for the last.

By the way, when you see a vehicle or object motion problem where the person specifying the problem tells you where the object has to be but doesn't seem to care how fast it's going, generally speaking that corresponds to a use of non-positive social value. Usually this would be something like a missile hitting something. If you ask, what about the velocity? and they go, it doesn't matter -- you should be suspicious at that point. This is one of those cases: there is no specification on the velocity of this mass. Here, you just want to be near the position one. This is a stupid problem. We could solve it by hand. It's totally idiotic, but it gives you a rough idea, and we can work out what this is.

J2 is the sum of the squares of the forces used. This has units of Newtons squared. You might ask: why would you care about the sum of the squares of the forces applied? Let me ask that question. Many people take whole courses here where everything is quadratic. You go take a control course in aero-astro: squared Newtons, integrated -- that's all you see. Why do you care about it? It corresponds to energy. I'm going to tell you something: that's not true. That's what they tell you in those classes. That's what I should be telling you now. That's the party line -- that it corresponds to energy. That's total nonsense. Generally speaking, I know of almost no case where it actually corresponds to energy. And by the way, in any case where it does correspond to energy, like in a disk drive servo, there's actually no limit on energy and no one really cares. I said it. I'm being honest. Now, why do we really care? Why do we work with the sum of the squares? What do you think? Thank you. It's easy to analyze. Right. Because we can. That's why.
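Here is a minimal sketch, in MATLAB, of the whole recipe so far: the stacked-matrix solve just described, swept over a log-spaced range of mu to trace out the tradeoff curve. A, F, y, and g are placeholders for whatever biobjective problem you have, and the range of mu is problem-dependent:

    % minimize ||A*x - y||^2 + mu*||F*x - g||^2 for many values of mu
    mus = logspace(-4, 4, 50);            % mu on a log scale; range is a guess
    J1 = zeros(size(mus));  J2 = zeros(size(mus));
    for k = 1:length(mus)
        mu = mus(k);
        x  = [A; sqrt(mu)*F] \ [y; sqrt(mu)*g];   % stacked least squares solve
        J1(k) = norm(A*x - y)^2;                  % first objective achieved
        J2(k) = norm(F*x - g)^2;                  % second objective achieved
    end
    plot(J2, J1)                          % the optimal tradeoff curve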
I just wanted to be clear on this. There are lots of other things you could care about here. If this were a thruster, you'd probably care about this one -- that's the fuel use. You might also care about this one -- that's the maximum force applied. Why? Because it dictates how big a thruster you have to get. That has practical use. The sum of the squares? It might, but it's very unlikely. You never get an actuator -- and this comes up in signal processing, too -- whose spec sheet says: here it is, it needs 28 volts in, five amps. It never says: under no circumstances should the norm of the input exceed this. You just won't see that. So, now that I've admitted we do this because we can, let's go back. If anyone asks you and you don't want to get into this big argument, you just say it represents energy. If they buy it, move on quickly. That's my recommendation.

It's kind of a stupid problem, so let's talk about some things we can know immediately. This says that basically you're going to apply a force for ten seconds, you're going to move this mass, and there are two things you care about: how much you miss being displaced one meter, and the sum of the squares of the forces. Let me ask some questions. Does it make any sense to move the mass backwards before moving it forwards? Obviously not, because you're running up a J2 bill and not for any particularly good reason in terms of J1. Does it make sense to overshoot the target, which is the point one -- to say, here are my forces, and oh, look at that, my final position was 1.1, I overshot? No, that's totally idiotic, because you'll run up a bill for overshooting, and it's stupid because for the same cost you could have landed right at the point and run up zero bill here. You're always going to undershoot. You can also figure out that you're always going to push, and you're going to push more at first, because it's more efficient. You can figure out a lot of this before you ever even write a formula.

The optimal x is this; it's a function of mu, the tradeoff parameter. As we vary mu, we get different force trajectories. By the way, you can even work out an analytical formula for this. That's not the point and it doesn't really matter. Here's the optimal tradeoff curve. There it is. It's very pretty. Here's the energy -- notice how I said that without making any apology -- and here is the square of the miss distance. This curve actually hits zero at a point. It doesn't approach zero asymptotically; it hits it. This is a very interesting point that we're going to discuss either later today or maybe Tuesday. At that point, you are hitting the target exactly, and you're using the minimum energy possible. That's what it means for this curve to hit this line: J1 equal to zero means y is one -- at ten seconds, the mass is exactly at position one -- and that's the energy bill you run up. There are many, many force programs that will displace the mass one meter after ten seconds. They all lie along this line, and they're characterized by using more energy. This is the least-energy one that gets you right there.

What about over here: does this curve hit the axis, or is it asymptotic? Let's ask the question. Would it be possible to run up a J2 bill of zero? Could you? Sure: the mass just doesn't move. You could have x equal to zero. You do nothing. You're doing very well in terms of J2.
You couldnft do any better. You just take the hit, which is the cost on J1, and thatfs here. By the way, this is a beautiful example. Take a look at what that curve looks like near zero. So basically, if someone comes to you and says Ifm sorry, Ifm just not gonna do anything, this curve -- not only does it have a steep slope, it has infinite slope there. That says that with extremely small levels of force applied, you can reduce your miss-hit distance by a correspondingly very large amount. Thatfs the picture. This is a silly example. You could have done all of this analytically or figured it all out, and therefs no surprises here. Trust me, if this was a vehicle with 12 states and 13 different inputs representing different thrusters and control surfaces you can actuate and things like that, this is not obvious. You already have four lines or five lines of code that will beat anything any person could ever come up with, and Ifm talking about good pilots and things like that. Same code. Three lines. I think itfs just the one I wrote before. Itfs not even three. Three with a lot of comments, actually. This stuff looks simple. This example is stupid. Trust me, even if these matrices -- if the dimensions of vectors get to be five, ten, let alone 500 or a thousand, youfre doing stuff that is absolutely impossible for someone doing intuitive based stuff to even come close to. Now therefs a very famous special case of biobjective least squares. Itfs where the second objective is really simple. Itfs just the norm squared. Here, the way to understand the semantics of it is you have a problem where you say I want a good fit. I want AX minus Y norm squared to be small, but I donft want to do it if the only way to do that is to have a giant X. I want some tradeoff there. I will accept a poorer fit in return for a modest X. Where you operate on that curve determines the tradeoff of the optimal size of X versus the fit. At least one end of the trade we actually know. When you only care about J1, thatfs just least squares. Thatfs classic least squares. We know the solution. If you only cared about J2, letfs get that out of the way right now. If you only cared about J2, whatfs the best choice of X? Zero. And the objective on the other side? Norm Y squared. The two end points of this tradeoff curve are now known. By the way, thatfs an exercise you should always do is figure out what you can say about the endpoints, because all the action then goes down in between the two. In this case, you get X is A transpose A plus Mu I inverse A transpose Y. This has got lots of names. Maybe the most common is tickenoff regularization. In statistics, you will hear the following phrase. Therefs probably many others. In statistics, this is called ridge regression, and Mu is called the ridge parameter. In tickenoff regularization, Mu is called the regularization parameter. Ifll show you something kind of cool about this. This formula makes sense for any A -- skinny, fat, full rank or not full rank. I have to kind of justify that, so Ifm going to. Herefs my claim. My claim is that A transpose A -- therefs lots of ways to prove this, but Ifll do one. Ifm gonna claim that thatfs invertible provided only that Mu is positive. This formula even makes sense for A equals zero. Itfs a stupid formula, but it makes perfect sense. Ifm saying provided here Mu is positive. If A is zero, no problem. Itfs X equals Mu I inverse. Mu I is perfectly invertible times zero, so X is zero in that case. 
By the way, when mu is zero you recover least squares, and now that formula is one you'd better watch out for, because it only makes sense when A is skinny and full rank. It parses, but it doesn't pass the semantics test if, for example, A is fat. The weird thing is, you can do Tikhonov regularization when A is fat, and it makes perfect sense. This is why some people use Tikhonov regularization: because they're lazy, or they tried plain least squares and got some error somewhere -- some software told them they were trying to invert something that was singular to working precision -- and they went, well, whatever, and put in plus 1e-6 times I, and said: now it's working. Trust me, you see a lot of that.

Let me justify why this matrix is in fact invertible provided mu is positive, totally irrespective of A. I don't care about the size of A, the values, the rank -- couldn't care less. Let's check. You'd say: well, look, that's a square matrix. Suppose it were singular. That means there's some nonzero vector z which gets mapped to zero; a square matrix being singular means it's got a nonzero element in its null space. So suppose that's the case. I'm going to multiply that equation on the left by z transpose. The result is surely zero, because z transpose times the zero vector is zero, and it's a number. Now let's expand it -- I'm just going slow here: I get z transpose A transpose A z plus mu z transpose z. I'm going to write this as norm Az squared plus mu times norm z squared equals zero, and now I'm going to use a very deep mathematical fact: if you have a sum of two non-negative numbers and the sum is zero, you can make an amazing conclusion -- both numbers are zero. Everybody follow me? Norm Az squared is zero, so norm Az is zero, and norm z is zero. If you want to know where mu positive comes in, it's right now, because if mu were zero, all I could conclude is that norm Az is zero. Since z was assumed nonzero up here, we have our contradiction and we're done. That's why this works.

That's one way to say it. Another way is to look at the stacked matrix and show that the stacked matrix [A; sqrt(mu) I] is full rank: if you take any matrix at all -- any size, any shape, any values -- and you stick below it square root of mu times the identity, with mu positive, that matrix is full rank and skinny. That's the other way to think of it, which is probably a better way.
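Written out compactly, the argument just given is this (same steps, just in symbols):

Suppose \((A^T A + \mu I)z = 0\) with \(z \ne 0\). Multiplying on the left by \(z^T\),

\[
0 = z^T (A^T A + \mu I) z = \|Az\|^2 + \mu \|z\|^2 .
\]

Both terms are nonnegative, so both must be zero; with \(\mu > 0\), \(\mu\|z\|^2 = 0\) forces \(z = 0\), a contradiction. Hence \(A^T A + \mu I\) is invertible for any \(A\) whatsoever, whenever \(\mu > 0\).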
This is called Tikhonov regularization, and it has lots of applications. Here are the types of applications you'd see. It's very common in estimation and inversion, and it works this way. Typically, something like Ax minus y is a sensor residual. If you choose the x that minimizes the sensor residual and you like the x you see, great -- no problem. But you might have prior information that x is small. Another application is where your model y = Ax is only approximately valid, and it's certainly only valid for x small. We're going to see an example of that momentarily.

So the regularization trades off sensor fit against the size of x. That's what you'd do. By the way, this comes up in control, estimation, communications, and it's always the same story. There'll be some parameter in some method or algorithm, and you turn this knob -- because that's really what mu is. It's a knob that basically tunes your irritation: it tells the algorithm what you're more irritated by. As you turn mu, here's what happens. Turn it all the way one way and you'll get a control system that's very slow and doesn't do such a great job, but doesn't use huge forces. Turn the knob all the way the other way and you'll get something that's very snappy but uses very big forces. In communications, you get something that equalizes beautifully but is very sensitive to noise; turn the parameter the other way and you get something very calm that doesn't overreact to noise -- it cleans things up a little bit, but not much. These are the types of things you'll see all over the place.

Let me mention an example from image processing. It's a very famous one; we should actually add it to the notes. It's called Laplacian regularization. Let me say what that is. You've already done one -- or you will, at 5:00 p.m. today. Look at an image reconstruction problem. In that particular problem, the sensor measurements are pretty good and there won't be any trouble. It will just work. Why? Because we arranged it to. In general, however, you'll have the same sort of thing, and what you want to do is add -- let me just explain what we're doing. I want to estimate an image, so I have an image here and I have some pixels. What I'm estimating, my x, is the value in each of these pixels. If with your sensors you estimate x and you get some crazy numbers that vary pixel by pixel by huge amounts, that hints at trouble, because normally when a person writes down pixels, there's an implicit assumption that you're sampling finely enough. For example, if this were plus ten and that were minus 30, with wild swings here, what you'd probably say when you looked at it is: you need to sample finer.

So now my question is this. Say Ax minus y -- where x is a vectorized, rasterized version of the image -- is the misfit with my sensor readings. I want to trade off the fit -- that's this thing -- against the smoothness of the image. Now I'm waiting for you to tell me what to do. Let's add a new objective. To do this, we have to add a new objective, and the new objective is actually not smoothness -- it's roughness. We have to write down a roughness objective. Someone please suggest a roughness objective, which is some kind of norm. Perfect. We're going to form a vector of the differences between neighboring pixels. We could have vertical or horizontal; we could have diagonal; it doesn't matter. We could have all of them. I'll skip a few details here -- they're not that complicated. When you do that, you can write it as a matrix D, because it's really a differencing operation. I'll take Dx, and I could have things like D-horizontal x and D-vertical x.
This is a new image whose value at a certain pixel is the horizontal difference here, and that's the vertical difference. By the way -- describe the null spaces of these two matrices. What's the null space of each? Constant images, yes; more precisely, for this one it would be any image that is constant along horizontal lines but can vary vertically, and the null space of the other would be any image that is constant along vertical lines but can vary the other way. I don't know if I got that right -- I think I did. Something like that.

So what you do is simply this, and we're done. There: that's a least squares problem. Now you tune mu. Turn mu all the way to zero and you get the problem you're doing now: no regularization. In other words, if you like what you see in that case, there's no problem. However -- and by the way, in many problems with noise, when you just minimize the fit, you'll get an image that's way too wiggly -- you then turn mu up. If you turn mu all the way up, what does the solution look like? Don't do the math; I just want the intuition. Yeah: it's totally smeared out. It's just a big gray mess, equal to the average value or something like that. Somewhere with mu in between, you're going to see the right picture. See the sketch below.

Then you might ask: how do people choose the regularization parameter? Everyone see what I'm saying? This is how you use regularization. This is it. I don't know why we don't have an example of it in the notes; we'll add one, I guess. Then there's the question of how you choose mu. We should distinguish two things: how people really choose mu, and what they say when someone asks them how they chose it. How do they really do it? They try one mu and go: no, it's too smeared out. They reduce mu and go: nah, there's too much speckle in there, I can still see some weird artifacts -- increase mu. They iterate over mu. They're just tweaking mu. That's how they really do it. What happens when someone asks in a formal design review, how did you choose mu? Then they go: well, I calculated this optimal tradeoff curve, and statistically this corresponds to the posterior variance of this, that, and the other thing, and I used such-and-such model, and that's how I came up with mu equals two times ten to the minus three. That's what they'd say. Whereas in fact, they tried mu equals 0.1, it got too smeared out, and they tried mu equals 1e-6 and didn't get enough regularization. That's how they really did it. Any questions about this?

By the way, if you know about least squares and regularization and tricks like this, you're well on your way -- this can be very effective in a lot of problems. One more point. If you go back down to the pad here: this penalizes roughness, but suppose you also care about the size of x. Suppose you run this and x is huge -- it's smooth, but huge. How would you rein in the size? I'll take my pen out. What do you do? You got it: plus lambda times norm x squared. How would you choose lambda? By messing around. How would you say you chose lambda? You would talk about the optimal tradeoff surface and tangents and exchange rates, and if you've had some statistics, you could throw in some statistical justification. But you found it by fiddling with it. It's not just total fiddling, though.
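Here's a sketch of how you might set this up for an n-by-n image rasterized column-wise into x. The difference-matrix construction below is one plausible choice (not spelled out in the lecture), and A, y, mu, lambda are placeholders:

    % minimize ||A*x - y||^2 + mu*(||Dv*x||^2 + ||Dh*x||^2) + lambda*||x||^2
    n  = 30;                                       % image is n x n, x is n^2 x 1
    e  = ones(n,1);
    d  = spdiags([-e e], 0:1, n-1, n);             % 1-D first-difference matrix
    Dv = kron(speye(n), d);                        % differences down each column
    Dh = kron(d, speye(n));                        % differences across each row
    x  = [A; sqrt(mu)*Dv; sqrt(mu)*Dh; sqrt(lambda)*speye(n^2)] \ ...
         [y; zeros(2*n*(n-1) + n^2, 1)];           % one big stacked LS solve
    imagesc(reshape(x, n, n))                      % view the reconstruction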
As you increase lambda, you can say one thing about the image: what happens? It gets smaller. It's going to be a little bit rougher, and it's going to have a little bit worse fit to the measurements, if that's what these are. That's it. So now you know how regularization works. It works quite well.

The next topic is related. It's also a huge topic: non-linear least squares, NLLS, and here it is. I have a bunch of functions. Up to now, the residuals have been Ax minus y. That's an affine function -- linear plus a constant. What we do in least squares is minimize the sum of the squares of the components of the residual. Now the question is: what if these r_i are non-affine? That's the general case, and it's called the non-linear least squares problem. Of course, the usual residual isn't linear either -- it's affine -- but linear sometimes means affine.

How do you solve a problem like that? We'll get to that in a minute, but let's look at some examples first. These come up all the time. A perfect example is GPS-type problems, where you have measured ranges and from those you want to estimate where a point is. There, you don't have to linearize; you would just minimize the exact range error squared. This comes up in tons of places. In estimation problems, it comes up because you have some kind of non-linear sensor: instead of something like a line integral through something, you might have a sensor that's non-linear. That's an example.

How do you solve these problems? Here I have to tell you something. The first thing that has to be admitted, if we're being honest -- there's nothing that great about being honest, but to be totally honest -- is that no one can actually solve this problem in general. That's the truth: this problem basically cannot be solved. Instead, we have heuristics. They don't really solve it; they "solve" it, in quotes. So non-linear least squares problems in general are not solved, period. If you go to Wikipedia or Google and type in non-linear least squares, you'll find whole books, everything all over the place, and you will probably find nothing that admits this fact. That's a very big difference from the linear least squares we'd been looking at so far. We said that (A^T A)^{-1} A^T y is the least squares solution, and we weren't lying: that vector minimizes the norm of r when r = Ax - y. Absolutely. No fine print, nothing. That's the minimizer. The methods for non-linear least squares problems don't have that property. They are all heuristics. You probably won't find this out, certainly not from people who have an algorithm for it. It gets kind of weird after a while to keep saying, how'd you do that? After a while, informally, you just say: I solved a non-linear least squares problem. Technically, that is false. You'll see it in papers, and it's false. When someone says that in a paper, there's always a question, because there are two options: either A, they know they haven't solved it, and they're a liar, because they're saying in a paper that they solved it; or B, they don't even know that they may not have solved the problem. It's usually the latter. They're just totally clueless: I don't know, I got the software, I downloaded it from the web, it was called non-linear least squares, it didn't complain.
You have to know itfs all a heuristic. You donft get the answer. By the way, in practice, you very often do get the answer, but you donft know that you got the answer. I made enough of a point about that. Having said all that, and also having said that I will probably slip into the informal language where Ifll say how do you solve non-linear least square problems, the answer is you donft because you canft because no one knows how to do it in general. How do you approximately solve or something like that, so you have to put a qualifier like an asterisk, and then at the bottom of all these pages you just have a little note where it says solve, and then down here, Ifll just write it in so wefre all cool here, it says not really, but maybe. That would be the right thing. Thatfs just in force until I say itfs not. So how do you solve least square problems? Therefs lots of methods that are heuristics and they often do a very good job for whatever application youfre doing. The very, very famous one is Gauss Newton. By the way, the name of the method suggests that this was not developed five years ago. This is not exactly a new method. Itfs actually pretty straightforward. It goes like this. You have a starting guess for X, and what you do is you linearize R near the current guess. Now, linearize means find an affine approximation of R near there. Then what you do is you replace that non-linear function with this affine one. Affinely squares, which on the streets is called linearly squares -- we know how to do that. Thatfs what wefve been doing for three days. Thatfs no problem. Then you update. Thatfs the new X. Then you repeat so you get a different model. This is called the Gauss Newton method. Herefs some more detail, and herefs the way it works. You take the current residual and you form -- basically, what youfre doing is you form an approximation that says if X were to be near XK, then I would have R of X is about equal to R of XK plus -- thatfs the Jacobean -- times the deviation. Here, if you wanted to, you could say something like provided something like X is near XK. I can put a little comment there like that. Now what we do is we write down the linearized approximation. We rewrite this to make it look like an affine function, but we can put it like this. It doesnft really matter. Now what you do is you minimize the sum of the squares of these things over choice of X. That gives you your next iterate. The next iterate is that, and itfs just a formula for that, which is this thing. You repeat until this converges, which, by the way, it need not. Question? Absolutely. Youfre right, and Ifm very glad you brought that up. The question was this, that last lecture, I believe I was ranting and raving about calculus versus least squares where calculus is getting a linear model of something thatfs super duper accurate right near the point, but itfs vague about the range over which itfs -- did you notice that in your calculus class? They go this is a really good estimate near this thing, and you go, well, what does near mean? And theyfre like, you know, near. Donft you -- thatfs the beauty of all this, right? I donft have to say how near it is. Itfs the limit. The point was how about doing Gauss Newton where you donft use the derivative of the Jacobean but you get a least squares based model? Instead of this, use a least squares based model, and that was your question, and I can tell you this. That is an outstanding method, often far superior to Gauss Newton. 
My answer to your question is yes, that's a good idea; it often works really well. In fact, there's a name for this: in the context of filtering, when you're doing estimation of a dynamic system, it's called a particle filter. Let me say a little bit about that. In the pseudo-code I gave, "linearize near current guess" is so vague that you could use several methods there. One is calculus, which is what I just described. But the linearization could also be done by a least squares fit, just as you recommend. You wouldn't call it Gauss-Newton anymore -- or maybe you would. Anyway, in the context of filtering, it's called a particle filter, and these work unbelievably well. Instead of the calculus step, you'd take x^(k), add to it a bunch of little perturbations, actually evaluate r at a whole bunch of points right around there, fit the best affine model to what you got, and then the rest of the method works the same. That would work really well, by the way.

One more comment. The approximation says: provided x is near x^(k). So you could well get into trouble here, because you linearize -- in this case, by calculus -- and the model is only good if x stays near x^(k). But you just solved the linearized problem, and there's no reason to believe the new point is near x^(k). If it's not near x^(k), then the whole premise for how you got x^(k+1) is in doubt, because you're using a model that need not be valid.

There's a very cool trick, which, since we just did regularization, I can tell you about. The super cool trick is this: you add this term. There you go. This says: minimize the sum of the squares of my model of the residual -- that's this part -- but please don't go very far from where you are now, because if you do, there's no reason to believe the model is any good. This is sometimes called a trust region term, because when you form the model -- either with least squares or with calculus -- you have an idea of the region around x^(k) where you trust that model. That's the trust region, and this is your trust region term. Everybody got this? Once you know about regularization, you start realizing you should be using it in a lot of places, and this is a perfect example. What I described before was pure Gauss-Newton.

Let's look at an example and see how this works. We have ten beacons. No linearization here, at least not in the main problem. The true position is this point right here, and that's where we started off. Let me tell you what the measurements are. I have ten measurements: what I know is the distance of a point to each of these beacons, and the measurements are noisy. By the way, if they were noise-free, this problem would be trivial: if I knew the range to a beacon, I'd draw a circle of that radius around it; I'd do that for all the beacons, they'd all intersect in one point, and I'd say, that's where you are. The problem is that the range measurements have errors -- considerable errors, plus or minus 0.5 -- so the circles are not all going to pass through one single point. We'll use Gauss-Newton for this. There are other methods you could use, but this is just a simple example.
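A sketch of the pure Gauss-Newton iteration for this range problem. B is an assumed 2 x m matrix of beacon positions, rho the m measured ranges, x0 the initial guess, and lam a damping weight; none of these names come from the lecture. The commented-out line shows the trust-region-style step from the regularization trick just described:

    % Gauss-Newton for:  minimize  sum_i ( ||x - b_i|| - rho_i )^2
    x = x0;                                 % starting guess, a 2-vector
    m = size(B, 2);
    for iter = 1:50
        r = zeros(m,1);  J = zeros(m,2);
        for i = 1:m
            di     = x - B(:,i);            % vector from beacon i to x
            r(i)   = norm(di) - rho(i);     % range residual
            J(i,:) = di' / norm(di);        % Jacobian row: gradient of ||x - b_i||
        end
        dx = -(J \ r);                      % linearized LS step: min ||r + J*dx||
        % dx = -([J; sqrt(lam)*eye(2)] \ [r; zeros(2,1)]);  % trust-region variant
        x = x + dx;
        if norm(dx) < 1e-8, break; end      % stop when the update is tiny
    end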
By the way, this is what the non-linear least squares objective looks like as you vary x. That happens to be the true global solution in this problem, but you see, this is not a quadratic function, which would be a nice smooth bowl. It's got all these horrible little bumps and things. In this problem, it doesn't have any local minima that are not global, but we could easily have arranged that by putting some point here, and then there would have been a little valley here that would have filled up with a lake or something. As soon as you have that, I guarantee you the Gauss-Newton method will, given the wrong initial point, land right there very happily -- at which point you have not solved the problem, and you don't know it.

Okay. You run Gauss-Newton on this, starting from here, way, way far away. In this case, it actually works: in a sense, you actually do compute the solution, but you don't know it. I only know it because I plotted it -- it's got two variables, and I plotted the objective everywhere. The minute you've got five variables, you're not going to know it. What happens is that the objective of the Gauss-Newton iterates just keeps going down, and you can see that in about five steps it hits what appears to be a minimum -- maybe five, six steps, and that's it. The final estimate, by the way, was quite good: it's at minus 3.3, plus 3.3, and that's the actual true position. That's where you started; after one step you were here, after two you were here, then there, then there, and now you're in the region where you're going to get the answer. It's pretty good. You are getting the benefit of the least squares part of it: you're blending ten sensor measurements, so you're getting the power of blending. Some people call that a blending gain or something like that -- I forget what they call it in GPS; they have some beautiful, colorful word for blending lots of sensors and measurements and ending up with an estimate that's better than any individual sensor. Here you end up with an unusually good estimate -- better, actually, than the accuracy of your individual sensors. That's the picture. In this case, it actually worked -- in other words, we got the global solution -- but we only know that because I plotted it. I think I already mentioned this.

Let me ask you this. Suppose you have a problem -- and real problems don't have two variables, I might add. If you have two variables, you plot it and use your eyeball. This is silly. If you have three, you write three or four loops and you go to lunch. Real problems have ten variables, 100, 1,000. You cannot plot pictures like that. So suppose you have a non-linear least squares problem where you're estimating, let's say -- maybe it's a tomography thing. You have a non-linear sensor. It's a variation on the problem you're doing for homework right now, except instead of having a linear sensor, you have a non-linear one. How big is the problem we gave you? Thirty by thirty? Tiny. That's 900 variables, and that's pretty small. Now you run Gauss-Newton in 900 variables and you get an image. By the way, if you're imaging somebody's head and it came out looking like a head, that's good. That would be your feedback that something is approximately right.
If it came out not looking like a head, that wouldn't be good. What would you do, as a practical matter, to check whether you in fact got the minimum -- or just to enhance your confidence that you may have actually minimized the non-linear least squares objective? What would you do? Exactly: you'd run it more than once and see what you got. By the way, if you had a pretty good estimate ahead of time, that's actually likely to help -- you'd start with that. But what you might do is exactly what was just suggested: run it multiple times from different starting points and see what happens, as in the sketch below.

Here are some of the things that can happen. The first is that no matter where you start from, it always goes back to the same thing. What do you know if you do that 50 times? Let's be very careful in our verbs. I used the word "know," so what do you know when you run it 50 times and keep getting the same point? Here's what you know: nothing. Both in theory and in practice, you know absolutely nothing, because R^900 is a huge place, and the fact that you just sampled 50 points out of it -- that's nothing. Now, as a practical matter, what can you say? Someone says: I did the imaging, there it is, I believe that's the image. It looks good -- it's clearly a head and there's somebody's brain there. Someone asks: do you know that's the global minimum? And you go: no. But I'll tell you what I did do. Last night when I went home, I started up a batch of 1,000 of these from different initial conditions, and suppose you found that in every single case you converged to the exact same final estimate. They'd go: cool, you're getting the global minimum. If there are no lawyers present, you can say: good bet. But other than that, if someone asks what you can really say you know, you can't say anything. You can say: I don't know. If you ran 50 and got the same answer each time, you'd say that, as a practical matter, you have enhanced your confidence in the design.

Actually, there's a great phrase here, which makes absolutely no sense and is wrong, but is very useful. It's this: exhaustive simulation. Have you heard this phrase? That's great. This is what you say to someone when you open the door and say: hop in. They go: are you sure this works? You go: no problem, we used exhaustive simulation. Hop in. That's a very useful phrase. Of course, it makes absolutely no sense if there are more than three or four things varying, and generally speaking it's just wrong. You say: we checked a million values of bursts of wind and all sorts of terrible things; we've simulated them all. Of course, you cannot. But if you find yourself in that situation, you can always use the phrase. Then you mention the number, and as long as the person doesn't take the nth root of that number, where n is the number of parameters, everything is cool -- because a million simulations in R^10 is like nothing: per parameter, it's the tenth root of a million, which is a small number.

That's non-linear least squares. I hope we have a homework problem on that, but I have this weird feeling we don't. It just didn't make it? That's horrible. Fortunately, we have some recourse, don't we? It seems that over the next week, you might just see non-linear least squares. It's an important topic, and really one that should be covered in the first half of the course -- and I mean covered like I think you should do one.
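The multi-start sanity check just described might look like this in code. gauss_newton here is a hypothetical wrapper around the iteration sketched earlier, returning the final estimate and objective value, and the initialization box is arbitrary:

    % restart from many random initial points; compare where they land
    best = inf;  xbest = [];
    for trial = 1:50
        x0 = 10*(2*rand(2,1) - 1);          % random start in [-10, 10]^2
        [x, obj] = gauss_newton(x0);        % hypothetical solver from above
        if obj < best, best = obj; xbest = x; end
    end
    % If all 50 trials land on the same point, your confidence goes up --
    % but, as just emphasized, you still don't *know* it's the global minimum.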
Letfs look at the next topic. Ifll just say a little bit about the beginning of the topic. Itfs actually the dual of least squares. Youfll get used to this idea of duality. I donft think wefll ever get very formal about it, but at least Ifll give you the rough idea. Let me say a little bit about duality. Itfs going to involve ideas like this. Itfs going to involve transposes, so there are going to be transposes. Rows are going to become columns and things like that. Null spaces are going to become ranges and thatfs the kind of thing. By the way, therefs a duality between control and design and things like that and estimation, because there youfre switching the roles of X and Y, typically. These are all sort of the ideas. Wefve done least squares so far. The dual of that or a dual of that is going to be least norm solutions [inaudible], because wefve so far been looking at over determined equations, and wefll see what this is. This is actually pretty straightforward stuff. Now wefre going to take Y equals AX, but A is fat now. That means M -- you have fewer equations than you have variables. You have more variables than equations. Another way to say this is X is underspecified. So even if there is one solution here, therefs going to be lots, because you can have anything in null space of A, which, by the way, has to be more than just the zero element now because itfs got a dimension at least N minus M here. Wefll assume A is full rank, so that means that you have M equations and theyfre actually independent equations, so the rows of A are independent. Then all solutions look like this. The set of all Xs to satisfy AX equals Y is you find any particular solution here. You will very shortly see a particular solution. You take any particular solution and you add anything in the null space. By the way, if a person chooses a different XP here, you get the same set, because the difference of any two solutions is in the null space, and you get the same thing. Here in this description, Z essentially parameterizes the available choices in the solution of Y equals AX, and you can say roughly that the dimension of the null space of A, which is N minus M, because A is full rank, that gives you the degrees of freedom. That says you can choose X to satisfy other specs or optimize among solutions. I guess as we talked about before, as to whether or not thatfs a good or bad thing, that depends on the problem. If this is an estimation problem, degrees of freedom are not good. Basically, thatfs stuff you donft know and cannot know with the measurements you have. If this is a design problem, this is good, because it means you can do exactly what you want many ways, and therefore you can choose a way thatfs to your liking. Thatfs the idea. I might as well just show this and then wefll continue next time. Here is a particular solution. Itfs A transpose times AA transpose inverse times Y. It should look very familiar but be a little bit off. You are used to A transpose A inverse A transpose Y. Youfre used to that formula, and this looks very different. Itfs a rearrangement. You move a few things around. It looks perfectly fine. You have to be very, very careful here, because itfs very easy to write down things like that and that. What you must do, and Ifll show you my pneumonic. My pneumonic is this. You look at that and you look at that and you quickly do the syntax check. The syntax scan goes like this. 
You have to be very careful here, because it's very easy to write down things like (A^T A)^{-1} and (A A^T)^{-1} in the wrong places. Let me show you my mnemonic. You look at the formula and quickly do a syntax scan, which goes like this. If you saw A inverse by itself, your syntax alarm would go off unless you know A is square. But A transpose A -- that's square, and therefore at least syntactically it can be passed to the inversion function. That's cool. In fact, you can then multiply by A transpose on the right. You can also form A A transpose, and that's square too, so syntactically that one is cool as well. Both of these pass the syntax test regardless of the size of A, fat or skinny.

Now for the semantics test. If A is fat, then A A transpose is a fat matrix times a tall one, and the result is a little square matrix. Over here, if A is skinny, A transpose A is also a fat times a tall, and the result is a little square matrix. So here is the semantic pass: if you propose to invert a fat-times-tall product, then, unless there's something else going on like a rank condition failing, you're not in trouble yet. If the shapes are reversed -- a tall matrix times a fat one -- you're in trouble, independent of what the entries of A are. If A is fat and you go to the (A^T A)^{-1} formula, that's a square matrix, so it's fine by syntax, but it fails on semantics: A transpose A is square but low rank, and you should not invert low rank matrices. We'll quit here and continue next week.