Okay. I guess wefre -- wefve started. Well, we get the video-induced drop-off in attendance. Oh, my first announcement is this afternoon at 12:50 -- thatfs not this afternoon. I guess technically it is. Later today at 12:50, believe it or not, wefre going to make history by having an actual tape-behind where wefre going to go back and do a dramatic reenactment of the events that occurred at the first -- on the first lecture. Thatfs -- so, I donft know, so people on the web can see it or something like that. Thatfs 12:50 today. Itfs here. Obviously not all of you are going to come. But those who do come will get a gold star and extra help with their projects or who knows what. Wefll figure something out. So please come because, although itfs never happened to me in a tape-ahead or tape-behind, you know itfs every professorfs worst nightmare to go to a tape-ahead and have no one there. So, in fact, itfs not even clear. Just some philosophical questions. And practical ones, like can you actually give a lecture if there was no one there? I pretty sure the answer is no. But okay. So wefll start in on continuing on the constrained subgradient method, projected subgradient method. Oh, let me make one more announcement. Homework 1 is due today, which has a couple things on it. We pipelined so homework two is currently in process. And then wefre going to put out homework three later tonight, something like that. So, hey, just listen, itfs harder to make those problems up than it is to do them. Come on. We can switch. You want to make up the problems and Ifll do them? We can do that if you want. Fine with me. I can do -- well, of course, your problems have to be well-posed and they have to actually kinda mostly be correct and kinda work. So anyway, we can switch if you want. Just let me know. Okay. So letfs look at projected subgradient. So the projected subgradient method, let me just remind you what it is. Itfs really quite stupid. Here it is. Itfs amazing. Goes like this. You call f, f dot get subgrad. Get here at x to get a subgradient. So thatfs -- you need a weak subgradient calculus method implemented. So you get a subgradient of f. You then take a step in the negative subgradient direction with a tradition step size. Of course, this in no way takes into account the constraint. And then you project onto the constraint. Now this is going to be useful, most useful, when this projected subgradient is meant to be most useful when this projection is easy to implement. And we talked about that last time. There are several cases where projections are easy. Thatfs it. Projection on the unit simplex. That was it. Homework 3 coming up. Coming up. Okay. Project on unit simplex. Okay. So obvious cases of projection on the non-negative orthant. Projection onto the cone of positive semidefinite matrices. But youfd be surprised. Itfs probably about 15, 20 sets that itfs easy to project onto. Or easy in some sense. Of course, in -- you can also project onto other sets, like for example, polyhedra or something like that. But that would involve using quadratic programming or something like that. Okay. This is projected subgradient method and a big use of it is applied to the dual problem. Now this is really a glimpse at a topic wefre going to do later. So later in the class, wefre going to look at this idea of distributed decentralized optimization. So far, kinda everything wefve been talking about is centralized. 
Centralized means you collect all the data in one place and calculate gradients and all that kind of stuff. We're going to see beautiful decentralized methods, and they're going to be based on this, so this is a glimpse into the future — maybe not even too far into the future, maybe a couple of lectures.

Okay, so we have a primal problem: minimize f0 subject to fi ≤ 0. And we form the dual problem, which is to maximize the dual function g(λ). The λ are the Lagrange multipliers, and they have to be nonnegative because these are inequality constraints. The projected subgradient method is very easy here, because we're maximizing a concave function subject to the constraint that we're in the nonnegative orthant, and projection onto the nonnegative orthant is completely trivial: you just take the plus part of the vector, component by component. So the update looks like this: you find a subgradient of −g, you step in the negative subgradient direction, and then you project. Actually, I suspect the sign here is correct, but it could be a plus; the rule for 300-level classes is that if it's a plus, you fix it. I actually think this is right; it's confusing because we have a subgradient of −g. So you take a subgradient step in the appropriate direction — I'm allowed to say that in a 300-level class — and then you project.

By the way, I should mention again: if I solve this dual, when can I actually extract a solution of the primal problem? This is 364a material, which we covered way too fast. Does anyone remember? [Student:][Inaudible]. Sure, we're going to need strong duality to hold. If the problem were strictly feasible we'd have Slater's condition and strong duality would hold. That gives you zero duality gap, and if you don't have that, you can't do this at all, because the optimal values aren't even the same. So let's assume that. But there's more to it than just that. The sledgehammer condition is this: when you find λ*, you want the Lagrangian at λ* to have a unique minimizer in x. If it does, then that x is actually x*. You should go back and read this, because we're going to be doing network flow methods and other things in a couple of weeks, and these conditions are really going to matter. One sledgehammer condition is that f0 is strictly convex, because if f0 is strictly convex then f0 + Σ λi fi, where the fi are convex, is also strictly convex — for all λ ≥ 0, including λ = 0 — and a strictly convex function has a unique minimizer. So, going back to this: you would calculate the optimal λ*, then get the minimizer of the Lagrangian over x, call that x*, and that is actually the optimum of the primal problem.

Okay. So let's work out the subgradient of the negative dual function; it's actually quite cool. Let x*(λ) be the argmin of the Lagrangian.
So itfs just exactly what we were just talking about. And herefs sorta the big sledgehammer assumption is f0 is strictly convex. And by the way, in this case, you might say a lot of times some of these things are silly. Theyfre sorta things that basically only a theorist would worry about. I mean, somebody should worry about them, but they have no implications in practice. And Ifm very sorry to report that this is not one of those. This actually -- there are many cases in practice with real methods where this issue comes up. Itfs real. It means that methods will or will not converge and you have to take extra effort in things like that. Okay. All right, so wefll just make this sledgehammer assumption here, the crude assumption, f0 is strictly convex. That means this is strictly convex here. And therefore, it has a unique minimizer. And wefre going to call that minimizer x* of lambda. Itfs a function of lambda. Okay? So thatfs x* of lambda. And of course, if x* is the minimizer, than g of lambda is f0 of x* of lambda plus lambda1 times this. Itfs the Lagrangian evaluated lambda*. Okay. So a subgradient of minus g at lambda is then given by this. Itfs hi is minus fi of x* of lambda. Now this is actually quite an interesting -- first of all, let me explain that. Let me see if I can get this right. g is the infenum over z -- itfs the infenum over z of this Lagrangian here. Thatfs the infenum of -- thatfs what g of lambda is. So negative g of lambda is a supremum. How do you calculate the subgradient of a supremum? No problem. You pick a point that maximizes -- one of the points that maximizes. In this case therefs a unique one. Thatfs what this assumption says here. So you pick the maximizer, thatfs this, and then you form this thing, and then you ask yourself what is the gradient, subgradient, of this thing with respect to lambda? Thatfs an affline function. So the subgradient is simply this thing here up to this thing here. And so, again, modular minus sines. My guess is that this onefs correct, but I guess wefll hear if theyfre not. And wefll silently update it. But I think itfs actually right. So a subgradient of minus g is this. By the way, thatfs a very interesting thing. Let me say what that is. This -- if this is positive, then letfs see, what does that mean? If hi is positive -- maybe we donft -- well, we can work it out. If fi is negative, that means that the ith inequality constraint is satisfied. If itfs positive it means itfs violated. So that means that hi if itfs positive is something like a slack. So hi is a slack in the ith inequality. If hi is positive it means the ith inequality is satisfied. If hi it is violated. And hi is the amount by which itfs violated. Okay. So herefs the algorithm. Herefs the projected subgradient method, just translated using the subgradient. Notice how embarrassingly simple it is. So this is projected subgradient method for the dual. And it basically says this. It says you start with some lambdas. You can start with all lambdas equal to one. For that matter, start with all lambdas equal to zero. It doesnft -- just start with all lambdas zero. It says at your current lambda, minimizes Lagrangian without any consideration of feasibility for the primal problem. Now when you minimize this thing here, and basically the lambda are, of course, prices or costs associated with the constraints. So this is sorta a net here, because itfs sorta your cost plus, and then these are charges and subsidies for violations. 
Itfs a charge if itfs a violation. And it is a subsidy if fi is negative, which means you have slack. And then, actually, you derive income from it. Okay. So thatfs the meaning of this. So it says what you do is you set all -- itfs basically a price update algorithm and you start with any prices you like. They have to be non-negative. Start with the prices. You then calculate the optimal x. No reason to believe this optimal x is feasible. By the way, if the optimal x is feasible, youfre done. Youfre globally optimal. So if fi of x* at any step, if theyfre all less than or equal to zero, youfre done, optimal. And not only that, you have a primal dual pair proving it. Okay. Otherwise what you do is you do this. And this is really cool. You go over here and you look at fi of x. If fi of x, letfs say, is plus one, it means that your current x is violating constraint i. Okay? It says youfre violating constraint i. That says youfre not being -- the price is not high enough. So it says increase the charge on resource one. If -- resource i if fi represents a resource usage. So it says pop the price up in that case and alpha tells you how much to pop the price up. In that case, the plus is totally irrelevant, because that was non-negative. You adding something. You bumped the price up and therefs no way this could ever come into play. Okay. So it says -- I mean, this is actually -- this is the name -- I should say the comment, if you make a little comment symbol over here in the code you should write on the thing gprice update.h Because thatfs exactly what this is, the price update. So what you do then is this. If fi is negative, thatfs interesting. That means youfre underutilizing resource i. Itfs possible that in the final solution the constraint i is not tight, in which case it doesnft matter. Thatfs fine. Thatfs the right thing to do, but youfre underutilizing it. And what this says is in that case, thatfs negative, this says drop the price on that resource. This says drop the price. However, now this comes into play. It says drop the price, but if the new price goes under zero, which messes everything up, because now it encourages you to violate inequality, not satisfy them, then you just make the price zero. And so, for example, if the ith inequality is, in the end, gonna be at the optimal point, this is actually gonna be not tight, then whatfs going to happen is that price is gonna go to zero like that. Youfre gonna -- at the -- youfll be underutilized here. Thatfll be negative. Thatfll be zero for the last step. This will become negative. The plus part will restore it to zero. So this algorithm, I mean, itfs actually a beautiful algorithm. It goes to the -- variation on this go back into the e50s and e60s, and so -- and you find them in economics. So this is -- itfs just a price update or, I think, this is one -- this would be part of a -- a bigger family of things. I guess they call this a TETMA process, or something like that, where -- I donft know. Whofs taken economics and knows the name for these things? Does anyone remember? Come on, someone here took an economics class and saw some price adjustment. Okay, letfs just move on. No problem. All right. So in that method, it says that the primal iterates are not feasible. Thatfs, I mean, itfs actually -- if you ever hit an interation where the primal iterates are feasible you are now primal dual optimal, quit. 
You quit with a perfect certificate. So what this means is: in a method like projected subgradient applied to the primal problem, after each step you're feasible, because you projected onto the feasible set. That's a feasible method, and all that's happening is your function value is going down — not monotonically, I might add; these are not descent methods — your function value approaches the optimal value non-monotonically. In a dual subgradient method, the primal iterates are not feasible; instead you're approaching feasibility. In fact, you'll typically never hit feasibility; if you do, you terminate.

And in this case the dual function values are all lower bounds. That's the one nice part of the dual subgradient method: at each step you have a global lower bound on the original problem, because you evaluate g(λ) at each step. By the way, there are a couple of tricks — I think these are in the notes — and this becomes especially cool when you have some special method for taking the current iterate and constructing from it a feasible point. I want to say projection, but it doesn't have to be projection; it could be. If you can construct a feasible point, then this algorithm produces two things at each step: a lower bound, from the dual feasible point — you know g(λ), and it goes up non-monotonically — and a feasible point, call it x̃(k), where tilde is the operation of constructing a feasible point from x(k), whose function value goes down non-monotonically. Then you actually get a duality gap and all that kind of stuff.

Okay, so I think we've talked about all this; that's the interpretation, and it's really quite beautiful. The coolest part you haven't seen yet, and we're going to see it later in the class, not much later: this is going to yield decentralized algorithms. For example, you can do network flow control, you can do all sorts of things with this, and it's quite cool. But that's later.

Okay, so we'll do an example just to see how this method works, or that it works. Oh, I should mention something here: you might want to think about when this algorithm would be attractive. Let me show you one case right now. The update itself is a trivial calculation; the only actual work is minimizing the Lagrangian at each step. So at each step there are prices, and then you minimize the Lagrangian; that's the work. Therefore, any time you have an efficient method for minimizing this function — a weighted sum — you're up and running. For example, suppose you have a control problem and these are quadratic functions; then the weighted sum is also one of these convex control problems, and you can apply your LQR or Riccati recursion, or whatever you want to call it, to it.
If this is image processing, and it involves 2-D transforms and so on, maybe the grad student before you spent an entire dissertation writing a super-fancy multigrid solver that solves this subproblem well. If that solver handles least squares problems, and this is a least squares problem, you're up and running: you wrap another loop around it in which you just update weights and then repeatedly solve the subproblem. So it's good to be on the lookout: any time you know a problem where you have an efficient method for minimizing a weighted sum, this is what you want to do.

Okay, let's look at an example: quadratic minimization over the unit box, with P strictly positive definite. It's not a big deal, and notice we can do all sorts of things with it. The primal projected subgradient method is easy here: at each step you do x −= α(Px − q) — everybody follow that? that was x −= αg — and then you apply sat, saturation, because saturation is how you project onto the unit box. That's the method. A small sketch of both the primal and the dual updates for this example appears after this paragraph.

By the way, I should mention that these are not endorsements of these methods; in fact, these methods only make sense to use in very special circumstances. If you just want to solve a box-constrained QP like this, and x is only 2,000-dimensional or so, you are way better off using the methods of 364a — just an interior point method. So if someone said, "Oh, I'm using primal decomposition or dual decomposition to solve this," I would really need to understand why. There are some good reasons; one of them is not "because it's cool." A primal barrier method for this would be insanely fast. One good reason would be that when you write down the dual subgradient method it turns out to be decentralized; that works as a compelling argument. But just to remind you, these methods are slow. They might be two lines — I guess with a semicolon it's one line — they might be simple, but they're not the recommended methods. I just want to make that clear.

Okay, so here's the Lagrangian, and indeed it's a positive definite quadratic function for each value of λ ≥ 0 — you don't even need the λ terms; it's positive definite already. And so here's x*(λ): it's (P + diag(2λ))⁻¹ q. And the projected subgradient method for the dual looks like that. It makes perfect sense; it even goes back to 263 and to regularization. Suppose you didn't know anything about convex optimization, but you knew about least squares — that describes a lot of people, by the way, people who do stuff and actually get stuff done and working, so don't ever make fun of them. How would such a person handle this if they hadn't taken 364? Well, it'd be 263.
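As a concrete illustration, here is roughly what one primal projected-subgradient step and one dual (price update) step look like for this box-constrained QP, in Python. This is a sketch under the assumptions above (P positive definite, box written as the inequalities x_i² ≤ 1), not the course code.

```python
import numpy as np

def box_qp_primal_step(x, P, q, alpha):
    """One primal projected-subgradient step for: minimize (1/2) x'Px - q'x over the unit box."""
    x = x - alpha * (P @ x - q)      # x -= alpha * g, with g the gradient of the objective
    return np.clip(x, -1.0, 1.0)     # saturation = projection onto the unit box

def box_qp_dual_step(lam, P, q, alpha):
    """One dual projected-subgradient (price update) step for the same problem."""
    x = np.linalg.solve(P + 2.0 * np.diag(lam), q)        # x*(lam) = (P + diag(2*lam))^{-1} q
    lam = np.maximum(lam + alpha * (x ** 2 - 1.0), 0.0)   # lam_i := (lam_i + alpha*(x_i^2 - 1))_+
    return lam, x
```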
Youfd look at it and youfd say, gWell, I know how to minimize that.h Thatfs no problem. Thatfs that, without the lambda there. Thatfs p inverse q. Something like that. And then youfd look at it and youfd go, gYeah, but, I mean, this is a problem.h So herefs how a human being would do it. Theyfd do this. They calculate p inverse q. Thatfs x. If that x is inside the unit box, they would say -- theyfd have the sense to say, gIfm done.h Otherwise, theyfd say, gOh, x7 is like way big. Ouch. Thatfs no good.h So I will add to this. I will regularize and I will put plus a number, some number, times x7 squared. Everybody cool on that? Youfre adding a penalty for making x7 big. Okay? And youfd look at this and be like x12 is also big and youfd add something there. Ifm not -- remember, donft make fun of these people. I donft. You shouldnft. So then youfd solve it again. And now -- except it would be smaller. Now x7 is too small. Now x7 has turned out to be plus/minus -- is 0.8. And you go, oh sorry, my weight was too big. So you back off on x7 and now other things are coming up and over. And you adjust these weights until you get tired and you announce, gThatfs good enough.h Okay. I mean, listen, donft laugh. This is exactly how engineering is done. Least squares with weight twiddling. Period. Thatfs how itfs done. Like if youfre in some field like machine learning or something you think, oh now, how unsophisticated. People in my field are much more sophisticated. This is false. All fields do this. This is the way it really works in those little cubicles down in Santa Clara. This is the way itfs done. Youfre doing imaging. You donft like it. Too smoothed out. You go back and you tweak a parameter and you do it again. So no shame. All right. So, actually, if you think about what this method is, this is weight twiddling. Thatfs what this says. Itfs weight twiddling. It says pick some regularization weights, because thatfs what these are, and then it says update the regularization weights this way in a very organized way. It just -- you just update them this way. So this is, in fact, a weight twiddling -- an economist would call this a price update algorithm. And maybe an engineer might call it a weight twiddling algorithm. They might even -- therefs probably people who invented this and didnft know it. Anyway, everyone see what Ifm saying here? Okay. Let me ask you a couple of questions about it, just for fun, because Ifve noticed that the 364a material has soaked in a bit. If p -- not fully. If p is banded, how fast can you do this if p is banded? Letfs say itfs got a bandwidth around k. N is the size of x. How fast can you do that? [Student:][Inaudible]. With n squared k? Thatfs your opening bid? Thatfs better than n cubed, right. If p is full, thatfs n3. Thatfs a Cholesky factorization and a forward and backward substitution, right? Letfs make p banded. You said n squared k, that was your opening? [Student:][Inaudible]. Oh, even better. So nk squared. Youfre right. Thatfs the answer. Okay. So just to make it -- I mean, you want me to -- let me just make a point here. If this is full, you probably donft want to do this for more than a couple of thousand. Three thousand, 4,000, you start getting swapping and stuff like that on something like that. You have a bunch of machines, all your friendsf machines, and you run MPI and all that stuff. Whatever. You can go up to 5,000, 10,000, something like that. But things are getting pretty hairy. 
And theyfre getting pretty serious at that point. If this thing is blocked -- if p is block-banded or something like that, itfs got a bandwidth of ten, how big do you think I could go? For example, my laptop, and solve that. Remember, the limit would be 1,000. I could do 2,000. Itfs growing like the cube, so every time you double it goes up by a factor of ten or eight or whatever. So whatfs a rough number? Well, put it this way, we wouldnft have to worry at all about my laptop about a million. I want to make a point here that knowing all this stuff about structure and recognizing the context of problems puts you in a very, very good position. By the way, where would banded structure come up in a least squares problem? Does it ever come up? [Student:][Inaudible]. Structures that are, yeah, that actually banded -- what does banded mean? Banded means that x(i) only interacts with x(j) for some bound on i minus j. So if you had a sort of a trust or some mechanical thing that went like this and things never -- bars never went too far from one to the other, that would be a perfect example. Let me give you some others. Control, dynamic system. So just control is one, because there, itfs time. And, for example, if you have a linear dynamical system or something like that, the third state at time 12, it interacts, roughly, with states one step before and one after. But then thatfs banded. How about this? How about all of signal processing? Therefs a small example for you. All of signal processing works that way, more or less. Because there the band structure comes from time. Signal processing means that each x is dependent only on a few -- you know, a bounded memory for how much it matters. Now the whole problem is coupled, right? Okay. This is just for fun, but Ifm going to use -- itfs good to go over this stuff, because, okay, I just use it as an excuse to go over that. Okay. So herefs a problem instance. So herefs a problem instance where I guess we have 50 dimensions and took a step at point one. Oh, I should -- I can ask a question here about this. In this case, it turns out g is actually differentiable. So if g is differentiable, that actually justifies theoretically using a fixed step size. Actually, in practice as well, because in a -- if you have a differentiable function, if you apply a fixed step size, and the step size is small enough, then you will converge to the true solution. So this goes g of lambda. These are lower bounds on the optimal value, like that. They converge. And this is the upper bound, found by finding a nearby feasible point. And then let me just ask you -- I donft even know because I didnft look at the codes this morning on how I did this, but why donft you guess it? At each step of this algorithm, here, when you calculate this thing -- by the way, if this thing is inside the unit box, you quit and youfre done. Youfre globally optimal because youfre both primal and dual. End of story. Zero duality gap. Everythingfs done. So at each step at least one of these -- at least one component of this pops outside the unit box. Please give me some guesses -- give me just a uristic for taking an x and producing from it something thatfs feasible for this problem. [Student:][Inaudible]. Simple? What do you do? [Student:]If the [inaudible] is negative one, you make it one. If itfs less than negative one, you make it negative one. There you go. You just project. So in this case itfs too easy to calculate the projection. 
You just calculate the projection, and that's what this is: x̃ is simply the projection of x(k) onto the unit box. Okay, so that's that. We're going to come back and see a lot more about the projected subgradient method applied to the dual later in the class.

Okay. Let's look at a more general case: the subgradient method for constrained optimization. Here, instead of describing the constraint as an abstract constraint set, we write it out explicitly as convex inequalities. The update is really simple. You do a subgradient step, and here's the rule: if the current point is feasible, you take an objective subgradient step; if it's not feasible, you find any violated constraint and you use a subgradient of that constraint function. Does this make sense? It's really quite strange; in fact, what's kind of wild about it is that it actually works. You should realize how myopic this is. The algorithm goes like this: you're given x and you start walking through the list of constraints, evaluating f1, f2, f3, and so on; if a constraint value is less than or equal to zero, you go on to the next one. The first time you hit a violated constraint — fj(x) positive — you call fj.get_subgrad, or something like that, to get a g, and you take a step in that direction. Does that step reduce fj? No. The subgradient method is not a descent method; there's no reason it should. So you go down the list, you find that the 14th inequality is violated, you take a subgradient step, and that step could, and often does, make the violation of the 14th inequality worse. These algorithms are just highly implausible — the kind of thing where you need the proof precisely because the algorithm itself looks so ludicrous. A rough code sketch follows below.

Now, we have to change a few things. fk_best is the best objective value over all the iterates that are feasible, and this can be plus infinity if you haven't found any feasible points yet; so fk_best is initialized as plus infinity. The convergence analysis is basically the same; I won't go into the details. It just works. That's the power of these very slow, very crude methods, and in fact that's going to come up in our next topic: what you can say about subgradient methods is that they're unsophisticated and slow, but one of the things you get in return is that they are very rugged. In the next lecture, which we'll get to very soon, you'll see exactly how rugged. So there it is; that's a typical result, and I think the proof is in the notes, so you can look at it.

Let's do an inequality form LP: minimize c'x subject to Ax ≤ b, a problem with 20 variables and 200 inequalities. The optimal value for this instance turns out to be about −3.4. We use a 1/k step size for the objective steps. Oh, by the way, when we take a step on a violated constraint, we can use a Polyak step size, because if you're doing a step on fj, a violated inequality, the value you're interested in is fj = 0 — you know the target — so your step size can be Polyak's.
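Here is a sketch, in Python, of the constrained subgradient method just described. The oracle names (`f0_val`, `f0_subgrad`, `fi_vals`, `fi_subgrad`) are placeholders, and taking the most violated constraint is just one valid way of picking a violated one.

```python
import numpy as np

def constrained_subgradient(f0_val, f0_subgrad, fi_vals, fi_subgrad,
                            x0, alpha, num_iters):
    """Sketch of the subgradient method for: minimize f0(x) s.t. f_i(x) <= 0.

    fi_vals(x)       -- vector of constraint values f_1(x), ..., f_m(x)
    fi_subgrad(x, j) -- a subgradient of constraint f_j at x
    """
    x = np.asarray(x0, dtype=float)
    f_best, x_best = np.inf, None        # best objective over *feasible* iterates (may stay +inf)
    for k in range(num_iters):
        fx = fi_vals(x)
        if np.all(fx <= 0):              # feasible: take an objective subgradient step
            if f0_val(x) < f_best:
                f_best, x_best = f0_val(x), x.copy()
            g = f0_subgrad(x)
        else:                            # infeasible: step on any violated constraint
            j = int(np.argmax(fx))       # e.g. the most violated one (any violated one works)
            g = fi_subgrad(x, j)
        x = x - alpha(k) * g
    return x_best, f_best
```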
Here's the convergence plot, f minus f*, for that instance. If f* is −3.4, this is not bad, right? Let's find where 10 percent accuracy is: there it is — it took about 500 steps to get to 10 percent, something like that. So what does each step cost, assuming dense data and no structure? Let's write that down: we're solving minimize c'x subject to Ax ≤ b. What's the cost of a subgradient method step? (You're exempted if you can't see the screen — though you can see the constraints, and that's the important part.) How do you implement this method — if you were in MATLAB, how long would it be? For that matter, it's just as easy to write with LAPACK, but let's say MATLAB. All the source code for this is online, by the way, and you'll be doing this shortly on homework 3. Here's the lazy way: you evaluate Ax and compare it to b. If Ax ≤ b, what's your update on x? It's x −= αc, right? Otherwise, if Ax is not ≤ b, you find a violated inequality — you might as well find the most violated one, but it doesn't matter, you can take any violated one. And if you evaluate all of them, which is just laziness, what's the cost? It's right there: it's the matrix-vector multiply Ax. What's that? [Student:][Inaudible]. Are you saying mn? Thank you, good. I know it's irritating, but you should just know these things; this should not be an abstract memory from three days of 364a. You should know what the numbers are on modern processors. So the cost is mn per step.

Now how about an interior point method on this problem — what's the cost of an interior point step, and in fact, what's the overall interior point complexity, end of story? At each step you have to form and solve something that looks like AᵀDA with D diagonal. Solving that is n³, but forming AᵀDA is the joke on you: that's mn², and since m is bigger than n, that's the dominant term. So a step costs mn². How many interior point steps does it take to solve this problem? [Student:][Inaudible]. Thank you, twenty. So the overall complexity is on the order of mn². Remember the mnemonic: it's the big dimension times the little dimension squared — that assumes you know what you're doing; if you do it the wrong way it's the big dimension squared times the small dimension, so always ask yourself. So it's mn² per interior point step versus mn per subgradient step: an interior point step costs a factor of n more than a subgradient step, and n here is 20. Is that right?
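Here is roughly what that lazy O(mn) step looks like in Python; a sketch only, with the most-violated-row choice being arbitrary.

```python
import numpy as np

def lp_subgradient_step(x, A, b, c, alpha):
    """One step of the constrained subgradient method on: minimize c'x s.t. Ax <= b.

    The cost is dominated by the matrix-vector product A @ x, i.e. O(mn) per step.
    (A Polyak step size could be used on the violated-constraint step, since the
    target value there is zero.)
    """
    r = A @ x - b                    # residuals of all m inequalities
    if np.all(r <= 0):
        g = c                        # feasible: the objective subgradient is just c
    else:
        i = int(np.argmax(r))        # any violated row works; here, the most violated
        g = A[i]                     # subgradient of a_i'x - b_i is a_i
    return x - alpha * g
```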
Okay, so that says you really should divide these step counts by 20 to compare. You said 20 interior point steps, so 500 subgradient steps is about 25 steps' worth of effort — an interior point method would actually have solved the problem by here. So with the subgradient-type method you've solved it to roughly 10 percent accuracy, maybe a little better, while an interior point method, for the same effort, would give you accuracy on the order of 1e−10. Everything cool?

All right. Our next topic is the stochastic subgradient method, and we're going to get to some of the first things we can actually do with subgradient-type methods that don't really have an analog in interior point methods. So far they're just cool because they're three lines of code and the proof of convergence is four lines. We're going to see some very cool things about subgradient methods later, but now we're going to see something that's actually different and isn't 364a-compatible.

Here's the rough background. These subgradient methods are slow, but they're completely robust. They just don't break down. They're one or two lines of code, they're very slow, and boy are they robust; they just cannot be messed up. And we're going to see a very specific example of that: it's going to turn out that you can add noise — and not small noise — to the subgradient calculator. Look, if you're doing things in double precision floating point, you're adding noise every time you compute anything. In an interior point method, when you say "get me the gradient," it comes back with noise — in EE we would say with something like 200 decibels of signal-to-noise ratio, because that's what IEEE floating point gives you — so it already comes back with noise, but on the order of 1e−8 or 1e−10 times the size of the thing you're calculating. That's standard. So no one would be surprised if a barrier method continued to work with noise in the sixth significant figure of your gradient calculation. That would hardly be surprising. Fifth figure, still fine. Fourth figure, you can start imagining having some trouble.

Subgradient methods are so simple and robust that you can have a signal-to-noise ratio that's quite negative in decibels — basically a subgradient oracle whose signal-to-noise ratio is around one. In other words, when the oracle says the subgradient points in that direction, the true subgradient can actually point back the other way; it's just that if you asked 50 or 100 times, the answers should average out to vaguely the right direction. Everybody got this? And it has lots of applications.

So let me define a noisy unbiased subgradient. Take a fixed, deterministic point x. A noisy unbiased subgradient of f at x is a random vector g̃ whose expected value is a subgradient of f at x — in other words, the subgradient inequality holds with the expected value of g̃ in place of g, as written below.
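In symbols, restating the definition just given: g̃ is a noisy unbiased subgradient of f at the (fixed) point x if

```latex
\[
f(z) \;\ge\; f(x) + \bigl(\mathbf{E}\,\tilde g\bigr)^{T}(z - x)
\quad \text{for all } z,
\]
```

that is, E g̃ ∈ ∂f(x).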
Now, this means, of course, that for any particular realization of g̃ that inequality need not hold. It holds on average. So basically, think of f.get_subgrad as not being deterministic: when you call it, it gives you different g's, and if you called it a zillion times and averaged, you'd get something close to the mean, which is close to a subgradient. Everybody got it? We'll see lots of practical examples where you get things like this. Another way to say it: what comes back is a true subgradient plus a noise which is zero mean. That's a stochastic subgradient.

Now this error can represent all sorts of things. It can just be computation error — when you calculate the subgradient you're sloppy, or you do it in fixed point, anything like that. It can be measurement noise. And we're going to see it can be Monte Carlo sampling error: if the function itself is an expected value of something and you estimate that expected value by Monte Carlo, the estimate is unbiased, and v is the difference between what you actually get and the true value — your Monte Carlo sampling error.

Now, if x is also random, then you say g̃ is a noisy unbiased subgradient of f at x if the following holds almost surely for all z: f(z) is at least f(x) plus the conditional expectation E[g̃ | x], transposed, times (z − x). Note that the right-hand side is a random variable — both because of the conditional expectation and because x is random — while f(z) is not. If that inequality holds almost surely, we call g̃ a noisy unbiased subgradient. That's the same as saying that the conditional expectation E[g̃ | x] is a subgradient of f at x almost surely; ∂f(x) is then a random set, and the statement is that this membership holds almost surely. If x is not random, you don't need the conditioning and you can drop it.

Okay, now here's the stochastic subgradient method. Ready? It's just the subgradient method: you've got a noisy subgradient, and you just use it. Nothing else. You update x := x − α g̃, and that's it. Now I want to point something out: this is now a stochastic process, because even if x(0), your initial point, is deterministic, g̃(0) is already a random variable, and therefore x(1), the first update, is a random variable, because it depends on g̃(0). So the trajectory of the stochastic subgradient method is a stochastic process. The step sizes are chosen the same as always, and f(k)_best is the minimum of the function values along the trajectory — which, by the way, is a random variable.
Because thatfs now a stochastic process here. So thatfs a stochastic process and thatfs a random variable. It is f(k) best. Okay. So herefs some assumptions. The first is wefll assume that the problem is bounded below. These are much stronger than you need, but thatfs good enough. Wefll make this global Lipschitz condition here. More sophisticated methods you can relax these, but thatfs okay. And wefll take the expected value of x(1) minus x* -- x(1), by the way, could be just a fixed number, in which case you donft even need this expected value. Itfs the same as before. Now wefre going to have the step sizes, theyfre going to be square-summable, but not summable. So, for example, one over k would do the trick. So youfre going to take a l2, but not l1. Little l2, but not little l1 sequence of steps. One over k is fine. Okay. Here are the convergence results. Okay, Ifll summarize this one. It works. This says that the -- that it converges in probability. And, in fact, you have almost sure convergence. Wefre not going to prove this one, although itfs not that hard to do, this one. We will show -- actually, wefll show this. This will follow immediately from that, since these are f(k) minus -- f(k) is bigger than f*. So thatfll follow immediately. So before we go on and look at all this, I just want to point out how ridiculous this is. So first of all, the subgradient method by itself I think is ridiculous enough. It basically says you want to minimize -- you want to do minimax problem. It says no problem. At each step go around and find out which of the functions is the maximum. If therefs more than one, arbitrarily break ties. Return, letfs say gradient of that one, and take a step in that direction. Thatfs totally stupid, because if youfre doing minimax, the whole point is when you finish, a lot of these things are going to be tied, and the whole point is you donft want to just step greedily to improve one of these functions when you have a bunch of them. It just says do it, and the one over k step size are going to take care of everything. Whatfs wild about this is that that method, though, is so robust that, in fact, your get subgradient algorithm can be so bad that it can actually, as long as, on average, itfs returning valid subgradients, itfs gonna work. So single to noise ratio can be minus 20 decibels. You can be getting the -- whenever you get a subgradient, you could be adding to that a noise ten times bigger than the actual subgradient. Everybody see this? The whole thing is completely ridiculous. Now, how -- will the convergence be fast? No. It canft be. I mean, it can hardly be fast if someonefs only giving you a subgradient, which is kind of a crappy direction anyway for where to go. But now if they give you a subgradient where the negative 20 decibels signal noise ratio, in other words with -- basically it says that you canft even trust the subgradient within a factor of ten. Youfd have to call -- youfd actually ask for a subgradient direction like ten or 100 -- youfd call it 100 times and average the answers. And thatfs the only time you could start getting some moderately sensible direction to go in. Everybody see what Ifm saying here? The whole thingfs quite ridiculous. And the summary is, it just works. These are kinda cool. What? Itfs also -- this is known and used in a lot of different things, signal processing and all sorts of other areas. 
Actually, therefs a big resurgence of interest in this right now in what people call online algorithm thatfs being done by people in CS and machine learning and stuff like that. So letfs look at the convergence groove. Itfs tricky. You wonft get the subtleties here. But you can look at the notes, too. Itfs not simple. I donft know. I got very deeply confused and you have to go over it very carefully. That itfs subtle you wonft get from this, but letfs look at it. It goes like this. Youfre going to look at the conditional expectation of the distance to an optimal point given x(k) -- the next distance here. Now this thing is nothing but that, so we just plug that in. And we do the same thing we did with the subgradient method. We split it out and take this minus this. Thatfs one term. And we get this term. Now that, and this is conditioned on x(k). So x(k) conditioned on x(k) is x(k). So this loses the conditional expectation condition on x(k). Thatfs a random variable, of course. But you lose the conditional expectation. Itfs the same. Then you get this. You get two alpha times the conditional expectation of now itfs the cross-product of this minus this and that term. And thatfs this thing conditioned on x(k). And the last term is you get alpha squared times the conditional expectation of the subgradient squared given x(k). And wefre just going to leave that term and leave it alone. Now this term in here, wefre going to break up into two things. Wefre going to write it this way. Itfs the -- I can take here the x* is a constant and so condition on x(k), that just x*. And then this term, g tilde k transposed x*, thatfs linear in this, so conditional expectation commutes with linear operators, so that comes around and you get this thing. Now that -- this thing here, the definition of being a subgradient noisy stochastic subgradient, or a stochastic subgradient, if you like, is that this thing here should be, I guess itfs bigger than or equal to whatever the correct inequality is this, to make this thing true. So thatfs how that works. And so you end up with this. Now if you go back and look at the proof of the gradient method, subgradient method, it looks the same, except therefs not the conditional expectations around. And therefs a few extra lines in here because of the conditional expectation. So letfs look at this. And this inequality here is going to hold almost surely. Everything here is a random variable. Thatfs a random variable. This entire thing over here is a random variable. So this inequality holds almost sure this thing is less than that. And now what you can do is the following: we can actually telescope -- I mean, we can actually now telescope stuff, the same as before. If we take -- I should say, if we take expectation of this now, then the expectation of this is just the same as the expected value of that. Thatfs a number, and thatfs less than the expected value of that minus, then, the expected value of that. Expected value of that, that just drops the conditional part there. And so herefs what you get. You end up, if you take expectation of left- and right-hand sides of the inequality above, which was an inequality that holds almost surely, you get this. The expected distance to the optimal point in the next step is less than the expected -- the current distance to the next point minus two alpha k times the expected value of your suboptimality here, plus alpha squared times the expected value of the subgradient squared. 
We replace the last term with the number G², and we apply this recursively, and you end up with: E||x(k+1) − x*||² ≤ E||x(1) − x*||² − 2 Σ_i α_i E(f(x(i)) − f*) + G² Σ_i α_i². The first term on the right is at most R²; that's our assumption. And then, as before, we have the good guy and the bad guy. The G² Σ α_i² term is bad. The middle term is good, because f(x(i)) − f* is nonnegative by definition, so whatever that sum is, it's nonnegative, and it comes with a minus sign — it's on our side, actually making the distance smaller. And the nice part is that the good guy scales like α while the bad guy scales like α², so for small enough α the bad guy loses. Then you just turn this around and you get: the minimum over i = 1,…,k of E(f(x(i)) − f*) is less than or equal to (R² + G² Σ α_i²) / (2 Σ α_i). It's exactly the same bound as before, except now it's the minimum of the expected suboptimality — the suboptimality itself is a random variable, and we're bounding its expectations.

This tells us immediately that the minimum of these expected values converges to f*. Actually, I believe you don't even need the min here — the expected value itself converges — but that's a stronger result and we don't need it. Now we apply Jensen's inequality; I'm going to commute expectation and min. Min is a concave function, so the expectation of the min is less than or equal to the min of the expectations. The min of the f(x(i)) is exactly the random variable f(k)_best, so E f(k)_best ≤ min_i E f(x(i)), and the right-hand side converges to f*. So we're done. By the way, I never remember which way Jensen's inequality goes — I'm not ashamed to admit it — so I usually have to go to a quiet place and draw a little picture with a curve and some lines, because every time it's something different, concave or convex, and it matters which way it goes. I'm trusting myself that it's correct here; I think it is.

Once you have this, you're done, and you get convergence in probability very easily, because these random variables are nonnegative: the probability that a nonnegative random variable exceeds ε is at most its expectation divided by ε, and we already know the numerator goes to zero. So for any ε this probability goes to zero, and you get convergence in probability. By the way, this is not simple stuff. It's not complicated — it fits on two slides with a giant font size — but trust me, it's not totally straightforward, and I think the notes have it in more detail.

Okay, let's do an example: piecewise-linear minimization. Here's what we're going to do. We're going to use the stochastic subgradient method, except — how do you get a subgradient of this thing?
How do you get a subgradient of this? What do you do? Right — you have to evaluate all of the terms, because otherwise you don't know what the maximum is. You evaluate all of them, find the maximum value, then go back and find one of the terms that achieves it, breaking ties arbitrarily (it could be the last one that achieved the max), and return that a_i. And here's what we're going to do: we'll artificially add zero-mean random noise, right in our f.get_subgrad method.

Here's a problem instance with 20 variables and m = 100 terms — you've seen this before; it's the same example, with a 1/k step size. The noise is about 25 percent of the size of the subgradient: the average size of a subgradient is about four, and the noise comes out around 0.7 to one, so roughly 25 percent, something like that. And here's what happens: this is the noise-free case, and this is what you get with noise.

So let me ask: this means you're getting the subgradient to about how many bits of accuracy? If the noise is on the order of a quarter of the signal — signal-to-noise ratio four to one — roughly how many bits is that? It's not a complicated question, so people are probably over-computing. [Student:][Inaudible]. It's two, roughly. What? You said two and a half; you believe two? [Student:][Inaudible]. You think it's 12? It's two, right? It basically means that if I tell you the subgradient is this, you can be off by as much as 25 percent. This is all hand-waving, but it roughly means about two bits of accuracy, which is not a whole lot. These are really quite crude.

You can see what happens, and it's actually interesting. There's one realization, and here's another. In this one we were really lucky: the errors in the subgradient didn't mess us up very much. And that's another one where they did. These are big numbers: to get to 10 percent accuracy you probably multiplied the number of steps by four or so. The really cool part is this: what would happen if the signal-to-noise ratio were inverted — if, when you got a subgradient, the noise were four times as big as the signal, so the ratio were 0.25 instead of four? What do you think would happen? Well, first of all, we know the theory says it's going to work. But think about how ridiculous that is: you calculate the worst-case g, which says "go in that direction," and to that vector you add a Gaussian that's four times bigger. (A sketch of this kind of noisy subgradient oracle is below.)
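Here is roughly what such a noisy oracle might look like for this piecewise-linear example, in Python; the noise model (additive Gaussian with a chosen standard deviation) is just an illustrative assumption.

```python
import numpy as np

def noisy_pwl_subgrad(A, b, x, noise_std, rng=None):
    """Noisy unbiased subgradient of f(x) = max_i (a_i' x + b_i).

    The true subgradient is a_{i*}, where i* attains the maximum (ties broken
    arbitrarily); zero-mean Gaussian noise is then added, so the result is
    unbiased but can have a signal-to-noise ratio well below one.
    """
    rng = np.random.default_rng() if rng is None else rng
    vals = A @ x + b
    i_star = int(np.argmax(vals))      # evaluate all terms, pick (any) maximizer
    g = A[i_star].astype(float)        # a true subgradient
    return g + noise_std * rng.standard_normal(g.shape)   # add zero-mean noise
```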
Which means, basically, itfs all -- it would be very difficult to distinguish between your get subgrad method and this completely random, like, gWhich way should I go?h And youfre like, gOh, that way.h You know? And itfs like, gReally? Can you verify that?h And you go, gThat way.h Itfs just totally random. All youfd have to do that like 1,000 times and average them to even see lightly that therefs some signal there. Everybody see what Ifm saying? What would happen, of course, is that would now mess it up much more. These would be that. Thatfs what would happen. So this shows you what happens is 100 -- you do 100 realizations, you generate 100 stochastic processes, which is the stochastic subgradient method running forward in time. And this shows you the average here. And this shows you the standard deviation. Thatfs a long scale, so thatfs why these things look weirdly asymmetric. So on a linear scale this is plus/minus one standard deviation. This is also plus/minus one standard deviation, but itfs on a long scale. But thatfs what it is. By the way, itfs actually kind of interesting, these points down here correspond to cases where the noises, the noise was kinda bad. Sorry, the noise accidentally pointed you in the right direction, and as a result, you did actually quite well. And of course, these are cases where the noise kinda was hurting you as much as possible. Makes sense? So I guess the summary of this is that the subgradient method you can make fun of it, itfs very slow, and all that kinda stuff, but the wild part is actually any zero mean noise added to it does hurt it. And wefre not talking noise in the fifth digit. Wefre talking, if you like, noise in the minus fifth digit, if you want. So you can actually, I mean, which is quite ridiculous if you think about it. Donft try a Newton method when the -- when youfre calculating your gradients or your Hessians with 25 percent noise. For that matter, donft try it if your signal to noise ratio is one to four, so itfs off the other way around. Okay. So herefs a -- these are empirical distributions of your suboptimality at 250, 1,000, and 5,000 steps here. And they look like this, and you can actually see these would be the ones at the top of that plot, those arrow bars. And then these would be the ones at the bottom. But you can sort of see that the distribution is very slowly going down like that. So thatfs the picture. Let me ask one question about this problem. How would you deal with this in a 364a context? Suppose I told you, you need to minimize a piece-wise linear function, but unfortunately, the only method -- the source code I wonft let you look at. The only thing that calculates this thing only does it to two bits of accuracy. Or another way to say it is every time you call it, youfre going to get a subgradient plus a noise, which is as big as a quarter the size of the actual subgradient. How would you deal with that in 364a? I mean, we didnft really have a method to deal with this, but now just tell me what would you do? [Student:]Call that function 10,000 times. Right. Right. So 10,000. And, good. Ten thousand was a good choice of number, by the way. So you call the number 10,000 times, and then youfd average those subgradients and what would be the, roughly, how much error is in the one thatfs 10,000 times? I was hoping for you to say I was going to go down by square root of 10,000. Thatfs why I was complimenting your choice of 10,000, because it had a nice square root of 100. 
So instead of the error being 25 percent, it would be .25 percent. So that might be enough to actually run a gradient method; it probably would work okay. What would happen is that at the end game it would start being erratic or something, but you'd get a pretty good answer pretty quickly. By the way, if you evaluate it 10,000 times per step, I should point something out: the stochastic subgradient method is beating you. So it's not clear -- anyway, you're right, that's what you'd do.

Okay. So this is actually maybe a good time to talk about stochastic programming. At some point I want to make a whole lecture on this, because it's quite cool and everybody should know about it. And it's this: in stochastic programming, you're going to explicitly take into account some uncertainty in the objective and in the constraints. There's also something called robust programming, where you have uncertainty but you pose the problem in a different way and look for worst-case type things. But stochastic programming is a very common, very old method.

I should mention, it's kind of obvious that this comes up in practice all the time. Anytime anybody's solving an optimization problem -- oh, by the way, I should mention this: if you were not at the problem session yesterday, you should find someone who was and ask them what I said. I don't remember what I said, but some of it is probably useful. So you take any problem, like a linear program, and you ask the person solving the linear program to point to a coefficient. Not a zero, because zeros are often really zero. Also not ones, because a one is often really one. But you point to any other coefficient that's not zero or one and you ask them what that coefficient is. That coefficient has a provenance; it traces back to various things. If it's a robot, it traces back to a length and a motor constant and a moment of inertia, or whatever. If it's a finance problem, it traces back to a mean return, a correlation between two asset returns, who knows what. If it's signal processing, it goes back to a noise or a signal statistics parameter, for example. Everybody see what I'm saying? And then you look at them and you say, "Do you really know that thing to that many significant figures?" And if they're honest they'll say, "Of course not." The truth is they really only know it to -- it depends on the application -- but it could be three significant figures. In a really lucky case it could be four. It could be two, could be one. And actually, if you get some economist after a couple glasses of wine, they'll look up and say, "If we get the sign right, we're happy." Until then, they won't admit that.

So the point of all this is that if you're solving a problem and you point to a data value, it has a provenance, and it traces back to things that you probably don't know better than, say, a percent. It depends on the application, but let's just say a percent. By the way, if you don't know any of the data -- if you barely know the sign of the data -- my comment with respect to optimization is real simple: why bother? If it's really true that you don't know anything about the model, then you might as well do your investments or whatever by intuition and guess.
Because if you donft know anything, using smart methods is not going to really help. So typically youfll know one significant figure, maybe two, maybe three or something like that. And then, by the way, all the stuff from this whole year now starts paying off a lot. And there are weird sick cases where you know things through high accuracy. I mean, GPS is one, for example, where you point to some number and they go, gYou really know that to 14 decimal places?h And theyfre like, gYes.h I mean, I just find it weird. But anyway, normal stuff is accurate between one -- zero is like why bother for this. One, two, three, five, six, I guess in some signal processing things, you can talk about 15 bits or something like that, 20. But rarely more than that. Okay. So therefs a lot of ways of dealing with uncertainty. The main one is to do a posterior analysis. Thatfs very common. Let me tell you -- people know what a posterior analysis is? So posterior analysis goes like this: youfre making -- it doesnft really matter -- letfs make a decode -- youfre making a control system for a robot, I donft care, something like that. So you sit around and you work out a control system and when you work out the control system, you can trace your data back and there is a stiffness in there and therefs a length of the link to and therefs an angle and therefs all this stuff and therefs a motor constant. Therefs all sorts of junk in there. And they have nut values. And you have a robot controller and you get some controller and now you, before you implement it on the robot -- thatfd be the simplest way -- the first thing you do is something called a posterior analysis. Posterior analysis goes like this: you take the controller or the optimization variable, whatever it is, design on the basis of one particular value, like a nominal value of all those parameters. You take that and you resimulate it with those values, multiple instances of those values, generated according to plausible distributions. Everybody see what Ifm saying? And by the way, if you donft do this, then itfs called stupid, actually. This is just absolutely standard engineering practice. Unfortunately, itfs done in Ifd say about 15 percent of cases. Donft ask me why. So in other words, you design a robot controller, you optimize a portfolio, anything you do, you do machine learning -- actually, in statistics this is absolutely ingrained in people from when theyfre small children in statistics. You do this. Itfs the validation set or something like that. So herefs what you do. You design that controller on the basis of a length and a motor constant, which is this -- motor constant depends on temperature and all sorts of other crap. You ask somebody who knows about motors and you say, gHow well do you know that motor constant?h And theyfd say, gI donft know, plus/minus 5 percent, something like that.h You go to someone in finance and you say, gYou really believe that these two assets are 57.366 percent correlated?h And theyfd go, gNo, but itfs between 20 and 30 percent correlation, maybe.h And you go, gThank you.h Then what you do is you take that portfolio allocation and you simulate the risk in return with lots of new data, which are randomly and plausibly chosen. Everybody see what Ifm saying? You change the motor constant plus 5 percent, minus 5 percent. Moment of inertia, change it. The load youfre picking up, you donft know that within more than a few percent. You vary those. 
And then you simply simulate, and you see what happens. If you get nice tight curves -- in other words, if your design is relatively insensitive -- everything's cool, and you download it to the actual real-time controller, or you drop it over to the real trading engine, or whatever you want to do. So that's how that works. That's a standard method; that's posterior analysis. Stochastic optimization is going to deal with the uncertainty directly and explicitly, and I guess we'll continue this next time.

Let me repeat, for those who came in late, my plea -- grovel, I'm not sure what the word is. From 12:50 to 2:05 today, here, we're having the world's first, I believe -- I haven't been notified by SCPD that it's not true -- the world's first tape-behind. We'll have a dramatic reenactment of lecture one. So come if you can.