-- email -- for example, if you don't live in the Bay Area, you should email us to let us know when you want the final emailed to you. That's the first announcement. And I guess, even for people in the Bay Area, sometimes traffic is a big pain or something, in which case this is an easier option. Second announcement is homework nine -- we'll post the solutions Thursday, so Thursday evening, after homework nine is due. And I think we've now responded to maybe 10, and growing, inquiries. I guess there is a problem whose title is something like time compression equalizer; does this ring a bell? Vaguely. You look worn out. No? Okay. It's just early. Okay. All right. So we fielded a bunch of questions about the convolution; we didn't put the limits in the sum, in the convolution, but you're to interpret, I think it's w and c, as 0 when you index outside the range. So a bunch of people, maybe 10, pointed this out to us or something like that.

An important announcement: sadly, I have to leave tomorrow morning to go to Austin. I don't like doing that, but I have to go. So I'm off to Austin, and that means that Thursday's lecture, which is the last lecture for this class, will actually be given this afternoon. And I think it's Skilling Auditorium at 4:15 this afternoon, but whatever the website says, that's what it is. And that's on the first page, the announcements page. So that's where. If you are around this afternoon and want to come, please do come. You should know that it is every professor's worst nightmare, maybe second or third worst, but it's way up there on the list, that you give a tape-ahead and no one comes. That would mean giving a lecture to no one. It's never happened, but it could. So at least, statistically, some of you should come. My guess is someone will come. We've had long discussions about this. Several colleagues have suggested that we should do tape-aheads from wherever we are, sort of like a Nova show or something like that. So you could say, hi, I'm here in Rio and we're gonna talk about the singular value decomposition, or something like that, but we haven't actually approached SCPD to see if they can pull that off. I do want to do that sometime. Anyway, this afternoon is a tape-ahead. Please come, statistically. As long as some of you come. My guess is that some people will come anyway. All right. Any questions about last time or administrative stuff?

Oh, I have to say that one of the problems is, because I'm actually in between this lecture and then Thursday's lecture, which is this afternoon, I also have to give a talk at NASA Ames, so I'm gonna have to leave my office hours early today, around noon. I have to be walking out the door by noon. So I feel quite bad about that. In fact, I'll even be gone when you get your final. That might be a good thing. But I'll be back Saturday morning. I'll be on email and I'll be in contact, let's put it that way. And I'll be back Saturday. And we have a couple of beta testers taking it; I think one in about an hour and a half. So someone is gonna debug it for you. It's already been debugged pretty well. Okay. Any questions?

Then we'll continue on reachability. So last time we looked at this idea of just reachability. Reachability is the following state transfer problem: you start from zero and the question is, where can you go? So it's a special state transfer problem. You start from zero and you want to hit some point in state space at time t.
And we said that R_t is the reachable subspace. It is a subspace: if you can hit a point in t seconds, or epochs, you can certainly hit twice that point, and if you can hit one point or another, you can hit the sum. So it's a subspace. And it's a growing family of subspaces. So we'll work out exactly what the family is. Actually, we already know for discrete time. For discrete time it's interesting, but it's nothing but an application of the material in the course. It's basically this: R_t is the range of this matrix C_t; this is the controllability matrix at time t. I think I mentioned last time that this matrix you will see in other courses. It comes up in, for example, scientific computing, in which case the range of C_t is actually called a Krylov subspace. I may have mentioned that last time, but [inaudible] you will see that this matrix doesn't come up in just this context; it comes up in lots of others. So this matrix here, and I think we discussed it last time, as you increase t it gets fatter and fatter; in fact, every time you increment time, the matrix gets fatter by the width of B. That's the number of inputs, which is m, is what we're using here. So what happens is you have a matrix: you start with B, so the range of B is where you can get in one step, then the range of B and AB is where you can get in two steps. And that has to be phrased very carefully; I guess I shouldn't have said it so quickly. When I said the range of B and AB, I mean the range of the matrix [B AB], B next to AB. So it's the linear combinations of columns of B plus columns of AB; that's where you can get in two steps. Okay. Now we noted, by the Cayley-Hamilton theorem, that once you get to n steps, A^n is a linear combination of I, A, A squared, up to A^(n-1), and so the rank of C_t, or the range, does not increase once t goes above n. So for example, the range of C_(n+1) is the same as the range of C_n. It doesn't grow. Okay. That means we have a complete analysis of where a discrete-time system can get, starting from zero, in t epochs. The answer is just this: you can get to the range of C_t for t less than n, and then after that, once you hit n, it's the range of C, and C is just C_n. That's called the controllability matrix. And the system is called controllable if C_n is onto, in other words, if its range is all of R^n. So that's the idea. And so you get something that's not totally obvious; it's this. In a discrete-time system, any state you can reach in any number of steps can be reached in t equals n steps. Now, that doesn't mean that's a good idea. We will see why very shortly, but nevertheless, as a mathematical fact, it says that if you can't reach a state in n steps then you can't reach it ever. So giving you more time to hit the state is not gonna help at all. Okay. And the reachable set, that's the set of points you can hit with no limit on time, is simply the range of C. It's the range of this matrix. Okay. Now, a system is called controllable or reachable -- unfortunately there are people who distinguish between reachable and controllable, sadly, so sometimes controllable means something slightly different, but don't worry about it for now. A system is controllable if you can reach any state in n steps or fewer, and that holds if and only if this matrix C is full rank. So that's the condition. And we'll just do a stupid little example here.
You have x(t+1) = [0 1; 1 0] x(t) + [1; 1] u(t). Now, we can just look at this and know immediately what it does. It does absolutely nothing but swap the two states. That's the swap matrix; I mean, if you ask me to describe it in English, that's a swap matrix. It simply swaps x1 and x2. The input, and this is the important part, acts on both states the same way. So the point is there's a symmetry in the system. It's just a stupid simple example. There's a symmetry in the system, and it basically says that whatever you can do to one state, and I'm arguing very roughly now, you will do the same thing to the other. So that's a hint right there that there are gonna be some things you can't get to. We'll wait and see what they are. The controllability matrix is [B AB]; this is B, that's AB. And sure enough, [B AB] is not onto; it's singular. And the reachable set is all states where x1 is equal to x2. So no matter what you do here, no matter how you wiggle the input, you will never reach a state that doesn't have the form of a number times the vector (1, 1). It just can't happen. And it's obvious here; you certainly didn't need controllability analysis to see this. And to be blunt about it, that's often the case in almost all examples. I mean, sometimes you don't know and you actually have to check, there'll be something, but most lack of controllability comes down to symmetries like this. It can be much more sophisticated in large mechanical systems and things like that, where only after the fact do you realize that something in your actuator configuration is symmetric, and then of course you see why you couldn't do certain things. We'll see that there's actually a much more interesting, quantitative notion of controllability that we're gonna get to. Okay. Now let's look at general state transfer. So general state transfer, that's the general problem: we're gonna transfer, over an initial to a final time, from an initial state to a final state, and of course this is the formula that relates the final state to the initial state. And of course this first term is completely clear: that's simply the dynamics propagating the initial state forward in time, nothing else. This is in fact what would happen if you did nothing, if u were zero over the interval. This second term is the effect of the input: I stack my inputs in a big vector of size m times (t_f minus t_i) and I multiply it by this controllability matrix here, and that gives you the effect of the input, how it changes your final state. Okay. So what this says is that this equation holds, with x_des the state you want x(t_f) to be, if and only if x_des minus that propagated term is in the range of the controllability matrix, and there's your answer. So it actually makes a lot of sense. It's actually quite beautiful. It basically says something like this: if you want to know whether you can transfer from an initial state to a desired state, it's really the same as a reachability problem, except what you want to reach is an interesting state. You don't want to reach x_des. You want to reach x_des minus what would happen if your initial state were propagated forward in time. That's what it comes down to. Okay. So this is simple, but it's quite interesting. I guess another way of saying it is something like this: if you want to transfer from x(t_i) to some x_des, it says don't aim at x_des. What you do is pretend you're starting from zero and aim for this shifted point, which takes into account the drift dynamics. Okay.
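Just to make those two checks concrete, here is a small numerical sketch: the rank test on the swap example above, and the "is x_des minus the propagated initial state in the range of C" test for a general transfer. This is only an illustration, not anything from the notes; the helper names (ctrb, can_transfer) are made up here.

```python
import numpy as np

def ctrb(A, B, t):
    """Controllability matrix C_t = [B, AB, ..., A^(t-1) B]."""
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])

# the swap example: A exchanges x1 and x2, and the input hits both states equally
A = np.array([[0., 1.], [1., 0.]])
B = np.array([[1.], [1.]])
print(np.linalg.matrix_rank(ctrb(A, B, 2)))   # 1, not 2, so not controllable

def can_transfer(A, B, x_init, x_des, steps, tol=1e-9):
    """Can we steer x_init to x_des in `steps` steps?
    Equivalent to: x_des - A^steps x_init lies in range(C_steps)."""
    C = ctrb(A, B, steps)
    target = x_des - np.linalg.matrix_power(A, steps) @ x_init
    coeffs, *_ = np.linalg.lstsq(C, target, rcond=None)
    return np.linalg.norm(C @ coeffs - target) <= tol

print(can_transfer(A, B, np.zeros(2), np.array([1., -1.]), steps=5))  # False
print(can_transfer(A, B, np.zeros(2), np.array([2., 2.]), steps=5))   # True
```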
So that's kind of what you want to do. Okay. So general state transfer reduces to a reachability problem, and I believe last time somebody asked the following question. We talked about reachability and your ability to get from one state to another, let's say over some fixed time interval, and the question is, if we made the time interval longer, can you get to more points? Certainly if the initial state is zero, that's true. If the initial state is not zero, that's false; it's just wrong. So it is entirely possible, in general state transfer, to be able to hit a state from one initial state in four steps, but then in five steps to be unable to hit it. Okay. That's entirely possible. It does happen.

Now, there's a very important special case. Some people think of it as the dual of reachability, and sometimes people call this controlling; I mean, if you distinguish between reaching and controlling, this is driving a state to zero. So the problem of taking a state that's nonzero and finding an input that manipulates the state to zero is sometimes called regulation and sometimes it's just called controlling. I can tell you the background there. The basic idea in regulation is that the state, what we call x here, actually represents an error. It's an error from some operating condition. So you have some chemical plant, you have a vehicle, you have whatever you like; x equals zero means you're back in some state that you want to be in, some target state, or a bias point in a circuit, or trim for an aircraft, or something like that. Then regulating or controlling means there's been a wind gust or something's happened, you're not in that state, and you want to move it back to this standard state, which is zero, this equilibrium position, which is zero. So that's why it's called the regulation problem or control problem or something like that. And here you can work out exactly what that is: the target here is just zero, so it depends on whether or not minus A^t x(0) is in the range of C_t, and of course that's a subspace, so I can remove the minus sign.

If I give you a nonzero state, let's even just check that. So how would we do the following? I give you a system, I give you A and B and I give you a nonzero state, and I ask, "What is the minimum number of steps required to achieve x(t) equals zero?" That's the minimum-time control problem, or whatever you want to call it. How do you solve that? So this is what you're given: I'm gonna give you A, I'm gonna give you B, and I'm gonna give you this x(0). How do we do it? How do I minimize the t for which x(t) is zero? Let's handle a simple case. If x(0) is zero, then we're already done before we started and the answer is t equals zero in that case. Okay. How can you do it in one step? What do you do? [Student:][Inaudible] It's interesting. What you want to do here is the following. You want to check whether A^t times x(0) is in the range of [B, AB, ..., A^(t-1) B]. That's it, I think. Makes sense? This is what you need to check, and you simply increment t. You try t equals 0; we just did that. You try t equals 1, so you check whether A x(0) is in the range of B. Okay. Now, if you test this and you get up to t equals n and the answer is still no, what do you say? [Student:][Inaudible] That it cannot be done. Actually, because of the A^t x(0) term, that requires a little bit of argument, but that's correct. So that's the basic idea.
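A rough sketch of that incrementing procedure, just to pin it down. The function name is made up, and this leans on the claim from the lecture that if t = n fails, no larger t will work.

```python
import numpy as np

def min_steps_to_zero(A, B, x0, tol=1e-9):
    """Smallest t such that x(t) = 0 is achievable from x0, i.e. the smallest t with
    A^t x0 in the range of [B, AB, ..., A^(t-1) B]; returns None if no t <= n works."""
    n = A.shape[0]
    for t in range(n + 1):
        target = np.linalg.matrix_power(A, t) @ x0
        if t == 0:
            if np.linalg.norm(target) <= tol:   # already at the origin
                return 0
            continue
        Ct = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])
        coeffs, *_ = np.linalg.lstsq(Ct, target, rcond=None)
        if np.linalg.norm(Ct @ coeffs - target) <= tol:
            return t
    return None   # per the lecture, if t = n fails, it cannot be done at all
```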
We have a homework problem that's actually a more sophisticated version of this, I think. Good. Okay. All right. Now, again, we're just applying all the stuff we know, because this is nothing but applied linear algebra; there's nothing new here. Let's look at the least-norm input for reachability. That's actually much more interesting. So let's assume the system is reachable, although, now that you know about the SVD, it wouldn't matter if it weren't, but let's assume it is. And let's steer x from 0 to x_des at time t with inputs u(0), ..., u(t-1). I'll stack them in reverse time; that's just so I can use C_t this way. So I stack them in reverse time and I get: x_des is this matrix, that's a fat matrix, times my controls stacked up, or you could actually call this a control trajectory. That's a good name for that vector. I want to point out one thing about that vector: it runs backwards in time. That's just indexing. I could've run them forward in time, too, but then I would've had to turn C_t around to start with A^(t-1)B, A^(t-2)B, down to B. But everyone writes this as B, AB, A squared B. So time runs backwards in this vector. Okay. Now, in this case C is square or fat and it's full rank, so it's onto, and we want to find the least-norm solution of that. The norm squared of this, by the way, is the sum of the squares of the norms of the components. That's true for any vector: if I take a big vector and I chunk it up, if I divide it up any way I like, the sum of the norms squared of the partitioned elements is the norm squared of the original vector. So that's what this is, and you just want to get the one that minimizes this. This makes a lot of sense. Some people would call this the minimum energy transfer. That's, generally speaking, a lie; it generally has nothing to do with energy. It's extremely rare to find a real problem where the actual goal is to minimize the sum of the squares of something. They do come up, but they're rare. Okay. Well, this is nothing; we know how to do this. So that's called the least-norm, or the minimum energy, input that effects the given state transfer. And if you write it out in terms of what C_t is, you get something very interesting. C_t of course is B, AB, A squared B, and so on, and when you write out C_t transpose, you get B transpose on top of B transpose A transpose and so on, and when you put all the terms together you get a formula that just looks like that. There it is. So that's the formula. And again, there's nothing here; you're just applying least-norm from week three of the class, nothing else. But it's really interesting. First of all, notice that it's just a closed-form formula for the minimum energy input that steers you from zero to a desired point in t epochs, and it just looks like that. And everything's here. The only thing in here is a matrix inverse, and you might ask, "Why do you know that that matrix is invertible? What makes that matrix invertible?" This matrix in here is nothing but C_t C_t transpose. It's a fat matrix multiplied by its transpose. That is nonsingular if and only if C_t is full rank, and in that case, it corresponds to controllability. But in the case where it is controllable, C dagger is in fact this whole big thing here. By the way, it's really interesting to see what some of these parts are. Let's see what they are. There's actually one very interesting thing; you see something like this.
There's sort of a transpose here, and the really interesting part is that it's running backwards in time. We don't have any more time left in the class, so I'm not going to go into more detail here, but it's just an interesting observation. By the way, this is related to things you may have seen in other contexts: in filtering or signal processing you may have seen matched filters, where the optimum receiver is sort of the same as the original signal but running backwards in time. If you've seen that, this is the same thing. It's identical. So this is not exactly unheard of. Okay. Now, this is the minimum-norm input. By the way, these are the things that I showed on the first day; as I recall, you were completely unimpressed. That was where we were just making inputs to some 16-state mechanical system to take it from one state to another in a certain amount of time. They were pretty impressive. We were just using this formula, absolutely nothing else, just this. And all I was doing was varying t to see what the input would look like, to see what it would require to take you to a certain state.

This is much more interesting. We can actually work out the energy, the actual two-norm squared, of this least-norm input. If you work out what the energy of the least-norm input is, it's going to be a quadratic form in x_des. And the quadratic form is very simple. It turns out, when all the smoke clears, and I'll just go through all this, it's this. It's a quadratic form. This makes perfect sense. Let me explain what this is. This is the minimum energy, defined as the sum of the squares of the inputs. So this is the energy if you apply the input that hits that target state and you do the right thing. You're welcome to use inputs that use more energy than this, and many exist. Well, actually, unless C is square, in which case there's only one way to hit it; C square would mean, say, a single input and t equals n. If C is square there's only one way to hit it, so all inputs that work are minimum energy. But if C is fat, there are lots: you can go on a joyride and burn up a lot of energy and still arrive at x_des. This is the minimum. It's a quadratic form, and that quadratic form looks like this, and it's actually quite pretty. Inside here it's a sum of positive semidefinite matrices. Now, I know they're positive semidefinite because each term looks like this: it's A^tau B times (A^tau B) transpose, because this part is just that. But whenever you take a matrix and multiply it by its transpose, you get a positive semidefinite matrix. So it's a sum of positive semidefinite matrices. Well, sums of positive semidefinite matrices are positive semidefinite. And in fact, you can even say this, and as a matrix fact it's correct: when you increment t, you add one more positive semidefinite term to this matrix, which is positive definite once t is large enough, and that makes the matrix bigger. And I mean bigger in the matrix sense. So this is a matrix here, which is getting bigger with t, in the matrix sense. That means, by the way, the inverse is getting smaller. And that means that the minimum energy required to hit a target in t steps, as a function of t, can only go down.
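To make those formulas concrete, here is a minimal numerical sketch of the least-norm input and that minimum-energy quadratic form. It is just an illustration with helper names invented here, and it assumes C_t is full rank so the inverse exists.

```python
import numpy as np

def ctrb(A, B, t):
    """C_t = [B, AB, ..., A^(t-1) B]."""
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])

def least_norm_input(A, B, t, x_des):
    """Stacked least-norm input (in reverse time): u_ln = C_t^T (C_t C_t^T)^{-1} x_des."""
    C = ctrb(A, B, t)
    return C.T @ np.linalg.solve(C @ C.T, x_des)

def min_energy(A, B, t, x_des):
    """E_min(t) = x_des^T (C_t C_t^T)^{-1} x_des.
    Note C_t C_t^T = sum over tau < t of A^tau B B^T (A^T)^tau."""
    C = ctrb(A, B, t)
    return float(x_des @ np.linalg.solve(C @ C.T, x_des))
```

Evaluating min_energy for increasing t traces out exactly the "can only go down" behavior described above.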
Well, it could also stay the same; it doesn't have to strictly go down. Actually, normally it goes down. All right. So it's actually quite interesting. It says that we now have a quantitative measure of how controllable, or reachable, a system is. Reachability is sort of this platonic view that asks, "Can you get there at all?" This one is much more subtle. It's less clean, but it says basically this: it says, oh, I can get to that state, no problem, I can get there, but then it tells you whether, for example, getting there requires a huge input, in which case for all practical purposes you can say, "I can't get there." So that's the idea. Then we can do beautiful things. I can ask you things like this: I can give a target state and I could say that the energy budget is 10, and I can ask, "What is the minimum number of steps required to hit this target and stay within my input energy budget?" I could ask you that question and you could answer it by incrementing t until this goes below 10. One possibility is that this will never go below 10, in which case you can announce several things; for example, you can announce that 10 is too little energy to get there, no matter how long you let the journey be. So that's one option. You can actually solve a lot of very sophisticated problems this way. So what this does is give you a quantitative measure of reachability, because it tells you how hard it is. It also allows you to say things like, "What points or directions in state space are expensive to hit?" Expensive means they require a lot of control; cheap means you can get there with very little control. And it's actually quite interesting. The level sets are ellipsoids, of course, and they basically show you the set of points in state space reachable at time t with one unit of energy, if the right-hand side is a one.

Actually, let's go through the math first and then I'll say a little bit about how this works. So as I said before, if I have t bigger than s, then this matrix is bigger than that one, and that's a matrix inequality, because the difference between the two is a sum of a bunch of terms of the form (A^tau B)(A^tau B) transpose for tau between s and t. So that's what happens here. Now, you know that if one matrix is bigger than another, inverting switches the order. So this inverse is less than that inverse. Now we're done, because if this matrix is less than that one, then any time you sandwich both sides with a z transpose here and a z here, the inequality remains valid; it's an ordinary scalar inequality and it works. And that says it takes less energy to get somewhere more leisurely. So that's the basic idea. It all makes perfect sense. Now, I should mention something here: for general state transfer, the analog is false. Absolutely -- or is it? Ooh. Wow, and I put the intensifier up in front, didn't I. Well, I think it's false. But all of a sudden I had this panic. I think it's false. Let's just say that. That's what I think. I retract my intensifier at the beginning. It's probably false. There we go. We'll leave it that way. So I think for general state transfer it's false. Okay. All right. I'm gonna have to think about that one for a minute. I'm pretty sure it's false.

Okay. Let's just look at an example. So here's an example. It's a 2 x 2 example, because that's the only state space I can draw anyway, so here's a 2 x 2 example. And here's some system. It updates like this.
There's an input, and I want to hit this target state (1, 1). I just made it up; there's no significance to any of this, it's all just made up. And what this shows is the minimum energy required to hit the target point (1, 1) as a function of time. And you see a lot of interesting things here. You can see that if you hit it in two samples it costs you an energy of over nine. If you take three, you can get there with almost half the energy. I guess it's half the energy if you double the time: instead of two steps, do it in four, and so on, and you can see. And it goes down. Now, what's interesting is it appears to be going to an asymptote here, which means that to get to that point, even with infinite leisure, it still costs energy. Now, I can explain that. That's actually reasonably easy to explain. If a system is stable -- does someone have a laptop open? Actually, never mind, you don't even need a laptop. Can someone work out the eigenvalues of this for me? I need a volunteer. Can you do it? Do you have a pen? So he's working on the eigenvalues and he'll get back to us in a minute. I put him on the spot. We'll let you work on that for a bit; it's just because you have to write out a quadratic or something like that.

So the conjecture is -- well, no, cancel the eigenvalue thing. What I was going to say is, if a system is stable and you have to get somewhere, you actually have to fight the dynamics to take the state out to that place, because, and this is very rough, if you take your hands off the controls, if you do nothing, the state will just decay back to zero. So you're swimming upstream when you're doing reachability for a system that is stable. Okay. Now, if it's unstable, let's talk about reachability. Let's say a system is violently unstable, so basically all of the eigenvalues, for a discrete-time system, have magnitude bigger than one. What that means, basically, is that if you do nothing, the state is gonna grow step by step anyway. Now, let's talk about what happens when I give you more and more time to hit a state. What's gonna happen? Suppose I give you, like, a hundred steps and you have a system that's highly unstable, or just unstable. If I give you a hundred steps to hit somewhere, what happens is all you have to do is push x(0) away from the origin. All you do is push x away from the origin the tiniest bit and then take your hands off the controls, and you let the drift, which is the unstable dynamics, bring the system out to where you want to go. Does this make sense? So you kind of work with the drift; there, you're not fighting the stream, it's actually on your side for reachability. Does everybody see what I'm saying? So what that suggests is that for an unstable system, as you give more and more time to hit a target, the energy is gonna go down; in fact, it's gonna go down to zero. We'll get to that now. On this plot, it is very hard to hit a target point like that one, and very easy to hit a target point like that one; it's very cheap to hit this one and very expensive to hit that one. So the controllability properties are not isotropic in this case. Okay, so let's examine this business of the energy going to zero. As a function of t, the matrix inside is a sequence of increasing positive definite matrices, and I mean increasing in the matrix order, so the inverse is a sequence of positive definite matrices which is getting smaller.
Now, a sequence of positive definite matrices that are getting smaller at each step converges, just the way a sequence of nonnegative numbers that are monotone decreasing converges. This converges to a matrix. That matrix has a beautiful interpretation. It's called P here; this matrix is actually called the controllability Gramian -- well, actually it's the inverse of the Gramian, but it doesn't matter what it's called. So this matrix comes up and actually it's beautiful. It's a quadratic form that tells you how hard it is to hit any point in state space with infinite leisure. That's what this matrix tells you. And by the way, if the system is violently unstable, P can be 0. That's extremely interesting. So it takes, basically, zero energy to hit anywhere in a system that is violently unstable.

Let me just do a simple example. Let's take B to be I and let A be 1.01 times the identity. It's a very simple system: u just adds directly to the state, and the dynamics multiplies the state by 1.01 at each step. So basically it says, "If you do nothing, the state just grows by 1 percent each step." That's all that happens. It's a violently unstable system. All the eigenvalues are outside the unit disc; they're all equal to 1.01. And now it's completely obvious: the longer you take, the better off you are. You name any point you want to hit, and if you take t samples, you go back by 1.01, roughly speaking; you take that point, divide it by 1.01 to the t, and that's the u you apply as the first input. That's a sequence of inputs that just kicks the state out and then lets the dynamics take it there. As t gets longer and longer, the energy of those inputs goes to zero. By the way, if P is zero, it does not mean that you can hit any point with zero energy. The only point you can hit with zero energy is the zero state. So when you interpret z transpose P z, you'd say that's the energy required to hit z with infinite leisure. It's really a limit. When this is zero, it basically says that you can hit that point, not with zero energy, but with arbitrarily small energy, by taking a longer and longer time interval. That's what it really means.

Okay. Now, it turns out that if A is stable then this matrix is positive definite. That follows from up here. If a matrix is stable, what it means is its powers, that's A^tau, are going to zero geometrically; in fact, they go to zero roughly like the spectral radius, the largest eigenvalue magnitude, raised to the power t. So that means this is a converging series. This thing converges to some positive definite matrix. The inverse of a positive definite matrix is positive definite, and you have this. So if A is stable, you can't get anywhere for free. But if A is not stable, then P can have a nonzero null space. That null space means just what we were talking about: you can get to a point in the null space of P using as little energy as you like. And all you do is kick it a little bit and let the natural dynamics take you out to where you want to go. You have to be careful doing this, obviously, but that's the way it works. So this is actually used in a lot of things. For example, it's used in a lot of what people call statically unstable aircraft. If you look at various modern fighter aircraft, some of the really bizarre ones will actually have the wings swept forward slightly, and it just doesn't look right.
It just looks like it's flying backwards, actually, and it just doesn't look right, and sure enough, it's not right, because it's open-loop unstable. That's what they mean by statically unstable. Most other ones are stable. Commercial ones are, at least so far, stable. I think they're probably gonna stay that way, but who knows. So with forward-swept wings, or a statically unstable aircraft, you might ask why anyone would build an airplane which, basically, sitting at a trim position in some flight condition, is unstable. So let's think about what this means. It means things like: your nose goes up, and instead of there being a force or moment that pushes your nose back down, when your nose goes up there's actually an upward torque and your nose goes up faster. First of all, why on earth would you ever do this? That is the first question. And this is just for fun. Someone give me a guess. By the way, I made a guess and it was totally wrong when I talked to someone who knew what they were doing. [Student:][Inaudible] Yes, that's the idea. You want to get a nice snappy ride. Okay. And you do. You get a very -- as you can imagine, you do. You pop your elevator down a little bit or whatever it is, and your nose is now going to move very fast. So is the idea that you can do it with a small u, so it's efficient? Okay. So what's the objective? Well, I assumed it was -- I don't know. I actually finally talked to someone who knew what they were talking about, at least on this topic, and they told me why you actually do this. The main reason has nothing to do with efficiency or anything like that. Obviously: you want small control surfaces for a smaller radar cross section. So the reason you want small control surfaces -- obviously, if you're flying at Mach two or something like that, you're not really worried about energy efficiency. What you want is a small control surface, because control surfaces reflect radar. So that's the real reason. And I actually found out how they work. They have, like, five backup control systems, because, let's remember, you pitch up, but you'd better be very careful with this, right? You pitch up with a tiny, very small, subtle control surface that just goes like that, and when you get to where you want, you'd better have just the right input to stabilize there and all that kind of stuff, because if you lose it, I guess in this case, it's all over in under three seconds. In under three seconds, whether the pilot likes it or not, the explosive bolts go and you're out. So that's the way it works. And the way it works is, I think there were four redundant control systems. So if the first one fails, the second one is all ready to go, and if the fourth one fails, you're out the top whether you push the button or not. And they actually do this. And actually now there's a move to do this for some chemical processes, too. By the way, there's a name for a chemical process that's statically unstable. What would be the common name for it? [Student:][Inaudible] Yes, it's called an explosive. That's correct. So I don't know if these things are good or bad or whatever, but people are doing it. They just said, no, we operate this process at an unstable equilibrium point because it's more efficient in terms of the overall operation. So that's it. All of these obviously require active control to make sure everything's okay. Right.
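One small practical aside, going back to that limiting matrix P from a couple of paragraphs ago: for a stable A, the infinite sum inside the inverse is the infinite-horizon controllability Gramian, which satisfies a discrete Lyapunov equation, so it can be computed without summing the series. This is only a sketch, assuming scipy's Lyapunov solver is available; the function name here is invented.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def energy_with_infinite_leisure(A, B, z):
    """For stable A: z^T P z, where P = W^{-1} and
    W = sum_{tau >= 0} A^tau B B^T (A^T)^tau.
    W satisfies the discrete Lyapunov equation W = A W A^T + B B^T."""
    W = solve_discrete_lyapunov(A, B @ B.T)
    return float(z @ np.linalg.solve(W, z))
```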
Without that active control, everything will become not okay very quickly; that's the whole point of an unstable system. There was a question back there. [Student:] No. Maybe no? Just stretching. Okay. All right. So. Okay. Let's look at the continuous-time case and see how that works. It's a little bit different, but there's nothing here you wouldn't expect. And in fact, this allows me to say something that I should've said earlier, but that's good; now I get the excuse to say it, to make a connection between the conditions -- there is a question. [Student:][Inaudible] Right. [Student:][Inaudible] Really. It's a homework problem. I can't do the homework, generally, just like that. I had a discussion once: some people came to my office and I started explaining something, 10 minutes, dead end. I tried again, dead end. And then after 25 minutes they said, "Do you think it's fair to assign homework that you can't do?" And I said, "Yes, absolutely, because at one point, clearly, I could do it, and at that point it was obviously trivial." So all right. Let's answer your question. What was it? I can try, but I'm just -- I can't necessarily do it. I'm not embarrassed in the slightest, but go on. [Student:][Inaudible] That's a good problem. I wonder who made it up. No, I'm kidding. All right. Okay. So you're given an initial state and you want to steer it, not to the origin, but to within some norm of the origin, with what, with a -- [Student:] Minimum amount of input. -- with a minimum amount of input. That's a great problem. Is it continuous time? [Student:][Inaudible] Okay. Fine. All right. So, I don't know, can you solve that? I guess the answer is no. That was a rhetorical question. Let's talk about it. Right, it's safer for me in case I can't solve it. So what happens is you want to -- let's fix a time period. Okay. So then it's a linear problem, right, as to where you can get. So I guess it's sounding to me like a bi-objective problem. Am I wrong? It's sounding to me like one. Right. So the final state is what? Let's just say if you go t steps -- t epochs -- it's A^t x(0) plus something like C_t times -- I'll call it U, but everyone needs to understand U is really a stack of the inputs in reverse time. Is that cool? This is actually a whole sequence of u's, the whole trajectory. Right. That's what you've got, and then what did you want to do? The condition is that the norm of this should be less than some number. What was the number? [Student:] 0.1. 0.1. Good. A nice number. There we go. So we have that. And what did you want to do? You wanted to minimize the norm of U. And then your point is that we never did this, right? Is that your point? [Student:][Inaudible] It seems to be. So we didn't do this. That's true. You can look through the notes and you won't find this anywhere. Any comments? [Student:][Inaudible] What? [Student:][Inaudible] Yes, thank you. Okay. So yeah, we didn't do this. Absolutely true. This is a bi-objective problem. This is a perfect example of how these things go down in practice, right, because basically, you go back and look at, like, week four, and it was all clean. It was, like, "Yes, let's minimize the norm of Ax minus y with small x," and then we drew beautiful plots and all that kind of stuff, right? Here, it's clouded by the notation of the practical application; in this case, the practical setting is steering something from here to there, so it doesn't look as clean. But it is the same.
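For what it's worth, here is one way that bi-objective problem could be set up numerically: sweep a weight mu in a regularized least-squares problem, exactly in the week-four spirit, and read off the point on the trade-off curve where the final-state norm drops below 0.1. This is only a sketch; the setup and names are mine, not anything from the notes.

```python
import numpy as np

def tradeoff_curve(A, B, x0, t, mus):
    """For each weight mu, minimize ||A^t x0 + C_t u||^2 + mu ||u||^2 and record
    (norm of final state, norm of input): one point on the trade-off curve per mu."""
    Ct = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(t)])
    drift = np.linalg.matrix_power(A, t) @ x0
    points = []
    for mu in mus:
        u = np.linalg.solve(Ct.T @ Ct + mu * np.eye(Ct.shape[1]), -Ct.T @ drift)
        points.append((np.linalg.norm(Ct @ u + drift), np.linalg.norm(u)))
    return points   # pick the cheapest u whose final-state norm is <= 0.1
```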
So you make a plot here, trading off -- I don't remember exactly how we did it before, but you would trade off these two things like that, and there's an optimal trade-off curve here. There we go. I know one thing to do: you could set U equal to zero. There, I've got one point. You do nothing and run up a very small bill here. So how do you solve this? How do you solve this? Anyway, I've already said enough. Are we okay now? So what happens is you make the trade-off curve here, and then on this plot, what do you look for? I find the point here which is 0.1, I go up, and I'm looking for that point, and that will solve it, right? Are you convinced? [Student:] Yeah. Okay. So that's it. All right. So it's true, you didn't do that exact problem before, but we did things that allow you to do it. So. Okay. Are you happy now? Okay. Good.

Okay. Let's do continuous-time reachability. So how does this work? Well, it's actually in some ways trickier and in some ways much simpler. It's gonna be interesting, actually. So here's the way it works. Actually, in some ways it's gonna be uninteresting; that's the interesting part about controllability in the continuous-time case. Okay. So we have x-dot equals Ax plus Bu, and the reachable set at time t is actually now an integral, and it's parameterized by an infinite-dimensional set: it's the set of all possible input trajectories you could apply over the time period zero to t. Absolutely infinite dimensional. Okay. Now, it turns out that this subspace is super simple. It's just this. It's actually much simpler than the discrete-time case. In the discrete-time case you can get weird things: this state you can hit in five steps, but not four; this state you can hit in seven, but not three. You can get all sorts of weird stuff, though all the weirdness stops once you hit n steps: anything you're ever gonna be able to hit, you can hit. That's starting from zero in the discrete-time case. In the continuous-time case, it jumps immediately to anything you're ever gonna be able to hit. You can hit it in one nanosecond, at least according to the model. So it's basically this: you form the matrix [B, AB, ..., A^(n-1) B], that's the controllability matrix, and it says that if this matrix is full rank, this set is all of R^n for any positive t. And in continuous time, it says any point you can reach in any amount of time, you can actually reach infinitely fast. That's what it says. And this makes perfect sense; you just have to have your input act over a smaller and smaller time. And it really couldn't have been otherwise. I mean, it would've been really weird if there were a state here you could reach in three seconds, but not two. That would've been kind of weird, because you'd think, "Well, what exactly happened?" And in fact, because that's a subspace, its dimension is an integer, so had it been otherwise, the dimension of the reachable set would've jumped: at, you know, t equals 2.237, it would've jumped from three to four. And you think, "What on earth would allow you, all of a sudden, at some time instant, to manipulate the state into some other dimension?" It makes no sense at all. So in fact, it kind of had to be this way. So this is it; that's the result. And we'll show it a couple of different ways. Actually, there are a bunch of ways to connect it up to the discrete-time case and see how it works.
Now, one way to see that you're always in the range of C is simple. Let's start from zero. e^{tA} is a power series, but I can use Cayley-Hamilton to back-substitute for the powers of A starting at n, n plus 1, and so on, in terms of smaller powers of A. And I'll end up with this: it says that e^{tA}, for any t, is a polynomial in I, A, up to A^(n-1), period -- a polynomial in A of degree less than n. Okay. Now, x(t) is just this integral, but now I'm gonna plug that in, and I get this thing, and now I switch the integral and the sum and I get the following. It's the sum from i equals one to n of this. But that piece is just a number -- you could actually work out what these are exactly, but it doesn't really matter for us -- and that's our friend the controllability matrix. So what this says is: if you have a continuous-time system, no matter what you do with the input, if you start from zero, you will never leave the range of the controllability matrix. Ever.

Now, we're gonna have to show the converse, which is that any point in the range of the controllability matrix can be reached. First we'll cheat a little bit and do it with impulsive inputs. If we're gonna use impulsive inputs, we have to distinguish between t minus and t plus whenever t is a time when there's an impulse applied. So let's just say before the impulse the state is zero, and we apply an impulse which is distributed across the inputs by a constant vector f, that's f1 through fm, multiplied by the k-times-differentiated delta function. That's what it is. And here, the Laplace transform of that is s^k f. The Laplace transform of the state is (sI minus A) inverse B times s^k f. I'll do a series expansion on this; I think that's called a Laurent expansion. Did I say that at the time? I don't think I did. No, I don't think I did, but that's what it is. I think we used it to work out the exponential. So if I expand this, I take the powers that are going to multiply the s^k, and I get things like this. A bunch of them look like this, and let's look at this very, very carefully. When I take the inverse Laplace transform, these correspond to violent impulses in x(t). This s inverse term is gonna be the first one; that's sort of like a step term. This is all the stuff that happens between zero minus and zero plus, and this is what happens right after zero plus. It makes perfect sense. It says that if you apply an input differentiated k times, it has an immediate effect on the state, and the effect is to move it to A^k B f. But now you know how to transfer the state to anything in the range of C, because if I make an input that looks like this -- it's a delta function times f_0, up to a delta function differentiated n minus 1 times [inaudible] -- and I apply this, then x(0+) is C times this stacked vector, and now we're done. So it says that, at least using impulsive inputs, I can reach anything in the range of C in zero time. That's what this says. So that's the picture there.

And then the question, for this example, is: can you maneuver the state anywhere starting from x equals zero? Is the system reachable? If not, where can you get it? Well, you can kind of figure out what it is, but to do the calculation we can actually work it out. You work out the controllability matrix; it's B, AB, A squared B, and you get this matrix here, and you look at it for a little bit and you'll quickly realize it's rank two.
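Here is a tiny numerical illustration of that "starting from zero, you never leave the range of C" statement, using a made-up, deliberately uncontrollable system (not the example on the slides); it just checks that the columns of e^{tA}B sit in the range of the controllability matrix for a few values of t.

```python
import numpy as np
from scipy.linalg import expm

# a deliberately uncontrollable example: the third state is untouched by the input
A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0], [1.0], [0.0]])
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(3)])   # rank 2

# columns of e^{tA} B lie in range(C) for every t, so from x(0) = 0 the state
# never leaves range(C), no matter what input you apply
for t in [0.1, 1.0, 7.3]:
    v = expm(t * A) @ B
    coeffs, *_ = np.linalg.lstsq(C, v, rcond=None)
    print(t, np.linalg.norm(C @ coeffs - v))   # essentially zero each time
```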
All right. Let's move on to a much more important topic, which is least-norm reachability in the continuous case. It's gonna be very similar, except it's gonna be kind of interesting now, because we'll have this possibility of actually effecting a state transfer infinitely fast. And that's gonna come out of this. Let's see how that works. That's your minimum energy input: you have x-dot equals Ax plus Bu and you seek an input that steers x from 0 to x_des and minimizes this integral here. Now, this is not anything we did before. This has got a norm in it; people would call this, by the way, the two-norm, just the norm squared of u. Okay. But this is not anything you've seen before: when this was discrete time, u was sort of a stacked version, and it was big, possibly, but it was finite dimensional. That's an integral; we're in the infinite-dimensional case here. Actually, it's not anything you need to be afraid of. Some of you, depending on the field you're in, will have to deal with infinite-dimensional things. It might just be continuous time or something like that. My claim is that if you actually understand all the material from 263, none of the infinite-dimensional stuff has any surprises whatsoever. Absolutely none. I mean, a few details here and there, some technical details, but everything we did has an analog, and a simple, elementary one. Now, people dress it up and make it look very fancy -- to justify, I don't know, just to make it look fancy, right -- but you'll see the concepts. For example, [inaudible] instead of calling something a symmetric matrix, you'll have a self-adjoint operator. That's the other thing: you're welcome to call a linear transformation an operator, which sounds fancy, by the way, or at least some people think of it as fancy. So you can talk about a linear operator and you can find out, for example, that a symmetric one can be diagonalized. There are some things that get more complicated, but if the operator is what's called compact, then it's gonna look exactly the same. Something like the SVD also works, at least for compact operators. I'm just mentioning this because some of you will go on -- if you ever have to do that, I mean, it should be avoided, of course, dealing with these things, but if you find you've already chosen, or are too deep into, a field where these infinite-dimensional things do appear, don't worry, because I claim if you understand 263 you can understand all of that with just some translations. There are a few additional things that come up -- you'll have continuous spectrum and things like that -- but otherwise it's fine. Has anyone actually already encountered these things? I think there are a lot of areas in physics where you bump into them. So, okay. All right. This is your first foray into that.

So let's just discretize the system with an interval T over N. Okay. And later we're gonna let N go to infinity; that's what we're gonna do. So at first we're actually not gonna look over all possible input signals. We're gonna look at input signals that are constant over consecutive intervals of length h, which is T over N. So that's what we're gonna do. So we're not solving the full problem yet. We'll let the inputs be constant and we'll just apply our various formulas. It turns out [inaudible] exactly what we had before. Now it's finite dimensional, and this is now the controllability matrix of the discretized system.
And remember, these have formulas: A_d is e^{hA} and B_d is this integral here. Okay. And the least-norm input -- now, this is all finite dimensional, so there's no hand waving, nothing; it's week four of the class. The discretized least-norm input is given by this expression here. Now, if I go back and express this in terms of A, using these formulas -- after all, A_d is a matrix exponential, and powers of a matrix exponential just multiply the exponent -- you get something kind of interesting. What happens is B_d turns into, approximately, (T/N) B, so you get the following. That's this expression here, this first expression here. As N gets big, that converges to something that looks like that. The sum is nothing but a Riemann sum for an integral, and the integral is that. Now, you put these together; in other words, you take this thing and multiply it by the inverse of that. Notice that the N conveniently drops out; that just goes away, and so does the T, for that matter. And I get a formula: it's B transpose times this [inaudible]. By the way, if you compare this to the discrete-time case, you will see that it is essentially the same; well, you have to change sums to integrals and things like that.

Now, what's really cool about this thing is the following. The slide is now completely and horribly marked up and no one can read any of it, but imagining that you could read it, the cool part is that this matrix is nonsingular as long as T is positive. I can make T 10 to the minus nine and this matrix will be nonsingular. By the way, it's gonna be nonsingular, but if you integrate something -- again, you have to assume some reasonable time scale and things like that -- if I integrate something from zero to 10 to the minus 9, that integral is gonna be very small. So that says that this inverse is going to be absolutely huge. And so what this says is: oh, I can steer the state from zero to a desired state in any amount of time; I can do it very, very quickly, but it's gonna take a huge input. That's what this says. It all makes perfect sense; it all goes together.

Now, in the discrete-time case, you might want to know why this breaks down, and what breaks down is real simple, and it's for a simple reason. Let's see if I can say this and not sound like an idiot. The problem in the discrete-time case is that time is discrete. This is the problem. Here, time is continuous; I can make it as small as I like. But there, what happens is I'll decrease t. When t equals n, I'm still safe by Cayley-Hamilton, but the minute I drop t below n, I can take t down and at some point this matrix can become singular, in which case the inverse doesn't work. By the way, if I replace the inverse with a dagger, and make that a pseudo-inverse, you get something very interestingly related to our famous homework problem. If I put a dagger in here, I'm gonna get the least-norm input that gets you as close as you can possibly get to the desired target. Did this make sense? So that's what C dagger will do. And that's not the dagger from lecture four; that's not C transpose times (C C transpose) inverse -- help me with this one -- yes, that was it. It's not that dagger. It's the general dagger that requires the SVD. So that's what happens. Okay.
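Putting that limit into a sketch: the matrix being inverted is Q(t) = integral from 0 to t of e^{tau A} B B^T e^{tau A^T} d tau, and the least-norm input is u(tau) = B^T e^{(t - tau) A^T} Q(t)^{-1} x_des. A rough numerical version, assuming scipy's expm and quad_vec are available; the helper names are mine.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

def gramian(A, B, T):
    """Q(T) = integral_0^T e^{tau A} B B^T e^{tau A^T} d tau (numerical quadrature)."""
    Q, _ = quad_vec(lambda tau: expm(tau * A) @ B @ B.T @ expm(tau * A).T, 0.0, T)
    return Q

def least_norm_input(A, B, T, x_des):
    """Minimum-energy input u(t) = B^T e^{(T - t) A^T} Q(T)^{-1} x_des, for 0 <= t <= T."""
    Qinv_xdes = np.linalg.solve(gramian(A, B, T), x_des)
    return lambda t: B.T @ expm((T - t) * A).T @ Qinv_xdes
```

Shrinking T makes Q(T) nearly singular, so Q(T)^{-1} x_des, and with it the input, blows up; that is exactly the "you can get there arbitrarily fast, but it takes a huge input" story.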
Now, the energy required to hit a state is given by this integral, this integral from zero to T. And the cool thing about the integral is that no matter how small T is, Q is positive definite; it's invertible. And I'm not gonna go over a lot of that, but that's sort of the basic idea. Let's see. And I'll just make the connection to the minimum energy [inaudible]. The same story happens: I have an integral of positive semidefinite matrices here. If I increase the time T that you're allowed to use to hit a target, this matrix goes up, this one goes down, and that's the quadratic form that gives you the minimum energy, so you have the same result again. Okay. Let's quit for today. For those of you who just came in, I think I announced at the beginning of class that there's a tape-ahead. It's today, 4:15, Skilling Auditorium, but as usual, you cannot trust me; whatever it says on the website is what it really is. And statistically, some of you should come, because otherwise I'd be put in the terribly awkward position of giving a lecture to no one. It's never happened. Hopefully, this afternoon won't be a first. Okay. We'll quit here.