Let me first answer your -- there was a question about one of the homework exercises due today. That's Homework 5? Right. Okay. And what was the question? [Student:][Inaudible]. 10.2? I haven't memorized them, so you'll have to tell me what it is. [Student:][Inaudible]. What did we ask [inaudible] -- I'd have to see the context. [Student:][Inaudible]. Oh. It's your choice, but what's it for. Is it a simple one? [Student:][Inaudible]. Yeah. Okay. Do you know -- I believe that to sketch a -- not sketch, I guess to get a -- there's a command. This could be completely wrong. I believe it's "quiver." Is that right? So some people are shaking their heads affirming that this function in MATLAB will actually plot -- I guess quiver is for a group of arrows or something. Don't ask me. Apparently that's what it does. That would sketch it, but you're also absolutely welcome to sketch it by hand, just to draw -- take a couple of points, see where they go. That's the important part. Okay. Any questions about last time? If not, we'll continue. We're looking at the idea of left and right eigenvectors. Our next example, a very important one, is a Markov chain. So in a Markov chain, the way we do it, the state is a vector. It's a column vector, and it actually is a probability distribution. It's the probability that you're in each of N states. So p(t) is a column vector. Its entries are all nonnegative, and they add up to one, and they represent a probability distribution. That probability distribution evolves in time by being multiplied by the state transition matrix, capital P. I warned you last time that if you see this material in -- I guess, by the way -- is that coming out? Is that straight there? My monitor has this kind of twisted at a five-degree angle or something like that. It's okay. So I warned you last time that if you take a course in statistics or in other areas, this is the transpose of what you'll see there. Their matrix P will be the transpose, and they actually don't propagate column vectors. They propagate row vectors. Yes? Yes, what? [Student:][Inaudible]. It is tilted. Ah ha. Well, then. I wonder if you guys can rotate your overhead camera a little bit to make this straight. There we go. Thank you. Okay. So these matrices have the sum over columns equal to one, so that's what this is. Every column sum is one. And a matrix like that is called stochastic. Now you can rewrite that this way. If I put a row vector of all ones in front, multiplied by P, I get this row vector here. This is the matrix way of saying that the column sums of P are all one. So this also, if you look at it -- if you like, I could put a lambda in there and say lambda is one. This basically says that the vector of all ones is a left eigenvector of P, associated with eigenvalue lambda equals one. It tells you in particular P has an eigenvalue of one. But if it has an eigenvalue of one, it also has a right eigenvector associated with lambda equals one. And that means there's some nonzero vector with Pv equals v. That's what it means to have a right eigenvector with eigenvalue one, so Pv is v. Some people would say that v is invariant under P. It's invariant when multiplied by P. Now it turns out that because the entries of P are nonnegative -- and this is not something we're gonna look at now -- this eigenvector can actually always be chosen so that its entries are nonnegative, and that means we can normalize it so that they sum to one.
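As an aside, here is a minimal numerical sketch of this in Python with NumPy; the three-state transition matrix P below is made up for illustration and is not from the lecture itself.

```python
import numpy as np

# A made-up three-state Markov chain; every column sums to one (column-stochastic).
P = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.30],
              [0.05, 0.10, 0.60]])

# The vector of all ones is a left eigenvector with eigenvalue one: 1^T P = 1^T.
print(np.allclose(np.ones(3) @ P, np.ones(3)))

# So P also has a right eigenvector with eigenvalue one: Pv = v.
evals, evecs = np.linalg.eig(P)
v = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
v = v / v.sum()                   # normalize so the entries sum to one
print(v, np.allclose(P @ v, v))   # an invariant distribution
```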
Now the interpretation, if you have Pv equals v, is just beautiful. It's an invariant distribution. It basically says if you're in the distribution given by v, that probability distribution, and you update one step in time, you will remain in the same distribution, so it's an invariant distribution. By the way, that obviously does not mean the state is invariant or anything like that. The state is jumping around. This describes the propagation of the probabilities of where the state is. The state is stochastically invariant. That's what Pv equals v means. Okay. Now we'll see soon in fact that in most cases, no matter how you start off the Markov chain -- and this depends on some properties of P -- you will in fact converge to this equilibrium distribution. So that's something we'll see actually very soon. Okay. So let's look at this idea of diagonalization. Let's suppose that you can actually choose a linearly independent set of eigenvectors for an N by N matrix A. So we're gonna call those v1 through vN. By the way, we're soon gonna see this is not always possible. It is not always possible to choose an independent set of eigenvectors of A, but assuming right now it is possible, you write this as A vi equals lambda_i vi, and we'll express this set of matrix-vector multiplies in matrix language -- we'll just concatenate everything. And it says basically A times a matrix formed by concatenating the eigenvectors as columns -- that's a square matrix -- is equal to that same matrix times a diagonal matrix, because multiplying each column vi by lambda_i is multiplication on the right by a diagonal matrix, so we have this equation. So you can write this this way: AT equals T lambda. So there you go: five ASCII characters which express the fact that the columns of T are eigenvectors associated with the eigenvalues lambda_i. So a very, very snappy notation to write this. So actually, now I can explain this business. I said mysteriously earlier that you can write an eigenvector this way. And in fact, that's the way most people write it, but in fact, if you wanna sort of follow this, and make this work for T being a single column or something like that, you can actually write it as this, and that's -- if you see somebody write that, it's for one of two reasons. Either they're just kind of weird, or being perverse by writing the scalar on the right of the vector, not the left. Also of course, to interpret this requires loose parsing. Or they're actually quite sophisticated, and they're simply writing down the scalar version of this eigenvector equation -- I should call it the eigenvector equation. That's AT equals T lambda. Okay. Now T, because these are independent -- T, which we haven't used yet -- this matrix T is invertible because it's a bunch of -- it's N independent vectors. That's nonsingular. It can be inverted. I multiply this equation on the left by T inverse, and I get T inverse A T is lambda. Okay? So that's the big -- this you've seen before. That is a similarity transformation, and that's a diagonal matrix. It's the eigenvalues, and in fact some people would say the matrix A is diagonalized by a similarity transformation.
And in fact, the diagonal matrix you get is the matrix whose diagonal entries are the eigenvalues, and the matrix that diagonalizes A is actually the eigenvector matrix, so it looks like that. Okay. Now this is just -- it's really in some sense just a change of notation, but it's okay. Suppose you had any matrix T that diagonalized A by similarity. So T inverse A T is diagonal. Well, let's see. I'll call those diagonal entries -- why don't I just call them lambda 1 through lambda N? And why don't I call the columns of T v1 through vN? If I do that, and I take this equation, I rewrite it as AT is T lambda. Now I examine these column by column, and column by column that says this. So basically, if you see an equation like T inverse A T equals lambda, or AT equals T lambda -- if you see either of these equations, or the one like that, it means the same thing. They're just different ways of writing out in different matrix form what this means. Okay? So actually technically, these two make sense. These two make sense even if the vi are not independent. But for this last one to make sense, obviously you have to have the vi independent. Okay? So that's the idea. Okay. So we'll say that a matrix is diagonalizable if there is a T for which T inverse A T is diagonal, and that's exactly the same as saying A has a set of linearly independent eigenvectors. This is identical. Now sometimes you're gonna hear -- I think it's an old word. I hope it's going away. It says that if A is not diagonalizable, it's sometimes called defective. So I don't know where or how that came about, but that's simply what it's called. You still do hear that occasionally. Otherwise, it's called diagonalizable. And it's very important to quickly point out that not all matrices are diagonalizable. Here's the simplest example: zero, one, zero, zero. The characteristic polynomial is s squared, so the only eigenvalue is lambda equals zero. There is no other eigenvalue here. Let's try to find now two independent eigenvectors, each with associated eigenvalue zero -- that's the only eigenvalue there is. Well, to say that you're an eigenvector of A with eigenvalue zero, that's just a longwinded way to say you're in the null space. So let's try to find vectors that satisfy zero, one, zero, zero times (v1, v2) equals zero -- or if you like, zero times (v1, v2) -- same thing. Well, if you look at the first row, it says v2 is zero. So v2 has to be zero, so v1 is the only thing we have to mess with here. The second row is always satisfied. And there's no way you can pick two vectors of this form that are gonna be independent. It's impossible. Okay? So here's the canonical example of an A which is not diagonalizable. Now we're gonna see actually later today that this issue -- it's an interesting one. It's a bit delicate. We'll get to it. It's gonna turn out nondiagonalizability -- hey, that came out right. I was about halfway through that and wondering whether it was gonna come out right, but I think it did. I won't attempt it again. Anyway, that property is not -- it's something that does -- it actually does come up. It actually has real implications, but in many cases matrices that you get are diagonalizable. Not all, sadly, because that would mean a whole chunk of -- a bit of the class we could cut out, which I would not only cut out, but cut out with pleasure. Sadly, can't do that.
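A minimal sketch of both situations, again in Python with NumPy; the matrix A below is a made-up example with distinct eigenvalues, and B is the canonical non-diagonalizable example just described.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])              # made-up example, eigenvalues 1 and 3
lam, T = np.linalg.eig(A)               # columns of T are eigenvectors
print(np.linalg.inv(T) @ A @ T)         # similarity transform: approximately diag(1, 3)

B = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # the canonical non-diagonalizable example
lamB, TB = np.linalg.eig(B)
print(lamB)                             # both eigenvalues are zero
print(np.linalg.det(TB))                # essentially zero: the computed eigenvectors are not independent
```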
There are a couple of things we can say -- we're gonna see the general picture very soon, but for now we can say the following. If a matrix has distinct eigenvalues -- so if they're all different, all N eigenvalues are different -- then A is diagonalizable. Okay? And we'll show this later, but that's just something to know. By the way, the converse is false. You can have repeated eigenvalues and still be diagonalizable. Actually, somebody give me an example of that. What's the simplest -- just give a simple matrix with repeated eigenvalues and yet -- identity, there we go. So the identity in R -- what's your favorite number? Seven. Thank you. So that's in R seven by seven, and a very nice choice by the way. I don't know if you chose that, but good taste. R seven by seven, so it's got -- well, it depends if you count multiplicities -- well, I can tell you right now the characteristic polynomial is (s minus one) to the seventh. That is the characteristic polynomial. It has, you would say, seven eigenvalues at s equals one. So that's about as repeated as you can get. All the eigenvalues are the same. Does I have a set of independent eigenvectors? Can it be -- well, I could ask stupid questions like this. Can I be diagonalized? And the answer is it already is. So you'd say, well, what would be T? You could take T equals I. So you could take as eigenvectors e1 through eN. By the way, is it true that any set of N eigenvectors of I will diagonalize I -- that is, is it independent? Let me try the logic on you one more time. You can choose a set of seven independent eigenvectors for I, for example e1, e2, e3, up to e7. Now the other question is this. Suppose you pick seven eigenvectors of I. Are they independent? No. Obviously not, because I can pick e1, 2e1, 3e1 and so on, up to 7e1. They're all eigenvectors. By no means are they independent, so okay. All right. Now we're gonna connect diagonalization and left eigenvectors, and it's actually a very cool connection. So we're gonna write T inverse A T equals lambda. Before, we had multiplied by T and put it over here: AT is T lambda. But in this case, we're gonna write it as T inverse A equals lambda T inverse. Now we're gonna write this out row by row. I'm gonna call the rows of T inverse w1 transpose through wN transpose. I'm gonna call those the rows. And then in this way, you can view matrix-matrix multiplication as a batch operation in which you multiply each row of the first matrix by the second matrix. In other words, this is nothing but this. This one is this. And now, if you simply multiply out on the left, the rows of the left-hand side are simply this. And the rows of the right-hand side are this, because I'm multiplying a diagonal matrix by a matrix. If you multiply by a diagonal matrix on the left, it means you're actually scaling the rows of the matrix. If you multiply by a diagonal matrix on the right, you're actually scaling the columns. You might ask, how do I remember that? Well, I remember it when I'm teaching 263 mostly, but then I forget. And then what I usually do is, if I have to express row multiplication, I will secretly -- because this is not the kind of thing you wanna do in public, let anyone see you doing -- I secretly sneak off to the side and I write down the two by two example to see if row multiplication is on the left or the right. And then I'll find out it's on the left, and I'll say I knew that, and that's how I do it sometimes. Just letting you know that's how I know. But at least while we are doing 263, I will likely remember. All right, so this says this.
If you look at that equation, that's very simple. This says nothing more than that these wi's are left eigenvectors, period. Now in this case, the rows are independent. Why? Because T inverse is invertible -- its inverse is T -- so the rows are independent, and this means that you have an independent set of left eigenvectors. And they're chosen -- in this case, their scaling is very important. Actually, let me just review what this says. It says that if you take a linearly independent set of eigenvectors for a matrix and concatenate them column by column into an N by N matrix, you invert that matrix and you look at the rows of the result. Those are left eigenvectors, so that's what that tells you. Those are left eigenvectors. But it's not as simple as just saying it's any set of left eigenvectors. They're actually normalized in a very specific way. They're normalized so that wi transpose vj is delta ij. Right? Which is -- basically this is saying T inverse times T equals I. Okay? So what you would say is in this case, if you scale the left eigenvectors this way, they're dual bases. Now I should make a warning here, and that is that if a vector is an eigenvector, so is three times it, and so is seven times it -- in fact, any scalar multiple. So when you actually ask for something like left eigenvectors in some numerical computation, they have to be normalized somehow. Generally speaking, I believe this is the case, most codes -- which by the way all go back to something called LAPACK -- will return an eigenvector normalized in norm. It'll simply have norm one. If you just walk up to someone in the street and say, "Please normalize my eigenvector," or "I'm thinking of a direction. Where's the direction?" people will just normalize it by the two-norm. There are other cases where they don't. I wanna point out that that will not produce this result. These are not normalized that way here, so be very careful. These are not normalized. Okay. We'll take a look and see how this comes out. This'll come up soon. Okay. Now we can talk about the idea of modal form. So let's suppose A is diagonalizable by T. From now on, you are to simply recognize this statement as saying exactly the same thing as: suppose A has an independent set of eigenvectors v1 through vN. If you shove them together into a matrix, I'm gonna call that T. That's what this means. Well, if you take new coordinates to be x equals T x-tilde, what this means is x-tilde are the coordinates of x in the T expansion. Or if the columns of T are the v's, it's in the v's. So you would actually say x-tilde gives the coordinates of x in what people would call in this case the modal expansion, or the eigenvector expansion. In other words, instead of writing x in the standard basis, x-tilde gives the coefficients -- the mixture of v1 through vN that you need to construct x. That's what x-tilde is. Well, x-dot is Ax, and x is T x-tilde, so x-tilde-dot is T inverse A T x-tilde. That's diagonal, and you get this. So in that new coordinate system, you get x-tilde-dot equals lambda x-tilde. And that means that by this change of coordinates the autonomous linear system x-dot equals Ax is decoupled.
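Here's a quick numerical check of both claims, using the same made-up two-by-two A as in the earlier sketch: the rows of T inverse are left eigenvectors, they're normalized so that wi transpose vj is delta ij, and the similarity transform decouples the dynamics.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
lam, T = np.linalg.eig(A)             # columns of T: right eigenvectors v_i
W = np.linalg.inv(T)                  # rows of T^{-1}: left eigenvectors w_i^T

print(np.allclose(W @ A, np.diag(lam) @ W))   # w_i^T A = lambda_i w_i^T for every i
print(np.allclose(W @ T, np.eye(2)))          # w_i^T v_j = delta_ij: dual bases
print(W @ A @ T)                              # T^{-1} A T: diagonal, so xtilde-dot = Lambda xtilde
```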
So in x-dot equals Ax, the way you should imagine that is a bank of integrators with the matrix A in a feedback loop. But the matrix A basically takes N inputs and produces N outputs, and it's got all these horrible cross gains. I mean if A has all entries nonzero, it means every output depends on every input, and so you're mixing the dynamics -- that's basically what's happening. When you diagonalize like this, you have completely decoupled all the dynamics and it looks like this. Okay? And that says that at least in this coordinate system, it's very simple. The trajectories give you N independent modes, and the modes are just simply -- well, obviously they have nothing to do with each other. They're totally decoupled. And this is called modal form. That's a very common form. Now these can become complex, in which case it's a bit weird, and you have to explain a little bit, and make a story about how the real and the imaginary parts are separately solutions, and all that kind of stuff. Another very common form you'll see is real modal form, and you'll see this for example in mechanical engineering a lot -- real modal form, for example, for a structure. That's how they would describe the dynamics of a structure, by giving real modal form. Now in this case, there's actually a way to construct a real matrix S so that S inverse A S is not diagonal, but it's block diagonal with one by one or two by two blocks like this. Okay? So for every complex eigenvalue in T inverse A T, you'll actually collect that eigenvalue and its conjugate, and then you can take the real and imaginary parts and so on, and you'll actually get a form like that. I'm not sure -- I don't remember if we've actually -- hey, we don't tell you how to construct S. That would make an excellent homework exercise. Yeah, okay. So that's the exercise: find S. It's a good exercise to see if any of this makes sense, to mess with matrices and things like that. Okay. So you get all these little blocks -- by the way, these little blocks like this, you should by now start recognizing. So a little block that looks like this, with sigma_n, sigma_n on the diagonal and omega_n, minus omega_n off the diagonal -- the characteristic polynomial of this is (s minus sigma_n) squared plus omega_n squared, like that. Now assuming here that omega is less than sigma -- I believe that's the condition here. Oh sorry. Absolute -- no, sorry -- let me keep this out of here. The roots of this thing are gonna be -- no, they're gonna be sigma_n plus or minus the square root of -- let's see -- I guess it's omega_n squared minus sigma_n squared, something like that. Okay? So that's what it's gonna look like. Now these things are the complex eigenvalues, so that quantity is actually negative. Otherwise, this would be two real roots and could be split. And it should kind of make sense, because if this were a two-dimensional system with x1 and x2, the diagonal entries are the self-exciting components and these are the cross components, which actually give you the rotation. So this condition basically says you get enough rotation. Otherwise, it splits into two. I can tell by facial -- just a quick gross look at facial expressions, I've now confused almost everyone, except [inaudible]. Yes? [Student:][Inaudible]. How did I look at it and immediately say it? Well, it wasn't totally immediate, but let's -- two by two matrices, that's where there are basically no excuses, right? So for two by two matrices you should be able to work out the eigenvalues, the inverse, and things like that. Above that, no one could possibly hold you responsible for it. But two by two, let's just do it. It's det of sI minus the matrix a, b, c, d.
I mean these are the kind of things I guess you should know, like that. And so I did this in my head, but I sort of knew what the answer was, so you get this times that, minus bc. And now if I like, I could write out my quadratic formula, which I don't dare do. That would be some horrible expression. This was easier because after all, b was equal to minus c, and a was equal to d, so there are fewer things to keep track of, but this is what I did in principle. Now one of these two by two blocks, which is a complex mode -- they look like this, and they're really quite pretty. They are essentially cross-coupled. It's a pair of integrators cross-coupled. And by the way, if sigma is zero, you get pure oscillation, and this is something we've seen before. In fact, that matrix is zero, omega, minus omega, zero. That's when sigma's zero, so you get this. You get that. And this you should recognize by now as a rotation matrix -- sorry, this is not a rotation matrix. Well, it's related, but in this case, this you should recognize as a harmonic oscillator. By the way, you don't have to recognize this as a harmonic oscillator, but we talked about it, and a little bit of playing around with this would convince you that it is. Yes? [Student:][Inaudible]. Yeah. Sorry. What's that? Oh, this is not it. Thank you. This is this? Like that? Sorry? With a square. [Student:][Inaudible]. Really? Good. Great. Fine. Sorry. Actually, you're right. Of course, this is -- fine, so the roots are sigma_n plus or minus j omega_n. There we go. It's that. Thank you for fixing it. How come no one else caught that? Hm. Well, that's one more demerit for this class. That's bad. All right. So diagonalization -- it simplifies a lot of matrix expressions. I'll say a few things about it. It's mostly a conceptual tool. There are a few places where it actually has some actual use. It's widely used, for example, in mechanical engineering. There are in fact very famous codes that will take a description of a mechanical thing and then spit out the modal form, so they'll list the eigenvalues, and they'll actually give you the eigenvectors, which in fact are these real modal ones, the two by two blocks, not the complex ones. But in fact, mostly it's a conceptual tool, and let's see why. It's gonna simplify a lot of things. So if you have (sI minus A) inverse, that's the resolvent. Generally, if A is two by two, no problem, you can work out what this is. If A is three by three, this is already not a picture -- you don't wanna see the expression for (sI minus A) inverse. Trust me. You can compute it easily for some particular A, but that's another story. However, we can do the following. Wherever you see A, you replace it with T lambda T inverse, like that. So we do that here, and now I'll show you a trick. This identity matrix you write as T T inverse. Okay? Now I pull a T out on the left, and a T inverse out on the right of this inner matrix. The inverse of a triple product is the product of the inverses the other way around, so I get T times (sI minus lambda) inverse times T inverse. By the way, inverting a diagonal matrix, that's fine. That's easy to do. You invert the diagonal entries, and you get this. That's the resolvent. Okay? By the way, this is sometimes called the spectral resolution of the identity or something -- there's some name for it. There's a name for this way to represent the resolvent.
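Here's a small sketch checking that resolvent formula, (sI - A) inverse equals T (sI - lambda) inverse T inverse, at an arbitrary test point s; both the matrix and the test point are made up for illustration.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
lam, T = np.linalg.eig(A)

s = 2.5 + 1.0j                                          # arbitrary test point, not an eigenvalue
lhs = np.linalg.inv(s * np.eye(2) - A)                  # the resolvent (sI - A)^{-1}
rhs = T @ np.diag(1.0 / (s - lam)) @ np.linalg.inv(T)   # T (sI - Lambda)^{-1} T^{-1}
print(np.allclose(lhs, rhs))
```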
Actually, let me say a little bit about that. Some of you might know about the idea of residues in complex analysis. Then again, maybe none of you know about residues or partial fraction expansions. Partial fraction expansions? Come on. Somebody's heard of that. Did you learn about that in some stupid signals and systems course? Is that -- yes. Is that where you heard about it? Okay, great. So this says the partial fraction expansion of the resolvent is this. It's really quite cool. Let me try to get it right. Oh, I think I can get it right. It's this, and I'm using informal syntax here. So that's the partial fraction expansion of this. The partial fraction expansion of a rational function writes it out as a sum of terms, each of which is one over s minus a pole, so that's the partial fraction expansion. Okay? The other way to say it is that these rank-one matrices, vi wi transpose, are the residues of this function at the pole lambda_i. Okay. So diagonalization simplifies the resolvent tremendously. It's also true for powers. For example, if you raise a matrix to a power, and you know its eigenvectors and eigenvalues, it's very straightforward, because you simply write (T lambda T inverse) to the K. Now when you do this, you just line these up K times, and each interior T inverse annihilates the T next to it. This happens in all of these, and you get T lambda to the K T inverse. Okay? So that means it's very straightforward in fact to calculate powers of a matrix. And in fact, this already gives a method -- perhaps not a good one, but at least conceptually this gives you a very good way, for example, of simulating a dynamical system. Or if someone walks up to you on the street and asks what's A to the one million, probably the worst thing you could do would be to write a loop that keeps multiplying an N by N matrix by A and let it run a million times. That would be one way to do it. And the cost of that would be ten to the six times N cubed if you did that loop. Okay? Now you can also do the following. You could also calculate the eigenvectors and eigenvalues, and although I'm not gonna get into the details, that can also be done in order N cubed, like five N cubed or something like it. This doesn't matter, but I just wanna make a point here. Once you calculate this and this, which costs N cubed, let's say five N cubed, the rest is nothing but a bunch of scalar calculations. It costs basically nothing -- it costs order N, which is completely dominated by the N cubed. So this can be done in five N cubed or so. Okay? And the point is that is a whole lot smaller than that. So diagonalization in this case actually gives you a very serious advantage in how to compute something. Okay. Let's look at the exponential. You wanna look at e to the A. In general, calculating e to the A is a pain. It's not fun, unless A is two by two or has some other special structure like diagonal or something like that. It's actually quite difficult. Let's write out the power series, and if you write out the power series here, not surprisingly -- we've already seen that when you have a power, it's the same as simply putting the T and the T inverse on the outside -- these are all diagonal matrices in the middle, and that's nothing but this. The exponential of a diagonal matrix is just the diagonal matrix of the exponentials -- exp and diag commute for a matrix exponential. You get this, like that, and that gives you a very simple way to compute the exponential. That's not quite how it's done in practice, but it is one of the methods that can be used. Okay.
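A minimal sketch of those last two points, with a made-up diagonalizable A: powers and the matrix exponential computed through the eigendecomposition, compared against direct computation (SciPy's expm is used here just as an independent way to get e to the tA).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 2.0],
              [0.0, -0.3]])          # made-up diagonalizable example
lam, T = np.linalg.eig(A)
Tinv = np.linalg.inv(T)

# Powers: A^k = T Lambda^k T^{-1}
k = 20
Ak = T @ np.diag(lam**k) @ Tinv
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))

# Exponential: e^{tA} = T e^{t Lambda} T^{-1}
t = 1.7
etA = T @ np.diag(np.exp(t * lam)) @ Tinv
print(np.allclose(etA, expm(t * A)))
```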
Now in fact, this idea extends to analytic functions. So if you have a function which is given by a power series, that's an analytic function -- you don't need to know this, but it's just kind of cool, so it doesn't hurt. So if you have an analytic function, a function given by a power series -- it could be a rational function, it could be the exponential, anything else -- then it's absolutely standard to overload an analytic function to be callable on N by N matrices. So F of A is given by beta zero I plus beta one A plus beta two A squared and so on, where the beta_i are the power series coefficients of F. Okay? You've seen one specific example so far: the exponential. This allows you to work out this thing. So here, for example, we would have the following. We would have F of A is T times -- well, let me write it this way: diag of F of lambda_i, times T inverse. There's your formula, like that. Okay? So it gives you a very quick way. This is actually something good to know about, because there are a lot of times when you do see things that are polynomials of matrices, rational functions of matrices, and things like that. Those do come up a lot, and it's good to know that if you diagonalize, they will simplify. They'll simplify actually just to this single analytic function applied to the eigenvalues. Okay? Actually, I'm not quite sure why this part didn't get into the notes. That's very strange. Okay. Let's look at the solution of x-dot equals Ax. Well, we already know what it is. It's simply this: x(t) is e to the tA times x(0). That's what it is. That's the solution. And we already have a rough idea of what this looks like and things like that, or we have some idea of it anyway. Although this is not the kind of thing a person can look at right off the bat -- just take a look at a four by four matrix, or for that matter a forty by forty matrix -- and say, "Oh, yeah. That's gonna be a rough ride in that vehicle," or something like that, or "Yeah, that's gonna be some business cycles. I can see them." I mean there's just no way anybody can do that, unless A is diagonal or something like that. Okay. Well, let's start with x-dot equals Ax, T inverse A T is lambda. x(t) is e to the tA times x(0). That's the solution, but this thing is this, and then this has really the most beautiful interpretation, because T inverse x(0) -- I write it this way: wi transpose x(0). That's a number. Then it's multiplied by this thing, e to the t lambda_i, which actually tells you for that eigenvalue and eigenvector whether it grows, shrinks, oscillates if it's complex, and all that kind of stuff. And then this gives you the vi, which reconstructs it. So the interpretation of this formula is really quite beautiful, and every single term can be interpreted. It's this. What happens is you take the initial state x(0), you multiply by wi transpose, and you get something very specific. That is the component of x(0) in the vi expansion. So for example, if w3 transpose x(0) is zero, that has a meaning. It says if you expand x(0) in a V expansion, you will not have any v3. That's what this says. Okay? Because that's what this does. It decomposes it this way. In fact, let me write that down right now. x(0) is the sum over i of wi transpose x(0), times vi. That's the expansion. And that's true for any x(0). Okay? This looks fancy. This comes from the fact that -- I mean this is actually quite straightforward. This basically is a restatement of this. There's nothing [inaudible]. Okay?
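Here's a small sketch of that interpretation, again with a made-up diagonalizable A: the initial state decomposes into modal components wi transpose x(0), and the solution assembled mode by mode matches e to the tA times x(0).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5, 2.0],
              [0.0, -2.0]])
lam, T = np.linalg.eig(A)
W = np.linalg.inv(T)                 # rows are the left eigenvectors w_i^T

x0 = np.array([1.0, -1.0])
t = 0.8

# Modal expansion of the initial state: x(0) = sum_i (w_i^T x(0)) v_i
coeffs = W @ x0
print(np.allclose(T @ coeffs, x0))

# Solution as a sum of modes: x(t) = sum_i e^{lambda_i t} (w_i^T x(0)) v_i
xt = T @ (np.exp(lam * t) * coeffs)
print(np.allclose(xt, expm(t * A) @ x0))
```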
So the wi transpose, which are the left eigenvectors -- they decompose the state into the modal components, if you wanna call v1 through vN the modal components. That's what this does. All right, that's fine, so you decompose it. Then this thing, e to the lambda_i t, time-propagates that mode. That's what it does. It propagates the i-th mode in time -- very simple formula. Why? That's the point of a mode. A mode propagates in a very simple way. It grows. It shrinks. It oscillates. That's about it, by the way. And then it comes out along the direction vi, so all the parts of this are supposed to make sense. That's the interpretation of this. Okay, but now we can actually ask some really interesting questions and answer them. You might ask this. You have x-dot equals Ax. Now remember, we say by definition the system is stable -- by the way, you would normally say the system x-dot equals Ax is stable. Sometimes, as a matter of slang, you'll hear people talk about A being stable, but it should be understood that's slang. So this system is stable provided all solutions of x-dot equals Ax converge to zero; they all decay. But now we're asking this. For a general A, for what initial conditions do you have x(t) goes to zero as t goes to infinity? By the way, there's one answer you can always give, no matter what A is. What's that answer? Zero. If x(0) is zero, then it stays zero, period. And therefore, it goes to zero. So the initial state zero, no matter what A is, gives you at least one trajectory that converges to zero. I mean, converges is a little bit technical there. It is always zero, but that means it converges to zero. Okay. Now the way to answer this is you divide the eigenvalues into those with negative real part -- so let's say that's the first s of them -- and the others, so these have nonnegative real part. Now we can answer the question lots of different ways. One is this: that's just a formula for x(t). This thing will go to zero provided the following holds. The first s terms in here shrink. The remaining terms, s plus one through N, do not shrink. Therefore, this will go to zero provided these numbers, wi transpose x(0), are zero for i equals s plus one, s plus two, and so on up to N. That's one way to say it. By the way, that is identical to saying that you're in the span of v1 through vs. Why? Because x(0) is equal to the sum of wi transpose x(0) times vi, like that. You have this. Therefore, to say that these are zero from s plus one to N means that x(0) is a linear combination of v1 through vs. Okay? So these are two different ways to say this. And there'd be all sorts of names people would call this. They would refer to this span. They would call it the stable eigenspace or something like that. That would be one -- or some people would just call it the stable subspace, and the idea would be this. If you start in this subspace, the trajectory will go to zero. It might oscillate, but it'll go to zero. If you're not in this subspace, it will not. So that's how that works. Okay, so that's the kind of question you can answer now.
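A sketch of the stable-eigenspace idea, with a made-up two-by-two A that has one stable and one unstable eigenvalue: an initial condition along the stable eigenvector decays, while a generic one grows.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 3.0],
              [0.0, 0.5]])           # eigenvalues -1 (stable) and 0.5 (unstable)
lam, T = np.linalg.eig(A)
W = np.linalg.inv(T)
i_stable = np.argmin(lam.real)       # index of the eigenvalue with negative real part

x0_stable = T[:, i_stable]           # start in the span of the stable eigenvector
x0_generic = np.array([1.0, 1.0])

print(np.abs(W[1 - i_stable] @ x0_stable))      # ~0: no component on the unstable mode
for x0 in (x0_stable, x0_generic):
    print(np.linalg.norm(expm(10.0 * A) @ x0))  # decays for the first, grows for the second
```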
And finally we'll handle this issue of stability of discrete-time systems. So suppose the matrix is diagonalizable, and you have the linear dynamical system x(t+1) equals A x(t). Then the solution is trivial -- it's just powers of A. But if you write A as T lambda T inverse, then A to the K is this. Now I understand -- I know what powers of complex numbers do. That I can actually handle, and so you get this. Powers of complex numbers go to zero only if their absolute value is less than one. Their angle tells you how much of a rotation you get at each step, but their magnitude tells you how the magnitude scales, and you realize that x(t+1) equals A x(t) is stable if and only if the eigenvalues are less than one in absolute value. Okay? And it turns out this is gonna be true even when A is not diagonalizable, so I don't mind stating it as a fact right now. x(t+1) equals A x(t) is stable if and only if all the eigenvalues have magnitude less than one, so that's the condition. Actually, as in the continuous-time case, there's a more refined statement. The spectral radius of a matrix is defined to be the maximum of the absolute values of the eigenvalues. Okay? This is called the spectral radius. It's denoted rho. This is relatively standard notation. What this says is that the discrete-time autonomous system x(t+1) equals A x(t) is stable if and only if the spectral radius of the dynamics matrix A -- or update matrix, or whatever you wanna call it -- is less than one. That's the condition here. Now more generally, rho of A gives you the asymptotic growth or decay of the magnitude. So for example, if rho is 1.05 -- in other words, there is at least one eigenvalue with a magnitude of 1.05 -- it says that x(t) will grow asymptotically. It depends on the initial condition, but it can grow as fast as 1.05 to the t. If the spectral radius is 0.7, it says that after ten steps, roughly, the state has decayed roughly by 0.7 to the ten. That's a small number. Okay? So this is the discrete-time analog of the maximum of the real parts of the eigenvalues of a matrix in continuous time. That's what this gives you.
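A minimal sketch of the spectral radius condition, with a made-up update matrix:

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.8]])                     # made-up update matrix, eigenvalues 0.5 and 0.8
rho = max(abs(np.linalg.eigvals(A)))
print(rho)                                     # 0.8 < 1, so x(t+1) = A x(t) is stable

x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x
print(np.linalg.norm(x), rho**50)              # after 50 steps the state has decayed roughly like rho^50
```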
Itfs upper -- itfs much more than that because in fact therefs ones in the upper -- itfs in the zero one, the upper diagonal, and not only that, it can only be one if the lambdas are repeated there. So diagonal is a special case of N Jordan blocks. And itfs gonna turn out the Jordan form is unique. Now you have to interpret that very carefully. It is not of course on the details of the mathematics of linear algebra, so itfs not like wefre gonna get crazy with all this, but itfs important to understand what it means to say itfs unique. It says that basically if two people calculate a Jordan form for a matrix, they actually can be different. One difference is simply this. They might order the blocks in a different way. However, the following is true. If two people work out a Jordan form, they have different Ts here possibly, then therefs a permutation -- a block permutation which will change one Jordan form into the other. So the way you would say this is you would say the Jordan form is unique up to permutations of the blocks. So the things people can -- the types of things people cannot agree on is what is J1. No one can agree on that because it depends on what you chose to put as the first Jordan block. However, you canft -- no one can possibly disagree about the numbers of Jordan blocks for example, and the sizes of them, and the sizes associated with a certain eigenvalue. So for example, if you say this has three eigenvalues, this eigenvalue is associated with one Jordan block of size two by two, everyone computing a Jordan decomposition will actually -- will agree with that. Okay. Now I should also mention Jordan canonical form is -- itfs an interesting thing. It is almost strictly a conceptual tool. So itfs used to show things, to illuminate ideas, and things like that. It is actually not used in almost any numerical computations. Okay? So if you go to the web or something like that -- if you go to Google and type letfs say -- if you type something like gsource code Jordan canonical form,h you will get -- actually, what youfll mostly get is youfll get a bunch of harangues about how terrible it is, and no one should ever compute the Jordan canonical form by -- numerically, and so on and so forth. Thatfs probably what youfll get, but youfll find some strange things there, but itfs basically not done. Even when you do find algorithms for it, every paper will start this way. It will say, gItfs well-known that you basically -- it doesnft make any sense numerically to compute the Jordan form.h It goes on, and it says, gBut letfs suppose you did. You really had to. Then this paperfs about how you might do it, if you were to do it, but we donft recommend it.h So that would be the kind of abstract youfd find. Not that this matters. Ifm just mentioning it. Okay. Maybe itfs not never, but itfs awfully close, and boy do you have to justify yourself if you actually do anything like this in any numerical application. All right. Now the characteristic polynomial of A is -- of course, if J is block diagonal -- so the characteristic polynomial of -- actually, under a similarity transformation is the same. Wasnft that a homework problem? It wasnft? Thatfs terrible. Well, similarity -- wait a minute. Oh well. Thatfs -- maybe it shouldnft have to be. Was it one? Well, Ifm glad to see though that everyone thought about that problem a long time and really -- in fact, thatfs great because itfs actually below even your consciousness now. 
Itfs so ingrained -- [Student:]I think itfs the current homework. Itfs what? Oh, the current homework. Oh well, that would explain it because that wouldfve been terrible. I mean itfs a quick calculation, but the characteristic polynomial under a similarity transformation, it doesnft change the eigenvalues. So the eigenvalues of A are the eigenvalues of this thing. Thatfs a block matrix. Eigenvalues of a block matrix are the eigenvalues of all the blocks stuck together. Eigenvalues of that matrix, thatfs upper triangular. Eigenvalues of this matrix are lambda I with a multiplicity NI. The characteristic polynomial in fact of this is S minus lambda I to the NI here. Thatfs just -- SI minus JI. Okay. So basically, this tells you the following. If you wanna get the characteristic polynomial of the matrix, you take -- itfs the eigenvalues associated with the blocks raised to the block size. And now we immediately see the following. Once you believe in the Jordan canonical form, which I will not show how -- I will not go through the week long proof that any matrix has a Jordan canonical form, especially because the computational algorithmic payoff is -- to say dubious is putting it very nicely, so I wonft go through that, but assuming you believe it, and you should -- that is after all what we have mathematicians for, and they assure us that itfs true. Then it says immediately that if a matrix is diagonalizable, its Jordan form must be -- you can only have block sizes one by one. To say that a Jordan form has block sizes one by one says itfs diagonalizable. Thatfs basically what it says. Okay. Now this -- when you see repeated eigenvalues now -- so in fact, let me explain how this works. If you see repeated eigenvalues, it means maybe you have a nontrivial Jordan form. Oh, I should mention something here. If the Jordan blocks are all one by one, this is diagonal. People would call that a -- if any block is two by two or bigger, people call that a nontrivial Jordan form, meaning diagonal is just diagonalizable. So if you see -- what this says is the following. If the eigenvalues are distinct, your matrix is diagonalizable. And if someone says, gYeah? Whatfs the Jordan form?h Youfd say, gI just said itfs diagonalizable.h Okay. Jordan form is N Jordan blocks, each one by one. Thatfs the trivial Jordan form. If you see repeated eigenvalues, it does not guarantee that the Jordan -- youfre gonna have nontrivial Jordan form. In fact, somebody quickly give me an example of a matrix that has repeated eigenvalues and yet has a trivial Jordan form. [Student:][Inaudible]. I. What size? This is very important. We talked about this earlier. Seven. Thank you. So seven by seven -- the seven by seven identity matrix has seven absolutely equal eigenvalues. Its Jordan form is trivial, which is a pedantic way of saying itfs diagonalizable, and so -- on the other hand, you can have more -- in fact, letfs talk about seven by seven. Thatfs kinda big. Letfs talk about like four by four. So here, thatfs the identity. Eigenvalues all are one. Itfs got four Jordan blocks of size one by one. Okay? How about this one? Eigenvalues are all the same. The eigenvalues of that matrix are one, one, one, one, period. Whatfs the Jordan block structure? Well, therefs one block here, and it looks like that. So you can have a block of two, and then two separate blocks of one. Are there others for this one? Well, I could do this. I could have a block of -- letfs see if I can get it right. No. Yes. 
If Ifd given myself enough room, it wouldfve been right. How about that? That matrix -- whatfs the block -- what am I doing? My god. Okay, let me just get rid of that. I didnft do that. There we go. Okay, here. Since I canft even get it straight, Ifll show you the blocks. Okay? What about this one? Whatfs the Jordan -- I mean the block size here you would describe as itfs got two Jordan blocks, one three by three, one one by one. By the way, eigenvalues, this matrix, this matrix identity, all the same. Any others? One more one. So we could have a single Jordan block -- I donft know what Ifm doing. Here we go. One, one, one -- there we go. Itfs a single block of size four by four. And that would be the -- and any others? Letfs list all possible Jordan forms of a four by four matrix with four eigenvalues one. Therefs one we missed. What is it? [Student:]Two two by two. Two two by twos, so thatfs -- exactly. So you can have this like that, and thatfs it. These along with I are the five possible Jordan forms for a matrix with four eigenvalues of one. Okay? Of course, a natural question in your mind would be -- well, let me list some. The first might be who cares. So wefll get to that. And the second might be -- and itfs related -- is whatfs the implications. What would it mean? How would you know? How would this show up for example in dynamics or something like that of a system? How would you know you had one of these, and what would be any consequences of it? Okay. And wefll get to that. I promise. So the connection between the eigenvalues and the Jordan blocks and sizes is a bit complicated, but it all comes from this. It says that basically if you have -- the characteristic polynomial is a product of S minus lambda I to the block size I. And the null space for example of lambda I minus A is the number of Jordan blocks with eigenvalue lambda. And we can check that because what happens is if you look at the null space, if you look at lambda I minus A -- I will multiply by T inverse in T like this, and T inverse in T goes in there and annihilates itself. Itfs basically -- thatfs lambda I minus J, and that is equal to a block matrix that looks like this. Itfs lambda minus lambda one. And then therefs some minus ones on the superdiagonal like that. And I wonft draw the other blocks. Okay? Now if you wanna know whatfs the null space of this matrix, you have columns -- at the leading edge of each Jordan block, you have a column whose only nonzero entry -- possibly nonzero entry is lambda minus lambda I. So if lambda is equal to lambda I, you get a zero column, and that means that matrix is gonna drop rank. It is not gonna be invertible. So thatfs what -- this happens. So in fact, this will happen. Every match at the beginning of a Jordan block, you will get a zero column, and that says in fact the dimension of the null space of lambda I minus A is exactly equal to the number of Jordan blocks associated with lambda I. So over here, letfs look at that. What is the -- letfs look at the null space of lambda, which is one, minus the matrix A. And letfs look in different -- this is one times I minus A. Letfs look at the different cases. If you take I, and I ask you whatfs the null space of one times I minus A, thatfs the null space of the four by four matrix zero. Whatfs the null space of the four by four matrix zero? [Student:][Inaudible]. Itfs what? Itfs R4. Itfs all four vectors. So itfs four-dimensional in this case. 
What is the null space of I minus A for this matrix? What is it? [Student:][Inaudible]. Well, I wouldn't say R1, because that just means the set of real numbers. It's one-dimensional, which is I think what you meant. It's one-dimensional, and it's all vectors of the form something, zero, zero, zero -- yes, [inaudible] something, zero, zero, zero. It's one-dimensional in this case. Why? There's one Jordan block. It makes perfect sense. In this case, if you take I minus this matrix, these become minus ones and these become zeros, and then you ask what is the dimension of the null space of that matrix, and the answer is two. That's two Jordan blocks. Same here, and here the dimension of the null space is actually three. Okay? So basically, the amount of rank that lambda I minus A drops when lambda is an eigenvalue tells you something -- well, actually it tells you exactly the number of Jordan blocks. That's not enough, by the way, to give you the full block structure. That comes out of lambda I minus A raised to various powers. And I'm not gonna go into this -- in fact, I'm not gonna go into that, except let me just ask a couple of questions. Suppose a matrix has eigenvalues minus one with multiplicity three, and three, and five -- a five by five matrix. Let's enumerate all possible Jordan forms for that matrix. Let's start. What are the possible Jordan forms? What's the simplest one? [Student:] Trivial. The trivial one, which is just diagonal: minus one, minus one, minus one, three, five. And if someone asked you how many Jordan blocks there are, and what their sizes are, what would you say here? How would you describe the trivial -- it's diagonalizable here. [Student:] Five one by one. Yeah, so you'd say it's five one by one. But you'd also say, by the way, that it's pedantic to talk about Jordan blocks when a matrix is diagonalizable. That should be the second part of your reply when someone asks you about this. Okay. Are there any others? Any other possible Jordan forms? For example, could the eigenvalue three correspond to a Jordan block of size two? No. Out of the question, because its multiplicity is one. Same for five. So no matter what happens, this matrix has two Jordan blocks of size one by one, one associated with eigenvalue three, one with five, period. And the only place where there's any ambiguity is this little block of repeated eigenvalues. What are all possible Jordan block structures for the three repeated eigenvalues of minus one? We've got one that's diagonal. What else? A two and a one, and what? [Student:][Inaudible]. And a three. Okay. In this case, if I told you the dimension of the null space of minus I minus A -- if I told you that number, could you then uniquely determine the Jordan form of A? I'm getting this out and ready. What do you think? You could. And the reason is that the dimension of the null space of minus I minus A can be either one, two, or three. If it is three, it means A is diagonalizable, end of story. If it is two, it means there is one block of size two by two and one of size one by one. If it is one, it says there's a single Jordan block, period. And therefore, you have determined it.
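Here's a small sketch of that reasoning; for simplicity, the three candidate matrices below are built already in Jordan form (a similarity transformation wouldn't change the null space dimensions), and the number of Jordan blocks for the eigenvalue minus one is read off as five minus the rank of minus I minus A.

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, n):
    """n-by-n Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

rest = np.diag([3.0, 5.0])     # the simple eigenvalues 3 and 5: one 1x1 block each

candidates = {
    "three 1x1 blocks": block_diag(-np.eye(3), rest),
    "a 2x2 and a 1x1":  block_diag(jordan_block(-1.0, 2), jordan_block(-1.0, 1), rest),
    "one 3x3 block":    block_diag(jordan_block(-1.0, 3), rest),
}

for name, A in candidates.items():
    dim_null = 5 - np.linalg.matrix_rank(-np.eye(5) - A)   # dim N(-I - A) = number of blocks for lambda = -1
    print(name, dim_null)                                  # prints 3, 2, 1 respectively
```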
Warning! That is not always the case. If I told you that a matrix is four by four, has four eigenvalues of one, and the dimension of the null space of I minus A is two, that does not tell you the Jordan form of A. Why? Because you don't know if it is this one or this one. Each of these has two Jordan blocks, and you don't know which it is. Okay? So that's the idea. Okay. Let's look at -- well, naturally, the columns of T in T inverse A T equals J are called generalized eigenvectors. You group these according to the blocks; these are the columns that you associate with the Jordan blocks. If you split these out as columns, then you'll get something like this. The first one comes out just the way you think it does. It's A v1 is lambda times v1 here. That's the first one. But the next ones, because of that superdiagonal of ones, inherit this: each A vj is actually the previous one plus lambda times vj, that is, A vj equals v(j-1) plus lambda vj. And so these are called generalized eigenvectors. You actually won't need to know this. They don't come up that often, but you will see this every now and then. You'll see people refer to generalized eigenvectors. Now if you have x-dot is Ax and you put in a change of coordinates that puts it in Jordan form, basically that splits the dynamics into, I guess, K separate, independent blocks. Each one is a Jordan block. Now you should have an overwhelming urge to draw a block diagram of a Jordan block system, and this is it. It's a chain of integrators. The chain of integrators, by the way, corresponds to that superdiagonal of ones -- that's what gives you a chain. So you should start thinking of a superdiagonal of ones as giving you things like a shift. It's a shift, or it's a chain in this case, like that, and so on. And then the lambdas are simply wrapped around this way. So interestingly, people who do engineering and mathematicians both refer to Jordan blocks sometimes as Jordan chains, for totally different reasons. People in engineering refer to it as a chain because its dynamics are built around a chain of integrators. And in math, it's a Jordan chain because it's a chain of subspaces. So this only shows why if you're in engineering -- that's the dynamics you see. By the way, when you see this, if you remember things like -- so actually, let me explain a little bit now, because the main thing is to get a rough idea of what on earth it means if you have x-dot equals Ax and A has a Jordan block. This says that some of the dynamics is connected in -- would you call that series, or cascade, or something like that? That's what it means. It means that some of the dynamics feeds into the other. Remember what the diagonal system looked like: it's N boxes that look like this, in parallel. So it's gonna turn out the Jordan blocks have to do with dynamics blocks that cannot be decoupled. That's what it's gonna be. It's gonna be dynamics blocks that cannot be decoupled, because they're connected in cascade, not in parallel. Okay. And we can look at things like the resolvent and the exponential of a Jordan block. If you look at the resolvent, you see you have this upper triangular thing here, but if you take the inverse of that -- the inverse of an upper triangular matrix is not that bad to work out -- it looks like this.
Actually, itfs quite interesting because now you can see something else. You can see that when you take the resolvent of a Jordan block, youfre gonna get powers -- youfre gonna get S minus lambdas to negative higher powers. Didnft have that before in the resolvent. So it turns out itfs gonna correspond to -- repeated pols in the resolvent are gonna correspond to Jordan blocks. Could work out the Laplace transform, and this will actually at least give you part of the idea of what the meaning of these things is. When you work out the exponential of a Jordan block, it turns out sure enough you get this E to the T lambda part. Wefre hardly surprised to see that, but now you can see what a Jordan block does. It gives you polynomials. So I think what Ifll do is -- let me say a little bit here. This tells you what you needed to know. When you see X dot equals AX, and letfs make it simple -- letfs say all the eigenvalues are lambda, period. Okay? Now I can tell you what it means for this -- what the Jordan blocks in A -- if A is diagonalizable, the only thing you will see in the solution will be things that look like that, period. If therefs a Jordan block of size two by two, you will not only see exponentials, but you will see terms of this form like that. Thatfs only if therefs a Jordan block of size two by two or larger. If therefs a Jordan block of size three by three, you will see not only TE to the lambda T, but T squared E to the lambda T. Another way -- you can turn it around and say that if you see a solution that looks like that here, that says that there is a Jordan block of size K plus one there. Did I say that right? Yes, K plus one. Thatfs what it says. So Jordan blocks are basically gonna be the matrix attribute which youfre gonna associate with T to the K -- these terms which are a polynomial times an exponential. Okay? And letfs actually just look at one example just for fun, and then wefre gonna quit. Letfs look at X dot -- I believe this might have come up yesterday in your section, so there. I allocated on the page enough for a four by four. There you go. Thank you. Itfs fixed. Letfs look at that. What are the eigenvalues? What on earth have I done? My god. That was a terrible crime. There we go. Okay. But you didnft violate the eight-second rule or something like that. When you write something that stupid down, something should say something within some amount of time. Okay, fine. What are the eigenvalues? [Student:][Inaudible]. All zero. Okay. So just based on that, what do you expect to see on the solution when you look at X? Constants, right? And someone says, gAnything else?h Now in this case, what do the solutions look like? The solutions here -- thatfs a Jordan block of size -- a single one. You are gonna see solutions that look like this. E to the zero T, thatfs one. Youfre also gonna see T, T squared, and T cubed. The solutions of this X dot equals AX for this thing are gonna be polynomials. Everybody cool on that? And theyfre polynomials of up to degree three. Now letfs do one more thing. Letfs change that so that it looks like this. Herefs the block structure. What do you expect to see? Not expect. What would you see? [Student:][Inaudible]. You donft have this, and you donft have that, but you do have this. And finally, if it was just this, if itfs all zero, you donft even expect -- you just see constants. And of course, thatfs correct because X dot equals zero -- the solution is that everything is just constant. Okay? 
So the neural -- I mean you really wanna kind of understand all of this, but the real neural connection you need to make is that dynamically, Jordan blocks correspond to these annoying terms of the form t to the k times e to the lambda t. That's what tells you there's a Jordan block somewhere in the neighborhood. Okay. So we'll quit here.