Welcome back. Ifve got to say, the schedule is a bit bizarre where youfre off for a week and then come back for a week and a half, basically. Let me go down to the pad here. Ifll start by making just a couple of announcements. You might want to zoom in a little bit here. The first one is, just to remind you, the final is going be next Friday to Saturday or Saturday to Sunday. And as always, if for some reason these donft work for you, just let us know and wefll figure out an alternative time for you to take it. Other administrative things are homework, so I guess homework eight, which you have now, right, and thatfs due this Thursday, I think. Is that right? Male Speaker: Yeah. So thatfs due Thursday. But wefre going to pipeline a little bit and wefre going to launch homework nine later today. Thatfs after we figure out what it is and all that sort of stuff. So wefll launch homework nine later today and thatfs due next Thursday, thatfs obviously, the last one. Except, of course, with you final youfll be given a homework ten, which you can turn in during the first week of January. Okay. Letfs, if anyone remembers what we were doing, it seems, this is the topic we were doing before our quarter was interrupted by the schedule. We were talking about the idea of the gain of a matrix in a direction. So this is actually very, very important. This is sort of at the heart of the overloading Y equals AX. If you have Y equals AX where A, Y, and X are scalars, itfs, you donft, I mean, itfs very easy, thatfs simply multiplication. When you have a matrix here, itfs much more interesting. Because in fact, the amount by which the size of Y divided by the size of X, thatfs the gain of the matrix A in the direction X, this varies with direction. Thatfs exactly what makes it interesting. Now there are cases when it doesnft vary with direction. Youfve seen one. One is when A is orthogonal. So if A is square and its columns are orthonormal, then Y equals AX, the norm of Y is the norm of X, the gain is one in all directions. So watch, youfre going to get is the complete picture today, on that. So last time we asked the question, what is the maximum gain of a matrix. And we answered it. And the answer was well, itfs, first of it itfs got a name we gave it a name. The name is the norm of a matrix. And it is given by the square root of the largest, whoops, there we go, there it is the square root of the largest eigen value of A transpose A. Now, you have to be very, very careful here, because A is not necessarily symmetric. In fact, itfs not even necessarily square. A transfers of A, of course, is always square, however, and positive semi definite. Itfs nothing norm of a matrix and that tells you exactly what the maximum amplification factor of a matrix is when you consider it the mapping of Y equals AX. But we know more. We know things like this, the maximum gain input direction is the eigen vector associated of A transpose A, associated with [inaudible]. So that would be the input, which is amplified by the largest factor. The input that is amplified by the smallest factor, input direction is QN. Thatfs the eigen vector of A transpose A associated with [inaudible]. Okay. And of course, this can be zero, which is just a longwinded way of saying A has a null space. Because if youfre in the null space, it means youfve come in non zero, you come out zero; it means the gain is zero. So this is a quantitative idea that generalizes the qualitative of null space. Okay. 
This brings us to the whole story. And the whole story is the singular value decomposition, otherwise known as the SVD. Now, historically it's been around for quite a while, maybe 100 years; it was certainly well known in the '20s and '30s. It's been used in statistics. You'll hear other names for it. The one you'll hear most is PCA, which is principal component analysis. And there's probably a couple of other words for it in other strange fields. That came out the wrong way. That was not to imply that statistics is a strange field, although it is a bit odd, actually. So, but anyway, you'll hear other names for it. Certainly, in some areas of physics it's got some name, and I guess in other contexts it's called things like the Karhunen-Loève expansion; that's something else you'll hear about, the KL expansion. You'll hear all sorts of names. But the main ones are singular value decomposition and principal component analysis. Okay. And the singular value decomposition is a triple matrix decomposition. By the way, we've seen a bunch of them by now. Well, not that many, actually, but we've seen a bunch, and it is very important to keep your matrix decompositions clear in your mind. We've seen diagonalization, that's something like A = T Lambda T inverse. We've seen QR factorization. We have seen an orthogonal eigendecomposition, that's A = Q Lambda Q transpose, and this is yet another. By the way, there's lots of others as well, maybe not lots, but five or so other common ones. So here it is. It says that any matrix can be written this way. It's A = U Sigma V transpose, where A is m by n with rank r. This intermediate matrix Sigma is diagonal and its size is r by r. So its size is exactly equal to the rank. And the singular values, that's sigma one through sigma r, are always ordered from largest to smallest. So sigma one is the largest and sigma r is the smallest, and these are positive. U and V are both matrices with orthonormal columns. So U transpose U is I and V transpose V is I. So that's what these are. So it looks like this: U, and then Sigma, which is diagonal, and then V transpose, which might look like that, for example. Okay. That would be a typical way this might look. Okay. So this intermediate dimension is the rank of the matrix. Now, you can write this many other ways. If you write the columns of U as u_1 through u_r and the columns of V as v_1 through v_r, then you can write U Sigma V transpose out as what people call a dyadic expansion. So that's a dyadic expansion, which of course means you're writing the matrix as a linear combination of dyads, and the dyads are u_i v_i transpose. Dyad is just a word for a rank one matrix, also called an outer product in other fields. Actually, there are no other fields. This is math. So the sigma_i here, these are the non zero singular values of A. So there's actually a bit of confusion, and we'll get to that in a bit, and I'll warn you about it, as to whether singular values can be zero. In the definition I've just given you, the singular values are all positive, period. So it doesn't make any sense to say sigma five is zero. But we'll see that, in fact, there's a lot of confusion on the streets about this and people do refer to zero singular values. But we'll get to that. Okay.
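A quick sketch of the compact SVD and the dyadic expansion in NumPy; the matrix is again an arbitrary example chosen only so it has a non-trivial rank.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [0.0, 1.0]])                       # arbitrary 3 x 2 example, rank 2

    U, s, Vt = np.linalg.svd(A, full_matrices=False) # economy SVD: U is 3x2, s has 2 entries, Vt is 2x2
    r = np.sum(s > 1e-12)                            # numerical rank; keep only the positive singular values
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]

    print(np.allclose(A, U @ np.diag(s) @ Vt))       # A = U Sigma V^T

    # dyadic expansion: A = sum_i sigma_i u_i v_i^T
    A_dyads = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
    print(np.allclose(A, A_dyads))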
Now, these v_i are called the right or input singular vectors of A, and we're going to see why in a minute, and the u_i are the left or output singular vectors. And it sort of makes sense. If A is U Sigma V transpose, you can see that when you operate, when you form AX, the first thing you do is multiply by V transpose, so you form V transpose X. But you know what that is. That's expanding X in the v_i basis. That's the first thing you do. Then you scale, and then you send the output out along the u_i basis. So these are really the bases, sort of, for the part of the input that actually comes through the system and for the output. So we'll get more into that in a minute. But it's clear V is associated with the input and U with the output. That's clear. Okay. Well, right at the moment these things are quite mysterious, but I think we can clear up the mystery very quickly. If you form A transpose A, that's (U Sigma V transpose) transpose times U Sigma V transpose. If you transpose the first factor you get V Sigma U transpose, since Sigma is diagonal and therefore symmetric, so Sigma transpose is Sigma. Then the U transpose U in the middle goes away, and you get V Sigma squared V transpose over here. And what that says is the following: it says that V Sigma squared V transpose is an eigenvector decomposition of A transpose A. So A transpose A is a symmetric positive semidefinite matrix; if you write out its eigenvector decomposition, you get V Sigma squared V transpose, that's this thing, and so those are the input singular directions. And that says that the v_i are eigenvectors of A transpose A. It also tells you exactly what -- now we know what the singular values are. They're no longer mysterious. The singular values are the square roots: sigma_i is the square root of the i-th eigenvalue of A transpose A. Okay. And that's for i equals one to r. Those are the eigenvalues of A transpose A that are positive; after that, they're zero. And in particular, sigma one is nothing but the norm of A. It's the square root of the largest eigenvalue of A transpose A. So sigma one is the maximum gain. That sort of makes sense. We'll see why that's the case in a minute. We can actually see it right now, but I'll wait a minute. Now, if you multiply in the other direction, you get AA transpose; you just multiply this out. This time the V goes away and you get U Sigma squared U transpose, and what you find is that the u_i are eigenvectors of AA transpose. So that's what they are. These are the output directions. And the sigma_i are, again, the square roots of the eigenvalues of AA transpose. And here a question comes up: now you have two formulas for the sigma_i, so which is the right one? Well, they're both correct, because it turns out that the non zero eigenvalues of CD are the same as the non zero eigenvalues of DC, whenever both of these products are square matrices. Was there a homework problem on that or something? I have a vague memory of this. Yes. Okay. There was. Good. So in fact, AA transpose and A transpose A, which by the way generally have different sizes, have the same non zero eigenvalues. By the way, this is super useful. I can give an example. If I ask you what are the eigenvalues of PP transpose, or something like that, that's symmetric -- I don't even need that, I can just do it the other way around. Let's make a general thing, P Q transpose, there you go. So P Q transpose is a rank one matrix, and I ask, what are the eigenvalues of P Q transpose?
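Here's a numerical check of those two facts, again with an arbitrary example matrix: the positive eigenvalues of A transpose A and of A A transpose agree, and their square roots are the singular values.

    import numpy as np

    A = np.random.default_rng(0).standard_normal((5, 3))   # arbitrary 5 x 3 example

    s = np.linalg.svd(A, compute_uv=False)                  # singular values, largest first
    lam_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]    # eigenvalues of A^T A (3 of them)
    lam_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]    # eigenvalues of A A^T (5 of them)

    print(np.allclose(s**2, lam_AtA))                       # sigma_i^2 = eigenvalues of A^T A
    print(np.allclose(s**2, lam_AAt[:3]))                   # ... and the nonzero eigenvalues of A A^T
    print(np.allclose(lam_AAt[3:], 0.0, atol=1e-10))        # the remaining eigenvalues are (numerically) zero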
You would use this theorem to answer it. You'd say, well, the non zero eigenvalues of P Q transpose are the same as the eigenvalues of Q transpose P. That's a one by one matrix. I know what the eigenvalues of a one by one matrix are: it's just the number. So that says that the eigenvalues of P Q transpose are Q transpose P, together with n minus one zeros, if P Q transpose is n by n. Okay. So I'm just giving you an example, but anyway, that was just a little aside. Okay. Now, we can actually see what these u's are. Obviously, if you write A as U Sigma V transpose, then if you plug any vector into A, like AX, the last thing that happens is multiplication by U. So anything of the form AX is a linear combination of the columns of U. Therefore, the columns of U certainly span a subspace of the range of A, but in fact, their span is exactly equal to the range of A, because I can make this intermediate vector be anything I like, including e_1 through e_r. Okay. So u_1 through u_r are an orthonormal basis for the range of A. Now, v_1 through v_r are an orthonormal basis for the orthogonal complement of the null space of A, not the null space of A. In fact, it's sort of the opposite of the null space of A. These are the input directions that basically are not in the null space. That's one way to say it. So if you're orthogonal to v_1 through v_r, you're in the null space of A. So that's what this means. So this connects, once again, to all sorts of things you've seen before. So let's get some interpretations now of what this means, A equals U Sigma V transpose. Well, it means this: you can write it out as a dyadic expansion, or you can do it as a block diagram. As a block diagram, it works like this. It says take a vector in, and the first thing you do is you partially expand it. Of course, the v's are not a basis, because there's only r of them. But it says you partially expand: you calculate the v_i coefficients for i equals one to r; that's V transpose X, that's an r-vector with the coefficients of X in a v_i expansion. Then you scale those by positive numbers. That's all you do. That's diagonal, no interaction. Then you take those scaled numbers and you reconstitute the output by multiplying by this matrix, whose columns are orthonormal. Now, by the way, when you move from here to here, that's an isometric mapping. In other words, distances here, this is distance in R^r, that was big bold R with a superscript little r, and then this is in, I can't remember now, R^m, is that right? Yes. This is in R^m. So this mapping is isometric. Distances are preserved exactly, angles are preserved, everything here: norms, everything. Okay. So this is kind of the idea. Now, if A is square this looks exactly like an eigenvalue decomposition for symmetric A. Actually, there's one difference. For symmetric A, the middle matrix is diagonal and real, but the entries can be negative, because you can have negative eigenvalues. And in that case, U and V are the same. The input and output directions are the same. That sort of makes sense. That's what symmetric means. Symmetric means something like you can swap the input and output and it looks the same. Because when you transpose a matrix, that's what you're doing: you're switching the roles of the input and output. So this kind of makes sense. So this is the picture. By the way, you can now see all sorts of things; you get a complete picture of the gain as a function of direction now.
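A tiny check of that rank one fact, with made-up vectors p and q: the eigenvalues of p q transpose are q transpose p together with zeros.

    import numpy as np

    p = np.array([1.0, 2.0, -1.0, 3.0])
    q = np.array([0.5, 0.0, 2.0, 1.0])

    lam = np.linalg.eigvals(np.outer(p, q))   # eigenvalues of the 4 x 4 rank one matrix p q^T
    print(np.sort_complex(lam))               # one eigenvalue equals q^T p, the other three are zero
    print(q @ p)                              # q^T p = 0.5 + 0 - 2 + 3 = 1.5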
And letfs see how that works. If you want to find an input here for which the output is largest possible, thatfs here, thatfd be the maximum gain input direction, you would do this. Youfd say, well, look, norm here is the same as the norm here. So if you want to maximize the norm here, you donft even have to worry about the U, just make this as big as it can be. So it says what you really want is you want this vector, then scaled by these numbers sigma one through sigma R, to be as big as possible. Now, when you put in a vector here of unit norm, here youfre going to get a V transpose X is a set of numbers whose norm is less than one. I guess thatfs vessels theorem, or whatever, and I believe you had a homework problem on that, too. So because the partial expansion and the VI coordinates. How would you make this come out so that -- whatfs going to happen to this vector is this vectorfs going to come out, itfs going to be a bunch of numbers whose squares are going to add up to less than one, less than equal to one, and then theyfre going to be scaled by positive numbers. If you want that to come out as big as possible, you have to put all, all of this into the first component because it is going to get the greatest amplification factor. And that means you want V transpose X to be E one. And therefs only one way to do that. And thatfs to chose X equals V 1. Thatfs the first column of V. You can check this, you can verify this many other ways. Ifm just explaining how this works. Okay. So thatfs the idea. U one is the highest gain output direction. And of course, you have AB 1 and sigma one and U one. In fact, of course, you have AVI equals sigma I times UI. So it basically says that the Ifs singular input direction is amplified by gain factor sigma I. The sigma Ifs are gain factors. So letfs see, actually, if we can figure out how this works. Am I missing something here, Page 30? No, Ifm not. Here we go. All right, here we go. So letfs take this four by four matrix. Obviously, thatfs so small you can figure out what happens yourself. Actually, thatfs on the boundary, to tell you the truth. Even a four by four matrix, you canft look at it, no person can look at a four by four matrix, except in totally obvious cases like when itfs diagonal, and say something intelligent about the gain varies. I mean, you can say a few things, you can mumble a few things and wave your arms, but you really canft say much more. So you can imagine what happens when you have a 500 by 500 matrix. But anyway, letfs take A is in our four by four, and the singular values are 10, 7, .1, and .05. So the first thing, we can all ask sorts of questions about what does that mean about the matrix and its gain properties. Well, the first is this, the rank of A is four, because you have four singular values. So A is U sigma V transpose, U, V, and sigma are all four by four in this case. And what this says is that the gain varies from as small as .05 to it as big as 10. So that is a 200 to 1 ratio of gain. So if you want to know how isotropic is this mapping, in the sense of how much can the gain vary with direction, the answerfs 200 to 1. Wefll talk, actually later this lecture, about what that isotropy means and isotropy means, lack of isotropy. So the gain varies from .05 so it has no null space. However, there is an input direction, which is scrunched by factor of 20 to 1. And therefs another input direction, which is amplified by a factor of 20 to 1. Okay. 
So we can be much more specific about it. We could say that if the input X mostly lies in the plane spanned by v_1 and v_2 -- these are orthogonal, and that's the plane of the two most sensitive input directions -- then it gets amplified by somewhere between seven and ten. That's what it means. Okay. So there's one plane of input directions for which this system roughly has a gain on the order of ten, you know, between seven and ten. By the way, the outputs come out along the plane spanned by u_1 and u_2. Those are the two most sensitive output directions, or the highest gain output directions. Now, on the other side, there is another plane, by the way orthogonal to the original plane, which requires, by the way, some serious visualization skills, advanced visualization skills, since this is in R four. But you have another plane, orthogonal to the original high gain plane, and now you have a low gain plane: inputs that lie along this low gain plane get scrunched by a factor between 0.05 and 0.1. Okay. And they're going to come out, well, they'll hardly come out, they'll be greatly attenuated, but when they come out, they'll come out along the low gain output plane spanned by u_3 and u_4. That's the picture. Okay. Now, depending on the application, you might say that A is rank two. That's going to depend entirely on the application. Obviously, it's not rank two. Mathematically it's rank four, right there. However, in some cases you might say that A is rank two. As someone said, if this is some kind of communication channel or wireless system, that big gain is plus 20 dB, and the small one is like minus 26 dB or something like that. You might say, you know what, that's below the noise floor. That's worth nothing. Okay. Let's look at another sort of baby example here. Let's take a two by two matrix. Now, for a two by two matrix you do not need the singular value decomposition to understand what it does, that's for sure. That much is for sure. Okay. With singular values of 1 and 0.5, basically what it says is you'd take something like X here, there'd be an input coordinate system, that's v_1 and v_2, and you would resolve X into its components. That would be this and this. The v_1 component gets multiplied by sigma one and then comes out along u_1. So whatever that is, if the v_1 component is, let's say, about 0.55, then you've got 0.55 times u_1, and where's u_1, that's over here. This should be bigger, shouldn't it? A little bit. I didn't draw it right, I think. Well, anyway, it's about right. That should come out about here. And then for the other direction you get, maybe, I don't know, 0.7 v_2; 0.7 v_2 should get scrunched by 0.5, and you should get 0.35 u_2, and indeed, that's, at least visually, about a third. So that's about right, and that's the output. That's the picture. Okay. That's the singular value decomposition. It has spread, in fact, to almost all fields: any field that uses any math, any field that does any computation. So you will hear about it in different contexts. You'll hear about it in statistics, you'll hear about it in finance, you'll hear about it in basically anything. Signal processing, control, wireless stuff, networking; it goes on and on and on. So you'll hear all about it. So it's actually a very good thing to know.
Itfs probably itfs the last real topic for this class and itfs very important and youfll be seeing it unless you -- well, anyway, youfll be seeing it in many, many contexts. Therefs actually still some weird backward fields where it hasnft hit yet. Itfs coming to those fields. So therefs a bunch more. I think, actually, letfs see, the last couple of years itfs sort of hitting fluid mechanics. So thatfs the one I know about most recently. Which is amazing, because itfs this incredibly complicated very sophisticated field and they just didnft know about this and itfs a big deal now because it explains all sorts of stuff there. Anyway, itfs also used in tons and tons of other things. Itfs used in, letfs see, itfs used in the current standard for DSL, is basically straight out of the SVD, so all multi-input, multi-output wireless systems straight from SVD. Itfs a lot of methods for compression and things like that, straight out of the SVD. Another one, and this is -- itfs only a small stretch, but it turns out, in fact, things like Googlefs page rank. You may have heard this, straight from the SVD, the zero of order of the page rank, SVD. Itfs nothing else. So anyway, I just wanted to point out that this is actually an incredibly important topic. I want to say that because others, obviously, are much less important, for example, our friend, the Jordan canonical form. But we teach that just so youfd know about those things and can defend yourself, if needed. If youfre caught in a dark alley late at night with a bunch of mathematicians. So, okay, who are pissed off. Okay. So letfs do a couple of applications, but like I say, you go to Google and type this or some of itfs synonyms like PCA and therefll be literally millions and millions of hits. The whole fieldfs based on this so youfll be seeing a lot of this. The first thing is, actually, I can now tell you the whole story on the pseudo-inverse. This is for any A not equal to zero. Ifve already fixed the notes, I just havenft posted it, but I just noticed this mistake this morning. So for any A non zero has a singular value decomposition so people donft really say that the zero matrix doesnft have a SVD or something, or it gives you a headache, I donft know, something like that. I mean, V and U would be zero by, theyfd have to be zero by something. So any non zero matrix has a SVD. So if you have a non zero matrix and its SVD is U sigma V transpose, then the Moore-Penrose inverse is V sigma inverse U transpose. Thatfs it. And you can check this agrees exactly with the two formulas for the Moore-Penrose inverse or pseudo-inverse youfve seen so far. So youfve seen when A is skinny and full rank therefs this formula, and youfve seen when A is fat and full rank therefs this formula. But this formula, here, works for any matrix that is non zero, any at all, and it basically does what U think U do. If you want to sort of invert this, youfd sort of make a V transpose over there. Sigma square invertible diagonal can be, of course, inverted, and then you do something like inverting U, it would be inverting U, if U were square, youfd make it U transpose and you get this. You switch the role of the input and output directions, invert the gains along these directions and thatfs what you get. Okay. So thatfs the general Moore-Penrose inverse. It coincides with these, in these two cases, but itfs now generalized to other situations. And in fact, in the general case herefs the meaning of it. 
So letfs look at the general case when A is not full rank. Okay. So A is not full rank. If A is not full rank and you want to do least squares, least norm things get tricky. So letfs see how that works. If A is not full rank the first thing you do is you say, well, how close can you get to Y with something of the form AW. This a least squares problem except now A is not full rank. And so up until now, you actually did not know how to solve this. You inform your trusted A transpose, A inverse and right at that point you have a problem because A transpose A is not invertible if A is not full rank. Okay. However, if you want to minimize this, there are still -- you can form this least squares problem and ask how close can you get? Now, because A is not full rank the answerfs not going to be a single point, itfs going to be actually the whole set of points an affine set. So itfs going to be the set of points for which AZ minus Y, the deviation is as small as it can be. Thatfs an affine set. Itfs a set of least squares solutions. Now, in that set, you can then ask for the smaller, the minimum norm least squares approximately solution. And thatfs exactly what A dagger Y gives you. So this is quite complicated but itfs worth thinking about and you should be able to sort of see what it does now. So this is the picture. So X zero inverse is the minimum norm least squares, and I really should put approximate here because I donft like writing, I donft even like, Ifm going to fix that. I didnft even like saying least squares solution. Because the whole point is itfs not a solution. I know that thatfs what everyone says, but it irritates me. So Ifm going to change it. Itfs an approximate solution. Okay. Now, we can also do the pseudo-inverse of the regularization. So letfs see how that works. So letfs see how thatfs going to work. So letfs let U be positive and then this is the regularized, this is ticking of regularization, or ridge regression you call it in statistics, or something like that, and here you have two things. You have sort of your mismatch and then you have something that penalizes the size of X. By the way, itfs clear, this is going to be related to pseudo-inverse, because pseudo-inverse finds among the Xs that minimizes this, it finds the one of smallest norm. So itfs clearly related, these are clearly very close. Now, this is a perfectly good formula. Itfs simply A transpose A plus Mu I inverse, A transpose Y. This inverse is valid, period. A can be fat, skinny, full rank, it donft matter, it can be zero here, itfs fine, no problem, okay, because the Mu I takes care of everything and this is always positive definite. Okay. Now, as Mu goes to zero this limit gives you this pseudo-inverse. And in fact, it turns out if you want to get the pseudo-inverse this is another formula for it. Itfs the limit of A transpose A plus Mu I inverse times A transpose. So itfs the limit. Another way to understand what the pseudo-inverse is, if you donft like the SVD, you can think of it this way, itfs the limit of regularized least squares in the limit as Mu goes to zero. Now, in some cases this is super simple. If A is skinny and full rank, then this makes sense when Mu equals zero and it just converges to our old friend, this thing. But this formulafs quite different in other cases. If A in fat and full rank, A transpose A is definitely not invertible. Okay. But A transpose A plus Mu Y, as long as Mu is positive, is invertible. 
And this inverse makes sense, and it will actually converge to A dagger. In fact, in that case it would have converged, of course, to A transpose times (A A transpose) inverse, which is the other formula for it. But the other cool thing is that even if A is not full rank, this converges to the pseudo-inverse. That's what it is. So the pseudo-inverse is something like the limit of infinitely small regularization, or something like that. As usual, the important part is to know what it means, not simply that it exists and so on. Okay. So now you know the full story on the pseudo-inverse. Now, like a lot of other factorizations, like the QR, and I think we've seen a couple of others, maybe just QR, you can extend the factorization by padding the matrices on the left and right: zero-padding some and filling out other things in convenient ways. One way is, for example, to fill out an orthonormal basis. So you can do that with the SVD, too. So you take -- that's the original SVD, like this, where r is the rank and these are all positive numbers. What you do is you find a matrix U two which is complementary to U one, and you find a complementary matrix to V one; that's V two. Complementary means that the columns of V two, together with the columns of V one, form an orthonormal basis. And you know lots of ways to do that. One way is to run QR after appending, for example, the identity matrix after u_1 through u_r; that would do it. So what you get now is that [U one U two] is actually a square matrix of the output size, m by m. And you get a matrix [V one V two]. By the way, written like this they look fat, but they're actually square, because U one is tall here. Right, so this is actually a square matrix, although visually when printed it doesn't look like it. That one is square as well. That's on the input side, and both are orthogonal. So now, to make up for the fact that you've just added a bunch of extra columns to U and V, no problem, you zero pad Sigma and you get something like this. Now Sigma is no longer square. It's diagonal in the weird sense that all the entries off the diagonal are zero. But generally people don't talk about diagonal matrices that are not square. I mean, I don't know why they don't, but they generally don't. And these blocks are just the zero padding. By the way, some of these blocks could be empty, depending on A. So for example, if A is skinny and full rank, like this, it means its rank r equals the number of columns, and that says that when you write it as U Sigma V transpose, one of these is already going to be square. I guess this one. That's your V transpose. So in this case, if A is skinny and full rank, you don't actually have to add anything to V. If A were fat and full rank, you wouldn't have to add anything to U. Okay. I hope I got those right. I might not have, but too bad. Okay. So you zero pad like this, and now you can write, once again, A as U Sigma V transpose. So you have A as this thing, and it works out this way. And people call this the full SVD of the matrix. In this form, U and V are actually orthogonal matrices. But Sigma now has exactly the same size as the original matrix A. It's diagonal in the sense that the diagonal is the only place where it can have non zero entries, but it can also have zero entries there. And the one we looked at before, if you want to distinguish it from this, is called the compact or the economy SVD.
Oh, I should mention this; it's actually quite important in Matlab, for example. If you just do this, I think you get the full thing, although I don't remember. Is that correct? Okay. And if you want the compact one, I think you write something like this. That's econ, the string econ, I think. Anyone have a laptop with them? I know Jacob was, he's not here, so anyway. Okay. By the way, this is quite important. Let me mention a couple of reasons this is important. You don't want this -- if A is, for example, I don't know, 100 by 10,000, then just forget the computation, let's talk storage here. So you can be in big trouble if you type this. If you type this, it's going to return a V which is 10,000 by 10,000. Okay. And if that's not a big enough dense matrix to make trouble, you know, make it 100,000. And now, for sure, you're in trouble. So unless you really want the big one, you could actually be in big trouble. Here, this one would do the right thing. It would return, if this was full rank, a 100 by 100 matrix, no problem, a 100 by 100 diagonal matrix, definitely no problem, and in this case a 100,000 by 100 matrix, again no problem. Okay. But if you try the other one, it won't work. Okay. So I just mention this. But it is important. Okay. Now, here, by the way, is where the confusion on zero singular values comes in. So it's up there, and let me explain it in this context. Now, let's just take a matrix A that's in R seven by seven. And let's say it's rank six. Okay. So the SVD, the compact or economy SVD, looks like this. It's U, that's seven by six, and you get a six by six diagonal, and then you get a V transpose, which is going to be six by seven. Okay. So V is seven by six. Okay. Now, this is what happens in that case. And the singular values are sigma one through sigma six. However, when a matrix is square, a lot of people will talk about sigma seven, in this case, and they'll say it's zero. And there are actually applications where you want to talk about that. Because, in fact, what you might be interested in is the minimum gain of this matrix. Well, it's rank six. It's got a null space. The minimum gain is zero. And then people will talk about v_7. And v_7, of course, in this case, describes the null space of A. So it turns out that while sigma one, v_1, u_1, these things are sort of unambiguous, if someone writes something like sigma min, you have to be kind of careful. Do they mean the minimum of the positive ones, or do they mean the actual minimum, in which case it would be zero, or something like this? So for things like this you just have to ask, because it depends on the context whether a person's referring to the smallest positive one, or the smallest one, period, namely zero. Of course, if things are full rank or whatever, there's no issue here. There's no ambiguity. Okay. So let's look at the geometric interpretation of how this works. So this is a full SVD. That's to make it simple to do the interpretation. You have A equals U Sigma V transpose. We know exactly what orthogonal matrices do geometrically. They either rotate, or they reflect, or something more complicated but equivalent. They preserve angles, they preserve distances, they preserve norms. Right. So it's an isometry. Anything related to distances, or angles, or whatever, is preserved by V and U, just like rotations. Okay, so this says that to operate by A on a vector, you carry out three operations.
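The same distinction in NumPy, for what it's worth; the sizes below are the lecture's example scaled down so it runs quickly. full_matrices=True is the full SVD, full_matrices=False is the economy SVD, and for a very fat or very skinny A the full V or U is the enormous one.

    import numpy as np

    A = np.random.default_rng(2).standard_normal((100, 2000))   # a fat matrix, scaled down from the 100 x 10,000 example

    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # economy SVD (what svd(A,'econ') gives in matlab)
    print(U.shape, s.shape, Vt.shape)                  # (100, 100), (100,), (100, 2000)

    Uf, sf, Vtf = np.linalg.svd(A, full_matrices=True) # full SVD (what plain svd(A) gives in matlab)
    print(Uf.shape, Vtf.shape)                         # (100, 100), (2000, 2000) <- V is the big dense one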
First, you multiply by V transpose, then Sigma, then U. So the first one is something like a rotation -- and I mean rotate loosely; of course, a rotation is given by an orthogonal matrix, but it could also be a reflection or some combination of the two. Okay. So the first thing you do is you rotate. Then you stretch along the axes by the sigma_i. Now you put in the differing gains. So far there's been no difference in gains anywhere, because if you multiply by an orthogonal matrix it's completely isotropic, in the sense that the gain is one in all directions. Now you put in the gain anisotropy. You put in the unevenness in the gains now. By the way, some people call this, I've heard dilate, and, I don't even know how to pronounce this, but it might be dilatate. Does anyone actually know how to pronounce that? For a long time I thought it wasn't a word. But then I found it in enough books that I realized it was a word, and then I thought it was like a weird variation, maybe something from the U.K. or something like that, you know, just some weird thing. But it turns out no, it appears to be a word. I don't know if they're exact synonyms or not, but anyway. You'll hear this operation, where you stretch or scrunch something along the axes, called a dilation. That makes sense to me. Dilatation, actually, I've heard people say it, even people who seem to know what they're talking about, so I suppose it is a word. Okay. You also zero pad or truncate, because this middle operation maps between spaces of different sizes. You zero pad or truncate depending on whether the output is bigger than the input or smaller. So that's what you do. And the final thing is you rotate by U. So this means that we can calculate what happens to a unit ball. You can calculate exactly what happens. Take a unit ball. First, multiply the unit ball by V transpose. If you take a unit ball and you apply an orthogonal matrix to it, nothing happens. I mean, the individual vectors move. This point might move over here. But the ball is invariant. The ball is the same. So the image of anything that's in the unit ball is in the same unit ball. Not at the detailed level, not on a vector-by-vector basis, but as a set, it's the same. Okay. Now, you take this ball and you apply a dilation to it. And in this case, it's a dilation along the axes, because there's a singular value of two and a singular value of 0.5. You take the two, that's along the x one axis, or whatever, and you stretch it by two, and you scrunch x two by a factor of two. Okay. Now you take this thing and you rotate by these output directions, and you get this. And you can see all sorts of cool stuff here. You can see, for example, that, indeed, u one is the largest gain output direction, because the unit ball, all the things that went in here, were things of norm one. The biggest that any of them ever came out is that guy and that guy, by the way. So those are the two points that came out with the largest possible gain. And look at that, sure enough, they're aligned with u one. And what input direction, by the way, ended up there? What input direction was it? [Student:][Inaudible]. v one, precisely. So v one is some vector here. Say that might be v one. v one got rotated to here or to here, by the way. It wouldn't matter; it got rotated either to here or to here.
And those are the two points on this ball, or on this sphere, that suffered the largest expansion here, and ended up here. Then they got rotated over to here or here. So that's it. So it says that the image of the unit ball -- image means you overload matrix-vector multiplication to apply to sets: you multiply a matrix by a set of vectors by just multiplying all the vectors, and you get a new set; that's what this is -- it says that the image of the unit ball is an ellipsoid, and the principal semi-axes are exactly sigma_i u_i. Okay. Now this allows you to understand the gain properties exactly. And let's do an example. For fun, let's do a couple of examples. In fact, let's start with this. Suppose the singular values are 5, 4, and 0.01; there's your singular values. What does the image of the unit ball look like? What does it look like? [Student:][Inaudible]. It's a pancake. Exactly! A slightly oblong pancake, but it's a pancake for sure. Everybody agree with that? And this says something like this. There is an input plane -- let's say it's a mapping from R three to R three -- it says there's an input plane where this thing has a high gain, like between four and five, and another direction, v three, that you might even call an almost null space, because while A v three is not zero, which is what would qualify it for full null space membership, it comes out attenuated by a factor of 100. Okay. So it's something like almost a null space. In fact, this is exactly the kind of idea you want to have when you do this. In fact, you'll find that all of these ideas about null space, range, all that stuff, it's wonderful, you have to understand all of it. However, in practice, these things are quite useless and quite silly, very silly, and very useless. What the SVD will do is give you quantitative ways of talking intelligently, in the context of some application, about things like null spaces, and ranges, and things like that. So here the range of this matrix is R three. It's onto, period. Up until, maybe, the third week of the class, it's onto, period. Now, in this context, depending on the details of the application, you could say that this matrix actually has a range which is two dimensional. It's a lie. It has a three-dimensional range. But its third direction is so feeble, the gain is so feeble, that, again, depending on the context, you might say that for practical purposes the matrix is rank two. Okay. So this is what this is going to allow you to do. We'll see a lot of examples of this, so this will become clear. Okay. So that's the pancake. What's this? What's the image? If the singular values were 5, 4, 3.9, what would the image look like? Just roughly. [Student:][Inaudible]. Egg, sure. Yeah. Maybe not even quite an egg. No, maybe an egg, I don't know. Something like an egg, right. So basically it says that there's one cross section that looks kind of like an egg. Maybe not even as elongated as a football or something like that, but an egg. That's that. Okay. How about this? [Student:][Inaudible]. It's what? [Student:][Inaudible]. A cigarette, a toothpick. Okay. I'll take toothpick. Maybe I misheard that. Did someone say cigarette? Okay. Yeah. No. I'll take toothpick. Yeah. So it's a toothpick. That's what that is. And basically, actually, what you can say is, and we'll see this quite precisely soon, you can actually say in this case that the matrix is nearly rank one, depending on the application. I want to stress that.
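Here's a small sketch of the pancake example, with a made-up matrix whose singular values are 5, 4, and 0.01: points A x on the image of the unit sphere have almost no component along u_3, which is what "almost a null space" means quantitatively.

    import numpy as np

    rng = np.random.default_rng(3)
    U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    A = U @ np.diag([5.0, 4.0, 0.01]) @ V.T            # made-up matrix with the lecture's singular values

    x = rng.standard_normal((3, 2000))
    x = x / np.linalg.norm(x, axis=0)                   # points on the unit sphere
    y = A @ x                                           # their images: an oblong pancake

    print(np.abs(U[:, 2] @ y).max())                    # component along u_3 is at most 0.01
    print(np.abs(U[:, 0] @ y).max())                    # component along u_1 can be nearly 5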
So the matrix is nearly rank one, depending on the application. I want to stress that. So that's the idea. Okay. So let's see how the SVD comes up in estimation and inversion. Y equals AX plus V, where V is the measurement noise, like this, and X is something you want to estimate. Now, probably the best way to get more deeply into this than we have -- although, to tell you the truth, the stuff you know is going to work quite well in practice and go quite far -- so what you would do is you might do something like this. You might do least squares here. You might take Y and choose as your estimate of X something like A dagger Y, okay, which would be (A transpose A) inverse A transpose Y, and you would say that that's the X that minimizes the discrepancy between what you actually measured and what you would have measured had the parameter been X hat, or something like that. Better models -- well, maybe not better, I don't know, different models -- are obtained by looking at probability and statistics. So there you assume V has some distribution. But there's another model, and the other model says this: this noise, I don't know anything about this noise, but I'm willing to give you a bound on its norm. Okay. So some people call that a norm bound model or something like that, and some people call it a worst case model or something, and it's a lot more forgiving than a statistical model. We're not looking at statistical models, so it doesn't matter, but a statistical model basically says V is random. So V is not out to get you in a statistical model. In this one, V can be out to get you. V can include someone intentionally trying to mess up your measurement. So V can actually include things like somebody intentionally trying to jam you. It can include something related to X. For example, V could include quantization error, in this case. So it'd be another model for quantization error, something like that. But anyway, that's the model where you know nothing about the noise except that you have a norm bound on it. Okay. Now, let's put an estimator in, a linear estimator, and let's make it unbiased; that means BA equals I, and you know what that means. It means that if you form X hat equals BY, then if there were no noise, you'd get perfect reconstruction no matter what X is. So this means perfect reconstruction with no noise. Now, the estimation error is this. If you form X hat, that's what you guess, minus what X really is, it's BV. Now this makes sense. We've seen this before. The key now is this: among left inverses of A, what you want is a small one. Why does it want to be small? It wants to be small because B is what amplifies the noise. So B does two things. It maps a measured vector into an estimate, but it also maps measurement noise or error into estimation error. So you want B small. B can't be that small because, after all, BA is I. So for example, you can't take B as zero, as an extreme case, obviously. Okay. Now, we can then analyze, using this model, the set of possible estimation errors, and that's an ellipsoid. Actually, there's a more refined ellipsoid you can get that's smaller, but we'll just do this one. So you can get a more refined ellipsoid, but for now we'll just use this one. We'll just say that the error is BV, where V ranges over a ball: the set of BV, as V ranges over all vectors of norm less than alpha, defines an ellipsoid, and we're going to call that the uncertainty ellipsoid.
Note that it has nothing to do with Y. Actually, that's the key to the more refined estimate. But this is fine for now to give you a rough idea. So it says that this is where the estimation error lies. But an ellipsoid is symmetric, so you can say X is X hat minus the error, which means X is in X hat minus the uncertainty ellipsoid, and minus and plus an ellipsoid are the same, because an ellipsoid is invariant under negation. And that says that the true X has to lie in an uncertainty ellipsoid, this E uncertainty, centered at the estimate X hat, and you can say exactly what that ellipsoid is. The singular values of B, scaled by alpha, give you the semi-axes of the uncertainty ellipsoid. So if, for example, these were the singular values of B, then these, multiplied by alpha, are the semi-axis lengths of the uncertainty ellipsoid, and these two cases are very different. This one says your ignorance is pretty much isotropic. It means that when you guess X hat, where X can actually be, compared to where X hat is, is uniform: your ignorance in this direction is not too much different from your ignorance in that direction. And if you want, you can imagine a GPS system or something like that where you're estimating position. And it basically says, I guess if someone forced me to guess where I am, I'd say I'm here. But if someone asks how far off you are, you say, well, I could be off by plus/minus three millimeters this way, and it's about equal in all directions. Everybody see this? If, however, this is the story -- this is, in an estimation context, the singular values of B, remember -- it gets much more interesting. This says, in that context, well, in two directions I have uncertainty on the order of plus/minus four millimeters, let's say. In another direction I've nailed it; my estimate is way, way better, much, much better. So in this case the uncertainty ellipsoid is a pancake. Remember, for an uncertainty ellipsoid, small is good. This is better still. This says, in two orthogonal directions I have very, very high confidence in my estimate, but in one direction it's really way, way worse. So that's what it says. Okay. So that's the picture. Okay. So that's the idea. Okay. So you can look, for example, at the worst case error, which is the norm of B, and it turns out, in fact, that the best thing, again by this measure, is actually using the least squares estimator. So if you use the least squares estimator here, in fact, your uncertainty ellipsoid, given by this formula, will be as small as it can possibly be. So if what you care about is the uncertainty ellipsoid, or if you believe this analysis and so on, then once again our friend least squares is the best estimator you can come up with. In other words, if someone says, no, I'm not going to use least squares, I'm going to use some other fancy thing, some other left inverse of A, you'll get an uncertainty ellipsoid that's actually bigger in all directions. That's what will happen. And that comes out this way: B least squares times B least squares transpose is less than or equal to B B transpose. And that's that. So once again, we see that least squares is actually a good thing to do.
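A hedged sketch of that comparison, with a made-up skinny full-rank A: both the least squares left inverse and a "use only some of the measurements" left inverse satisfy BA = I, but the least squares one has smaller singular values, hence a smaller uncertainty ellipsoid.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [1.0, -1.0]])                         # made-up 4 x 2 full-rank measurement matrix

    B_ls = np.linalg.pinv(A)                            # least squares left inverse, (A^T A)^{-1} A^T
    B_je = np.hstack([np.linalg.inv(A[:2]), np.zeros((2, 2))])   # "just enough": use only the first two measurements

    print(np.allclose(B_ls @ A, np.eye(2)), np.allclose(B_je @ A, np.eye(2)))  # both satisfy B A = I

    # semi-axes of the uncertainty ellipsoid are alpha times the singular values of B
    print(np.linalg.svd(B_ls, compute_uv=False))        # about [0.577, 0.577]
    print(np.linalg.svd(B_je, compute_uv=False))        # [1.0, 1.0] -- bigger in every direction
    print(np.linalg.eigvalsh(B_je @ B_je.T - B_ls @ B_ls.T))    # all nonnegative: B_ls B_ls^T <= B_je B_je^T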
So letfs look at this example, our navigation using range measurements. So here I have four measurements that estimate two positions and we look at two positions. One was the just enough measurements method where you take the first two measurements, you say, look, Ifve got two unknowns, all I need to two measurements, Ifll use the first two, you donft even need the last two range estimates, donft even need it. So you form this left inverse like this. Itfs got a two by two block and a block of zeros. And then wefll compare that to least squares, which is X hat is A dagger least squares. Thatfs a two by four matrix. A dagger is this matrix that blends your four ranges into your position of your X and your Y position. Okay. And from the same example, herefs what we get, this is where alpha equals one. In the first case, you get this. And in the second case, you get this. Now, what is correct here, is that if you were to translate this over so they have the same center, theyfd lie on top of each other, in fact. Oh, sorry. This one completely covers that one. Okay. So thatfs the picture. Now, there is something a little bit confusing about this and Ifll say what it is and it has to do with the fact that our analysis here can be sharpened a little bit once you have Y. This idea that this set, this uncertainty ellipsoid, the idea that this set contains X minus X utility, thatfs completely correct. However, this does not depend on Y. Note itfs independent of the actual measurement you made. It turns out, once you have the measurement you can actually get a smaller uncertainty ellipsoid. And itfs not very hard to show. I wonft do it now. But now wefre going to explain why, for example, if one estimator says you have to be in the shaded set, another one says you have to be in this shaded set, then for sure you have to be in the intersection. And the intersection of those two is some weird thing that looks like this. Itfs this little guy here. Everybody see that? And then youfd say, wow, so although this was not a good method, it actually did give us information because it ruled out, for example, all these points over here that this model allowed you to keep. It turns out that if you do the more subtle analysis of uncertainty ellipsoid, you get something thatfs the same shape as this but itfs smaller, like itfs scaled down. Something like that. So, okay. I just mention that because itfs, well, I confused myself this morning when I looked at this. Thatfs what it is. This doesnft matter. The main point is you can understand uncertainty and how a measurement system works now by looking at the singular values of B. Thatfs what you can do. Okay. I think what Ifll do is I might actually, Ifll skip this thing, which is just a completion of squares argument, that is a method to show, and if you have any left inverse itfs going to end up with BV transposed bigger than the B least squares B least squares transpose. So Ifm just going to skip that because itfs just a completion of squares argument, because I want to get on to one other topic. Now, what wefre going to do is -- I guess, I can make some other comments about, Ifll make just a couple more points about estimation and inversion. So in estimation and inversion, letfs go back to that and look how that works. You know Y equals AX say plus V, like that. Now in some cases, in fact, Ifll just tell you in a case where, I donft know, that I was involved in, and it was geophysical inversion. 
So X had a dimension around 10,000, something like that, and Y was a bunch of magnetic anomaly data, and it was something like 300,000, or something; even if these numbers are not exactly right, the story is going to make the right point. So 10,000 variables, 300,000 measurements; no problem. In fact, that sounds like a 30 to 1 ratio of measurements to parameters you want to estimate. That's a pretty good ratio, right? 1.1 to 1 is not so great; 30 to 1, that's sounding pretty good. You've got about 30 times more measurements than you need. Everybody cool on this? Okay. So they'd say, no problem. A is full rank, by the way. And so you would imagine you would form this. You would form that. Something like that. Okay. And in fact, you can do this and you would get things. Things would come out, and they would be shown in beautiful color that would show you the substructure, the presumed substructure, and things like that. So that would be it. Okay. No problem. Now, this matrix A, I can tell you what its singular values were. Okay, its singular values were this. Actually, how many singular values does it have, if it's 300k by 10k and it's full rank? So how many singular values does it have? [Student:][Inaudible]. It's 10k, 10,000. So I'll just jump to the end of the story. It had about 20 significant singular values. After that they dropped off precipitously and were way small. They were like below the noise floor. Okay. Everybody see what I'm saying? So the sad thing is that, at that point, people had to accept the following: they paid a lot of money to fly an airplane around with this SQUID, you know, magnetic anomaly detector. They got 300,000 measurements, 30 times more measurements than they had unknowns, 30 times. Which you would think would be a lot of measurement redundancy, 30 to 1. And it was very, very sad, because they thought they took 300,000 measurements, and in fact, they didn't know it, but they actually took only 20. So they thought they were in a 30 to 1 measurement to parameter position, but in fact, they were in something like a 1 to 500 position. So what it meant was that this X hat, though very pretty, very nice when rendered in color, was basically completely meaningless. Now, you can estimate about 20 parameters in X if you want, but certainly not 10,000. Okay. So this is an example of the kind of reasoning or understanding you would have when you fully absorb the idea of the SVD. And it's sort of the step beyond what happens when you first just sit around and talk about rank, and range, and null space, and things like this. This matrix has no null space. There was no arrangement of the subsurface, no X, for which AX is zero. None. None. So in fact, without the noise, you can get X exactly, in fact, by many methods, because there are lots of left inverses of A. That is your understanding as of week 2.5. Okay. Now, it's much more subtle, and it basically says that although you found A was full rank, for all practical purposes it was actually like rank 20. You have a question? [Student:][Inaudible]. I'd like to say yes, but the answer is no, I don't know why. You know, I can make up a story, though, about it. It'll be pretty good, too. Yeah, I could make up a story. I could probably even get the number 20, I mean, with a long enough story. Right. But no, I don't know why that is. It just is. So that's a very good question.
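Here's a hedged sketch of that idea of numerical or effective rank, on a small synthetic matrix rather than the real 300,000 by 10,000 one: build A with a handful of significant singular values plus many tiny ones, and count how many lie above a noise-floor threshold.

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, k = 300, 100, 20                              # small stand-in for the 300k x 10k example, 20 significant directions
    U, _ = np.linalg.qr(rng.standard_normal((m, n)))    # 300 x 100 with orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    sigma = np.concatenate([np.linspace(10.0, 1.0, k),  # 20 significant singular values
                            np.full(n - k, 1e-6)])      # the rest: way below the noise floor
    A = U @ np.diag(sigma) @ V.T

    s = np.linalg.svd(A, compute_uv=False)
    print(np.linalg.matrix_rank(A))                     # mathematically full rank: 100
    print(np.sum(s > 1e-3))                             # effective rank with a noise-floor threshold: 20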
Actually, itfs because of that that youfd want to then -- once youfve come to the realization that although you have 300,000 pieces of data, you actually only have 20 measurements, independent measurements. The next question is what are you going to augment it with? Do you want to drill a couple of exploratory wells, you want to do magnetic anomaly, gravitation anomaly, or do you want to set up some explosives. These are all getting other different measurements. And it would be interesting there to find out which one of those would augment these to give you. And all you do, completely trivial, is you add some more rows to A. Any measurement is a row to A. You add more rows to A and find out what happens to your singular values. And if you go from 20 non singular, you know, significant ones to 50, you just added 30 more measurements. [Student:][Inaudible]. Oh, absolutely! Yeah. The additional measurements, beyond the number of the perimeter youfre estimating, that goes into smoothing, blending, averaging, this kind of thing. I mean, that is, the matrix we just looked at, I donft know if you remember it, I remember it, the two by four matrix, I mean, that kind of says it all. You should think of a left inverse in that case and actually, I love the names, sensor fusion is one, itfs a great name, sensor blending. And basically, itfs a fat matrix kind of with all relatively small numbers. It takes all of those measurements in, scrunches them together, and gets very good measurements. So beyond any measurement beyond 10k, would have gone into averaging out the noise and stuff like that. I donft know if I answered your question. As a polite way, he said, no, thatfs what he said, but oh well, that happens. Okay. Our next topic is, also, a topic on which there are entire classes, like whole ten week classes, and itfs a huge area. Itfs the idea of a sensitivity of linear equations to data error. So Ifll say a little bit about that. Itfs not something wefve looked at in this class. In this class, when you use metlab, for example, which in turn uses LA pat, metlab does nothing, right. When you type A backslash V, it doesnft actually matter whatfs happening, but something like the equivalent of forming A transpose A inverse A transpose times B is happening. And you know that thatfs not done in perfect precision. Itfs done with floating point numbers or whatever, and that means that the numbers that come out might not be exactly right. And youfre use to that because, I guess, when you solve some problem and you look at the X is supposed to be equal to Y, youfd take the norm. Youfd look at X minus Y and then entries would be on the order of A minus 15, and thatfd be your hint that what youfre seeing is just the artifacts of round off errors and stuff like that. Thatfs a huge filed. Itfs call numerical analysis. And wefll see a little bit of it now, about how this works. But singular value becomes positional central to the whole thing. So letfs just do a square matrix, just for fun. Y equals AX. Well, of course, that means X equals A inversed Y, end of story. Not much more to say. Actually, therefs going to be more to say, and itfs actually quite interesting. Now, suppose you have a noise or an error in Y. For example, Y equals Y plus delta Y. Now, in numerical analysis, delta Y is like a floating point round off error, for example. 
So there itfs really small, right, it could be on the order of, you know, 1E minus 10, or, I mean, it depends on the context, but wefre talking errors out there in the 10th, 15th, 20th digit. Right. Actually, in an engineering context, you want to imagine delta Y, itfs generally much bigger. If Y is a measurement, for example, and you use a 12 bit A to D, thatfs only 4,000 numbers. So what that says is, the fourth digit, did I get that right, 4,000, yes, the fourth digit is absolute nonsense, if I have a 12 bit A to D. If I measure something with 12 bits, anything, it doesnft matter what it is, and then it says basically that whatever the true Y is, I donft know, but a delta Y was added to it that was on the order of one in 4,000. Okay. By the way, thatfs pretty good. A lot of high-speed stuff operates with much smaller precisions like 8-bits and things like that. Okay. So when youfre doing engineering, if youfre doing economics delta Y is probably on the order of Y, basically. I mean, itfs like friends in economics say, but never on record, you know, when we talk about accuracy of things, theyfll say things like, well, Ifll tell you the truth, if we get this sign right, wefre happy. But theyfll never say that on the record. I donft know why. It has to do with the sociology of the field or something. Anyway, so theyfre happy of they get the sign right. That means that this delta Y is on the order of Y. But anyway, all right. Well, if Y becomes Y plus delta Y, X, obviously, becomes X plus delta X, where delta X is A inverse delta Y. Thatfs simple enough. And thatfs says that how much X has changed, because Y changed, is simply less -- the worse case change is norm A inverse times norm delta Y. Thatfs it. And what that says is the following, if the inverse of a matrix is large, and I mean large in norm, then itfs says small errors in Y can lead to large errors in X. By the way, it does not say does lead. I guess, do would be English, actually, but anyway. It does not say it must lead. Thatfs not the case. For you to actually suffer the largest increases here, delta Y has to be in the largest gain input direction of A inverse. Thatfs what has to happen. Okay. But unless you can rule that out, itfs possible, and it says you can get larger. And now, this says, basically, you really canft solve for X given Y with small errors. And that means A can be considered singular in practice. Okay. So I mean, we can do a quick example here. And by the way, this clears up something a bit odd, which I will explain now. Take a matrix thatfs singular. So itfs N by matrix and itfs singular. Then you ask someone herefs a measurement, Y equals AX. Ifve measured it. You know, this Y equals AX, and X and Y plus V, so therefs a small noise. V is delta Y here. Thatfs it. So thatfs your measurement system. And then you say, please estimate X. And someone whofs taking the first two weeks of [inaudible] will say, you canft, your A is singular. Of course, I canft estimate X, itfs ridiculous. In fact, you know if itfs rank N minus one, therefs a whole line of things with no noise, give you the exact same answer. Itfs impossible, so forget it. Get yourself a better measurement setup. Okay. So you go back into the lab, and itfs a fact, actually, that if youfd take a matrix A and you add, if you pick sort of entry at random or if you add a small number to any random number to the entries, with probability of one, the matrix will be non singular. So you go to the lab. 
Okay. So we can do a quick example here. And by the way, this clears up something a bit odd, which I will explain now. Take a matrix that's singular. So it's an N by N matrix and it's singular. Then you tell someone: here's a measurement, Y equals AX plus V, so there's a small noise; V is the delta Y here. That's your measurement system. And then you say, please estimate X. And someone who's taken the first two weeks of [inaudible] will say, you can't, your A is singular. Of course I can't estimate X, it's ridiculous. In fact, if it's rank N minus one, there's a whole line of X's which, with no noise, give you the exact same Y. It's impossible, so forget it; get yourself a better measurement setup. Okay. So you go back into the lab. And it's a fact, actually, that if you take a matrix A and add small random numbers to the entries, then with probability one the matrix will be nonsingular. So you go to the lab, and all you have to do is find some physical piece of the measurement setup and tap on it with a hammer. I guarantee you A is now nonsingular. A very gentle tap, just tap it once, and I'd say it's taken care of, no problem. And they look at A; A changed in the fifth digit. And I say, check it, it's nonsingular. And it will be; it will absolutely be nonsingular. And I'd say, go ahead, and then I'd walk away. Actually, I should walk away very quickly, because it's obviously not going to work. Before, A had a singular value that was exactly zero. After I tapped the measurement system with a hammer, A has a singular value that's ten to the minus eight. So A inverse now has a singular value that's ten to the eight; the norm of A inverse is ten to the eight. And that says, if you go down the pad here, that small errors in Y can be amplified by as much as a factor of ten to the eight. So if you wanted, say, a couple of digits of accuracy, if you wanted X to be accurate to .01, no problem: Y has to be accurate to ten to the minus ten. Which is not something a 16-bit A to D is going to give you, or anything like that. So this is all completely obvious. It would be idiotic to think that taking a little rubber hammer and tapping on a measurement setup, or telling the pilot in the magnetic anomaly case to push on the control stick or something like that, and thereby making A nonsingular (it was nonsingular there anyway), actually changes anything. It doesn't affect anything, and in fact it's singular value analysis that tells you this. So what this says is that a matrix can be invertible, and MATLAB, which is to say LAPACK, will happily invert it for you. By the way, if it gets too close to singular, then MATLAB will issue an RCOND warning and say something like the matrix is singular to working precision. At that point you're so far out of it that even in double precision it shows up. But the bad part for engineering, in fact for all applications, is this. I guess the good part is that with the really evil ones, when you invert them you see entries like ten to the nine or ten to the twenty, and you know something's fishy. The worst part is when the inverse looks kind of plausible and gives answers that are plausible but completely wrong. That can happen, actually. Okay. So, all right. Now, I think I'll mention one more thing and then we'll quit for today. It doesn't really make sense to talk about the size of delta Y, or delta X, in absolute terms. I mean, if I ask you whether a norm of delta X equal to .1 is big or small, well, it depends on what X is. If the norm of X is 100 and the norm of delta X is .1, that's pretty good; that's like a .1 percent error. If the norm of X is .01 and the norm of delta X is .1, that's terrible; your error is ten times bigger than the X you're interested in. So you should really be looking at relative errors. Now, to do that, you note that Y equals AX implies that the norm of Y is less than or equal to the norm of A times the norm of X. And therefore, if you work out the norm of delta X over the norm of X, the relative change in X, it's less than or equal to the norm of A times the norm of A inverse times the relative change in Y. And this number comes up all the time: the product of the norm of A and the norm of A inverse is called the condition number, and it's traditionally written kappa of A.
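Spelled out, the bound being described is the standard condition-number inequality (this is a reconstruction from the definitions above, not a transcription of the board):

```latex
% From y = Ax and \delta x = A^{-1}\delta y:
\|y\| \le \|A\|\,\|x\|
\quad\Longrightarrow\quad
\frac{1}{\|x\|} \le \frac{\|A\|}{\|y\|},
\qquad
\|\delta x\| \le \|A^{-1}\|\,\|\delta y\|.

% Multiplying the two inequalities gives the relative-error bound:
\frac{\|\delta x\|}{\|x\|}
\;\le\;
\|A\|\,\|A^{-1}\|\,\frac{\|\delta y\|}{\|y\|}
\;=\;
\kappa(A)\,\frac{\|\delta y\|}{\|y\|},
\qquad
\kappa(A) \;=\; \|A\|\,\|A^{-1}\| \;=\; \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}.
```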
And itfs the maximum singular value of A divided by the minimum, always bigger than one. And this is read informally something like this. It says that the relative error in solution X is less than the relative error in the input data times the condition number. So the condition number is what amplifies or can amplify relative error. So thatfs the picture. Or, again, therefs a misspelling here. But in terms of, if you take the log two of this, and again, very roughly, youfd say something like this. The number of bits of accuracy in the solution is about equal to the number of bits of accuracy in the data, thatfs the long two of this thing, minus log two kappa. So when you solve Y equals AX for X given Y, you lose accuracy. By the way, if the condition number is one you lose no accuracy. Matrices that have condition number one, Ifll give you an example, would be an orthogonal matrix. They have condition number one. All singular values are the same, gain factor all directions equal. Thatfs why people who do numerical work love them. And by the way, itfs also why a lot of transforms youfll see, like DCT, FFT, or DFT, a lot of transforms youfll see, actually, are orthogonal. I mean, therefs many reasons for it, but one nice part about it is when you use transform like that, and then the inverse, and stuff like that, you actually are not losing bits of accuracy. Okay. So this is the picture. So wefll continue from here next time.